Entries tagged 'feedmesh'
the folks at yahoo! sent a couple of blo.gs t-shirts to me a while ago. i held off on showing them off here so it would be a surprise when i sent one to albert, who designed the logo, and it took me way too long to get around to that.
now if only they’d fix the service. i’m sure the addition of the rss ping data has made it more useful (if overzealous) to those plugged into the feedmesh stuff and building services on top of that, but i’m finding it pretty useless for personal blog update tracking these days.
businessweek’s blogspotting noticed in a wall street journal article that technorati is making deals to get ping data first, or at least that’s the claim in the article. it would be interesting to know who those deals are with. i can’t imagine it’s with either blogger (google) or six apart.
of course, i’m not sure it helps for technorati to have the data first if nobody can ever actually get at it because of their persistent inability to handle their search load.
ben writes more about what sixapart is up to with the atom stream.
clearly sixapart is up to great evil, because they are not using rss and are avoiding the existing cloud and feedmesh names. i demand swift angry mob justice. i’ve even heard they have a special room in their new offices where they smash kittens with hammers.
brad implemented a stream of atom updates from livejournal, which is sort of like the blo.gs cloud interface on steroids. nifty.
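i haven’t dug into the details of the livejournal stream, but consuming something like it (or the blo.gs cloud feed) is about as simple as client code gets: open a tcp connection and read until the far end goes away. a sketch, with the host and port made up:

    /* streamcat.c -- sketch of a client for a long-lived update stream.
     * host and port are made up; substitute whatever the service documents. */

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("updates.example.com", "8081", &hints, &res) != 0) {
            fprintf(stderr, "lookup failed\n");
            return 1;
        }

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
            perror("connect");
            return 1;
        }
        freeaddrinfo(res);

        /* the server just keeps writing entries; read until it hangs up */
        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }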
foo camp is this weekend, so that means that the feedmesh thing is a year old now. and speaking of that, put me in the jason shellen “not invited, but whatever” camp and not the russell beattie “not invited, and they kicked my puppy” camp.
eweek has covered the feedmesh project and done a little editorializing about it.
that’s the sort of thing that sparks life on a project, of course.
congrats to mark fletcher and the bloglines team on being acquired by ask jeeves.
once upon a time, a business development person at ask jeeves called me to talk about blo.gs. as i recall, the gist of the conversation from my side was that the site made no money, i spent no time on it, and there really wasn’t much to it.
this is another of those obvious-in-retrospect services that i can now kick myself for not building. having the vision, courage, and patience to execute on the idea is the hard part, of course.
at last year’s foo camp, when the limping feedmesh thing kicked off, someone suggested that we set up a yahoo group for discussion, and i made some comment about preferring something “more real.” a funny thing to say when one of the guys who built it (mark) was in the room. but in hindsight, i’m glad nobody wasted the time trying to create anything more real.
who’s next? i would think pubsub.com would be a likely acquisition for someone.
i also find this acquisition funny because i almost ended up working at ask jeeves once, since i knew the ceo at the time. i got an early-morning call (or what passed for early morning for me in those days) where i agreed to fly up to interview, but i turned around a few hours later and cancelled, once i was awake and realized that i had no desire to work at a place using iis and asp, or to relocate to the bay area.
the bandwidth bump
blue is the outgoing bandwidth, green is the incoming bandwidth. i’m not sure why there was a big initial spike.
repeating myself
for the blo.gs cloud service, i had written a little server in php that took input from connections on a unix domain socket and repeated it out to a number of external tcp connections. but it would keep getting stuck when someone wasn’t reading data fast enough, and i couldn’t figure out how to get the php code to handle that.
so i rewrote it in c. it’s only 274 lines of code, and it seems to work just fine. it was actually a pretty straightforward port from php to c, although i had to spend some quality time refreshing my memory on how all the various socket functions work.
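for the curious, here’s a rough sketch of the general shape of the thing, not the actual 274 lines: take whatever shows up on the unix domain socket and repeat it to every connected tcp subscriber, with the subscriber sockets set non-blocking so one slow reader can’t wedge the loop. the socket path, the port, and the drop-the-slow-reader policy here are illustrative guesses, not necessarily what the real server does.

    /* fanout.c -- sketch of a one-way repeater: read data from a unix domain
     * socket, copy it to every connected tcp subscriber. */

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>
    #include <errno.h>

    #define MAX_SUBS 256

    int main(void)
    {
        /* unix domain socket that local code writes updates into */
        int in_listen = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un ua;
        memset(&ua, 0, sizeof ua);
        ua.sun_family = AF_UNIX;
        strncpy(ua.sun_path, "/tmp/cloud.sock", sizeof ua.sun_path - 1);
        unlink(ua.sun_path);
        bind(in_listen, (struct sockaddr *)&ua, sizeof ua);
        listen(in_listen, 5);

        /* tcp socket that remote subscribers connect to */
        int sub_listen = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in ia;
        memset(&ia, 0, sizeof ia);
        ia.sin_family = AF_INET;
        ia.sin_addr.s_addr = htonl(INADDR_ANY);
        ia.sin_port = htons(7777);
        int one = 1;
        setsockopt(sub_listen, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        bind(sub_listen, (struct sockaddr *)&ia, sizeof ia);
        listen(sub_listen, 5);

        int input = -1;              /* one input connection at a time */
        int subs[MAX_SUBS];
        int nsubs = 0;
        char buf[4096];

        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(in_listen, &rfds);
            FD_SET(sub_listen, &rfds);
            if (input >= 0) FD_SET(input, &rfds);
            int maxfd = in_listen > sub_listen ? in_listen : sub_listen;
            if (input > maxfd) maxfd = input;

            if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
                continue;

            if (FD_ISSET(in_listen, &rfds)) {    /* new input connection */
                if (input >= 0) close(input);
                input = accept(in_listen, NULL, NULL);
            }

            if (FD_ISSET(sub_listen, &rfds)) {   /* new subscriber */
                int fd = accept(sub_listen, NULL, NULL);
                if (fd >= 0 && nsubs < MAX_SUBS) {
                    /* non-blocking, so a stalled reader can't wedge the loop */
                    fcntl(fd, F_SETFL, O_NONBLOCK);
                    subs[nsubs++] = fd;
                } else if (fd >= 0) {
                    close(fd);
                }
            }

            if (input >= 0 && FD_ISSET(input, &rfds)) {
                ssize_t n = read(input, buf, sizeof buf);
                if (n <= 0) { close(input); input = -1; continue; }
                for (int i = 0; i < nsubs; ) {
                    ssize_t w = write(subs[i], buf, n);
                    if (w < 0 && errno != EAGAIN && errno != EWOULDBLOCK) {
                        close(subs[i]);          /* dead connection */
                        subs[i] = subs[--nsubs];
                    } else {
                        i++;  /* EAGAIN: drop this chunk for the slow reader;
                                 a fancier version would buffer partial writes */
                    }
                }
            }
        }
    }

the important part is just that a stalled write never blocks the whole repeater, which is exactly where the php version kept getting stuck.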
there’s a visible bump in the graph of my outgoing bandwidth after this was fixed.
no comment
from “rss subscription central?” at internetnews.com:
"Dave [Winer] knows a lot about getting technology people to work together," [Jupiter Research analyst Gary] Stein said. He suggested Winer could tie his efforts to Feedmesh, an initiative that aims to tie all feeds together into a single data source that all aggregators could use.
feedster now open for pinging
feedster now has a ping interface. no changes.xml or any other way for others to get the ping data, naturally. i almost feel guilty for the whole feedmesh thing not getting any traction. but blo.gs is currently hobbled by a bug in mysql, and i have next to no energy or enthusiasm for dealing with computers outside of work. which isn’t to say that it is much greater inside of work. i did finally make blo.gs self-regulating to the point that when the server crashes and has to rebuild, it doesn’t just grind to a halt.
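for reference, the ping side of these interfaces is about as simple as it gets: the standard weblogUpdates.ping xml-rpc call with a weblog name and url. a sketch using libcurl, where the endpoint url and the name/url values are placeholders to substitute with whatever the service publishes:

    /* ping.c -- sketch of the standard weblogUpdates.ping xml-rpc call.
     * build with: cc ping.c -lcurl */

    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
        const char *body =
            "<?xml version=\"1.0\"?>"
            "<methodCall>"
            "<methodName>weblogUpdates.ping</methodName>"
            "<params>"
            "<param><value><string>my weblog</string></value></param>"
            "<param><value><string>http://example.com/blog/</string></value></param>"
            "</params>"
            "</methodCall>";

        curl_global_init(CURL_GLOBAL_ALL);
        CURL *curl = curl_easy_init();
        struct curl_slist *hdrs = curl_slist_append(NULL, "Content-Type: text/xml");

        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/RPC2");  /* placeholder */
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

        CURLcode rc = curl_easy_perform(curl);   /* response is the usual
                                                    flerror/message struct */
        if (rc != CURLE_OK)
            fprintf(stderr, "ping failed: %s\n", curl_easy_strerror(rc));

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }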
one thing jeremy noted about web 2.0 is that he plugged feedmesh at the “dialing on the app tone” session. too bad the effort seems stuck. no other sites appear to have moved any further along on sharing the ping data that they are currently receiving. the only change is that i have introduced a stream of changes that nobody is using.
decentralized web(site|log) update notifications and content distribution
this is something that has been on my mind lately, and i hope to talk about it with smart people this weekend. (“the first rule is...”)
in a bit of interesting timing, this little software company in redmond recently hit the wall in dealing with feeding rss to zillions of clients on one of their sites.
in preparation, i’ve been digging into info on some of the p2p frameworks out there. the most promising thing i’ve come across is scribe. the disappointing thing (for me) is that it is built with java, which limits my ability to play with it.
while it would be tempting to think merely about update notifications, that just doesn’t go far enough. even if you eliminated all of the polling that rss and atom aggregation clients did, you would have just traded it for a thundering-herd problem when a notification was sent out. (this is the problem that shrook’s distributed checking has, aside from the distribution of notifications not being distributed.)
the atom-syntax list has a long thread on the issue of bandwidth consumption of rss/atom feeds, and bob wyman is clearly exploring some of the same edges of the space as me.
maybe it’s useful to sketch out a scenario of how i envision this working: i decide to track a site like boing boing, so i subscribe to it using my aggregation client. when it subscribes, it gets a public key (probably something i fetch from their server, perhaps embedded in the rss/atom feed). my client then hooks into the notification-and-content-distribution-network-in-the-sky, and says “hey, give me updates about boingboing”. later, the fine folks at boing boing (or xeni) post something, and because they’re using fancy new software that supports this mythical decentralized distribution system, it pushes the entry into the cloud. the update circulates through the cloud, reaching me in a nice ln(n) sort of way. my client then checks that the signature actually matches the public key i got earlier, and goes ahead and displays the content to me, fresh from the oven.
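the signature check at the end of that scenario is the only piece that needs real crypto plumbing. here’s a rough sketch of what the client side might look like, assuming the public key was saved as a pem file at subscribe time and a detached signature travels with the entry. the key format and hash choice are my assumptions for illustration, not part of any spec:

    /* verify.c -- accept an entry from the cloud only if its signature
     * matches the publisher's public key.
     * build with: cc verify.c -lcrypto */

    #include <openssl/evp.h>
    #include <openssl/pem.h>
    #include <stdio.h>

    /* returns 1 if the entry really came from the holder of the key */
    int entry_is_authentic(EVP_PKEY *pub,
                           const unsigned char *entry, size_t entry_len,
                           const unsigned char *sig, size_t sig_len)
    {
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        int ok = 0;

        if (EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pub) == 1 &&
            EVP_DigestVerifyUpdate(ctx, entry, entry_len) == 1 &&
            EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1)
            ok = 1;

        EVP_MD_CTX_free(ctx);
        return ok;
    }

    int main(void)
    {
        /* public key saved when the feed was subscribed to (path is illustrative) */
        FILE *fp = fopen("boingboing.pub.pem", "r");
        if (!fp) { perror("public key"); return 1; }
        EVP_PKEY *pub = PEM_read_PUBKEY(fp, NULL, NULL, NULL);
        fclose(fp);

        /* in real life these come off the wire with the cloud notification */
        unsigned char entry[] = "<entry>...</entry>";
        unsigned char sig[256] = {0};
        size_t sig_len = sizeof sig;

        if (entry_is_authentic(pub, entry, sizeof entry - 1, sig, sig_len))
            puts("signature checks out, display the entry");
        else
            puts("signature mismatch, ignore it");

        EVP_PKEY_free(pub);
        return 0;
    }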
another scenario: now i subscribe to jeremy zawodny’s blog, but he has been slow to update his weblog software (in my hypothetical scenario) because he’s too busy learning how to fly airplanes, so i don’t get updates pushed whenever he publishes. but there are enough other readers running this cloud-enabled aggregation software that when they decide they haven’t seen an update recently, they go ahead and poll his site. when they notice an update, they inject it into the cloud. or they even notify the cloud that there hasn’t been an update.
obviously that second situation is much less ideal: there’s no signature, so some bozo could start injecting “postgresql is great!” entries into the jeremy zawodny feed space. or someone could just feed “nothing changed” messages into the cloud, resulting in updates not getting noticed. the latter is fairly easy to deal with (add a bit of fuzzy logic there, where clients sometimes decide to check for themselves even when they’ve been told nothing is new), but i’m not so sure about the forgery problem in the absence of some sort of signing mechanism.
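the fuzzy logic part is almost trivial; something like this, where the base rate is pulled out of thin air:

    /* sketch of the "fuzzy" part: even when the cloud says nothing is new,
     * occasionally poll the site directly so forged no-op messages can only
     * delay updates, not suppress them.  the 10% base rate is arbitrary. */

    #include <stdlib.h>
    #include <stdio.h>
    #include <time.h>

    /* returns 1 if the client should poll the origin site anyway */
    int should_double_check(double suspicion)
    {
        double p = 0.10 + suspicion;      /* base rate plus whatever heuristic
                                             bumps it (feed quiet too long, etc.) */
        if (p > 1.0) p = 1.0;
        return (double)rand() / RAND_MAX < p;
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        /* a client just told "nothing changed" for a feed it finds suspiciously quiet */
        if (should_double_check(0.25))
            puts("poll the origin site anyway");
        else
            puts("trust the cloud this time");
        return 0;
    }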
in addition to notification, a nice feature for this cloud to have would be caching. that way when i wake up my machine in the morning, the updates i’ve missed can stream in from the network of peers who have been awake, and i don’t have to bother the original sites.
i don’t think there is going to be a quick and easy solution to this, but i hope to aid in the bootstrapping. if nothing else, blo.gs can certainly gateway what it knows about blog updates into whatever system materializes. (it certainly can’t scale any worse than the existing cloud interface, which is pretty inefficient given the rate that pings are coming in.)
a footnote on the signing mechanism: there’s the xml-signature syntax and processing specification that covers this. i haven’t really looked at it in detail to know what parts of the problem it solves or does not solve.
(anybody who suggests bittorrent as a key component of the solution will have to work much harder to get a passing grade.)