Tuesday, December 16, 2008
What if I were wrong about edge-caching?
Nicholas Thompson at Wired Blog sums up yesterday's Wall Street Journal piece on Google. To summarize his summary:
- Google's edge caching isn't new or evil
- Lessig didn't shift gears on NN
- Microsoft and Yahoo have been off the NN bandwagon since 2006
- The Obama team still supports NN
- Amazon's Kindle support is consistent with its NN support
So I've been assuming that the edge cache that Whitt is describing is at the edge of the Internet, and that it is connected to "the cloud" just like any server or customer. Whitt as much as says that this is what Google is doing.
But what if there's more going on?
What if Google were attempting to put a server in, say, a head end or a central office in such a way that it faced the local customers connected to that head end or central office? In this case, Google would be in a very privileged position. It would be communicating with the cableco's or telco's customers NOT via the Internet, but via a single wire, the Last Mile.
The ability to put a caching server this close to the customer is powerful, especially with a fiber or VDSL or DOCSIS 3 distribution network. There are no bottlenecks between cache and customer. The content arrives NOT from the Internet but straight from the provider.
The advantage to the telco or cableco is that the incremental costs of Internet traffic to its end-user customers would be lower. A popular video would not need to travel over the Internet every time one of the local customers attached to the central office or head end requested it. Instead, it would be sent over the Internet to the cache once, then distributed many times to the customers connected to that central office or head end.
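(To make the fetch-once, serve-many economics concrete, here is a minimal Python sketch. The origin URL is hypothetical, and a real head-end cache would add TTLs, eviction, range requests, and so on; this just shows the pattern.)

    # Minimal sketch of the fetch-once, serve-many pattern: the first local
    # request pulls the video across the Internet; every later request for
    # the same video is served from the head end over the Last Mile only.
    from urllib.request import urlopen

    class EdgeCache:
        def __init__(self):
            self._store = {}  # url -> bytes, held at the head end / central office

        def get(self, url):
            if url not in self._store:
                with urlopen(url) as resp:      # one trip over the Internet
                    self._store[url] = resp.read()
            return self._store[url]             # subsequent hits: Last Mile only

    cache = EdgeCache()
    video = "http://origin.example.com/video/123"  # hypothetical origin
    first = cache.get(video)   # travels the Internet once
    again = cache.get(video)   # served straight from the local cache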
The disadvantage would be that the telco or cableco that owned the central office or head end would need to share a putatively proprietary advantage. It might be risking losing that proprietary advantage altogether.
This would be dangerous, especially to a cableco.
Does a cableco or telco have a duty to let a co-locator into its head end or central office? Yes for telcos under the Telecommunications Act of 1996, but the whole notion of unbundled elements has been so trashed that I don't know where it stands. And the situation is even murkier for cablecos. Of course any of this could change, depending on how issues like structural separation (between infrastructure and applications) play out.
Would such an arrangement be a Net Neutrality issue? Hmmm. Yes, if the pipes connecting the cache to the Internet had special properties; yes, if the telco/cableco dictated what kinds of apps could be cached; and yes, if the telco/cableco wouldn't let other players co-locate caching servers in central offices and head ends. But my feeling is that once the content is cached and served out to the customer at the other end of the Last Mile, it's video, not the Internet.
I am not sure what I think here. It's a plausible scenario, but hypothetical. I invite the readers of this post to think this through with me . . .
Technorati Tags: Google, NetworkNeutrality, video, WSJ
Comments:
Here's what Rick Whitt told me about this system yesterday: "The program involving cache servers within the broadband network is still only in beta, and (as of now) has no takers in the U.S. In my view, access to central offices (or cable headends) is not an NN issue, but should be a Title II functionality (regardless of what the FCC now says) governed by traditional common carriage principles, as well as any TA of '96 CLEC-style requirements."
The plan is for Google to install equipment that will bypass the public Internet and ensure that their content, e.g. YouTube, will be delivered to ISP customers faster and more reliably than competing services such as Netflix Instant Watch that depend on the public Internet for delivery. If there weren't a performance advantage, there would be no reason to do this.
Whether this violates NN depends on whose definition you take, and from what era. Current Lessig says it's fine as long as any (rich) company has access to the CO, but historical Lessig said such arrangements ("access tiering") are not fine because only a few large players can enjoy their benefits.
It seems to me that this system bundles expedited delivery with proprietary content, which I see as a benefit to consumers, but which the vast majority of NN supporters have historically opposed. I'd love to see Netflix's reaction to this situation.
To me, this deal illustrates the trouble with regulatory debates based on vague neologisms with no generally agreed-upon meaning.
And it also illustrates something about Stupid Networks: we love stupidity at the IP layer because it makes so many things possible. But stupid networks are inefficient ones, so they're an invitation to optimize for speed and cost. Each form of optimization makes it harder for the little guy to compete, but each also makes certain things work better.
I'd hate to see too many rules about good and bad optimizations, because they'd be a mess of special cases. So that's why I stay away from regulating router behavior and stick to the disclosure and anti-trust angles. I suggest other friends of the Internet do likewise.
David, I think your initial assumption was incorrect and Google is in fact aiming to colocate the servers. Google's Public Policy Blog says the following:
Google has offered to "colocate" caching servers within broadband providers' own facilities...
In other words, as you put it, Google is aiming to place servers in head/central offices in such a way that the servers face the local customers of the colocation ISPs. This would put Google in "a very privileged position... communicating with the cableco's or telco's customers NOT via the Internet, but via a single wire, the Last Mile."
This may not raise NN concerns in the strictest sense (since Google is sidestepping the Internet through the deal), but perhaps it raises them in broader constructions of NN, to the extent that less-powerful "content providers" have only the Internet by which to connect to customers. It would seem to raise a host of other anti-competitive issues, as well.
To me, the crucial distinction is whether the Google cache would be located (a) within striking distance of the Last Mile or (b) somewhere else at the edge of the Internet cloud (like Akamai and Amazon).
I agree with "BubbaDude" that regulating activities such as Internet routing and bandwidth management is a can of worms we don't want to open. (So is trying to define "network neutrality," which each proponent, including Google, defines in the way that serves its own self-interest.)
The only rational sort of regulation, and the only regulation that would actually help consumers in any way, is to prohibit anticompetitive behavior (thus ensuring consumer choice) and require disclosure of business practices so that consumers can make informed choices. See my document at http://www.brettglass.com/principles.pdf for my specific proposals. Note that Google's placement of servers at ISPs might be considered anticompetitive if ISPs did not allow others to co-locate servers or if Google was charged a lower price than others who wished to co-locate.
Akamai already places some of their systems in ISP COs, and that's what Google wants to do as well. They would presumably be connected to the Last Mile through an Ethernet switch, but it could be a Layer 3 IP switch. It's not a full-function IS-IS router; it's an "edge router" or switch.
Richard Bennett
David,
Brent and BubbaDude have it right. This is the foundation of Akamai's original business model, and their architecture puts many, many servers as close to the edge as possible (vs. Limelight, which has fewer, larger servers farther away).
This is in fact what I assumed Google was doing once I heard "edge caching".
I should have added that Akamai, at least, is most definitely within striking distance of the last mile. Always has been.
I don't necessarily view this as a problem, since Akamai is a distributor not a content generator. This is consistent with horizontality/structural separation.
Tim Lee at Freedom to Tinker offers a pretty good description of edge caching.
http://www.freedom-to-tinker.com/blog/tblee/journal-misunderstands-content-delivery-networks
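For readers who want the mechanics, here is a toy Python sketch of the request-routing step a CDN performs: map each client to its nearest edge cache. Every name and address prefix in it is invented for illustration; real CDNs use BGP data, latency measurements, and DNS-based redirection.

    import ipaddress

    # Invented prefix table: which clients are "behind" which edge cache.
    EDGE_CACHES = {
        "headend-denver": ipaddress.ip_network("73.0.0.0/8"),
        "co-cheyenne": ipaddress.ip_network("65.100.0.0/16"),
    }

    def pick_cache(client_ip):
        addr = ipaddress.ip_address(client_ip)
        for name, prefix in EDGE_CACHES.items():
            if addr in prefix:
                return name             # serve from the cache nearest this client
        return "origin-datacenter"      # fall back to the origin over the Internet

    print(pick_cache("73.14.22.9"))     # -> headend-denver
    print(pick_cache("192.0.2.7"))      # -> origin-datacenter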
The real problem here, as mentioned by someone in an earlier thread, is that Google has a tremendous amount of economic power that it can leverage.
Let's suppose that Google comes to me, an ISP, and says, "We want you to host our application server for free, because we're Google and hosting our server will save you a lot of bandwidth." Realizing that they're right (and because our bandwidth costs really are very high), we waive our usual fees for hosting and rack space. (We don't really have a choice; after all, not having the Google server in house would put us at a disadvantage relative to our competitors and cost us a lot in backbone bandwidth. We'd likely host Google's server even if we weren't normally in the hosting business.)
But then, some smaller content providers -- maybe startups -- come along and also want to place servers with us. What's the fair thing to do? What is it reasonable to expect us, especially as a small company, to do? We want to be fair, but hosting servers costs money. We can't give away all of our rack space and electricity for free, or have the owners of dozens of third party servers tromping through our machine room at odd hours to maintain them; we're not set up for that. And what do we do about liability if there's a fire or something else damages their servers? What if they have proprietary software on their servers that we have to protect from corporate espionage? Do we have to invest in a "locked down" room with video surveillance, etc.? You can see how this opens a whole can of worms -- all due to Google's immense clout, market power, and bandwidth consumption.
To me, this smells like tilting at a windmill. Akamai is already at last mile providers' facilities, and will cache any content a customer wants in exchange for a business deal. Presumably Google looked at the existing options and concluded there are efficiencies in doing the same themselves, and/or that the existing options didn't meet their platform requirements. Rather than regulation, it seems to me that perhaps there is a business case and a start-up opportunity here: set up co-location facilities that provide very high speed local connectivity to last mile providers' networks. Let the market decide, in other words. Regulation, if you follow what's going on with AWS-3 as an example, can easily drag in a whole political ideology, which would arguably be worse than any perceived threat of NN abuse. Or at the very best, regulation starts out looking the way you want before all the political compromise, ideology, and lobbying mutate the rules into a form that serves no one's interest.
Brett: you might have a published traffic threshold in place. When a provider meets the threshold, then discussion can begin. Alternatively, a last mile provider could work with a co-lo facility to get a high speed link in place, then let the co-lo deal with all the physical access issues. Seems like a solvable problem at the cost of some planning.
Rick, a "published traffic threshold" means discriminating against startups and small businesses... in a "chicken and egg" way. (If they can't get the acceleration, will they ever develop the traffic?) Is this good policy?
Brett: that's the nature of the market; an operator who has achieved scale can avail itself of efficiencies that might otherwise be unavailable to a smaller operator. To me, it doesn't make any sense at all to ask the government to get into the business of choosing business models for us. This may well mean that once a "YouTube"-type idea has been "done," the cost of entry becomes so prohibitive that any start-up whose feature set is too similar to the incumbent's to differentiate itself (in a marketing and finance sense) is doomed -- doomed either to never getting launched, or to having to sell the idea to the incumbent. But the situation isn't so dire as that: what this boils down to, IMO, is that a clever technologist will solve for any artificial barriers to entry and find a way to overlay. This amounts to a form of Darwinism for those not clever enough to route around entry problems ;-)
Anyways, again, use of Akamai is only a deal away, so getting content closer to a customer is really a non-problem. It may well be that some platform-of-choice compromises have to be made, but this seems to me to be a solvable technical problem.
So, what you're saying is that barriers to entry don't matter to you. I suppose that's good, in a way, because it means that you must consider similar arguments in favor of "network neutrality" regulation to be bogus as well.
It may sound like I'm dismissing barriers to entry, but I do believe that an open Internet fosters innovation. What I'm not a fan of at all is regulation. The other side of the argument is that we rely on private investment to deliver the bits from point A to point B. There has to be some reasonable balance between NN needs and the needs of those making the decisions to build the networks in a profitable way. The risk is the obvious one: that we end up with a set of laws that really are anti-business, and thereby scare off future investment. A really slow dialup-speed network that was totally open in the NN sense isn't my idea of heaven either.
David,
Thanks for your thoughtful commentary on this issue.
As we described in our post, Google places caching servers at the edge of our network. Generally, the servers are in our data centers and POPs, and we then peer with broadband providers or pay for connectivity to them.
As part of Global Cache, Google offers to provide caching servers to broadband providers to place in their facilities. We leave the placement of the cache up to the ISP, and the ISP can select the best location for the cache based on its own network architecture and space constraints.
We agree that broadband providers should not offer services like colocation in a discriminatory fashion. If broadband providers were to leverage their unilateral control over consumers' connections and offer colocation or caching services in an anti-competitive fashion, that would threaten the open Internet and the innovation it enables.
All of Google's colocation agreements with ISPs are non-exclusive, meaning any other entity could employ similar arrangements. Also, none of them require (or encourage) that Google traffic be treated with higher priority than other traffic.
How interesting.
When net neutrality became a big public issue in 2006, advocates were upset about two statements from telco CEOs: Ed Whitacre saying Google and Vonage couldn't use his pipes for free, and Bill Smith saying that BellSouth would like to offer an acceleration service. The latter one in particular created the firestorm, as Stupid Networks people said "wait a minute, you can't go around privileging some deep-pockets player on the Internet and sending everyone else down a dirt road." I'm sure we all remember that.
Now here comes Google offering to pay ISPs to put their servers a thousand feet from the consumer, and the NN advocates are OK with it. I'm amazed.
Mr. Slater can say "[Google doesn't] require (or encourage) that Google traffic be treated with higher priority than other traffic." Take a close look at that statement and weigh it for integrity and deception.
Internet engineers know that the mere fact that these servers are located inside ISP facilities causes their traffic to "be treated with higher priority than other traffic." Google's traffic goes right from the disk drive to the ISP first mile without having to compete with all the traffic on the public Internet. Location and bandwidth dictate Quality of Service on the Internet, and this arrangement optimizes both for Google. No additional boosting is needed.
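To put rough, purely illustrative numbers on that point (the TCP window size and round-trip times below are assumptions, not measurements): with a fixed window, throughput is capped at roughly window / RTT, so shortening the round trip raises the ceiling all by itself.

    # Back-of-the-envelope numbers for "location dictates Quality of Service":
    # with a fixed TCP window, throughput <= window / RTT, so moving the server
    # from across the Internet to inside the ISP facility raises the cap.
    WINDOW_BYTES = 64 * 1024  # a classic 64 KB TCP receive window (assumed)

    for label, rtt in [("across the public Internet", 0.080),
                       ("cache inside the ISP facility", 0.005)]:
        mbps = WINDOW_BYTES * 8 / rtt / 1e6
        print(f"{label}: ~{mbps:.0f} Mbit/s ceiling at {rtt * 1000:.0f} ms RTT")
    # across the public Internet: ~7 Mbit/s ceiling at 80 ms RTT
    # cache inside the ISP facility: ~105 Mbit/s ceiling at 5 ms RTT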
So here we have Google doing a 180 on their public interest fans and encouraging ISPs to sell a fast lane service to them.
Hats off to Google's spin doctors, this takes the cake.