"co-ed" is defined as (1) attended by members of both sexes, or (2) a female student at a coeducational college or university. This second definition illustrates a troubling feature of activism: the terminology captures precisely the inequality that is meant to be defeated. I am a student at a coeducational university, yet I am not a co-ed. Only women can be co-eds. Usage reflects history. The same goes for race and gender history and philosophy. If I say that I am studying gendered history of science, that connotes (to some) that I am studying women in the history of science. The fact is, I study mostly men in the history of science. That doesn't make gender history a less relevant approach; if anything, it makes it more necessary. Saying "gender" implies "women" for specific historical reasons: gender history arose as part of a larger movement to understand history as more than the story of elites (who were typically rich white men). In the early years of such a movement, that means a lot of topics on The Role of [a non-elite group] in the History of [whatever]. These topics are the low-hanging fruit--they aren't necessarily easy to do (elites tend to keep better records than non-elites), but they are easy to think up. What's harder to do (because of the imbalance in source materials, among other things) is to create a balanced account.
sci‧ence [sayh-UHns] n: the study of deviant behavior; why things are not as we expect them to be.
what does that make the philosophy of science?
Tuesday, 19 December 2006
a mighty wind
I take it as a given that there can never be a complete history of anything. There's just too much out there to talk about; everything is eventually related to everything else and too many people are involved. Nevertheless, the goal of history, I take it, is to come to some approximation of completeness. The fundamental contribution of the past few generations was to realize that history shouldn't just be about elites. But then, what should it be about? It can't be "just" anything--not just men, not just women, not just Europeans, not just non-Europeans, not just the oppressed, not just the oppressors, and so on. Some general organizing principles have been suggested. Perhaps history is primarily about power differentials, for example.
If I were to offer my own single organizing principle for history, it might go something like this: history is about tipping points; how accretions of individual actions eventuate in global change. Since I study science, it's about how various scientific modes or practices or ideas become dominant and change and get replaced. But isn't this a study of elites? Ideas that catch on, rather than ideas that get discarded? I'd like to think that an appropriate study of any given episode in the history of science contextualizes the notions that eventually dominate within the sea of ideas that don't. In other words, it tells the story of the elite, but does so by explicitly examining the nature of its elite-hood. Newton's gravitational theory was instantly credible because it comported with observational data. At the same time, it was mysterious, because it didn't seem to explain anything, at least under the then-current notion of explanation. The story of science in the eighteenth century is the story of how Newton's theories grow to dominate natural philosophy. That's not the only story, of course, but if history is like a bunch of air molecules moving around separately, then Newtonianism is the gust of wind they inhabit.
Sunday, 17 December 2006
aggregation or aggravation?
As an undergraduate, I first articulated the question that still drives much of my research: how does the way we think affect what we know? The transition from mytho-poeic to proto-scientific explanation in Thales still floats at the back of my mind as the paradigmatic example underlying the significance of such a change. But my interest is not historical; it's personal. How can I change my mind? If I succeed, how do I understand the difference? Can I switch back and forth? How do I communicate a novel idea to someone else? Will they understand it in the same way I do? How do new ideas catch on? I was just then becoming enthralled with chaos theory and fuzzy logic, and I was convinced that these ideas stood outside of our usual ways of thinking. Would it be possible to internalize these ideas? How would it change the way we saw the world? What problems would loom larger, and which ones would disappear into the background?
Philosophy of science has similar aims. Kuhn's Structure of Scientific Revolutions is paradigmatic (hah!) of the philosophy of science. It's about a collective venture called science, and as a result begins to sound hollow and false the more I know about any given episode in the history of science. Structure is about aggregate behavior; it describes the causes and mechanisms of scientific change. In the opening pages, Kuhn says:
[A paradigm is] sufficiently unprecedented to attract an enduring group of adherents away from competing modes of scientific activity. Simultaneously, [a paradigm is] sufficiently open-ended to leave all sorts of problems for the redefined group of practitioners to resolve.

When I was writing my undergraduate thesis, I used the word "schema" to refer to a similar notion, but I distinguished conceptual from descriptive schemas. Conceptual schemas are mental objects we use in the manner of shorthand to organize our thoughts about the world. Descriptive schemas are conceptual schemas with an additional social component. We treat descriptive schemas in much the way we treat language--they stand in for ideas about the world, and we usually assume that translation is perfect. I made the distinction out of an unarticulated discomfort at the social-mental interaction. I was trying to walk the tightrope between coming up with a notion that is true (but too complicated to state) and a notion that is simple (but false enough to collapse under scrutiny).
I wanted to be able to talk about the two roles of a schema (mental and social) separately, to distinguish how individuals can change schemas and show how science as a collective venture can do the same thing. This was the piece that was missing from my first (second-hand) introduction to Kuhn--a clear understanding of what a paradigm is for an individual scientist. It's been close to a decade since that first introduction, and my views have surely grown more sophisticated generally, yet this same tension continues to frustrate me. I'm still trying to reconcile the psychology of theory change with the sociology of theory change. I'm still trying to reconcile the specific trickles of history with a workable general idea of science.
It strikes me that this problem is exactly the problem with Bob Batterman's two different explications of breaking behavior. One is detailed, contingent, and right; the other is idealized, universal, and explanatory. How can one description be right and not be explanatory? And how can the other be explanatory without also being right?
Thursday, 14 December 2006
the more things change...
Every branch of science I’ve examined has an equilibrium principle. This is the principle that says, essentially, if nothing changes, then nothing will change. It sounds trite, but it amounts to the claim that when things stay the same, they don’t require explanation. It’s only when things change that we need to start paying attention. Inertia is an equilibrium principle, and applied forces explain deviations from inertial motion. Inertial motion itself doesn't require an explanation--it's axiomatic. This is what I mean when I say that science is the study of deviant behavior—it’s all about how things change.
Power laws as emergent behavior
Power laws describe relationships that scale according to a rule of the form y = x^k. The party game 6 degrees of Kevin Bacon works on the basis of this principle--some actors have been in only a few movies or co-starred with only a few other actors. Others, like Kevin Bacon, have been in a lot of movies and co-starred with a lot of other actors. There are many more people with few connections, but only a few people with many connections. In fact, the distribution of actors by number of connections follows a power law.
One intriguing feature of power laws is that they emerge automatically from random connections. Imagine a board with N nails sticking out of it and K strands of yarn tied between various pairs of nails. We can count the number of strands tied to each nail. If the pairs of nails are selected randomly, it will just happen that some nails get selected more often than others, and the number of connections per nail can follow a power law. Suppose there are 128 nails. Then perhaps 64 have just 1 connection, 32 have 2, 16 have 4, 8 have 8, 4 have 16, 2 have 32, and 1 lucky nail has 64 connections! The point is that there's nothing magical or mysterious about power scaling; it emerges naturally. It says something genuine and interesting about the sort of phenomenon you're examining, and functions as an "explanation" of sorts, but it's an unusual explanation: it's actually an equilibrium condition, an assertion that this behavior is normal and doesn't require detailed explanation.
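(If you want to play with the nail-and-yarn picture, here is a minimal Python sketch, my own toy rather than anything rigorous. One assumption worth flagging: with purely uniform random pairing the counts bunch up near the average, so the sketch optionally biases the selection toward nails that already have strands, a "rich get richer" rule, which is the usual way such heavy-tailed counts arise.)

import random
from collections import Counter

def yarn_board(n_nails=128, n_strands=224, bias=True, seed=0):
    # Tie n_strands strands of yarn between pairs of nails and tally how many
    # strands end up attached to each nail. With bias=True, nails that already
    # have strands are proportionally more likely to be picked again (a
    # "rich get richer" rule); with bias=False, each pair is chosen uniformly.
    rng = random.Random(seed)
    weight = [1] * n_nails                  # start each nail at weight 1 so all can be picked
    for _ in range(n_strands):
        if bias:
            a = rng.choices(range(n_nails), weights=weight, k=1)[0]
            b = rng.choices(range(n_nails), weights=weight, k=1)[0]
            while b == a:
                b = rng.choices(range(n_nails), weights=weight, k=1)[0]
        else:
            a, b = rng.sample(range(n_nails), 2)
        weight[a] += 1
        weight[b] += 1
    return Counter(w - 1 for w in weight)   # connections per nail -> number of such nails

# How many nails ended up with 0, 1, 2, ... connections?
for connections, nails in sorted(yarn_board().items()):
    print(f"{nails:3d} nails with {connections} connection(s)")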
Devil in the Details - Robert Batterman's odd notion
Bob Batterman gave a talk a week ago, and I've been meaning to say something about it. Here's the abstract:
This paper discusses the nature and role of idealizations in mathematical models and simulations. In particular, it argues that sometimes idealizations are explanatorily essential--that without them, a full understanding of the phenomenon of interest cannot be achieved. Several examples are considered in some detail.

Bob says that the traditional philosophy behind models is that an idealization is justified when the behavior of a "complete" model (say, a molecular dynamics model) converges on the idealized model. This is a natural idea; essentially, the idealization captures some pattern that does exist in the complete model but is perhaps too complicated or too subtle to notice.
As usual, things turn out to be more complicated than we philosophers would like. Some idealizations do not reflect convergence behavior in the complete model, but Bob argues that they are nevertheless genuinely explanatory. An example (one of Bob's) may help.
If we compare two models of a pole breaking under strain (one from molecular dynamics, the other with a continuum idealization), we find that breakage in the continuum model comes from a singularity. On the other hand, in the molecular models, breaks arise from the contingent details of the system's evolution. There is no convergence on singularity. In either model, the pole breaks under the increasing strain, but the continuum model wipes out precisely the detailed initial conditions on which the molecular model depends. Batterman says that the imperfections in the molecular system are explanatory of single events, but not of the class of breaking behavior. Singularities in continuum models fill that explanatory role. How can two fundamentally different kinds of explanations still count as genuine explanations?
Monday, 4 December 2006
environmentalism
James Lovelock, one of my favorite eccentrics, is at it again:
Lovelock's most compelling point is his critique of environmentalism as a new urban religion, composed of elitism and a misplaced longing for a simpler life mixed in with a neo-Luddite fear of technology. The greens, and he still claims to be one, proffer the "illusion that if the whole earth was farmed organically all would be well." --from Scientific American

I agree with the sentiment. Not that I’m not an adherent of this “urban religion” myself, minus the neo-Luddite portion. It’s just that much of my present unease with the environmental movement stems from the conflict between the overwhelming number of green choices I can make every day and the limited time and resources I have to make them. There are some sacrifices I would be quite unwilling to make to green the planet—giving up my computer, for example. It is conceivable that I will put off replacing my current model for an additional year. My five-year-old titanium PowerBook still runs great; it has more power than I need (adventures with Mathematica notwithstanding), and besides, it's quite distinctive now that no one else has one (and since I scraped all the white paint off). Unfortunately, restarting is becoming a bit nerve-wracking--it often takes several tries and several minutes. I’m starting to eye other computers these days, and even Windows laptops--particularly tablets--are looking mighty tasty. Once Leopard is out... I'm not sure I'll be able to help myself.
My point, back before visions of MacBooks began dancing in my head, was that there are some elements of my lifestyle that are likely here to stay. One of them is my addiction to rare-earth-element-containing electronics. Stipulating this, and also allowing for my tiny financial resources, I still face a large number of green tradeoffs every day. It’s great that I am made aware of a lot of them without much effort on my part, but it’s still hard to choose which ones I should focus on.
A jingle from my childhood goes, “brown eggs are local eggs, and local eggs are fresh!” I’m not sure what color has to do with locality, or whether the same holds true for a major urban area like Toronto, but I am certainly aware of the “eat local” movement. The argument is that the environmental cost of bringing apples from New Zealand is high enough that we should just give up on apples in the off-season and stick to local orchards and farmer’s markets for our apples. How can we possibly evaluate the environmental cost of an apple? It’s a massive systemic issue leaking into nearly any imaginable area. Large-scale efficiencies of agribusiness are eliminated when we rely on farmer’s markets. Local blights are magnified rather than absorbed. I might use more fossil fuels driving my car (if I had one) to the farmer’s market for my four apples and three tomatoes than is expended on all the ships and trains and trucks that bring bushels into supermarkets (I’m missing a citation, here—I know I read this somewhere, I just don’t know where). Certainly transportation is just one cost to consider. There are similar tradeoffs associated with globalized markets, monocultures, fertilizers, pesticides, recycling, public transit, and anything else you might think of. Which are really better: paper bags, plastic ones, or those fabric types that greens reuse every trip? Paper bags are biodegradable, possibly recyclable, and certainly renewable. Plastic bags stick around for thousands of years except where recyclable, and they are decidedly not renewable. Fabric bags are reusable, biodegradable, and renewable. Seems like we have a winner... but we haven’t yet considered manufacturing costs. All three require substantial amounts of water, with attendant heat pollution and trace chemical pollution. It's too much work to find out the answers for all such questions given the vast number of products I touch or consume every day.
And so, I am mistrustful of pat answers, or of absolutes of any kind. Environmentalists who favor one-issue solutions are just as guilty as their opponents of separating human beings from our environment--they just want to replace one "unnatural" system with another--one that satisfies a certain green aesthetic.
hard science is "easy" science
Language is sneaky. Words have connotations and origins of which we are unconscious. One such: “hard science.” Most people would probably agree that “hard science” means at least physics and chemistry. Most people would probably also agree that “hard science” does not include the “social sciences”—economics and sociology. Opinions vary on which other sciences to include where, and if pressed, people might sort the sub-disciplines of a “single” branch of science into opposite camps (cellular biology versus population biology, say, or cognitive neuroscience versus psychoanalysis). There are reasons for the divide, I suppose. Hard science is supposed to be quantitative, while the other kind (and here the lurking connotations appear: do we really want to call population biology “soft”? Or “easy”?) is qualitative. But is it really? Actually, the Other sciences tend to be at least as highly mathematized as Hard science. Perhaps the relevant distinction is the firmness of the entities under investigation—electrons are easier to pin down than populations because electrons have definite behaviors which are essential to their electron-hood and populations have behaviors that are incidental to their being in a population. But doesn’t that make Hard science easier than Other science?
geekvision: seeing the world as information
I don’t know when it became commonplace to see the world as information. I’m sure the idea predates computers, probably by a few millennia. But I think it has only become popular—wildly popular—recently, maybe as recently as just the past few decades. Seeing the world as information is still not a majority thing; it’s a quirk reserved for the qwerty crowd: the geeks (nerds, not so much. Just the geeks). Back in the early 1800s, there was a vogue of information gathering beyond anything seen before—this was the age of the rise of the actuarial table, the science of statistics and of sociology. The scientist D’Alembert, upon entering a room on a visit abroad, would whip out a pocket rule with which to gather measurements of paintings (subject and artist were apparently irrelevant). This is not the sort of “everything is information” I’m talking about, though. I mean that geeks today use the very concept of information as their primary metaphor for describing the world. I suspect that attempting to describe this concept of information will necessarily fail, and indeed will be misleading, because my description would be necessarily formal (I would cite Shannon, Church, Turing, and Gödel), while the idea is purely intuitive and possibly not conscious. Oh, sometimes the concept floats to the surface—the study of heritable traits is about decoding DNA, finding out the information hidden in the genes. Indeed, this aspect of biology is so emphasized that most of us ignore development and morphology. But in general, I think, the information conceptual schema stays in the background.
Nevertheless, it has had a significant impact on our way of life. This is in part because of who geeks are and what they do. Geeks have had a pretty successful couple of decades lately. They’re rich and powerful even if they’re not obvious about it (though they’re getting pretty obvious now that they can replace their model space ships with real ones).
torrent morality
Morality and etiquette are not coincident. This thought occurred to me while I was thinking about seeding ratios in unofficial bittorrent clients. Unofficial clients extend the official client by adding new features or, in the case of seeding ratios, by giving users control over an otherwise hidden aspect of the transfer protocol. This is where etiquette enters the picture.
The unofficial client I use allows me to set a seeding ratio. The default was 1:1, which means that the client uploads as much as it downloads—in other words, I give as much as I get, making my net impact to the system negligible. 1:1 is the setting that was programmed into the original client, and it remains the setting that etiquette dictates. Folks who set their seeding ratio low (especially those who don’t seed at all) are known as parasites because they are a drain on the collective; they use up resources without providing their own resources as replacements.
For folks sharing home movies or public linux distributions, there’s no interesting moral problem with low seeding ratios—such individuals are either thoughtless, jerks, or—potentially—managing their upload resources to favor another file for which they are a seed. Where things get really murky is in the case of illegal file sharing. Here, there is an added element of risk pooling that seeders take on but which parasites do not. Although the RIAA in the US likes to go after anyone who downloads music illegally, the real evildoers are the uploaders, not the downloaders. Most laws reflect that distinction. Buying a pirated DVD, for example, is a much lesser crime (if it is a crime at all) than selling a pirated DVD. Similarly, downloading pirated music is a much lesser crime than uploading it. With seeding ratios set at 0:1, parasites are stealing music, but they’re not distributing stolen materials. The risk they share is far smaller than that of the honest music pirates who leave their seeding ratio at 1:1.
standing in line
Bittorrent is a good answer to a very basic problem in computer science: what is the most efficient way to deliver data from a server to some number of clients? The simplest solution is FIFO (first in, first out): simply have the N clients queue up, and the server sends the data to each in turn. Websites with only a small amount of traffic or small amounts of data can often use this model without incident, but increase the clients or the size of the package sufficiently and things can quickly get out of control.
In queuing theory, we assume that clients “arrive” at random after the original server makes a file available, so the number of arrivals in a unit of time nominally follows a Poisson distribution, P(k) = λ^k e^(-λ) / k!; the number of clients served per unit of time behaves the same way, with rate μ.
The real math is hard, but the math of averages is pretty easy. If λ is the average arrival rate and μ is the average service rate (note λ must be < μ, which we ensure by taking a large enough time slice so that all the clients have been served), then the traffic intensity is
ρ = λ / μ,
the average line length is
LQ = ρ / (1 - ρ),
and the average wait time is
TQ = LQ / λ = 1 / (μ - λ).
An example: suppose λ = 3 and μ = 4. Then ρ=3/4, LQ=3, and TQ=1. If we double μ to 8, ρ=3/8, LQ=3/5, and TQ=1/5. Add 4 again, μ = 12, ρ=1/4, LQ=1/3, and TQ=1/9. Add 4 again, μ = 16, ρ=3/16, LQ=3/13, and TQ=1/13.
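Just to check the arithmetic, here is a minimal Python sketch (the function name and output format are mine):

def queue_averages(lam, mu):
    # Averages for the single-server line described above; assumes lam < mu.
    rho = lam / mu              # traffic intensity
    lq = rho / (1 - rho)        # average line length
    tq = lq / lam               # average wait time, equivalently 1 / (mu - lam)
    return rho, lq, tq

for mu in (4, 8, 12, 16):
    rho, lq, tq = queue_averages(3, mu)
    print(f"mu = {mu:2d}: rho = {rho:.4f}, LQ = {lq:.4f}, TQ = {tq:.4f}")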
Okay, enough math—we all know what it’s like to be stuck in line. In the simple FIFO model, there are many clients and just one server—kind of like the lineup at a store with just one checkout lane. Traffic varies widely, and major pileups can occur. There are obvious solutions to this problem: multiple queues, like at the supermarket, or single lane, multiple cashiers, like at a bank. Bittorrent takes this idea one step further. It takes advantage of the fact that, in the computer world, digital packages can be split up into lots of smaller packages for convenient delivery to the client (the airlines are testing out this approach with luggage, but so far travelers don’t seem to appreciate the increased efficiency). Next, the server or “seed” sends out a different piece of the file to each of the clients, along with a tracker that allows the whole community of clients to know who has what pieces. Now the pieces are spread out amongst all of the participants in the torrent (the “swarm”), and if a given client is missing a given piece, it can now be found in several locations. Thus, every participant in the swarm is a “client” just for the pieces it doesn’t yet have, and a “server” for those it does. Before long, some of the clients have complete copies and become seeds. Some toy examples can show how well this system can work. Suppose that there are 32 clients. Suppose that each client can transfer (or receive) the file in 32 minutes.
In a FIFO situation, it takes 32 x 32 = 1024 minutes to serve all of the clients—one of whom gets the whole package in 32 minutes, and another of whom gets nothing until the 993rd minute! A very unfair situation. Even in a situation with mirrors (multiple servers), whether implemented as a supermarket or bank queue, we improve matters only by a factor of 1/#mirrors—that is, the improvement is a function of the number of mirrors, not the number of clients—because the clients don’t upload.
Suppose we improve matters by asking clients to become servers after they get the whole file. Then we have 1 copy in the first 32 minutes, or 2 servers at the start of the second session; 2 more copies, or 4 servers, after 64; 4 and 8 after 96; 8 and 16 after 128; 16 and 32 after 160; and the last client finishes at 192 minutes—a vast improvement. The number of copies roughly doubles each round, making the total time of order log N (in FIFO, it is of order N). Notice, though, that the clients are still spending most of their time waiting to get started—in the first 32 minutes, only 2 computers are active, in the second just 4. It’s not until the later rounds that the whole swarm is involved.
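Here is the same round-counting as a few lines of Python, a sketch of my own that assumes the seed and every finished client each serve one newcomer per 32-minute round:

def fifo_minutes(clients, transfer=32):
    # One client served at a time, start to finish.
    return clients * transfer

def doubling_minutes(clients, transfer=32):
    # Each round, the seed and every finished client each serve one newcomer.
    done, minutes = 0, 0
    while done < clients:
        done += min(1 + done, clients - done)
        minutes += transfer
    return minutes

print(fifo_minutes(32), doubling_minutes(32))   # 1024 minutes versus 192 minutes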
We solve this problem by cutting up the original file into 32 chunks, each of which can be transferred in just one minute. In the first minute, the server uploads one piece to one client. In the second minute, the server uploads a second piece to a second client while the first client transfers the first piece to another client. In the third minute, a third piece comes out, 3 have the first piece, and 2 have the second. In the tenth minute, a tenth piece goes out, 10 have #1, 9 have #2, 8 have #3, 7 have #4, and so on. In the thirty-second minute, the thirty-second piece goes out, and everyone has the first piece, one client is missing the second piece, two clients are missing the third, and so on. More importantly, everyone is fully involved in the torrent. In the fortieth minute, everyone has the first nine pieces, one client is missing the tenth, and 23 are missing the last piece. By the 64th minute, everyone has all the pieces.
There’s a lot of simplification in this model. I’ve neglected the processor time to work out the transfer orders and the bandwidth used up in negotiating new connections, I’ve idealized my system so that all the clients show up at the same time, the components have equivalent capabilities, and everyone plays nice, but it is nevertheless a good indication of the sort of vast improvement bittorrent is over FIFO. What’s the order of the bittorrent solution? It’s log N, just like the earlier version. But it’s a much more efficient version because by dividing up the file, the whole swarm gets in on the torrent earlier—there’s simply less wasted time inactively waiting in line.
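For the curious, here is a rough Python sketch of the chunked toy model (my own code, not the real bittorrent protocol): one seed plus 32 peers, each node uploading at most one piece per minute, each peer downloading at most one, and a greedy rarest-first choice of piece. The exact finish time depends on scheduling details, but it comes out in the same rough neighborhood as the 64 minutes worked out above.

import random

def swarm_minutes(clients=32, pieces=32, seed=0):
    # Greedy toy swarm: one seed (holding every piece) plus `clients` peers.
    # Each node uploads at most one piece per minute, each peer downloads at
    # most one piece per minute, and peers prefer the rarest piece they lack.
    rng = random.Random(seed)
    have = [set() for _ in range(clients)]              # pieces held by each peer
    minute = 0
    while any(len(h) < pieces for h in have):
        minute += 1
        busy_uploaders, transfers = set(), []            # each node uploads once per minute
        for i in rng.sample(range(clients), clients):    # peers in random order
            wanted = sorted((p for p in range(pieces) if p not in have[i]),
                            key=lambda p: sum(p in h for h in have))
            for p in wanted:                             # rarest piece first
                src = next((j for j in range(clients)
                            if j != i and j not in busy_uploaders and p in have[j]),
                           None)
                if src is None and "seed" not in busy_uploaders:
                    src = "seed"                         # fall back on the original seed
                if src is not None:
                    busy_uploaders.add(src)
                    transfers.append((i, p))
                    break                                # one download per peer per minute
            # a peer that finds no available source simply waits this minute
        for i, p in transfers:                           # apply transfers at the end of the minute
            have[i].add(p)
    return minute

print(swarm_minutes())   # finishes in the same rough neighborhood as the 64 minutes above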