
Andrew Morton at the Linux Symposium 2004

Keynote

I'll be talking mainly about the increasing interest in Linux from large IT corporations - what effect this has had thus far, what effect I expect it to have in the future, and how we as a team should react to these changing circumstances.

The question might be asked: "Is what I am about to say an official position?" Well, there's no such thing as an official position in Linux. Linus has opinions, I have opinions, everyone else has opinions. The only consistency here is that most of us are wrong most of the time. Everyone is free to disagree, and because all of us have so little invested in a particular position, we are always open to argument. As usual, the only limiting factor here is that ultimately we all run out of bandwidth to listen to opinions, and the person with the simplest argument, which requires the least bandwidth to communicate, often wins out.

Most of the pressure for change is upon the kernel, and a lot of the changes are happening in the kernel. My talk will be kernel-centric, but is presumably applicable to, and generalisable to, other parts of the Free software stack.

We need to find ways in which the pressures to add new features and developers to the kernel project do not end up impacting the way in which we have historically gone about our development and release processes. True? NO! WHY should we fail to adapt to changed circumstances? Such conservatism would be very wrong, and would eventually lead us into all sorts of trouble. We need to recognize the changes which are occurring with open eyes and, at the very least, react to them appropriately. Ideally, if possible, we'll do even better than that and actually anticipate events, so that at least we won't be surprised by them.


• The most important and successful Free s/w is what I will call "system software": software which other software builds upon to provide a service. There is some successful end-user software, and I guess most people would point at desktop software, most notably StarOffice, but end-user software is not where the bulk of the attention and the buzz lies at present, and I expect it never really will be.

Successful system software includes the kernel, low-level software libraries, web, email and other servers, databases, many system tools, the compiler toolchain, the X server, windowing systems, GUI toolkits, .NET workalikes, etc.

It is not a coincidence that the most successful Free software is system software. I'm going to take a shot at explaining why this is the case.


• And I have to start out with some economic theories.

Normal processes of market competition do not work with system software, due to what economists call high substitution costs. The cost of switching from one set of system software to a competing set is high. Let's walk through costs to the major players.

• End consumers: they need to retrain admin staff and users. They need to obtain new versions of their applications to run on the new OS (or go to completely different products if their current applications are not available on the new OS). If, as is very commonly the case, the end consumers use applications which were developed in-house, those applications will need additional development, testing and deployment work, while the organization must still support the version which runs on the old system software, because nobody migrates their IT systems in a single day.

In all cases, end consumers can only run their software on the hardware platforms for which their new OS vendor has chosen to make the system software available. If the end consumer wishes to move to a hardware platform or peripheral hardware which offers better price/performance or offers some features they particularly want, they may not be able to do this.

• ISVs: in a competitive system software world the ISVs need to develop, maintain and support their products on various versions of 5, 10 or more different vendors' system software suites. That's a lot of cost which, compared with a world in which all customers use the same suite of system software, doesn't bring in a single new sale, and this additional development and support work inevitably impacts the quality and feature richness of their overall product line.

ISVs would much prefer that all the machines in the world be running the same system software so they only need to develop, maintain and support a single version.

So of course what happens is that they develop for the most popular OS amongst their target customers, thus starving out minor players in the system software marketplace and creating a nearly insurmountable barrier for new entrants.

• The third parties who are seriously affected by system software choice are the hardware vendors: they need to ensure that their hardware works well with the system software which is in the field, going back over 5 or 10 years' worth of releases of that system software.

Often hardware vendors develop and maintain the device drivers for the hardware which they manufacture, which puts them in the same boat as the ISVs with respect to support and development costs.

The end result is that hardware vendors only support the most popular system software, and minor or new entrants have to take on the cost of supporting that hardware themselves, sometimes without adequate access to hardware documentation.


• As a consequence of these high substitution costs, all the major players in the industry tend to gravitate toward a single suite of system software. Which is great if you happen to be the provider of that software - you get to make an 85% margin on its sales. But this situation obviously places that provider in a monopolistic position, and leaves the users of that software with a single source for a vital component, often from a direct competitor.

• To get around this fundamental tension between a single-provider and the industry's need for a uniform system software layer, the industry is doing an amazing thing. As we know, many IT companies are congealing around a suite of Free s/w which nobody owns. Or, if you like, which everybody owns.

This allows many industry players to use the same basic set of system software but without relinquishing control to the provider of that software.

This adoption of Free software to resolve the incompatibility between the economic need for provider diversity and the engineering need to avoid product diversity is, I think, close to unique across all industry. There are similar situations, such as the adoption by competitors of written specifications, but that's not really the same thing - here we're talking about the sharing of actual implementations - end products.

The uniqueness of this industry response derives from the fact that software is an exceptional product. It is uniquely different in several critical ways from bridges and cars and pharmaceuticals and petrochemical products, etc.

We all know that software is an exceptional product, but still, we continually hear people trying to draw analogies between software and the output of other engineering activities. And often, the people who say these things are making serious errors, because software is exceptional - mainly because of the low cost of reproduction versus production, but also because of the high cost of substitution versus initial acquisition. Sometimes people compare the software industry with the publishing industry - say, writing books. And yes, with a book the cost of reproduction also is lower than the cost of production. But nobody bases their entire business on one particular book, and nobody is faced with huge re-engineering costs if they need to base their product line on a different book. So if you do hear people drawing analogies between programming and other forms of industry, be very cautious and be prepared to poke at the holes in the analogy. Software is exceptional.

• Although people go on about the sticker price, the use of Free system software by IT-providing companies does have costs to them: they must employ staff to maintain it, staff to support internal deployments and other support teams, and staff to add the features which their employer requires. Often these features are fed back into the mainline Free software product, and this could all be viewed as part of the acquisition cost of Free software.

These companies pay the costs of training engineers and other technologists to become familiar with and productive with our software, thus increasing our base of developers and support staff.

These companies also bear part of the cost of making end-users familiar with and comfortable with not just the software itself, but also with the very concept of using Free software - this also is in our interests.

One point I like to make occasionally is that Free software is not free: when a company chooses to include Free software in their product offerings they are obliged (by both the license and their own self-interest) to make any enhancements available for inclusion back into the public version of that software, which can be viewed as a form of payment in kind. You get to use our software, but the price you pay is in making that software stronger, more feature-rich, more widely disseminated, more widely understood.

I don't know if this self-regenerating consequence of the GPL was a part of Richard Stallman's evil plan all those years ago. But it wouldn't surprise me - he's a clever guy, and he's thought a lot about these things. We owe him so much for his consistency of purpose and for his uncompromising advocacy for us and for our work.

• As large IT corporations with diverse customer bases adopt our software to run end-user applications they, and we, do come under pressure to add new features.

The first requirement which we've seen is to be able to scale up to really large and expensive systems which we, the developers, have never even seen, let alone owned. The 2.6 kernel and the new glibc threading code have pretty much satisfied this requirement. We scale OK to 32 CPUs and beyond, thousands of disks, lots of memory. Enterprise features such as CPU hotplug have been added, and many others are inching forward.

New features which are currently under consideration for the kernel are mainly in the area of reliability and serviceability - crash dump collection and analysis, fault hardening, fault recovery, standardized driver fault logging, memory hot add/remove, enhanced monitoring tools etc.

All of this stuff is coming at us, and we need to be aware of where we are, where we're heading, what forces are driving us there, and how we are going to handle it all.

It mainly affects the kernel.

• We'll come back and tie these thoughts together in a few minutes. Let's dive off for a while and review the traditional Free Software requirements gathering and analysis process. What are the traditional sources for our requirements?

  1. To a large extent the process has been what our friends at MS in the first Halloween document called "following taillights" - doing what earlier Unix-like systems have done. And that's fine - we're heavily committed to standards, whether written or de facto, as a matter of principle. We want to be compatible with other system software so that we can be as useful as possible to as many people as possible.
  2. Another, and very strong source of requirements input to Free software projects is the personal experience of the individual developers who are doing the work on the project.
  3. Also requirements come from those users who are prepared to contact the development team directly via email.
  4. Requirements come from distributors (RH, SuSE) who are in contact with end-users.

To date, those have been our major sources of requirements. (We're pretty bad at turning requirements into project teams and then into end product, but that's off-topic, and it hasn't really been a crushing problem thus far)

• As the big IT companies become committed to our Free software, they are becoming a new source of requirements.

The IT providers who are now adopting Linux bring on board new requirements which are based on their extensive contacts with possibly more sophisticated consumers of system software - banks, finance companies, travel companies, telecommunications, aerospace, defense, etc. All those people who wouldn't have dreamed of using Linux just a few years ago.

These tend to be mature requirements, in the sense that the features existed in other Unixes. Downstream consumers and the field staff who support them found the features useful and continue to require them in Linux.

This all constitutes a new source of requirements for the incumbent team of system software developers, and in some cases it is a little hard for us to understand the features: how they will be used, how important the problems which they solve are, and so on. And it is sometimes hard for us old-timers to judge how valuable a given feature is to end users.

We regularly see people from IT corps coming to "us" (Linus, me, everybody else) with weirdo features, and we are frequently not given sufficient explanation of what the feature is for, who requires it, what value it has to the end users, what the usage scenarios are, what the implications of not having it are, etc.

As the people who are responsible for reviewing and integrating the code, this makes things hard for us - if the feature is outside our area of personal experience and if the proposers of the feature present us with an implementation without having put some effort into educating us as to the underlying requirement, it becomes hard for us to judge whether the feature should be included. And this is important: because the feature provides something with which we have no personal operating experience it is hard for us to judge whether the offered implementation adequately addresses the requirement.

So. You (the patch submitters) are the people with the field experience which drove the development of the feature. Please put more effort into educating others as to the requirement.

• It could be asked: why do their requirements affect us? ("us" being the developers who aren't employed by "them").

If the features are to be shipped to end users, all parties do prefer that the features be merged into the mainstream Free software project because:

For the submitter:

  • Other people fix their bugs.
  • Other people update their code in response to system-wide changes.
  • Other people add new features.
  • Broader testing and review.
  • No need to maintain external patchsets.

For us:

  • Keeps the various vendors' kernels in sync.
  • Offers a uniform feature set to all users.

• The alternative to merging everything is ongoing mini-forking - we would have lots of different implementations of the Linux software stack out there, all slightly different. That's bad because it fragments the external appearance of Linux - different versions have different features and the whole thing starts to get a bad reputation.

• Maintaining a mini-fork is expensive for the maintainer, and it gets more expensive the further the fork diverges from mainline.

• This is why forks (of the kernel at least) cannot happen. Unless the team splits.

• In practice, a full fork is less likely than alternate trees which diverge a lot from mainline, but which continually track mainline, periodically re-syncing with it. Bad for the maintainer, bad for everyone else due to feature and bug divergence between the various streams. It reintroduces substitution costs - probably not by much for the end user, but ISVs and hardware providers are again in the situation of having to test and certify their products against multiple kernel versions. (OK, they've always been in that situation - ISVs do tend to re-certify their products against new versions of the underlying software - but we should work to minimize the pain by minimizing fragmentation.)

• Getting back to the new requirements which we are seeing. My inclination when I am unsure about the feature's value to the users is to trust the submitter's assertions - they know their customers and have experience with other Unixes. They don't usually write code just for fun.

• Which means that we're pressed with features which we, the developers, have no interest in. But as long as the feature is well encapsulated, has a long-term maintainer who will regression-test it, and can be reasonably easily ripped out if the developer/maintainer goes away, that's OK.

There might be a bit of a tendency for the IT and hardware companies to develop and test their new feature within the context of a partner Linux vendor's kernel. This is understandable - it's easier to do, you're working with someone who is contractually motivated to help you out, and the feature will get to end users more quickly. But doing this does cause feature-set divergence, and can mean that something which was acceptable to a Linux vendor is deemed unacceptable for the mainstream kernel. So either changes are made to it during a mainline merge which make it incompatible with a version which has already been shipped, or we end up putting a substandard feature into the main kernel just to remain compatible with a fait accompli. Neither outcome is nice. So I would ask that corporations target their development against the mainstream kernel rather than vendor trees.

• When considering a new feature submission, one factor which we look at is "how many downstream users need this". But adding features which only a small number of users need is OK as long as the cost on everyone else is low.

We have code in-kernel now which virtually nobody uses, and if the kernel team were really hard-headed these things just wouldn't have been included in the first place, or we'd be ripping them out now (drivers, file systems, entire architectures). There is little pressure to rip these features out because their cost is generally low. They are well encapsulated, and if they break, well, just don't compile them.

Some features tend to encapsulate poorly, and have their fingers in lots of different places. Memory hot-unplug is an example which comes to mind. We may end up not being able to accept such features at all, due to their expected long-term impact upon the maintainability of those parts of the software which they touch, and to the fact that very few developers are likely (or even able) to regression-test them.
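
To make the distinction concrete, here is a minimal sketch of the "well encapsulated" pattern the kernel commonly uses for optional features. This is my own illustration rather than anything from the talk, and the CONFIG_EXAMPLE_FEATURE option and the example_feature_* functions are hypothetical names: the feature sits behind a config option, and when it is configured out the rest of the tree sees only no-op stubs, so the cost to everyone else is close to zero and the feature can be ripped out cleanly if its maintainer disappears.

    /* include/linux/example_feature.h (hypothetical) */
    #ifdef CONFIG_EXAMPLE_FEATURE
    /* Real implementations live in their own source file, which is only
     * built when the option is enabled. */
    int example_feature_init(void);
    void example_feature_teardown(void);
    #else
    /* With the feature configured out, callers still compile and link,
     * but these stubs evaporate at compile time - near-zero cost to
     * everyone else, and the whole thing can be removed cleanly. */
    static inline int example_feature_init(void) { return 0; }
    static inline void example_feature_teardown(void) { }
    #endif

A feature like memory hot-unplug cannot be hidden behind a header like this, which is exactly why it is so much harder to accept.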

• If this is an accurate overview of where the base Linux components are headed, what lessons do we learn from it, and how, if at all, should we change our existing processes so that we can respond to this increasing and wider use of Free s/w and the broadening requirements which we are seeing?

• Keep the code maintainable.
Techniques for this are well established: a flexible and powerful configuration system, minimizing interaction between subsystems via careful interface design, a consistent coding and commenting style, and so on.
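
As an illustration of the interface-design point (a sketch of my own, in a userspace flavour; the "foo" subsystem and its functions are invented for the example), the idea is to expose a small, opaque interface so that other subsystems cannot grow dependencies on internal details, and the internals can change without touching the rest of the tree:

    #include <stddef.h>
    #include <stdlib.h>

    /* --- what would live in the public header (foo.h) --- */
    struct foo;                             /* opaque to callers */
    struct foo *foo_create(unsigned int flags);
    int foo_submit(struct foo *f, const void *buf, size_t len);
    void foo_destroy(struct foo *f);

    /* --- what would live in the private implementation (foo.c) --- */
    struct foo {                            /* layout hidden from callers */
            unsigned int flags;
            size_t queued;
    };

    struct foo *foo_create(unsigned int flags)
    {
            struct foo *f = calloc(1, sizeof(*f));

            if (f)
                    f->flags = flags;
            return f;
    }

    int foo_submit(struct foo *f, const void *buf, size_t len)
    {
            (void)buf;      /* a real subsystem would queue or act on the data */
            f->queued += len;
            return 0;
    }

    void foo_destroy(struct foo *f)
    {
            free(f);
    }

Because callers only ever see the three functions and an opaque pointer, the struct layout and the implementation can be reworked, or the whole subsystem configured out, without churn elsewhere.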

 

• Keep the code comprehensible.
The kernel is becoming more and more complex - more subsystems, larger subsystems, more complexity in the core. Despite rml's sterling efforts, I'd have to say that out-of-tree documentation such as books and websites doesn't really work, because the subject matter is so large and the documentation is so much work to produce, yet it goes out of date so quickly. Although I am acutely aware of the comprehensibility problem, and am concerned about it, I really don't have a magical answer, apart from:

a) Keep the code clean.
b) Comment it well.
c) Maintain good changelogs.
d) Keep the discussions on the mailing lists.
e) Recognize that people do need help coming up to speed, and that time spent helping other developers has its returns.
f) Recognize that the centralized maintainer's role will continue to weaken - that top-level subsystem maintainers will continue to gain more responsibility across more code, and that the top-level maintainer's role will increasingly move away from nitty-gritty code-level matters in favor of functions such as timing of releases, timing of feature introduction, acceptance of feature work at a high level, coordination with distributors, tracking bugs and general quality issues, etc.

Fortunately, as the complexity and size of the code base increase, so too does the number of people and companies which use that code and have an interest in seeing it work well. The kernel development team is getting larger - I have no statistics on this, but empirically it's hard to identify many people who have abandoned kernel work in recent years and easy to point at many newcomers. The kernel developers tend to self-organize into teams in their (or their employers') areas of interest, and that resource allocation algorithm does seem to be working well. Developer resource allocation does have some gaps: there are abandoned drivers, and there are various project-wide maintenance tasks which should be undertaken, but it is hard to identify an appropriately skilled developer who will do that work.


• What else can we do to help to accommodate all these new requirements and all this new code?

We need to be able to accommodate, within the stable kernel, large changes and a high rate of change without breaking the code base. Across the lifetime of the 2.6 kernel we will see many changes as features are added and as we support new hardware.

The rate of change has sped up:
In the first six months of 2.4 devel: -220,000 lines, +600,000 lines
In the first six months of 2.6 devel: -600,000 lines, +900,000 lines

That's 1.5M lines changed in a 6.2M line tree.

A 64 MB diff in six months - and that's the stable kernel.

I expect that we need to change our mindset a bit. Traditionally, once we declare a stable branch, people seem to think that we need to work toward minimizing the amount of change to that tree - that the metric of success is how little change we're introducing, rather than how much.

This assumption is proving unrealistic and we need to challenge it. In the 2.4 series it may have been a contributing factor in the large divergence between the public kernel and the vendor trees. It seems to me that this model has led to large gaps in time where the development kernel tree is unusable because it's under furious development, while the stable branch is too static, causing vendors to add huge amounts of their own changes.

We should look at maintenance and development of the stable kernel tree in a new way: yes, the stable kernel should stay as stable as possible, but the super-stable kernels are the vendor trees - vendors pick a kernel.org kernel at a time which suits them, go through an extensive stabilization cycle and then ship it. Meanwhile, the public kernel forges ahead. It may not be quite as stable as the vendor kernels, but it is still suitable for production use (by Debian, Gentoo, whoever).

This is the model we've been following since 2.6.0 was released. It has fallen into a four-to-five-week cycle wherein patches are pummeled into the tree for the first two weeks, and then we go into a two-week stabilization period. At the end of that we do a release, and people again pummel in all the code which they had been saving up during the two stabilization weeks.

We're currently showing a 16MB diff between the 2.6.7 and 2.6.8 kernels.

Is this a sustainable model? I think so. Has it caused any problems thus far? I haven't heard any complaints - people are getting their work into the tree and it is getting better. We do need to help external people to understand that there is indeed a new paradigm, and that the size of the diff between 2.6.7 and 2.6.8 is not indicative of any particular problems in 2.6.7 - it's a good kernel. But we have hundreds of developers who are continuing to advance the kernel, and we have found processes which permit their changes to keep flowing into the main tree.

• Something else we should do is come to a general recognition that there is pressure to add enterprise features to Linux - in a few years Linux can, and I believe should, offer a similar feature set to the big proprietary Unix systems. We should recognize up front that these things are going to happen. All those weirdo features in Solaris/HPUX/AIX/etc. will, I suspect, end up in Linux in some form, and they should do so - if there is a proven user need for these features and we end up being unable to find an acceptable way to integrate the changes, then it is we, the kernel development team, who have failed. So we need to plan for these changes, and make sure that we can introduce large new features into the kernel even when the current developers don't understand the need for them, let alone the code.

• Another small process change which we need is that the IT corps who are developing and contributing the new features must take care to explain both the new requirement and its offered implementation - treat this as an exercise in education. The better they can communicate the need for the feature and its implementation, the smoother its ride will be.

• Now, Linux vendors.

It is understandably tempting for Linux vendors of various forms to seek to differentiate their products by adding extra features to their kernels. OK. But moving down this path does mean that there will be incompatible versions of Linux released to the public. This will, by design, lock some of our users into a particular vendor's implementation of Linux. And this practice exposes the entire Linux industry to charges of being fragmented. And it exposes us to the charge that we are headed along the same path as that down which proprietary Unixes are deservedly vanishing. And I think we all know where these charges are coming from. And it is undeniable that the charges do have some merit.

Let me be very frank here. I don't view it as a huge problem at this time. But as a person who has some responsibility for Linux as a whole, I see the perfectly understandable vendor strategy of offering product differentiation as being in direct conflict with the long term interests of Linux. It is not for me to tell vendors how to run their business, but I do urge them to find other ways to provide value to their customers. I strongly oppose the practice and I will actively work to undermine it. Please. Work the features into the mainline kernel *first*.

• The hardware and system software vendors. Even though they have partnerships with the Linux vendors, they should work with their partners to target the public tree wherever possible. Yes, you can do all the QA and certification within the vendor's kernel, but the public kernel should always be kept up to date. Doing this has a number of benefits for the device driver developers - all users of Linux are able to use your hardware. Nobody has to carry patches. Your code gets wider review, and wider testing. Other people will fix bugs for you, and will add features for you, and will ensure that your driver doesn't get broken by external or kernel-wide changes.

Yes, I must say that the acceptance criteria for the public kernel are more stringent than for vendor trees, and you may have to do additional work to get your code merged. But getting the code into the public kernel avoids the terrible situation in which you manage to get your driver into the vendor tree, but then all the staff are sent off to do other things and you never have the resources to do the additional work which is needed to make the feature acceptable for a mainline merge. So the driver just skulks along in vendor trees for the rest of its life. Or at least, until you're sick of paying 100% of the cost of its maintenance.

And getting your code into the main tree avoids the even worse situation wherein a competing implementation of whatever it is that your code does is merged into mainline instead of your code. This leaves all the users of your feature, and your vendor partners, bent over a barrel, because either the vendor will need to carry the duplicated feature forever, or your users will need to implement some form of migration.

• Still with the hardware and system software vendors: they need to educate their own internal teams regarding Free software development practices, and avoid the temptation, when time pressures are high, to regress into cathedral-style development wherein the rest of the public development team doesn't get to see the implementation of the new feature until it is near-complete, by which stage the original developers are too wedded to the work they have done thus far, and rework to make the feature acceptable to the public tree becomes harder and more expensive for them. Tell us what you're up to, keep us in the loop, get your designs reviewed early, avoid duplication of effort, and this way we'll minimize any unpleasant surprises which might occur further down the track. Use our processes.

We do have processes. They're different, but they're pretty simple. We can explain them to you, and your kernel vendor partners know the Free software processes intimately, and can help you with them.

• As the various strands of the 2.6 kernel outside kernel.org approach their first release milestones I do sometimes perceive that the communication paths which we use are becoming a bit constricted. Whether it is for competitive reasons, confidentiality, or simply time pressure, it appears to me that the flow of testing results and the promptness of getting fixes out to the rest of the world are slowing down. I would ask that the people who are involved in this release work remain conscious of this and try to keep the old golden goose laying her eggs. You may need to lean on management and customers and partners and others to get the necessary resources allocated to keep the rest of the world on the same page, but it's better that way. We send you our testing results and patches. Please send us yours.


Summary

Apparently I'm too focussed on the server side of things. I know this because I read it in a comment thread on Slashdot. It seems that all those megabytes which were shaved off the kernel memory footprint, and all that desktop interactivity work, were done by one of the other Andrew Mortons out there.

The 2.6 development cycle has led to large changes in the kernel - large increases in kernel capability. Due to the care we have taken I do not believe that this progress has compromised the desktop and embedded applications of Linux - they too have advanced, although not to the same extent as the server stuff.

The emphasis on server performance in 2.6 was in fact not principally in response to the interest from the three-letter-corps. We knew it had to be done, simply because the 2.4 kernel performs so poorly in some situations on big machines. It was a matter of pride and principle to fix the performance problems. And, believe it or not, this is one area in which our friends at SCO actually said something which was slightly less than wholly accurate: the performance increases in 2.6 would have happened even if IBM had disappeared in a puff of smoke. Because we knew about the problems and wanted to fix them up.

In this respect we need to distinguish between server performance work and all the other enterprise requirements - we did the speedup work for fun and because it was cool. All the other enterprise requirements which we've been discussing will be implemented for other reasons.

We should expect further large changes in the enterprise direction and we should find the technical and procedural means to accommodate them, because the alternative (that is, not applying the patches) will be a slowdown in the progress of Free software across the industry. (And here I'm assuming that increased adoption is a good thing). Our failure to adapt to this new interest in Linux could even lead to a degree of fragmentation of the feature set which Linux offers and, when you take all Linux development effort into account, our failure to skilfully adapt to the new requirements will cause additional programmer effort which could have been applied elsewhere.

What I've said today is to a large extent a description of changes which have already happened, and which are continuing to evolve today. There are no radically new revelations or insights here. I believe that we should always attempt to understand the environment in which we are operating and, if possible, come to some consensus about the direction in which we are heading, and about how we should collectively react to our changing circumstances.


Summary of the Summary

Free software (or at least this project) is moving away from something which gifted, altruistic individuals do "just for fun" toward an industry-wide cooperative project in which final control is conditionally granted to a group of independent individuals. There is a lot of goodwill on all sides, but we do need to understand and remain within the limits of that goodwill.

But you should not take all of this to mean that Linux is going to become some sort of buttoned-down corporate quagmire. I don't think it will. I expect that the Free software ethos - that very lofty set of principles and ethics which underlies our work - will continue to dominate. And I suspect that the Linux-using IT corporations want it to stay that way, as our way of operating tends to level the playing field and fends off the temptation for any particular group to engage in corporate shenanigans. We provide neutral ground for ongoing development.

End-users with their desktop machines will continue to be a very important constituency for Linux developers. At times we don't serve these people as well as I would like - I see that random driver X has done a face-plant yet AGAIN, but I don't have anyone I can reliably turn to to get the problem fixed. One promising sign here is that desktop Linux is becoming increasingly prominent in the words and plans of Novell, various OSDL partners, new distributions, old distributions, maybe even Sun. The desktop is coming. Or at least, the simple desktop is. So more resources will become available in the desktop area soon.

It's funny to watch when corporations make their developers available to work on a Free software project. Some of them, to varying degrees, tend to become subverted. They begin to "get it". They gain an allegiance to the project and its general quality and cultural goals. It's doubtful that their loyalty to the project often comes into conflict with their obligations to their employer. But when such a conflict does arise, I expect these Linux developers end up standing in their bosses' offices imparting a few clues, explaining why there's "no way, we're not going to do that". When they do this they are standing up for the project's interests, because they have become Free software programmers. This is all very good.

There's an analogy we can draw here: we know that the way Linux has progressed over the past five years or so is to enter organizations at the bottom - initial entry was via an individual sysadmin or programmer who is sick of resetting or reinstalling Windows boxes. He brings in a Linux server. Linux works, so a few more boxes are brought in. Soon Linux becomes acceptable to decision makers and ends up propagating throughout the organization. We've all heard the stories. I did it myself at Nortel. Well, I think a similar process is coming into play as corporate programmers are assigned to sit in their cubicles and work on Linux. They come in as good little corporate people, but some of them are subverted. We end up taking over their brains and owning them. They become members "of the community". They become Free software developers. Some of these guys are going to be promoted into management, so in a few years' time we'll have lots of little preprogrammed robotic penguins infiltrating the corporate hierarchy, imparting clues in upward, downward and sideways directions. So this is the real reason why we're applying their patches. Rest assured: world domination is proceeding according to plan.
