What you need to know about the open versus closed software debate

Few debates have raged longer, or with more controversy, in the computer industry than this one: Is “open source” better than “closed source” when it comes to software development?

This debate has been reignited as companies like Google, Meta, OpenAI and Microsoft diverge on how to compete for supremacy in artificial intelligence. Some have chosen a closed model, while others are taking an open approach.

Here’s what you need to know.

Source code consists of the building blocks of the applications you use: the human-readable instructions that tell a computer what to do. Developers may write tens of thousands of lines of it to create a program that runs on a computer.
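To make that concrete, here is a hypothetical few-line program in Python; the file name and function are invented for illustration, but any application's source code is built from instructions like these:

```python
# greet.py -- a tiny, invented example of source code: human-readable
# instructions that a developer can read, copy, and modify.

def greet(name: str) -> str:
    """Return a greeting for the given name."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    print(greet("world"))
```

Whether other people are allowed to read, copy and change files like this one is exactly what the open-versus-closed debate is about.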

Open source software is any computer code that can be freely distributed, copied, or modified for a developer’s own purposes. The Open Source Initiative, a nonprofit industry organization, sets further stipulations and standards for what software counts as open source, but the gist is that the code is free and open for anyone to use and improve.

Some of the best-known software systems are open source, such as Linux, the operating system on which Google’s Android mobile system was built, and Firefox, the free downloadable web browser created by the Mozilla Foundation.

Tech companies like Google, OpenAI and Anthropic have spent billions of dollars creating “closed,” or proprietary, AI systems. People outside these companies cannot see or modify the underlying source code, and neither can the customers who pay to use it.

For a long time, this wasn’t the norm. Most of these companies openly shared their AI research so that other technologists could study and improve on it. But when technology executives began to realize that research into more advanced AI systems could be worth billions, they began walling off their work.

Tech companies argue that this is for the good of humanity, because these systems are powerful enough to cause catastrophic societal harm if put in the wrong hands. Critics say the companies simply want to keep the technology away from hobbyists and competitors.

Meta took a different approach. Mark Zuckerberg, Meta’s chief executive, decided to open source the company’s large language model, a program that learns skills by analyzing vast amounts of digital text culled from the internet. Mr. Zuckerberg’s decision to open source Meta’s model, LLaMA, lets any developer download it and use it to build their own chatbots and other services.
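In practice, that openness looks something like the following minimal sketch, which assumes the Hugging Face transformers library is installed; the checkpoint name is illustrative, and downloading the real LLaMA weights requires accepting Meta’s license terms:

```python
# A sketch of what "open weights" enables: any developer can download
# the model and generate text on their own machine. Assumes the
# Hugging Face `transformers` library; the checkpoint name below is
# illustrative, and real LLaMA weights require accepting Meta's license.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # illustrative checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Explain open source software in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Nothing comparable is possible with a closed system, where the model can be reached only through the company’s own products or paid interfaces.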

In a recent podcast interview, Mr. Zuckerberg said no organization should have “a truly superintelligent capability that is not widely shared.”

So which approach is better? It depends whom you ask.

For many technologists and those who embrace hardcore hacker culture, open source is the way to go. World-changing software tools should be distributed for free, they say, so that everyone can use them to create interesting and exciting technology.

Others believe that AI has advanced so quickly that it should be closely guarded by the makers of these systems to protect against misuse. These systems also cost enormous amounts of time and money to develop, they say, and companies should be able to charge for access to closed models.

The debate has already spread beyond Silicon Valley and computer enthusiasts. Lawmakers in the European Union and in Washington have held meetings and taken steps toward AI regulatory frameworks, including weighing the risks and rewards of open source AI models.
