In the world of artificial intelligence (AI), a battle is raging. On one side are companies that believe in keeping the data and algorithms behind their advanced software private. On the other side are companies that believe in allowing the public to see what’s under the hood of their sophisticated AI models.
Think of it as the battle between open source AI and closed source AI.
In recent weeks, Facebook’s parent company Meta has made a major push toward open-source AI by releasing a new collection of large AI models. Among them is a model called Llama 3.1 405B, which Meta founder and CEO Mark Zuckerberg describes as “the first frontier-grade open-source AI model.”
For anyone who cares about a future in which everyone can access the benefits of AI, this is good news.
The Danger of Closed-Source AI and the Promise of Open-Source AI

Closed-source AI refers to models, datasets, and algorithms that are proprietary and confidential. Examples include ChatGPT, Google’s Gemini, and Anthropic’s Claude.
Although anyone can use these products, there is no way to know which datasets and source code were used to build the AI model or tool.
While this approach helps companies protect their intellectual property and profits, it risks undermining public trust and accountability. Keeping AI technology closed source also slows innovation and makes companies and other users dependent on a single platform for their AI needs, because the platform that owns the model controls changes, licensing, and updates.
There are a number of ethical frameworks that aim to improve the fairness, accountability, transparency, privacy, and human oversight of AI. However, these principles are often not fully respected in the case of closed-source AI due to the inherent lack of transparency and external accountability associated with proprietary systems.
In ChatGPT’s case, its parent company, OpenAI, doesn’t disclose the dataset or code for its latest AI tools to the public, making it impossible for regulators to audit them. And while the service is free to access, there are concerns about how user data is stored and used to retrain models.
In contrast, the code and datasets behind open-source AI models are available to everyone.
This promotes rapid development through community collaboration and allows the involvement of small organizations and even individuals in AI development. It also makes a huge difference for small and medium-sized businesses, as the cost of training large AI models is colossal.
Perhaps most importantly, open source AI makes it possible to examine and identify potential biases and vulnerabilities.
However, open source AI creates new risks and ethical concerns.
For example, quality control of open-source products is generally weaker. Because anyone, including hackers, can access the code and data, models are more vulnerable to cyberattacks and can be adapted and customized for malicious purposes, for example by retraining the model with data from the dark web.
A Pioneer of Open-Source AI
Among the leading AI companies, Meta has established itself as a pioneer in open source AI. With its new suite of AI models, it is accomplishing what OpenAI promised to do when it launched in December 2015: advancing digital intelligence “in ways that are most likely to benefit humanity as a whole,” as OpenAI said at the time.
Llama 3.1 405B is the largest open-source AI model in history. It is a so-called large language model, capable of generating human-language text in multiple languages. It can be downloaded online, but due to its enormous size, users will need powerful hardware to run it.
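To get a sense of why such powerful hardware is needed, a rough back-of-envelope calculation of the memory required just to hold the model’s weights is instructive. This is illustrative arithmetic only, not official hardware requirements from Meta; real deployments also need memory for activations, the KV cache, and framework overhead.

```python
# Back-of-envelope estimate of the memory needed just to store the
# weights of a 405-billion-parameter model at common precisions.
# Illustrative only: actual requirements also include activations,
# KV cache, and framework overhead.

PARAMS = 405e9  # 405 billion parameters

BYTES_PER_PARAM = {
    "fp32": 4,    # full precision
    "fp16": 2,    # half precision, common for inference
    "int8": 1,    # 8-bit quantization
    "int4": 0.5,  # 4-bit quantization
}

def weight_memory_gb(precision: str) -> float:
    """Approximate weight memory in gigabytes (1 GB = 1e9 bytes)."""
    return PARAMS * BYTES_PER_PARAM[precision] / 1e9

for precision in BYTES_PER_PARAM:
    print(f"{precision}: ~{weight_memory_gb(precision):,.0f} GB")
```

Even at half precision, the weights alone come to roughly 810 GB, far beyond any single consumer GPU, which is why running the full 405B model typically requires a multi-GPU server.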
Although it does not outperform other models on every measure, Llama 3.1 405B is considered highly competitive, and on some tasks, such as reasoning and coding, it performs better than existing commercial, closed-source language models.
But the new model isn’t entirely open, because Meta hasn’t released the huge dataset used to train it. This is an important “open” element that is currently missing.
Nonetheless, Meta’s Llama levels the playing field for researchers, small organizations, and startups, because it can be used without the immense resources required to train large language models from scratch.
Shaping the Future of AI
To ensure the democratization of AI, we need three key pillars:
Governance: regulatory and ethical frameworks to ensure that AI technology is developed and used responsibly.

Accessibility: affordable computing resources and user-friendly tools to create a level playing field for developers and users.

Openness: the datasets and algorithms used to train and build AI tools should be open source to ensure transparency.
Achieving these three pillars is a shared responsibility between governments, industry, academia and the general public. The general public can play a vital role by advocating for ethical AI policies, staying informed about AI developments, using AI responsibly and supporting open source AI initiatives.
But several questions remain about open source AI. How can we reconcile intellectual property protection and the promotion of innovation through open source AI? How can we minimize ethical concerns around open source AI? How can we protect open source AI from potential misuse?
By answering these questions correctly, we can create a future in which AI is an inclusive tool for all. Will we rise to the challenge and ensure that AI serves the common good? Or will we let it become a new tool of exclusion and control? The future is in our hands.