When Nicolas Gertler, a freshman at Yale University, wanted to create an artificial intelligence (AI) chatbot based on his professor’s research on AI ethics, his professor advised him to temper his expectations.
“I said, ‘Be prepared to be disappointed,’” said Luciano Floridi, Yale professor of practice in the cognitive sciences program. “You never know if people are really interested in this subject, from a Yale professor who works on digital technology.”
He added: “Apparently they are.”
Two weeks after its launch, LuFlot Bot had fielded 11,000 queries from more than 85 countries. The bot is not intended to replace general-purpose, ChatGPT-like bots, which can seemingly answer any question under the sun. LuFlot Bot focuses specifically on the ethics, philosophy, and uses of AI, answering questions like “Is AI harmful to the environment?” and “What are the regulations on AI?”
“I didn’t think the technology would reach people in so many parts of the world,” Gertler said. “This is what happens when you break down the barriers to this technology.”
Gertler and Yale join the ranks of institutions creating their own large language models (LLMs). Building your own AI has taken off in recent months as concerns about intellectual property, ethics, and fairness swirl around leading generative AI tools like ChatGPT.
Yale Chatbot overcomes intellectual property concerns
Gertler started tinkering with artificial intelligence five years ago, when he was 14 and had a penchant for technology. He deployed his own AI chatbot last fall, during his first year at Yale, as a sort of study guide for his cognitive science midterm exam. He built it from lecture slides and study guides, then asked the chatbot to generate questions similar to those that might appear on the exam.
“I just saw it as a really cool experience,” Gertler said.
Gertler started the spring semester by discussing his chatbot with Floridi, who was immediately interested. Floridi is the founding director of Yale’s Digital Ethics Center, a well-known philosopher, and the author of dozens of research articles and books on AI ethics.
Gertler, who co-founded an edtech startup called Mylon Education with Rithvik Sabnekar, wanted to create the LuFlot Bot to educate users about the ethics of AI.
“He thought that because of the topics I’m researching, it would be natural for all of this work on the philosophy and ethics of AI to be made available to the general public,” Floridi said.
One of Gertler’s main goals with the chatbot was to bridge the digital divide that has widened with iterations of ChatGPT, many of which charge subscription fees. LuFlot Bot is free and accessible to everyone.
“Giving people a source directly from academia is really important, because having access to the literature is a privilege,” he said. “There are tons of paywalls, and ideas are usually conveyed in advanced language that the general public cannot be expected to understand.
“The fact that they are now able to understand just through this website is vitally important to me,” he said.
For Floridi, there was an added benefit: securing intellectual property rights. Many higher education leaders have been wary of LLMs, whose training practices are often murky when it comes to intellectual property and copyright protection. With a locally built LLM, it is clear what the professor’s research will (and will not) be used for.
“It’s the difference between buying something at the store and cooking it yourself: you know the ingredients,” Floridi said. “It may not be better than what you buy, but you know exactly what went into it.”
Several other higher education institutions, including Harvard University, the University of Washington, the University of California, Irvine, and the University of California, San Diego, have turned to creating their own internal LLMs for use across campus, keeping faculty intellectual property secure within the institution.
And as familiarity with the technology continues to grow, so does the trend of universities building their own internal models.
Gertler and Floridi recognized that while not every professor can create a chatbot based on their teachings (doing so requires a large body of documentation to draw on), the approach could be useful for both professors and students in the future.
“This project is symbolic of what can be done in a relatively short time frame to create a safe, secure, and accessible chatbot, so imagine the possibilities if professors could create similar bots,” Gertler said. “Gather lecture slides, study guides, and a question bank; they have so much rich data that simply plugging it in makes it more accessible to students.”