With an increasing reliance on AI for tasks such as data analysis and decision support, how does Moderna ensure responsible and ethical use of AI technology within its operations?
This is an important issue, one we always place first in our order of priorities. We are not responsible for designing the technology itself: we use APIs or products made by other companies, the ones we call hyperscalers, and they have their own responsibilities. Ours focuses mainly on how we use it, on the right use. So the first thing is that we have published a code of conduct, which, by the way, is public; you'll find it on the Moderna website among our corporate policies. And it's a living document; we learn along the way what the right code of conduct should be. It is, of course, a combination of what is legal and what is desirable in the responsible way we use AI. For example, we want to make sure that we respect human integrity and human diversity in all areas of company life. This is extremely vital for us, especially as a life sciences company: it means providing universal products that can serve people and save lives in all walks of life, in every country in the world. We have an even greater desire and ambition for diversity than any other company I know; it is quintessential to who we are. And AI can come with biases inherited from its training datasets; this is not a new topic, it's part of the business. So the way we use AI, the way we think about AI, is the buffer between that training layer, that corpus, that dataset with its inherent biases, and the way we exploit it, with a constant spirit of respect for human integrity and human diversity.
There is a series of principles, extended and layered, in how we construct our code of conduct, which is then translated into a more granular usage policy. That usage policy is what you need to read, understand, learn, and demonstrate that you have understood and learned before getting access to AI products here. So even though we make them universally accessible, they require training before we grant access: the usage policy is part of that right, understanding what you should and shouldn't do with AI before we grant it to you, because with power comes responsibility. Then, at an even higher level: we've talked about the code of conduct, we've talked about the usage policy, and the third level is governance, granular use-case governance. If you're building a GPT for yourself, that stays at your own level. If you're building a GPT for your team, you want your manager to approve it, and the team should have their own say in how the GPT is built and organized, and review how you update it. And if you create a GPT that is going to be critical to the business and impact the business, we're going to have governance of that GPT. We don't want to do that for thousands of GPTs; that would be like bringing out a sledgehammer to drive a nail into the wall, right? It doesn't make any sense.
But we do want to keep in mind the few dozen GPTs that could become critical in the future to who we are and how we work as a company, and provide the right level of governance for those: who determines how they are designed, how they are trained, how they are updated, and who owns them. Because they are products. We call ChatGPT a product, and it is true; but an AI agent is also a product. This is why my title mentions products and platforms: you can also think of ChatGPT as a platform that delivers products, each AI agent being a product. And then we have to apply the product mentality to it and be as demanding of that product as of any other piece of technology in our business. But we can't do that for everything anyone builds; we need to be mindful of priorities and also give people a sense of experimentation and freedom in their own personal use cases. So we're learning as we go; it still takes a lot of research and thought. But at these three levels, the code of conduct, the usage policy, and granular governance, we are learning every day what makes sense, what keeps us safe, what keeps the business safe, and how we can make sure that our growing reliance on AI, whose momentum, you know, I don't think will slow down in any foreseeable future, still happens in a safe learning environment. And we always make the most of it. So this is not a small subject. It is very important for us to reduce the risks associated with AI and to stay safe with it, both in how we understand it in the ways we work and in how we use it across various parts of the business.