It’s only been a year and nine months since OpenAI made ChatGPT available to the public, and it’s already had a huge impact on our lives. AI will undoubtedly reshape our world; the exact nature of that revolution is still unfolding. Security administrators with little to no scripting experience can use ChatGPT to quickly create PowerShell scripts. Tools like Grammarly or Jarvis can turn average writers into confident editors. Some people have even started using AI as an alternative to traditional search engines like Google and Bing. The applications of AI are endless!
Generative AI in Cybersecurity: The New Gold Rush?
Powered by the versatility and transformative potential of AI, a new gold rush is gripping businesses across every sector. From healthcare and finance to manufacturing, retail, and distribution, companies are racing to stake their claim in this technological frontier. The adoption of generative AI in cybersecurity is accelerating, with many companies actively adding or having already added these capabilities to their platforms. But this raises an important question: Are we doing too much, too soon?
I recently attended a think tank focused on generative AI in security. The event began with a vendor and its MSP partner presenting how the vendor’s generative AI capabilities are helping the MSP streamline threat mitigation for its clients. They boasted about significant time savings, which allowed the MSP to optimize its analyst team. This included shifting their hiring strategy from recruiting seasoned professionals to extending opportunities to junior analysts, while leveraging AI to help train and onboard those new analysts, potentially accelerating their path to cybersecurity mastery. They also touted how they had reduced their analyst staff from 11 to 4. The lower operational overhead has translated into lower costs for both the MSP and its clients. There are pros and cons to this approach, but the impact of AI on existing jobs is a topic best left for another time, as the extent of its positive job-creation potential remains unknown.
How much can we trust AI?
Discussions about trust and generative AI often focus on who owns the data users provide, how that data helps train AI models, and whether AI can share proprietary data with other users or recommend it to them. A key aspect that is often overlooked is the significant threat posed by inaccurate data.
I recently suggested to my son that he use ChatGPT to break down the order of operations for his math homework. After a few hours, he told me that he still couldn’t solve the problem. I sat down with him to review the AI’s guidance, and while the answer was well-articulated and beautifully worded, it was far from accurate. The poor kid was going around in circles using a flawed method to solve a math problem. This situation immediately came to mind when the MSP manager explained that they were relying on generative AI to guide junior security analysts.
Two crucial questions arise regarding generative AI: who is responsible for ensuring the accuracy of its data, and who bears responsibility for the consequences arising from inaccurate results?
According to Google’s Gemini, data accuracy in AI is a shared responsibility involving several stakeholders:
- Data providers: These entities collect and supply the data used to train AI models. Their responsibility is to ensure that the data they provide is accurate, complete, and unbiased.
- AI developers: Developers who design and train AI models have a role to play in assessing the quality of the data they use. They must clean and preprocess data to minimize errors and identify potential biases.
- AI users: Those who deploy and use AI models also share some responsibility. It is essential to understand the limitations of the model and the data it was trained on (we need transparency in this area).
The answer to the liability question wasn’t as clear. There isn’t always a single party held liable. Depending on the jurisdiction and specific use case, there may be legal and regulatory requirements that dictate liability, but the legal landscape for AI liability is still developing and will likely evolve as new incidents and case law emerge.
Looking to the past to see the future
Looking at the past can often provide insights into the future, and AI’s trajectory may share some parallels with the history of search engines. Google’s page ranking methodology is a prime example. The algorithm significantly improved the relevance of search results, and personalization and localization features increased the benefits to users. However, personalization has also led to unintended consequences, such as filter bubbles, where users only encounter information that reinforces their existing beliefs. SEO manipulation and privacy concerns have likewise eroded the benefits and relevance of search engines.
Just as search engine results can reflect bias, generative AI models trained on massive datasets can reflect the biases in that data. Both platforms will be a battleground for misinformation, making it difficult for users to distinguish fact from fiction. In both cases, users must always validate the accuracy of the results. From a personal and professional perspective, anyone using generative AI should create a process to validate the information they receive. One thing I like to do is ask the AI to provide reference links to the sources it pulled from in its answer; depending on the topic, I may also check other sources.
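To make that habit concrete, here is a minimal sketch of what asking for sources can look like in practice. It assumes the official OpenAI Python SDK and an API key in the environment; the model name and the example question are placeholders, not recommendations.

```python
# A minimal sketch of the habit described above: asking the model to cite
# its sources so they can be checked by hand. Assumes the official OpenAI
# Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment; the
# model name and question are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "What are common causes of false positives in SIEM alerting?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "Answer the question, then list reference links to the "
                "sources the answer is based on. If a claim has no "
                "citable source, say so explicitly."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

Even then, the returned links have to be opened and read: models can fabricate plausible-looking citations, so a link is a starting point for verification, not proof.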
Another aspect that has affected search relevance is advertising. While I don’t think generative AI in cybersecurity platforms will include ads, I can see a world where the generative AI platform will upsell and cross-sell other products. Want to improve your visibility? Try our new widget, or one from our partner. Another factor to consider is whether the AI will be able to identify its own vendor’s technology as the problem, and if so, whether it will tell you.
Endnote
Whether you’re using AI to build a macronutrient-based diet or guide your cybersecurity strategy, it’s essential to remain aware of its flaws and limitations. Always be critical when evaluating AI outputs, and never base your decisions or conclusions solely on the information it provides.
Living in the age of AI is an exciting experience, full of potential, but also a little scary. While the future is bright, it is important to ensure that we are well strapped in. Transparency from vendors and a strong regulatory framework from legislators are essential safeguards. These measures will help us navigate the twists and turns of AI, minimize the risks, and maximize the benefits. However, one concern remains: Are we moving too fast? Open dialogue and collaboration between developers, users, and policymakers are essential. By working together, we can establish responsible practices and ensure that AI becomes a force for positive change, not just a wild ride.
To learn more about the challenges and opportunities of generative AI with Fortra, you can read this blog by Antonio Sanchez.