The current cycle of hype around artificial intelligence (AI) is causing concern among educators and organizations, and claims about its dangers and benefits are often wildly exaggerated. As usual, the truth lies somewhere in between. Like any technology, new or old, AI requires government acquisition professionals to educate themselves and take a balanced, informed approach to new systems. This requires new policies for developing, maintaining and evaluating training systems.
AI has the potential to transform the way we train and educate government personnel. AI-based systems can make staff more agile, innovative and responsive to the changing needs of government organizations. With the advent and adoption of AI, we have a unique opportunity to reset the way we train and educate. We can eliminate passive, dull, click-through learning experiences in favor of dynamic, engaging and more effective methods.
However, we must take a moment to reflect on where we are today and where we want to go. Training and education may become less effective if AI is not developed and implemented carefully and ethically. AI systems have already been shown to increase workplace disparities when trained on biased or low-quality data. Beyond applying sound learning engineering principles carefully and intentionally, we must treat the quality of underlying algorithms and data as an essential and immediate concern.
Data is the fuel that powers AI. If the data or algorithms are biased or of poor quality, the results will be equally flawed and ineffective. Without proper design and oversight, AI systems can reflect and amplify existing biases in data, algorithms and prior human judgments. Bias at any level can lead to unequal outcomes for certain groups of learners, such as women, minorities, neurodiverse people and people with disabilities. As we begin to feed our AI applications with data, we need to ensure that it is of high quality, reflects actual requirements and supports an efficient and fair working environment. Organizations must take an ethical, human-centered approach to AI and ensure that AI systems are transparent, explainable, accountable and fair.
Specifically, government agencies must be informed consumers of AI training and education systems. First, we need to apply effective learning engineering and data science principles in training systems to provide engaging and effective workforce experiences. The data used to train these systems should reflect the diversity of the workforce and the diversity of situations in which the training will be applied.
One way to combat AI bias is to continually monitor and audit these AI systems and their outcomes. Education outcomes should be regularly measured to ensure systems are effective, unbiased, and updated with new data and improved algorithms as they become available. Most importantly, we need to solicit feedback from learners, employees, and other relevant stakeholders, such as trainers, managers, or experts, to evaluate the effectiveness of AI systems. Likewise, when selecting developers for new training platforms, we must be attentive to their development practices and their ability to understand workforce diversity to avoid bias.
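To make the monitoring-and-auditing idea concrete, a recurring audit might compare training outcomes across learner groups and flag disparities that exceed a tolerance. The sketch below is a minimal, hypothetical example: the group labels, records and 0.1 disparity threshold are illustrative assumptions, not a standard an agency would adopt as-is.

```python
# Minimal sketch of a recurring fairness audit for a training system.
# Group names, sample records and the 0.1 threshold are illustrative
# assumptions; a real audit would use validated outcome data and an
# agency-approved disparity tolerance.
from collections import defaultdict

def audit_pass_rates(records, threshold=0.1):
    """records: iterable of (group, passed) pairs.
    Returns per-group pass rates, the largest gap between groups,
    and whether that gap exceeds the tolerance."""
    totals = defaultdict(int)
    passes = defaultdict(int)
    for group, passed in records:
        totals[group] += 1
        passes[group] += int(passed)
    rates = {g: passes[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > threshold

# Toy outcome data: course completion results for two learner groups.
records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
rates, gap, flagged = audit_pass_rates(records)
```

Run on a schedule against fresh outcome data, a check like this turns "regularly measured" into an operational trigger: a flagged gap prompts review of the training data, the algorithms or the content itself.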
As AI reshapes the training and workforce development landscape, organizations must proactively adapt and innovate, developing new best practices and policies for acquiring training systems. The journey to fully harnessing AI in education requires immediate action and strategic planning, laying the foundation for a future in which technology and learning engineering transform the workforce.