The financial and manufacturing sectors are the most advanced in deploying industrial artificial intelligence (AI) technologies, Fujitsu believes. Following a wave of news about its AI initiatives – including, recently, a new generative AI framework to help businesses manage and regulate large volumes of data in unwieldy large language models (LLMs), and a deal with Canadian AI firm Cohere to develop localized LLMs for businesses in Japan – the Japan-based company highlighted the growing role of AI in the Industry 4.0 market and presented key applications, challenges and measures for businesses to make the most of it.
“AI adoption is progressing (well) in the financial sector, an industry with a fair amount of available data and relatively little analog and unstructured data compared to other industries,” the company said in an email exchange. It continued: “Fujitsu has introduced (more) AI solutions in the financial sector than in any other industry. AI also has great potential in manufacturing, where a large amount of unstructured data (e.g. charts) is processed and data accuracy tends to fluctuate due to the factory environment. Fujitsu is also focused on developing offerings in this area.”
Fujitsu offers a “broad range of AI services,” it said, including third-party LLMs to develop bespoke AI for custom enterprise use cases. “For example, we are currently working on a Google Gemini-based solution for use cases with a high number of I/O tokens,” it said, also referring to providing “routing technologies to deliver unique models.” The engineering sector, adjacent to the Industry 4.0 market, is a clear target, it said – where Fujitsu is “most excited about enabling LLMs to reference business data for AI adaptation.” It explained: “Standardization of operations is key, and combining (our) IS expertise with core technologies is important.”
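The article mentions “routing technologies” only in passing, but the underlying idea – pick a model backend based on the expected I/O token volume of a request – can be sketched simply. Everything below (the model names, the 4-characters-per-token estimate, the thresholds) is an illustrative assumption, not Fujitsu's implementation:

```python
# Hypothetical sketch of LLM routing by I/O token volume.
# Model names and thresholds are invented for illustration.

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def route_model(prompt: str, expected_output_tokens: int) -> str:
    """Pick a backend by total I/O token volume (illustrative thresholds)."""
    total = estimate_tokens(prompt) + expected_output_tokens
    if total > 100_000:
        return "long-context-model"   # e.g. a Gemini-class long-context LLM
    if total > 4_000:
        return "mid-context-model"
    return "small-fast-model"

print(route_model("short question", 200))   # small-fast-model
print(route_model("x" * 500_000, 2_000))    # long-context-model
```

A production router would also weigh cost, latency and data-residency constraints, but token volume alone already explains why a “high number of I/O tokens” pushes a workload toward a long-context model.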
For the core technologies, read here: “the expansion of enterprise-specific LLMs and the evolution of ‘Retrieval-Augmented Generation’ (RAG).” RAG builds a bridge between a generative AI system and curated enterprise data: rather than relying solely on what was baked into the core model during training and fine-tuning, the system retrieves relevant documents at query time and grounds its answers in them, increasing the accuracy and reliability of the output – as discussed here. This is a crucial, relatively new technique if generative AI is to find a foothold in critical Industry 4.0 sectors. Fujitsu is looking to make this RAG bridge automatic – to “automatically generate… an optimal combination of LLM and RAG,” it replied.
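The RAG pattern described above can be reduced to a toy sketch: retrieve the enterprise documents most relevant to a query, then ground the prompt in them. The documents, the word-overlap retriever, and the prompt template here are all stand-in assumptions – a real deployment would use a vector store and an actual LLM call:

```python
# Minimal RAG sketch: keyword-overlap retrieval + grounded prompt.
# The toy corpus and scoring stand in for a real vector store.

from collections import Counter

DOCUMENTS = [
    "Line 3 torque sensor tolerance is ±0.5 Nm after the 2023 recalibration.",
    "Preventive maintenance on press B is scheduled every 400 operating hours.",
    "Defect rate on the paint line rose 2% during high-humidity weeks.",
]

def score(query: str, doc: str) -> int:
    """Count shared lowercase words between query and document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents with the highest word overlap."""
    return sorted(DOCUMENTS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the LLM prompt in retrieved enterprise data."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the torque sensor tolerance on line 3?"))
```

Because the answer must come from retrieved company data rather than model memory, this is exactly the mechanism that lets an LLM “reference business data” without retraining – and the retriever/LLM pairing is the combination Fujitsu says it wants to generate automatically.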
“In this system, customers operate from a single user interface, and generative AI combines data and AI models without the need for data scientists to intervene. In this way, we ultimately hope to significantly improve work efficiency by enabling AI to provide rapid and autonomous recommendations.” More generally, in response to a direct question about “main use cases,” it was suggested that some form of industrial AI will be commonly used in factories and administrative offices – to “respond to customer inquiries, detect defective products, make maintenance and repair recommendations, provide quotes, and various types of assessments.”
The company points to a reference page (in Japanese) of examples of a generative chatbot’s responses to a series of customer inquiries at a Mazda call center. It states: “The role of generative AI in Industry 4.0 is that AI effectively distills and organizes enterprise data as knowledge across all business areas, including R&D, estimating, design, sourcing, manufacturing, shipping, maintenance, and other functions – as a reliable partner for management decisions and business executors. Beyond Industry 4.0, people are advocating for a human-centered approach, where AI helps people focus on decision-making and idea generation, rather than taking their jobs away.”
“For example, there is a field called ‘materials informatics’ within the development of innovative materials in R&D. We believe that computational science, AI and generative AI could be combined to expand ideas and advance development without the need for experiments and prototypes. In the future, generative AI will evolve into artificial general intelligence and artificial superintelligence (AGI and ASI), asserting itself as a human assistant through autonomous learning. We expect the diffusion of job-specific LLMs to increase. However, when it comes to emotions and intuition, we will still have to rely on experienced humans.”
But what about all the challenges of generative AI in Industry 4.0 – in terms of deployment and infrastructure readiness, appropriate domain-specific reference data, and hallucination and accuracy (to name just three)? Fujitsu answered each question in turn, summarizing the first challenge (deployment) as: “the need for secure real-time data processing, low latency, high computing power and the right infrastructure to connect business processes and data to cloud-based solutions for effective AI training.” In short, it said: “It will be important for customers to have free access to cloud-based HPC solutions.”
Regarding reference data, the company responded that “data quality and model diversity impact reliability.” It said: “Business professionals need to create working models and use the resulting data as reference data. Thus, AI in Industry 4.0 will require such business professionals.” The discussion of so-called AI “hallucinations” (inexplicable brain farts by AI that disrupt data analysis/insights, and with them, potentially business systems) was broader, but the bottom line is to keep humans in the loop and have AI explain itself. “Humans need to oversee the instructions/prompts given to the AI and review the responses given by the AI model,” it wrote.
“Business processes are created that involve human judgment of AI inputs and outputs… Fujitsu has developed technologies to protect conversational AI from hallucinations, which it offers through its Kozuchi AI platform. Fujitsu has (also) started a strategic partnership and joint development with… Cohere to provide generative AI to businesses… (and) improve the reliability of LLMs themselves. Cohere’s LLM provides a clear and reliable data set for creating LLMs. This allows us to provide more accurate answers. Second, we can minimize hallucinations in customer operations by refining customer operations based on Takane, Fujitsu’s Japanese-language LLM.”
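The human-in-the-loop process Fujitsu describes – human judgment applied to AI inputs and outputs – amounts to a review gate in front of every answer. The sketch below is an assumption-laden illustration (the `Draft` type, the confidence score and the 0.8 threshold are invented, not the Kozuchi platform's API):

```python
# Hedged sketch of a human-in-the-loop gate for LLM answers.
# The Draft type, confidence field and threshold are illustrative.

from dataclasses import dataclass, field

@dataclass
class Draft:
    answer: str
    confidence: float                       # scored confidence in [0, 1]
    cited_sources: list = field(default_factory=list)

def needs_human_review(draft: Draft, threshold: float = 0.8) -> bool:
    """Escalate low-confidence or unsourced answers to a human reviewer."""
    return draft.confidence < threshold or not draft.cited_sources

ok = Draft("Tolerance is ±0.5 Nm.", 0.93, ["maintenance_manual.pdf"])
risky = Draft("Probably around 1 Nm.", 0.41, [])

print(needs_human_review(ok))     # False: passes straight through
print(needs_human_review(risky))  # True: routed to a human
```

Requiring a cited source for auto-approval is one simple way to operationalize the “AI must explain itself” principle: an answer with no grounding document is treated as a potential hallucination regardless of its confidence score.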
Make of it what you will, but the general logic seems clear. So how should Industry 4.0 source and process domain-specific models to train its generative AI tools? Fujitsu responded: “The trend toward collecting data and building and refining models in collaboration with customers will continue. (But) collecting data within a company has its limits. By collaborating with many companies, we can collect data across industries and we anticipate a future where the value of generative AI will grow faster than ever before.” The point here is that companies cannot train LLMs alone, and Fujitsu has been doing this for as long as generative AI has existed – on abundant complementary datasets.
It can also integrate company-specific data, and the RAG bridge between all these data sources should make the recommendation process smoother still. “Fujitsu has accumulated knowledge while promoting company-specific LLMs and will continue to offer the most suitable datasets for customers’ operations, including consulting services. It is also promoting the development of generative AI fusion technology that combines existing machine learning models. Rather than simply creating LLMs, this approach aims to create the LLMs best suited to customers’ needs by combining different existing LLMs.”
And finally, what steps should Industry 4.0 take to harness generative AI? Fujitsu outlined three, which are “not very different” across companies and industries. “First, standardize; standardize operations and standardize data within those operations. Second, introduce business metrics; companies need to identify not only the efficiency increase (they want to achieve with) generative AI, but also how it (will) contribute to business growth and value. Third, start small; companies need to create AI introduction roadmaps based on a usage hypothesis, and start small but also fast to move things forward.”