Spotlight Interview With Dr. Walden "Wally" Rhines
Dr. Walden "Wally" Rhines, President & CEO of Cornami and AI Hardware & Edge AI Summit's 2024 speaker, recently answered our burning questions including...
Given the advancements in both AI and quantum computing, what do you see as the most critical challenge facing the AI hardware industry in terms of ensuring long-term data security?
- "Without a doubt, the ability to protect proprietary information is the primary limitation to the use and deployment of large language models and generative AI. For users of LLMs, there is a critical need to be able to interact with the model with encrypted queries and responses. For creators of models that incorporate fine-tuning data, there is a need to encrypt the fine-tuning data portion of the model so that plain text proprietary data is never exposed. Both of these capabilities are becoming available now."
How do you think this will shape the development of AI infrastructure over the next decade?
- "It will cause revolutionary change. Instead of concealing useful models within a corporate firewall, model owners will now be able to securely host their models in the cloud, share them with partners or customers and even offer them to unknown users for a fee. Sharing of the intelligence that is embodied in LLMs will accelerate innovation as the cumulative available insights are securely available to parties who can use them."
How do you see the balance between security and efficiency evolving in AI systems?
- "Secure sharing of generative AI models upsets the balance. Today, generative AI is growing rapidly but the proprietary information and insights are restricted to a limited group of people who can be trusted behind a firewall. As viable post-quantum encryption becomes available for the proprietary information, the whole information industry changes and accelerates. Information and model sharing can become an enormous business that will create new opportunities, new businesses and accelerated innovation."
Why can't we just use currently available forms of encryption and existing data-security techniques to protect the information in LLMs?
- "Even if data is encrypted, it must currently be DECRYPTED in order to service queries or perform computation. If this is done in an open environment like the cloud, the data will be exposed to hackers or other theft. Even the most secure data centers can be hacked. The only real solution is to keep the data and models encrypted at all times in a post-quantum secure form of unbreakable encryption. Today, that means fully homomorphic encryption, or FHE. FHE technology has evolved slowly because it is so computationally complex and requires a next generation of computer architectures. Those computer architectures are now being revealed and practical FHE is coming to market this year."
Do we really need the extreme level of security provided by FHE? After all, we've been living with lesser forms of security for a long time.
- "Generative AI requires a new level of security. Data that is incorporated in LLMs now includes the most valuable assets of corporations – all of their corporate know how is built into these models. Any leak threatens the competitive advantages and even the survival of companies. This information must be protected in a highly secure form of encryption at all times. In addition, many companies forbid their employees to use publicly available LLMs because of the proprietary information that the models will collect. Unless we have a secure way to query these models, much of the benefit of generative AI will be lost."
Want to hear more from Wally? Register here for your ticket to the AI Hardware & Edge AI Summit