At ISE 2024, visitors to Intuiface Booth 6K820 will be the first to see two significant product developments that take advantage of the AI revolution.
First, any Intuiface customer can have their Intuiface experiences interact with the OpenAI models GPT-4, DALL-E 3, and Whisper. This means predefined and user-generated prompts can be submitted to the latest and most popular large language models (LLMs) and related generative models, with responses displayed in real time. The result is a multi-modal digital experience, powered by OpenAI, giving signage content providers enormous power for communicating with and engaging modern audiences.
The GPT-4 integration processes prompts of any length and returns responses for immediate onscreen display. Intuiface experience designers can modify the response before display or allow a multi-prompt conversation to refine it. An example demonstrated at ISE is an intelligent wayfinder that processes user questions spoken into an integrated microphone (transcribed by Whisper) and determines the appropriate museum location to visit.
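For readers curious about the mechanics, the sketch below shows the kind of OpenAI chat-completion call such an integration rests on. Intuiface exposes this as a no-code capability; this is only an illustrative sketch using the official openai Node.js SDK, and the wayfinder system prompt is a hypothetical example, not Intuiface's actual implementation.

```typescript
import OpenAI from "openai";

// Assumes OPENAI_API_KEY is set in the environment.
const openai = new OpenAI();

// A conversation is an array of messages; appending follow-up user
// messages enables the multi-prompt refinement described above.
const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    // Hypothetical system prompt for a museum wayfinder demo.
    {
      role: "system",
      content:
        "You are a museum wayfinder. Map each visitor question to one of: " +
        "Egyptian Wing, Impressionist Gallery, Dinosaur Hall.",
    },
    { role: "user", content: "Where can I see paintings of water lilies?" },
  ],
});

// The response text is what an experience would render onscreen.
console.log(completion.choices[0].message.content);
```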
With Vision, an extension of GPT-4, users can select an image, such as a snapshot taken with an integrated camera or an image generated by DALL-E, and then ask questions about its content or have DALL-E modify it. For example, Vision and DALL-E could be combined to create a playful photobooth experience.
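A minimal sketch of the underlying Vision call follows, again using the openai Node.js SDK; the image URL and question are hypothetical, and the image could equally be supplied as a base64 data URL captured from the kiosk camera.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Hypothetical snapshot; a "data:image/jpeg;base64,..." string also works.
const imageUrl = "https://example.com/photobooth-snapshot.jpg";

// Ask GPT-4 with Vision a question about the selected image.
const response = await openai.chat.completions.create({
  model: "gpt-4-vision-preview",
  max_tokens: 300,
  messages: [
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "Describe this photo so DALL-E can restyle it as a cartoon.",
        },
        { type: "image_url", image_url: { url: imageUrl } },
      ],
    },
  ],
});

console.log(response.choices[0].message.content);
```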
With DALL-E, custom prompts, whether predefined by experience designers or specified by users in real time, generate the requested images for optional onscreen display. Examples include the creation of contextually meaningful background images or avatars.
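The image-generation call itself is a single request; the sketch below shows it with the openai Node.js SDK. The prompt is a hypothetical example of a designer-predefined template.

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Generate a contextually meaningful background image from a prompt.
const result = await openai.images.generate({
  model: "dall-e-3",
  prompt: "A calm, abstract background in a museum's brand colors, no text",
  n: 1,
  size: "1024x1024",
});

// The returned URL (or base64 payload, if requested) can then be
// displayed inside the experience.
console.log(result.data[0].url);
```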
All of these custom prompts can be created ahead of time or modified in real time to accommodate the environmental variations and restrictions of each deployment. Intuiface's OpenAI Whisper speech transcription support makes it possible to collect user prompts via an integrated microphone. In all cases, user-generated prompts can be pre-checked by a "hidden" GPT-4 prompt to ensure no inappropriate content has been requested.
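The two-step flow, Whisper transcription followed by a hidden pre-check, might look like the sketch below. The audio filename and the moderation prompt are hypothetical stand-ins; Intuiface's own wiring is not shown here.

```typescript
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI();

// 1. Transcribe microphone audio with Whisper. "recording.wav" stands in
//    for the capture produced by the kiosk's integrated microphone.
const transcription = await openai.audio.transcriptions.create({
  file: fs.createReadStream("recording.wav"),
  model: "whisper-1",
});

// 2. Hidden pre-check: ask GPT-4 whether the transcribed prompt is
//    appropriate before it is shown or acted on. The check prompt below
//    is a hypothetical example.
const check = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    {
      role: "system",
      content:
        "Answer only YES or NO: is the following kiosk prompt " +
        "appropriate for a general audience?",
    },
    { role: "user", content: transcription.text },
  ],
});

const approved = check.choices[0].message.content?.trim().startsWith("YES");
console.log(approved ? transcription.text : "Prompt rejected.");
```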
Intuiface has trained the Intuiface Coding Assistant GPT to understand the entirety of Intuiface's TypeScript-based Interface Asset libraries and associated Component Development Kit (CDK). Natural language inputs to the Intuiface Coding Assistant generate Interface Assets (IAs) ready for use in any Intuiface experience. These IAs could range from processing input (such as converting EUR to USD using the day's exchange rates) to integration with third-party cloud-hosted services and device peripherals. All would be accessible to non-developers and usable in any experience.
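To give a flavor of the kind of TypeScript an IA wraps, here is a plain-TypeScript sketch of the EUR-to-USD example. The CDK-specific wiring (decorators, metadata, registration) is deliberately omitted because it is Intuiface-specific, and the exchange-rate endpoint is a hypothetical placeholder.

```typescript
// Plain TypeScript core of a currency-conversion Interface Asset.
// The rates API URL below is a hypothetical placeholder.
export class CurrencyConverter {
  private rate = 1.08; // fallback EUR->USD rate if the fetch fails

  // Refresh the day's exchange rate from a (hypothetical) rates API.
  async refreshRate(): Promise<void> {
    const res = await fetch(
      "https://api.example.com/rates?base=EUR&symbols=USD"
    );
    if (res.ok) {
      const body = (await res.json()) as { rates: { USD: number } };
      this.rate = body.rates.USD;
    }
  }

  // Convert an amount in EUR to USD using the cached rate.
  convert(amountEur: number): number {
    return amountEur * this.rate;
  }
}
```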
Use of Intuiface's support for OpenAI models requires an OpenAI account with an API key and available tokens for prompt processing and image generation. Use of the Intuiface Coding Assistant GPT requires a ChatGPT Plus subscription. The GPT-4, DALL-E, and Whisper integrations are publicly available today. Vision will be available in the next month. The Intuiface Coding Assistant GPT is now available in the GPT Store.
To try the OpenAI model integrations, you need a Trial or paid Intuiface account. All accounts can use these integrations across Intuiface's wide range of supported platforms, including the web. To start your trial, visit https://my.intuiface.com/register.aspx.