According to Rob Girling, “In the next 10 years, all visual design jobs will be augmented with algorithmic visual approaches.”
Modern design tools should make it easy to incorporate external data. From weather forecasts and currency conversion rates to NASA photos, Rijksmuseum art collections, Twitter feeds, Spotify albums, and proprietary in-house systems, a flood of useful content and services sits in the cloud - aka Web Services - waiting for designers to leverage its value. Designing with real data means we can move faster, surface problems and additional constraints sooner, and ultimately create better experiences for our users.
With interactive machine learning, a person's technical skill shouldn't matter; incorporating external data should be as simple as pasting a URL or dragging and dropping a file. Unfortunately, the language spoken by each of these Web Services – their Application Programming Interface, or API – requires technical skill to understand and use. The resulting “Great API Wall” could be scaled only by software developers, creating an unfair advantage for a privileged few and limiting the full potential of this available content and information.
API Explorer takes a Web Service API's request URL as input and generates a non-technical, user-friendly display of the usually esoteric response it delivers. In parallel, API Explorer automatically preselects the returned API properties (the content and services located in the cloud) it considers most likely to be useful in an interactive experience, such as images, videos, websites, and text.
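To make the idea concrete, here is a minimal Python sketch of how such a preselection pass might work. It is illustrative only, not Intuiface's actual implementation; the function names and file-extension heuristics are our own assumptions.

```python
import json
import urllib.request

# Hypothetical heuristics: value patterns suggesting media or link content
# likely to be useful in an interactive experience.
IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png", ".gif")
VIDEO_EXTENSIONS = (".mp4", ".webm", ".mov")

def classify_property(value):
    """Guess whether a returned API property is an image, video, website, or text."""
    if isinstance(value, str):
        lowered = value.lower()
        if lowered.endswith(IMAGE_EXTENSIONS):
            return "image"
        if lowered.endswith(VIDEO_EXTENSIONS):
            return "video"
        if lowered.startswith(("http://", "https://")):
            return "website"
        return "text"
    return "other"

def explore(request_url):
    """Fetch an API response and flag the properties most likely to be useful."""
    with urllib.request.urlopen(request_url) as response:
        payload = json.loads(response.read())
    if isinstance(payload, list) and payload:  # some APIs return a top-level array
        payload = payload[0]
    # Classify one level only for brevity; a real explorer would walk nested objects.
    return {name: classify_property(value) for name, value in payload.items()}
```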
All of this is facilitated by a machine learning engine within API Explorer, which gets “smarter” by observing users worldwide as they identify their preferred API properties.
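A toy version of that feedback loop could be a shared keep-rate tally per property type, updated every time any user confirms or rejects a preselected property. Everything below (the class name, the threshold, the optimistic default) is a hypothetical simplification of a real learning engine.

```python
from collections import defaultdict

class PropertyPreferenceModel:
    """Toy incremental model: tally how often users keep or discard each
    property type, and preselect types whose keep-rate clears a threshold."""

    def __init__(self, threshold=0.5):
        self.kept = defaultdict(int)
        self.seen = defaultdict(int)
        self.threshold = threshold

    def observe(self, property_type, was_kept):
        # Called whenever a user confirms or rejects a preselected property.
        self.seen[property_type] += 1
        if was_kept:
            self.kept[property_type] += 1

    def should_preselect(self, property_type):
        if self.seen[property_type] == 0:
            return True  # no data yet: preselect optimistically
        return self.kept[property_type] / self.seen[property_type] >= self.threshold
```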
What is the optimal way to display content? Even a seasoned designer can struggle beneath the weight of countless options. Design assistance should reduce the noise and highlight options well-suited to the data and personalized to each user's historical preferences, avoiding a ‘kitchen sink’ approach.
Key to this assistance is avoiding low-level features - an endless array of tweakable properties that provide no direction, only mind-numbing variations on a theme. The assistance should present workable, “final” options, simplifying adoption and avoiding long learning curves.
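One way to picture that assistance is ranking a small library of finished templates by how well each fits the selected property types, with the user's own history as a tiebreaker. The template names and scoring weights below are illustrative assumptions, not a description of Intuiface's actual ranking.

```python
def rank_templates(templates, selected_types, user_history):
    """Score finished layout templates by fit to the selected property types,
    personalized by the user's past choices, best first.

    `templates` maps template name -> set of property types it displays well;
    `user_history` maps template name -> how often this user has chosen it.
    """
    def score(name):
        fit = len(templates[name] & set(selected_types)) / max(len(selected_types), 1)
        familiarity = user_history.get(name, 0)
        return fit + 0.1 * familiarity  # fit dominates; history breaks ties

    return sorted(templates, key=score, reverse=True)

# Usage: offer only a few complete layouts instead of raw knobs.
templates = {
    "photo grid": {"image"},
    "media carousel": {"image", "video"},
    "article list": {"text", "website"},
}
best = rank_templates(templates, ["image", "text"], {"photo grid": 3})
```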
Thanks to adaptive machine learning, a visual layout bound to the properties selected in API Explorer is “intelligently” produced in the targeted interactive experience. The overall layout can be rearranged as needed and combined with all other Intuiface design capabilities to produce modern, engaging multi-touch experiences for Windows, iPad, Android, Chrome OS, and Samsung SSP devices. At runtime, the visual display always remains up to date, refreshing information acquired via the API on the fly.
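A rough sketch of that runtime refresh, assuming a hypothetical `on_data` callback standing in for the real binding layer between the API and the layout:

```python
import json
import threading
import urllib.request

def bind_and_refresh(request_url, on_data, interval_seconds=60):
    """Poll the API and hand each fresh payload to the layout's binding callback,
    so the visual display stays current without user intervention."""
    def refresh():
        with urllib.request.urlopen(request_url) as response:
            on_data(json.loads(response.read()))
        # Schedule the next poll; a real runtime would also handle errors and backoff.
        threading.Timer(interval_seconds, refresh).start()
    refresh()
```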
Throughout this entire process, the Intuiface user doesn't write a single line of code, yet gains access to thousands of available Web Services at no additional cost.
We’ve only just begun. Here are our guideposts: