On-Device Artificial Intelligence: A Game Changer
Artificial Intelligence has changed our lives in sweeping and subtle ways. It has automated and enhanced business processes, customer experiences, healthcare, and activities of daily living. But cloud-based AI has its limitations: it consumes vast amounts of data and computing resources, it relies on connectivity, which creates real-time delays and bottlenecks, and it is too expensive for many emerging economies. With the launch of new AI chips like Google’s Tensor SoC (System on a Chip), the AI paradigm is about to change.
An Edge Computing Revolution
In October 2021, Google launched its new AI chip, the Tensor SoC, which embeds Google’s powerful Tensor Processing Unit (TPU) in its Pixel 6 and Pixel 6 Pro smartphones. While Google’s TensorFlow AI platform makes artificial intelligence and machine learning available as a web-based self-service tool, the Tensor SoC houses AI and ML capabilities directly in the handheld device, eliminating the need for continuous communication with the cloud. AI accelerators like GPUs (Graphics Processing Units, developed for gaming but also useful for AI) and NPUs (Neural Processing Units, designed for AI computations) have already moved some AI functions off the cloud and onto edge computing servers. Now AI is moving even closer to its sources of data: directly into personal devices.
Mobile Applications for AI
AI chips offer advanced AI applications for everyday smartphone use. While these AI features provide significant enhancements to our daily experiences and tasks, they also open up next-level use cases for a variety of industries.
Here are some examples of AI-based smartphone functions, all available on the Pixel 6:
- Real-time voice transcription and language translation. Speech can be transcribed and translated into different languages directly in a chat window. The user speaks into the phone in one language, and the text emerges in another language in the chat box. The same occurs for incoming messages, allowing two speakers of different languages to converse in real time. This feature can also be used to translate other media, such as videos, directly.
- Smarter voice assistants. Automated voice assistants can answer and conduct calls on behalf of the user. The AI assistant will speak to callers directly and provide a transcription of the conversation.
- Computational photography and videography. An AI feature called Face Unblur uses computer vision to detect faces before a photo is taken. It then captures images from two different cameras and uses AI to combine them, so that the final product is a motion shot taken from one camera with a sharp facial image taken from the other.
Advantages of On-Device AI
AI-on-a-chip, when fully contained in a device, delivers AI power to a new world of users and use cases. Here are some key benefits of offline, on-device AI:
- Increased speed and lower latency. Because on-device AI eliminates the need for back-and-forth communication with the cloud, AI functions like speech processing and autonomous vehicle navigation can be performed smoothly in real time, without communication- or connectivity-related lags. For example, users can access language translation functions directly, without entering data into separate web-based apps.
- Increased data security and privacy. Since neural networks are embedded in devices, there is no need for sensitive data to be sent to the cloud for processing, meaning personal data stays on personal devices. Apple and Snapchat have successfully used on-device AI for facial recognition functions, while Facebook used cloud-based AI, which exposed users’ personal data to potential hacking and misuse.
- Increased accessibility. With AI and ML functions available fully offline, these functions can now be accessed from anywhere, at any time. AI-enabled diagnostics and healthcare can be utilized regardless of internet connectivity, and users in rural or remote areas can still access AI-powered apps.
- Lower costs. Businesses can save on data-processing and bandwidth costs by running on-device ML rather than using cloud processing. Cloud access is still required for training the base models, but the fine-tuning and inference (running the already-trained model on new data) are done on-device, without the need for web access or servers.
- Reduced power consumption and prolonged battery life. Previously, AI functions drained battery power and slowed down other applications. Next-generation AI chips have been designed to use dramatically less power (often cutting power usage in half), and can even work while the device is asleep. This preserves the speed of other functions and maintains battery life.
- Personalized, customized AI models. AI chips come with pre-trained, data-rich models. With on-device AI, the models get fine-tuned based on user inputs, and are thereby optimized for individual users. Devices periodically link to the cloud to receive global updates. They can also upload local updates, based on users’ ground truth data, to the cloud to improve the global model. This may be done via a secure process called federated learning, which shares only processed data (not raw, personal data), and therefore maintains user privacy and security.
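The fine-tuning and federated learning loop described above can be sketched in a few lines. This is a minimal, illustrative sketch in plain Python: the weight lists, learning rate, and gradients are made-up stand-ins, and a production system would use a purpose-built framework such as TensorFlow Federated rather than hand-rolled averaging.

```python
# Minimal sketch of federated averaging: devices fine-tune a shared model
# locally and upload only weight updates, never their raw personal data.

def local_update(global_weights, local_gradient, lr=0.1):
    """Each device fine-tunes the shared model on its own data and
    returns only the updated weights."""
    return [w - lr * g for w, g in zip(global_weights, local_gradient)]

def federated_average(device_weights):
    """The server averages updates from all devices to produce the
    next global model."""
    n = len(device_weights)
    return [sum(ws) / n for ws in zip(*device_weights)]

# Three devices start from the same global model...
global_model = [0.5, -0.2, 1.0]
# ...and each computes a different local gradient from its private data.
gradients = [[0.1, 0.0, -0.2], [0.3, -0.1, 0.0], [0.2, 0.1, 0.2]]

updates = [local_update(global_model, g) for g in gradients]
global_model = federated_average(updates)
print(global_model)
```

Only `updates` (processed weights) ever leaves the devices in this scheme; the raw inputs that produced each gradient stay local, which is the privacy property the bullet above describes.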
Next-Level Use Cases for Offline AI
In the broader landscape, these locally housed AI and ML capabilities can have even more impact. They can streamline and automate small-business operations, refine human-machine interactions, increase physical safety and data security, and bring AI to underserved communities.
For example:
- Autonomous vehicles could process machine vision data internally, without the increased risks related to latency and communication interruptions.
- Security systems could use computer vision and facial recognition to keep premises safe without sending any personal data to the web.
- Smaller-scale human-machine interfaces like fast food drive-through windows could be customized for regional dialects and accents, making the ordering experience smoother for customers.
- Smart wearables and home products could facilitate activities of daily living for users with physical and other challenges.
- AI-driven medical diagnostics and other healthcare applications could work quickly and seamlessly without the need for continuous internet access.
- Retail businesses could provide more immersive online shopping experiences using virtual and augmented reality features.
- Smart factories and machines could increase industrial safety and productivity.
- Communication opportunities and AI-powered learning could be made available to developing and underserved communities all over the world. Recent advances in AI and ML model training, such as transfer learning and PARP (Prune, Adjust, and Re-Prune), may soon make it possible to preserve rare and obscure languages and deliver AI capabilities to their speakers. In transfer learning, a pre-trained model is adapted to special use cases using small amounts of additional, specialized data; in PARP, pre-trained language models are strategically pruned down to manageable sizes and then fine-tuned for rare languages.
- AI and ML can also expand communication opportunities for users with limited literacy and education (who often speak rare languages and dialects) by enabling them to navigate written text and chats solely through voice commands.
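The pruning-then-fine-tuning idea behind techniques like PARP can be illustrated with a short sketch: shrink a large pre-trained model by zeroing its smallest weights, then adapt the surviving weights on a small amount of specialized data. The weights, gradient, and sparsity level below are invented for illustration and are not taken from PARP itself, which operates on full speech models rather than toy lists.

```python
# Illustrative sketch of magnitude pruning followed by fine-tuning.

def prune_by_magnitude(weights, sparsity):
    """Zero out the `sparsity` fraction of smallest-magnitude weights,
    shrinking the effective model for on-device use."""
    k = int(len(weights) * sparsity)
    drop = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

def fine_tune(weights, gradient, lr=0.1):
    """Fine-tune the surviving weights on specialized (e.g. rare-language)
    data; pruned positions stay at zero so the model stays small."""
    return [0.0 if w == 0.0 else w - lr * g for w, g in zip(weights, gradient)]

pretrained = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
pruned = prune_by_magnitude(pretrained, sparsity=0.5)  # keep the 3 largest
adapted = fine_tune(pruned, gradient=[0.1, 0.0, -0.2, 0.0, 0.3, 0.0])
print(pruned)
print(adapted)
```

The key point for the use cases above is that the pruned model needs far less memory and compute, and the fine-tuning step needs only a small specialized dataset, which is exactly what makes rare-language support on modest devices plausible.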
Implications of Moving AI Offline and Local
Mobile phones have become a ubiquitous feature of global life, and artificial intelligence has infiltrated this massive market. From here, on-device AI will soon expand into business and industry. As AI models become more local and specialized, the need for customized, smaller-scale datasets will increase dramatically. Region-specific, dialect-specific, industry-specific, and business-specific data will be required to train and refine local AI and ML models. Data from a variety of settings, in a variety of languages, will need to be collected and annotated for the models to function optimally. Companies that provide customized data annotation solutions and SaaS data annotation platforms enabling annotation at these smaller scales will help fuel the shift from cloud-based AI to on-device AI. Forward-thinking businesses and individuals who seek to lead this AI revolution should start preparing now.