The Future of Remembering, Article II: Building Human AI

May 10, 2024

This is the second piece in a four-part series from the Personal AI (personal.ai) team. The Personal AI team is enabling individuals like you to create a Personal AI that always remembers, so you never forget. Just how the team is making that happen will be explored through four distinct lenses: Scientific (article here), Technological, Design, and Entrepreneurial. This second article, from Sharon Zhang, Chief Technology Officer at personal.ai and AI expert, focuses on personal.ai's approach to solving the problem of memory retention and recall by building your private Memory Stack to retain and your personal Prime AI to recall.

A diagram showing the progression from AI models built on public data to a personal AI built on an individual's data.

At our startup personal.ai, we are driven by the principles of Personal AI: private, authentic, and personal. These principles permeate how we build our technology. In practice, they push us to make technology choices that do right by each individual, even when that means breaking the common norms of how AI is built today. Here's my view on our AI journey.

The past decade has witnessed the rise of AI

Breakthroughs in data, algorithms, and infrastructure drove the evolution of AI, pushing the boundaries of both performance and accuracy and carrying it from academia into industry applications.

  • For Computer Vision, the seminal moment came in 2012, when AlexNet outperformed the runner-up by roughly 10 percentage points on the ImageNet corpus using a combination of deep learning techniques. By 2015, the accuracy of facial recognition reached 95%, and products such as Google Photos began to bring its capabilities to consumers.
  • For Natural Language Processing, in 2018 BERT achieved state-of-the-art performance on a diverse set of language tasks on the GLUE benchmark using transfer learning and attention techniques, with models pre-trained on English Wikipedia and the BooksCorpus. Since then, performance on language tasks from Speech Recognition to Machine Translation has been nearing human level.
  • More recently, AlphaZero and AlphaFold revolutionized our perception of what AI can accomplish: AlphaZero derived novel strategies by playing against itself and reached world-champion strength, and AlphaFold solved the computationally intensive and complex problem of protein folding at superhuman speed.

The past decade witnessed the increased adoption of AI

Driven by both the availability of GPUs and the open source community, tech giants, enterprises, startups, and practitioners have enabled the democratization of AI.

  • GPU computing performance has followed the trend of Moore's Law, increasing by roughly 1.5x per year over the past decade while cost dropped by a factor of 10. The growing availability of cloud infrastructure enabled a rapidly expanding number of startups to take advantage of GPUs, applying deep learning techniques to a wide range of applications.
  • Natively grown open source communities such as Hugging Face gained tremendous traction by fostering an AI ecosystem that facilitates the sharing of models, algorithms, and pipelines. Tech giants are also actively giving back to the community by open sourcing their deep learning frameworks, such as PyTorch (Facebook), and pre-trained models, such as T5 (Google).

This decade, the rise and adoption of AI to benefit consumers is inevitable.

At Personal.ai, we face three challenges that are unconventional for the AI industry:

  1. AI is private, no data sharing. We continuously gather user feedback to measure and improve model accuracy for the specific individual.
  2. AI for the individual, less data per user. Our focus is to build user specific models that learn from one individual’s thoughts instead of aggregated data from many individuals.
  3. AI is personal, utility is for the individual. We create user pods, an isolated infrastructure to make user specific models more effective to build and manage.

AI is private, no data sharing.

Memories are private, and so is each user's Memory Stack. Our core principle is that our users own and control their data; this data is never shared across users. The feedback users give is used to learn and adapt to their individual goals.

  • Both implicit feedback (e.g. accepting the recall output) and explicit feedback (e.g. correcting a predicted output) are built into our data structure and serve as ground truth to assess and improve model performance. Collecting feedback at the user level allows us to measure model effectiveness and detect model drift for each individual user.
  • Active learning techniques reduce the number of actions users need to take to achieve greater accuracy. We do that by asking users for feedback on the subset of predictions that would help the model the most; for example, we actively surface concepts with low confidence scores for the user to correct or update (see the sketch after the diagram below). The model output and the user input therefore work in tandem to improve the algorithm.
A diagram of hybrid feedback learning, with user and system outputs informing each other as they evolve.
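To make the active learning step concrete, here is a minimal Python sketch of uncertainty sampling: score a batch of predicted concepts by model confidence, surface the least confident ones for the user to confirm or correct, and turn the responses into ground-truth examples. The field names and thresholds are illustrative assumptions, not our production schema.

    from dataclasses import dataclass

    @dataclass
    class Prediction:
        memory_id: str     # the memory block the concept was extracted from
        concept: str       # the model's predicted concept
        confidence: float  # model confidence in [0, 1]

    def select_for_review(predictions, budget=5, threshold=0.6):
        """Uncertainty sampling: surface the lowest-confidence predictions
        (below a threshold) for the user to confirm or correct."""
        uncertain = [p for p in predictions if p.confidence < threshold]
        uncertain.sort(key=lambda p: p.confidence)
        return uncertain[:budget]

    def apply_feedback(prediction, correction=None):
        """Turn a confirmation (implicit feedback) or a correction
        (explicit feedback) into a ground-truth training example."""
        return {
            "memory_id": prediction.memory_id,
            "concept": correction or prediction.concept,
            "feedback": "explicit" if correction else "implicit",
        }

    # Example: only the two uncertain predictions are surfaced to the user.
    preds = [
        Prediction("m1", "quarterly review", 0.92),
        Prediction("m2", "project kickoff", 0.41),
        Prediction("m3", "team offsite", 0.55),
    ]
    for p in select_for_review(preds):
        print(f"Confirm or correct: '{p.concept}' ({p.confidence:.0%} confident)")

The point of the selection step is that the user is only asked about predictions where their answer carries the most information for the model, keeping the feedback burden low.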

AI for the individual, less data per user.

Our vision is to build a personal AI for each individual. Therefore our focus is not to create generic models from the internet of data, but to create user specific models from each individual's intranet of data.

  • Domain adapted models have been shown to outperform generic models, particularly for language applications. Each individual possesses a unique language pattern, and user adapted modeling tailors to the vocabulary and style of that specific individual: on the input level, adapting generic speech models with user specific terminology improves the accuracy of ASR; on the output level, generation follows a written style and voice consistent with the user's language model.
  • Advances in pre-training techniques such as self-supervised learning and transfer learning have decreased the need for large amounts of pre-collected user data by learning representations without labeled data. We create transformers using self-supervised learning on each user's data, applying encoder-decoder architectures both for learning user specific data representations and for controlling the generation of text based on user concepts (see the sketch after the diagram below).
A diagram contrasting aggregated data for the internet and domains of the world with individual data for the intranet of a single human at Luther.
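As a rough illustration of the self-supervised, user-adapted modeling described above, the sketch below (assuming PyTorch and the Hugging Face transformers library, with distilgpt2 standing in for whatever base model is actually used) continues pre-training a small generic language model on a handful of a user's own sentences with a next-token prediction objective, so no manual labeling of the user's data is required.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # A few sentences standing in for text a user has chosen to contribute.
    user_memories = [
        "Met with Priya about the Q3 roadmap; we agreed to ship pods first.",
        "Reminder: the reading group on protein folding moved to Thursdays.",
    ]

    tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
    model = AutoModelForCausalLM.from_pretrained("distilgpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

    batch = tokenizer(user_memories, return_tensors="pt",
                      padding=True, truncation=True, max_length=128)
    labels = batch["input_ids"].clone()
    labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model.train()
    for epoch in range(3):  # a few passes over the user's small corpus
        optimizer.zero_grad()
        # Self-supervised objective: predict each next token of the user's text.
        loss = model(**batch, labels=labels).loss
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.3f}")

Because the text itself supplies the training signal, even a small per-user corpus can nudge a pre-trained model toward that individual's vocabulary and style.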

AI is personal, utility is for the individual.

In support of both our principle of preserving user data privacy and our vision of building each individual's personal AI, our infrastructure is centered on user specific pods. Each user pod can be thought of as a personal computer running in the cloud with the user's own data and models.

  • The wide adoption of container technology and GPU-enabled resources on cloud platforms supports the variable load and scaling of models at inference time. We encapsulate each transformer model in its own container, which lets us rapidly develop and deploy models and gives our users the flexibility to choose a custom set of transformers useful to them (see the sketch after the diagram below).
  • Encapsulating each user's data and models in a user pod also serves our Privacy First principle. Users' personal pods guarantee the separation and security of data by leveraging Oasis Labs' blockchain technology for encrypted data storage and secure model execution within a TEE (trusted execution environment).
A diagram of the transformer architecture, with standard inputs and outputs encapsulating the AI models.
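The per-user-pod idea can be sketched as follows, using the Docker SDK for Python as one possible substrate; the image names, volume paths, and environment variables are illustrative placeholders, not our actual deployment. Each user gets their own data volume and one container per transformer model they have chosen, so nothing is pooled across users.

    import docker

    client = docker.from_env()

    def launch_user_pod(user_id, model_names):
        """Start one container per selected transformer model, all sharing a
        per-user data volume so data and models stay isolated per user."""
        volume_name = f"memory-stack-{user_id}"   # per-user data volume
        client.volumes.create(name=volume_name)
        containers = []
        for model_name in model_names:
            container = client.containers.run(
                image=f"registry.example.com/models/{model_name}:latest",
                name=f"{user_id}-{model_name}",
                environment={"USER_ID": user_id, "MODEL": model_name},
                volumes={volume_name: {"bind": "/data", "mode": "rw"}},
                detach=True,
            )
            containers.append(container)
        return containers

    # Example: one user's pod with a recall model and a style/generation model.
    launch_user_pod("user-42", ["recall-encoder", "style-decoder"])

Because each model runs behind a standard container interface, adding or removing a transformer for one user never touches another user's pod.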

We believe that a personal AI for every individual is inevitable. As the application of AI has moved from solving macro problems (e.g. indexing the whole internet) to micro problems (e.g. personalized recommendations in news feeds), the convergence of user centric principles and technological trends sets the stage for us to create the next generation of AI: maximizing individual value with individual models built on individual data.
