Building a Unified Long Term Memory

January 24, 2024

As key players in the technology landscape race towards Artificial General Intelligence (AGI), Personal AI is committed to empowering users to train their own personalized models. These models let users encapsulate their business acumen, expertise, personality, and essence. However, individuals rarely have the capacity to gather the high-quality data that effective training requires.

The Problem of Fragmented Memory

We can keep simplifying and guiding the training process to improve the user's experience, but without a source of well-organized personal data, that process means nothing. Personal data online is a growing, living thing that often lies beyond our control: it's scattered across the web, spread across social media, cloud storage, and messaging applications.

Putting this vast array of knowledge to work for our users means having a centralized way to authenticate into third-party applications and collect the relevant data, all while adhering to our strict standards of privacy and security.

In our pursuit of a solution, we weighed trade-offs across three critical domains:

  • First, seamless integration into our current system, encompassing everything from authentication to data syncing for developers.
  • Second, minimal to no effort required on the part of our users.
  • Lastly, the reliability and resilience of the solution, which we considered paramount.

After rigorous testing and exploration of various integration providers, both open source and proprietary, we identified the best partner for our use case: Carbon.ai.

The image displays a diagram of the Unified Long Term Memory data system. Continuous data from social interactions and personal records, discrete uploads such as websites, files, and audio, and historical (archived) data all feed into the central system, consolidating diverse data types. Integrations with services such as Google Drive, OneDrive, and Gmail supply external data sources, and the Carbon logo identifies the platform connecting them.
Fragmented Data Feed into Long Term Memory

Why Personal AI x Carbon AI?

As an early actor in harnessing personal data for individuals, we found that most integration providers introduced too much friction in our workflows, delivered poor user experiences, or added unnecessary costs. Carbon arrived as a solution for bringing external data into groundbreaking AI models. With Carbon.ai came a smooth user authentication flow, automatic file synchronization, and support for every relevant file type, all wrapped in an intuitive developer ecosystem.

Privacy and Security

Security is at the forefront of every user's mind when it comes to trusting an application with their personal data. We take security very seriously, and with that comes a strict filter on which companies we work with. Their values, practices, and policies are all rigorously analyzed to ensure we deliver on our security and privacy promises. Carbon meets all of our standards for our users by delivering enterprise-grade privacy and security.

GDPR/SOC2 Compliant

  • Enforcing isolation to prevent unauthorized access
  • Implementing robust authentication and authorization controls
  • Employing encryption for data in transit and at rest
  • Conducting regular auditing and monitoring for detection of anomalies
  • Ensuring compliance with industry regulations (GDPR and SOC2)
  • Addressing data residency and jurisdiction concerns
  • Prioritizing timely security updates and patching

Seamless Authentication

Managing third-party authentication may seem like a simple task: have the user sign in, receive an authentication token, and use that token to call APIs. In reality, it feels like jumping through a million hoops just to land in a muddy puddle. Each provider has an entirely different authentication flow, its own developer requirements, API limits, sprawling documentation, refresh policies, and on and on.
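
To give a sense of the plumbing involved, here is a minimal TypeScript sketch of the kind of token-refresh helper a team ends up writing, and rewriting, for each provider. The endpoint URL, client credentials, and field handling follow the standard OAuth 2.0 refresh grant and are illustrative only, not any specific provider's API.

```typescript
// Minimal sketch of per-provider OAuth 2.0 token-refresh plumbing.
// The token URL and client credentials are illustrative placeholders.

interface TokenSet {
  accessToken: string;
  refreshToken: string;
  expiresAt: number; // Unix epoch milliseconds
}

async function refreshIfExpired(
  tokens: TokenSet,
  tokenUrl: string,
  clientId: string,
  clientSecret: string
): Promise<TokenSet> {
  // Refresh a minute early to avoid racing the expiry.
  if (Date.now() < tokens.expiresAt - 60_000) return tokens;

  const response = await fetch(tokenUrl, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: tokens.refreshToken,
      client_id: clientId,
      client_secret: clientSecret,
    }),
  });
  if (!response.ok) throw new Error(`Token refresh failed: ${response.status}`);

  const data = await response.json();
  return {
    accessToken: data.access_token,
    // Some providers rotate refresh tokens, others return the same one.
    refreshToken: data.refresh_token ?? tokens.refreshToken,
    expiresAt: Date.now() + data.expires_in * 1000,
  };
}
```

Multiply that by every provider's quirks around scopes, token rotation, and rate limits, and the maintenance burden grows quickly.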

Having Carbon.ai manage our third-party data integrations, such as Google Drive and OneDrive (live) and Gmail and Outlook (coming soon), means we don't have to handle redirects, refresh APIs and user tokens, or constantly devote resources to adding new applications and maintaining current ones. Startups should focus on their promised value proposition to the customer; in our case, that's building high-quality models that enhance our users' lives, not managing and maintaining access tokens across a million different data sources. Carbon takes this tedious task and transforms it into a seamless process, cutting the time to deliver a new integration from weeks to hours.

The image outlines a user authentication sequence involving three entities: the user, an authentication service, and an API. Initially, the user submits their credentials to the authentication service. Upon successful validation, the authentication service issues an authentication token to the user. Subsequently, the user makes an API call, incorporating the received token for verification. The API, in turn, checks the token's validity with the authentication service. Once the token is verified as authentic, the API completes the cycle by delivering the requested data back to the user. This diagram is indicative of secure authentication mechanisms like OAuth, which are commonly employed in web and mobile applications to safeguard user credentials and data.
Integration Authentication Flow
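
The same handshake can be sketched in a few lines of TypeScript. This is only an illustration of the flow in the diagram; the introspection endpoint URL and the response shape are assumptions, not a real service's API.

```typescript
// Sketch of the token-verification handshake shown in the diagram above.
// The auth-service URL and response fields are hypothetical.

interface VerificationResult {
  valid: boolean;
  userId?: string;
}

// The API asks the authentication service whether the presented token is genuine.
async function verifyWithAuthService(token: string): Promise<VerificationResult> {
  const response = await fetch("https://auth.example.com/introspect", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ token }),
  });
  return response.ok ? response.json() : { valid: false };
}

// The API only returns data once the token checks out.
async function handleApiRequest(token: string): Promise<unknown> {
  const result = await verifyWithAuthService(token);
  if (!result.valid) throw new Error("Unauthorized: token rejected by auth service");
  return { data: `requested resource for user ${result.userId}` };
}
```
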
Use Cases and Reliability

Consistent file synchronization is a must-have and a huge priority for our users. When their AI constantly learns from ever-evolving board reports, documentation, or emails, users don't have to lift a finger as their AI draws in information and grows alongside its owner. To implement consistent file updates ourselves, we would need to track each and every individual file, check whether it had been updated, and make the appropriate changes, all while juggling each integration's unique file structure, documentation, and API requirements.
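
For illustration, here is a minimal TypeScript sketch of the kind of polling loop we would otherwise maintain for every integration. The ProviderClient interface and the ingest callback are hypothetical names, not part of any provider's or Carbon's actual SDK.

```typescript
// Sketch of a per-integration sync loop maintained by hand.
// ProviderClient and ingest are hypothetical illustrations.

interface RemoteFile {
  id: string;
  name: string;
  modifiedAt: string; // ISO timestamp reported by the provider
}

interface ProviderClient {
  listFiles(): Promise<RemoteFile[]>;
  downloadFile(id: string): Promise<Uint8Array>;
}

// Track the last modification time we have ingested for each file.
const lastSynced = new Map<string, string>();

async function pollForUpdates(
  provider: ProviderClient,
  ingest: (file: RemoteFile, contents: Uint8Array) => Promise<void>
): Promise<void> {
  const files = await provider.listFiles();
  for (const file of files) {
    // Re-ingest only the files that changed since the previous pass.
    if (lastSynced.get(file.id) !== file.modifiedAt) {
      const contents = await provider.downloadFile(file.id);
      await ingest(file, contents);
      lastSynced.set(file.id, file.modifiedAt);
    }
  }
}
```

Repeat that for every provider's pagination, change-detection, and rate-limit rules, and the maintenance cost compounds with each new integration.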

Carbon is the SaaS service that abstracts away all the barriers of integrations and data syncing. We cannot say enough about how intuitive their system is; they've struck the right balance between simplification and keeping the process and data flexible enough to fit our use case. By eliminating the tedious work of authentication, data syncing, and file management, they've given us, Personal AI, the tools we need to catalyze the development of personal, individualized models.

The image is a screenshot of a user interface for configuring integrations between various applications and a personal AI's memory stack. Under "Memory Integrations," Google Drive, OneDrive, and Gmail are listed as connected, allowing them to save data to the AI's memory stack. There is an option to manage these connections. However, the Clipboard is shown as disconnected with an option to connect it. In a separate section titled "Message Integrations," the applications for daily messaging integration are listed. Facebook Messenger is connected and there's an option to manage it, while Instagram and Zapier are marked as disconnected with a note indicating "Coming soon," suggesting future functionality. These integrations imply that the AI can access and utilize data from these services to enhance its interactions with the user.
Memory and Message Integrations

With any new concept, there's always going to be a learning curve. In our case, it's teaching users to train their AI, prompt it for their use case, and grow with it. That curve should not involve erecting barriers to harnessing already-existing data. Fortunately, integrating third-party apps to train your AI is as simple as it comes: authenticate, select, and then watch your AI adapt to your data and material.

Your memory, your data, your AI. 

About Carbon.ai

Carbon is the fastest way to connect external data to LLMs, no matter the source. We’re purpose-built for multi-tenant use cases, and our software development kits (SDKs) simplify access controls, file synchronization, and third-party authentication, requiring minimal effort from developers. Our universal retrieval engine allows Large Language Models (LLMs) to search for relevant content across multimedia file formats, websites, and 15+ data sources, including Dropbox, Google Drive, OneDrive, Gmail, and Notion. Read more
