
What Is Salesforce AI?  

Published February 26, 2026 · Updated March 2, 2026

Updated from the original April 2024 article


Introduction to Artificial Intelligence

Artificial intelligence, or AI, is not just the rage anymore — it is the expectation. In late 2023, when this article was first published, generative AI was the headline. Consumers were engaged because, for the first time, it felt like the machines understood our normal speech patterns and could hold “real” conversations. That novelty drove the fastest consumer adoption of any new software technology in history. Developers also saw the potential of using these generative large language models (LLMs) in their applications and started to experiment — with mixed results. Less than two years later, with LLMs seemingly built into every software application, the conversation has shifted again. We are now in the age of agentic AI — autonomous systems that don’t just generate content but plan, reason, and execute tasks on their own. The machines aren’t just talking back. They’re doing the work. And, perhaps frighteningly to some, they are learning on their own as they go, with little guidance.

To understand where we are, it helps to know how we got here. What we now broadly call “AI” was built on decades of earlier forms of machine learning that the software industry has retroactively pulled under the AI umbrella. As always, the industry has latched on to the latest buzzword in order to better market its wares.

The reality is that AI has been evolving for more than 60 years. The field was effectively founded in 1956, when John McCarthy and Marvin Minsky organized the Dartmouth Conference, which gave artificial intelligence its name and established it as a discipline. The basic elements of neural networks followed shortly after. Since then there have been four epochs in the development of AI, with large gaps of apparent inactivity in between until some new technological breakthrough occurred. But AI as we understand it today really only entered commercial markets in the early 2000s (Figure 1).

Statistical learning came first, becoming prevalent as the volume of data in online services from Google, Amazon, Facebook, Yahoo!, and others exploded into petabytes. This provided enough data to begin to make predictions at scale. Statistical learning turned into machine learning in the late aughts (the decade between 2000 and 2009), which turned into deep learning around 2014, which subsequently turned into reinforcement learning around 2018, and became what we began calling AI with the evolution of generative AI in 2022/2023. By late 2024, the industry moved past generative AI into agentic AI — systems that combine large language models with reasoning engines, long-term memory, planning capabilities, and the ability to take autonomous action. If generative AI was about creating content, agentic AI is about “getting stuff done”.

Statistical Learning (2000) → Machine Learning (2009) → Deep Learning, new neural nets (2014) → Reinforcement Learning, self-teaching NNs (2018) → Generative AI, transformers (2023) → Agentic AI, autonomous agents (2024)

Figure 1 – The Timeline of AI Technology and Terminology since 2000 (Fourth Phase of ML)

This last, incredibly rapid evolution was unprecedented in the 70-year history of computing. An entirely new generation of AI evolved in just under two years. That was 40% of the time required for prior cycles. No major evolution of computing capabilities had ever occurred that quickly. It was an incredible accomplishment. It should also be a warning to all of us that the pace of technological innovation is about to skyrocket, and the skills our organizations will need to survive will change drastically in the very near future. This is the era we are in now, and it is the era that has fundamentally reshaped Salesforce’s AI strategy since I wrote the original article.


Predictive Machine Learning

A lot of today’s AI is what we called machine learning — the more traditional data science technology — only 10 years ago. It can make data predictions in one of two ways. Supervised learning has a target you can test a model against. For example, suppose we are an auto manufacturer with a history of repairs on the cars we have built over time, and we want to predict the likelihood of an electrical problem with our new auto line. We use the old data as “the target” and try to build a machine learning model that reproduces the actual historical outcomes. If the model is accurate enough, we then apply it to the new car line/year and its electrical components to predict the likelihood of electrical failures. If the model is accurate and the underlying components haven’t changed drastically, our prediction should also be reasonably accurate.

With unsupervised learning, you don’t have target data to train the model. Instead, data scientists have found ways to use “clustering” techniques to identify and classify patterns in the data. The algorithm underlying the model looks for these patterns and puts a data point in one of the clusters based on how it “ranks” on the combination of features that are considered when the model is built.
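The two approaches can be sketched in a few lines of plain Python. This is an illustrative toy, not anything Salesforce ships: the repair records and centroids are invented, the “supervised model” is just a per-component failure rate learned from labeled history, and the “unsupervised model” is one nearest-centroid assignment step of the kind k-means clustering repeats.

```python
from collections import defaultdict
import math

# --- Supervised: learn from labeled history, then predict for new data ---
# Hypothetical historical repair records: (component, failed_in_warranty)
history = [
    ("alternator", True), ("alternator", False), ("alternator", True),
    ("wiring_harness", False), ("wiring_harness", False),
    ("battery", True), ("battery", True), ("battery", True), ("battery", False),
]

def fit_failure_rates(records):
    """The observed failure flag is 'the target'; the fitted model is just
    the per-component failure rate estimated from that labeled history."""
    totals, fails = defaultdict(int), defaultdict(int)
    for component, failed in records:
        totals[component] += 1
        fails[component] += failed
    return {c: fails[c] / totals[c] for c in totals}

model = fit_failure_rates(history)
# Apply the fitted model to the new car line's components.
prediction = {c: model[c] for c in ("battery", "alternator")}

# --- Unsupervised: no target, just "rank" a point into the nearest cluster ---
def assign_cluster(point, centroids):
    """Place a data point in whichever cluster's centroid it sits closest to
    (one assignment step of k-means)."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(point, centroids[i]))

centroids = [(0.0, 0.0), (10.0, 10.0)]
cluster = assign_cluster((9.0, 11.0), centroids)
```

The supervised half only works because it can check itself against known outcomes; the unsupervised half has no such answer key, which is why its output is a grouping rather than a prediction.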

This matters because these techniques haven’t gone away. They are still powering much of what Salesforce (and the rest of the industry) does under the AI label. Predictive lead scoring, opportunity insights, recommendation engines — these are machine learning, and they work. The newer techniques build on top of them, not in place of them.


Salesforce and AI

This history is important because it gives context to Salesforce’s investment in and deployment of AI into its platform. The fact is Salesforce’s commitment to what we call AI today began in 2014. That is when Marc Benioff now famously stated at a company meeting that “Salesforce will become an AI-first company.” Even though AI as we understand it today was not generally called that or in broad commercial use at that time, Benioff, like a few of the top visionary leaders in the technology industry, saw that it would be there in a timeframe that could make it the “North Star” to which company mid- to long-term strategy would point.

So Salesforce set off on an AI-focused journey. It started with typical machine learning capabilities and evolved them slowly through 2023. But then the pace changed dramatically, just like it did for the entire tech sector. In the span of roughly 18 months — from mid-2023 to early 2025 — Salesforce went through three distinct AI platform phases: the Einstein 1 Platform (September 2023), Einstein Copilot as a generative AI assistant (early 2024), and then Agentforce as an autonomous agent platform (October 2024). Each phase represented a more ambitious vision of what AI could do inside a CRM.

Marc Benioff described Agentforce as “the Third Wave of AI — advancing beyond copilots to a new era of highly accurate, low-hallucination intelligent agents.” Whether or not one agrees that we’re squarely in that third wave already, the directional statement is clear: Salesforce is betting its future on autonomous agents, not just assistive tools.

The suite of capabilities in Salesforce today is a mixture of machine learning, deep learning, reinforcement learning, generative AI, and agentic AI techniques. Salesforce in its AI suite still uses the best tool to manage a specific type of task. Sometimes the best tool is basic machine learning. For example, Salesforce’s B2C Commerce Einstein API still has a recommendation engine built on machine learning — not generative AI — because that technology is quite evolved, reasonably accurate, and has been working reliably since 2017. As the old adage goes, “If it ain’t broke, don’t fix it.”

As a result, when we talk about “Salesforce AI,” we will use the term as shorthand for all the data science-based technologies underlying Salesforce’s platform. When we want to discuss specific capabilities, we will refer to them by the particular data science approach their algorithms are based on (if that information is publicly available).


The History of Salesforce AI Capabilities

The table below provides a comprehensive view of how Salesforce AI’s capabilities have evolved — from early analytics through the current agentic era — and how the platform’s identity has shifted over time.

The Foundation (2014–2016)

In 2014, the broader tech industry was deep in the machine learning era. Deep learning was just beginning to show commercial promise, and the term “AI” was still reserved for academic circles and science fiction. Salesforce’s moves during this period were about laying groundwork, not making headlines. Benioff had declared his AI-first vision, but the platform didn’t yet have the data infrastructure or algorithmic sophistication to deliver on it. So the focus was acquisitive and foundational: build out analytics capabilities with Wave, acquire relationship intelligence with RelateIQ, and bring social data into the mix with Demodata. None of these were “AI” in the way we use the term today. But they created the data substrate — the customer signals, the behavioral patterns, the integration points — that everything after 2017 would be built on.

The Einstein Era (2017–2022)

By 2017, deep learning had matured enough for commercial deployment. Cloud computing had reduced the cost of training models at scale. And Salesforce — critically — had accumulated enough customer data across its clouds to make predictions meaningful. The result was the Einstein Platform: a unified AI layer that embedded predictive lead scoring, opportunity insights, and automated workflows directly into the CRM. This product evolution was combined with massive infrastructure acquisitions. MuleSoft ($6.5B) gave Salesforce the integration backbone to connect data across enterprise systems. Tableau ($15.7B) added visualization and business intelligence. Slack ($27.7B) brought collaboration. These weren’t AI products per se — they were the connective tissue and delivery surfaces that would make AI useful at enterprise scale. Salesforce was building the plumbing for an AI-based future. The investment was staggering: nearly $50 billion in acquisitions across five years.

🏗️ The Foundation (2014–2016)
  2014 · Product · Wave Analytics — BI and data visualization. Foundation for AI-driven insights.
  2015 · Acq · RelateIQ — Relationship intelligence. Key contacts, next-step predictions.
  2016 · Acq · Demodata — Social listening and sentiment analysis.

🧠 The Einstein Era (2017–2022)
  2017 · Product · Einstein Platform — Unified AI platform. Predictive lead scoring, opportunity insights. A defining moment.
  2018 · Acq · BeyondCore — NLP expertise. Sentiment analysis, text summarization.
  2018 · Acq · MuleSoft — Enterprise integration. Critical AI infrastructure. ($6.5B)
  2019 · Product · Prediction Builder — Custom AI models without coding.
  2019 · Acq · Tableau — Data visualization & BI. ($15.7B)
  2021 · Acq · Slack — Enterprise collaboration. Later a surface for AI agents. ($27.7B)

Generative AI Inflection (2023–2024)
  2023 · Product · Einstein GPT — Generative AI. Content generation, summarization, personalized comms.
  Sep 2023 · Platform · Einstein 1 Platform — Unified rebranding. Data Cloud integrated. Copilot & Trust Layer debut.
  Early 2024 · Product · Einstein 1 Studio — Low-code builders: Prompt, Model (BYOM), Copilot. Later renamed.
  Apr 2024 · Product · Einstein Copilot GA — Conversational AI assistant. Chain-of-thought reasoning. Later upgraded to Agentforce.

🤖 Agentic AI Revolution (2024–Now)
  Sep 2024 · Acq · Tenyx · Own · Zoomin — Voice AI, data protection, unstructured data. Agentic building blocks. ($2.35B+)
  Oct 2024 · Platform · Agentforce Launch — Autonomous agents for sales, service, marketing, commerce. $2/conversation.
  Dec 2024 · Platform · Agentforce 2.0 — Atlas Reasoning Engine. System 2 reasoning. Multi-model support. Enhanced RAG.
  Mar 2025 · Platform · Agentforce 2DX — Proactive agents. Multi-step process handling. Deeper data integration.
  May–Jun 2025 · Acq · Informatica · Convergence · Moonhub — Enterprise data mgmt, agent navigation, AI hiring. Largest since Slack. ($8B+)
  Jun 2025 · Platform · Agentforce 3.0 — Command Center. MCP support. AgentExchange. Agent Script. 100+ industry actions.
  Aug–Dec 2025 · Acq · Waii · Qualified · Regrello — NL-to-SQL, agentic marketing, process automation (pending).

Table 1 – The Evolution of Salesforce AI Capabilities (2014–2025)

The Generative AI Inflection (2023–2024)

The release of ChatGPT in late 2022 and the explosion of transformer-based LLMs created an inflection point that no enterprise software company could ignore. Salesforce moved quickly. Einstein GPT brought generative capabilities into the platform in early 2023. By September of that year, Salesforce had rebranded its entire AI platform as “Einstein 1,” natively integrating Data Cloud and introducing both Einstein Copilot (a conversational AI assistant) and the Einstein Trust Layer at Dreamforce. The pace was nothing short of frantic — Einstein 1 Studio shipped multiple low-code AI builders, and Copilot went GA across Sales and Service. In retrospect, however, this was a transitional phase. Salesforce was adapting to the LLM revolution, but it hadn’t yet found its distinctive strategic position. That would come next.

The Agentic AI Revolution (2024–Present)

The pivot to agentic AI in late 2024 was where Salesforce found its thesis. While much of the tech industry was still building chatbots and copilots — assistive tools that helped humans do their work — Salesforce made a bet that the next step was autonomous agents that could do the work themselves. Agentforce launched in October 2024, and the pace of iteration that followed was unlike anything in Salesforce’s history: three major platform releases in eight months (2.0 in December, 2DX in March, 3.0 in June). The Atlas Reasoning Engine introduced System 2 reasoning. The Command Center added observability. AgentExchange created a marketplace. And the acquisition spree continued — most notably Informatica at $8 billion, which gave Salesforce enterprise-grade data management to feed its agents. This is the era we are in now, and the density of the table below reflects the reality: more happened in the last 15 months than in the previous five years combined.


The Philosophy Behind Salesforce AI

Our teams at DataGroomr, engineering and business alike, use LLM-based AI tools 12–15 hours a day, every day. As a result, we take a conscious and cautious approach to their use. AI, for all its benefits, has many potential downsides. Misuse of private data, intentional or unintentional biases in the algorithms, and the potential for generating just plain wrong results without an understanding of why are only a few. Anyone doing serious work with AI has a set of underlying principles they follow to try to ensure that these challenges are handled appropriately.

Salesforce has been especially conscientious about this (as have we) and has a five-point set of AI principles that it follows (Figure 2). It began work on these in 2018 and spent almost a year getting feedback from both within and outside the company before publishing.

Trusted AI Principles
  🛡️ Responsible — Safeguard human rights and protect the data we are entrusted with.
  📋 Accountable — Seek and leverage feedback for continuous improvement.
  🔍 Transparent — Be transparent about how we build our AI and guide users through machine-driven recommendations.
  🚀 Empowering — Promote economic growth and employment for our customers and society as a whole.
  🤝 Inclusive — Respect the societal values of all those impacted, not just those of the creators.

Figure 2 – Salesforce’s Core Principles for Its Use of AI
Source: blog.salesforceairesearch.com

  • Responsible. Salesforce will ensure that its application of AI uses data appropriately, safeguards private data, and ensures that the use of AI does not impose on the basic rights of individuals or groups. Although this might be too strong a statement, the concept of responsibility behind Salesforce AI is their equivalent of doctors’ Hippocratic Oath: “first do no harm.”
  • Accountable.  The teams building Salesforce AI’s capabilities will be accountable to the company, its customers, its partners, and other audiences which have a vested interest in Salesforce’s creation and use of AI capabilities. They will create mechanisms, both manual and automated, to solicit continuous feedback on the performance and impact of their work.
  • Transparent.  This has two aspects. The more common usage of transparency around AI has to do with being able to understand why an algorithm came out with an outcome. But this principle also extends to cover the application of these tools by end users — providing a clear means to understand what a specific AI-driven API or capability does and how to implement it.
  • Empowering.  Salesforce AI’s capabilities have a key imperative: help Salesforce’s customers and their clients succeed. That success can come in the form of growing revenue, providing more jobs, or improving the lives of people in communities generally.
  • Inclusive.  The term “inclusive” when used in AI often refers to unintentional biases in algorithms that tend to favor one gender, ethnic group, age group, or demographic over others. Salesforce’s goal is to have governance against such biases built in from the start.

From Principles to Infrastructure: The Einstein Trust Layer

Since our original 2023 article, Salesforce has moved beyond principles on paper and operationalized them into a concrete technical component called the Einstein Trust Layer. This is a deployable security and governance layer that sits within the platform architecture (Figure 3) and performs functions including masking personally identifiable information (PII), scoring outputs for toxic content, enforcing zero-data-retention policies with LLM partners, logging every agent action for audit, and implementing dynamic grounding to improve accuracy.
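The masking-and-logging pattern is straightforward to sketch, even if Salesforce’s actual implementation is far more sophisticated. The snippet below is our own illustrative toy, not Trust Layer code: the regex patterns, function names, and stub LLM are all invented, and real zero-data-retention is a contractual guarantee with the model provider, not something enforced client-side.

```python
import datetime
import re

# Hypothetical detection patterns; a production trust layer uses far more
# robust PII classifiers than two regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text):
    """Replace detected PII with typed placeholders before the prompt
    ever leaves the platform boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}_MASKED]", text)
    return text

audit_log = []

def trusted_generate(prompt, llm):
    """Wrap any LLM call: mask PII on the way out, record every
    interaction for later audit."""
    safe_prompt = mask_pii(prompt)
    response = llm(safe_prompt)  # the provider only ever sees masked text
    audit_log.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": safe_prompt,
        "response": response,
    })
    return response

# Stub model standing in for a real LLM call.
echo_llm = lambda p: f"Drafted reply for: {p}"
out = trusted_generate("Email jane.doe@example.com about case 42", echo_llm)
```

The point of the pattern is that governance wraps the model call rather than living inside it, which is what lets the same layer sit underneath every agent and copilot on the platform.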

The Trust Layer matters more now than it did in 2024 because the stakes are higher. When an AI assistant helps a human draft an email, a hallucination is an inconvenience. When an autonomous agent resolves a customer service case, processes an order, or modifies a record without human oversight, a hallucination is a liability. Agentforce’s autonomous nature makes the Trust Layer architecturally essential, not just philosophically nice-to-have.


The Components of Salesforce AI

Trying to get a holistic view of Salesforce AI can be a bit daunting — and if anything, it is more daunting today than it was in 2024. The functionality has grown significantly, the branding has shifted multiple times (Einstein 1 Platform became “Salesforce Platform,” then was repositioned under the Agentforce 360 umbrella), and different groups within Salesforce describe the capabilities differently depending on their audience. Figure 3 gives our take on the current Salesforce AI architecture.

Legend: New = new since April 2024 · Evolved = significantly evolved · Unchanged = largely unchanged

Agentforce 360 Platform (formerly Einstein 1 Platform, then Salesforce Platform)
  [New] Agentic AI Layer — Agentforce: Sales, Service, Marketing, Commerce, and Custom Agents · Agent Builder · Agent Script · AgentExchange
  [New] Observability — Command Center: agent health, performance analytics, interaction drill-down
  [New] Atlas Reasoning Engine — System 2 reasoning · ReAct evaluation loops · Multi-agent orchestration · Guardrails
  [Evolved] Generative & Predictive AI — Einstein AI: Einstein GPT, Prediction Builder, Lead & Opp Scoring, Conversation Insights, Einstein Search, Einstein Bots
  [Evolved] Low-Code AI Builders — Builder Tools (formerly Einstein 1 Studio): Prompt Builder, Model Builder (BYOM), Copilot Builder
  [Evolved] Cloud AI Functions — Salesforce AI Cloud: Sales AI, Service AI, Marketing AI, Commerce AI, Dev & Admin AI
  [Unchanged] Developer APIs — ML-Enabled APIs: B2C Commerce, Einstein Bot SDK, Vision & Language, Prediction Service, Discovery REST, Net Zero Cloud
  [New] Einstein Trust Layer — Zero data retention · PII masking · Toxicity detection · Dynamic grounding · Audit logging
  [Evolved] Data Cloud — Vector DB · Zero-copy · Unstructured data · Real-time harmonization
  [Evolved] Integration Layer — MuleSoft · Informatica ($8B) · MCP · Zero-copy partners
  [New] Multi-Model AI Layer — OpenAI · Anthropic Claude · Google Gemini · IBM Granite · Salesforce models
  [Unchanged] Hyperforce Layer — Cloud-native infrastructure · Multi-region · Compliance · Scale

Figure 3 – An Overview of Salesforce AI Elements (Updated February 2026)

We have purposely tried to provide a more holistic view of this architecture than Salesforce’s own documentation typically does. As practitioners who work with this platform daily, we think clearly outlining its underlying structure is critical to knowing how to best apply it to your specific business and technical challenges.

Here is how the layers stack up, from top to bottom:

Agentforce (Agentic AI Layer)
This is the newest, most strategically significant layer. Agentforce provides both pre-built autonomous agents (for sales, service, marketing, commerce, and more) and a set of tools to build custom ones. Agent Builder enables low-code creation. Agent Script allows hybrid reasoning — combining deterministic workflows with flexible LLM reasoning. The AgentExchange marketplace offers pre-built agents and actions from partners. The Agentforce Command Center, introduced in version 3.0, provides full observability into agent health, performance, and individual interactions.
Atlas Reasoning Engine
This is the “brain” of Agentforce — the component that enables agents to think, plan, and act autonomously. Atlas uses what is called “System 2” reasoning — a deliberative process that evaluates data, generates plans, and self-corrects through feedback loops before taking action. It employs a ReAct (Reasoning and Acting) approach rather than simple chain-of-thought, giving it the ability to handle complex, multi-step tasks with significantly higher accuracy and lower hallucination rates than earlier approaches.
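The ReAct pattern itself is simple to illustrate: the agent alternates between a reasoning step (decide what to do) and an acting step (execute a tool and observe the result), feeding each observation back before the next decision. The sketch below is our own minimal rendering of that generic loop — every name in it is illustrative, and it says nothing about Atlas internals. A toy deterministic function stands in for the LLM planner.

```python
def react_agent(goal, tools, reason, max_steps=5):
    """Generic ReAct loop. `reason(goal, history)` returns a dict with either
    a final answer or a (thought, tool, input) triple to execute next."""
    history = []
    for _ in range(max_steps):
        step = reason(goal, history)
        if step["final"] is not None:      # planner judges the goal met
            return step["final"], history
        observation = tools[step["tool"]](step["input"])
        history.append((step["thought"], step["tool"], observation))
    return None, history                   # guardrail: step budget exhausted

# Toy deterministic 'reasoner' standing in for the LLM planner.
def toy_reason(goal, history):
    if not history:
        return {"final": None, "thought": "need the order status first",
                "tool": "lookup_order", "input": goal}
    status = history[-1][2]                # last observation
    return {"final": f"Order is {status}", "thought": None,
            "tool": None, "input": None}

# One hypothetical tool: look up an order's shipping status.
tools = {"lookup_order": lambda order_id: "shipped"}
answer, trace = react_agent("ORD-1001", tools, toy_reason)
```

Two details of the loop carry most of the value in practice: the observation is fed back into the planner before the next step (that is the self-correction), and the `max_steps` budget is the simplest possible guardrail against an agent that never converges.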
Einstein AI (Generative & Predictive)
The Einstein capabilities that existed before Agentforce are still very much present and active. Einstein GPT handles content generation, conversation summarization, and personalized communications. Prediction Builder enables custom predictive models. Lead and opportunity scoring, conversation insights, Einstein Search, and Einstein Bots continue to serve their respective functions. This layer is where non-agentic generative and predictive AI lives.
Builder Tools
Formerly known as Einstein 1 Studio, these low-code tools allow developers to quickly build custom generative AI-based applications. They include: Prompt Builder for reusable LLM prompts, Model Builder for leveraging your own large language models, and Copilot Builder for extending agent capabilities with Apex, Flow, and MuleSoft APIs.
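Conceptually, a reusable prompt of the kind Prompt Builder manages is a template with merge fields filled from CRM record data at run time. The sketch below shows that idea with Python’s stdlib `string.Template`; the field names and template text are hypothetical, and real Prompt Builder templates use Salesforce merge-field syntax and grounding, not this mechanism.

```python
from string import Template

# Hypothetical reusable prompt with merge fields.
FOLLOW_UP = Template(
    "Write a short follow-up email to $contact_name about the "
    "$opportunity_name opportunity. Their last activity was: $last_activity."
)

def render(template, record):
    """Fill the template's merge fields from a record's data."""
    return template.substitute(record)

prompt = render(FOLLOW_UP, {
    "contact_name": "Ada",
    "opportunity_name": "Acme renewal",
    "last_activity": "opened the pricing PDF",
})
```

The benefit of templating is governance: the wording is reviewed once, and only the record data varies per invocation.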
Cloud AI Functions and ML-Enabled APIs
The AI-enabled functions embedded in each Salesforce cloud — Sales AI, Service AI, Marketing AI, Commerce AI, and Dev & Admin AI — remain largely as they were. Similarly, the developer-facing ML-Enabled APIs (B2C Commerce Einstein, Vision and Language, Prediction Service, Bot SDK, Discovery REST, Net Zero Cloud) continue to serve their specialized purposes.
Einstein Trust Layer
As discussed above, this governance layer sits across the architecture, ensuring data protection, output safety, and auditability for all AI interactions.
Data Cloud and the Integration Layer
Data Cloud is the real-time hyperscale data engine that unifies and harmonizes customer data across systems. It has evolved significantly since 2024 with the addition of vector database capabilities for reasoning over unstructured data (emails, PDFs, images) and a “zero-copy” architecture that integrates with data lakes without duplicating data. The Integration Layer — powered by MuleSoft and now substantially strengthened by the $8 billion Informatica acquisition — handles the movement of data across enterprise systems. Together, these form the data foundation that makes everything above them possible.
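The core mechanic a vector database adds is similarity search: unstructured content is embedded into vectors, and a query retrieves the nearest chunks to ground an agent’s answer. The toy below shows that retrieval step in pure Python — the document names and three-dimensional “embeddings” are hand-made stand-ins for a real embedding model’s output, and production systems use approximate nearest-neighbor indexes rather than a full sort.

```python
import math

# Hypothetical embedded chunks of unstructured content.
docs = {
    "case_email_1": [0.9, 0.1, 0.0],
    "pdf_warranty": [0.1, 0.8, 0.3],
    "chat_log_7":   [0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k(query_vec, store, k=2):
    """Rank stored chunks by similarity to the query; return the k best."""
    ranked = sorted(store, key=lambda d: cosine(query_vec, store[d]),
                    reverse=True)
    return ranked[:k]

hits = top_k([1.0, 0.0, 0.1], docs)
```

The retrieved chunks are what gets stitched into the agent’s prompt — this is the “dynamic grounding” the Trust Layer refers to, and it is why the quality of the underlying data determines the quality of the agent.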
Multi-Model AI Layer
This is new. Salesforce now supports a “bring your own model” strategy, allowing customers to choose from multiple LLM providers — including OpenAI, Anthropic’s Claude (via Amazon Bedrock), Google’s Gemini, and IBM’s Granite models — depending on their use case, regulatory requirements, or cost considerations.
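In effect, a multi-model layer is a router: given a task and its constraints, pick an eligible provider. The sketch below shows that routing idea only — the provider names come from the list above, but the strength labels, regions, and cost figures are entirely invented for illustration, and Salesforce’s actual selection mechanism is configuration-driven, not this function.

```python
# Hypothetical provider registry; every attribute value here is made up.
PROVIDERS = {
    "openai-gpt":       {"region": "us", "cost_per_1k": 0.010,
                         "strengths": {"general"}},
    "anthropic-claude": {"region": "us", "cost_per_1k": 0.008,
                         "strengths": {"long_context"}},
    "google-gemini":    {"region": "eu", "cost_per_1k": 0.007,
                         "strengths": {"multimodal"}},
    "ibm-granite":      {"region": "eu", "cost_per_1k": 0.004,
                         "strengths": {"on_prem"}},
}

def choose_model(task, region=None):
    """Cheapest provider that supports the task and, if given, satisfies
    the data-residency constraint. Returns None if nothing qualifies."""
    candidates = [
        name for name, p in PROVIDERS.items()
        if task in p["strengths"] and (region is None or p["region"] == region)
    ]
    return min(candidates,
               key=lambda n: PROVIDERS[n]["cost_per_1k"], default=None)

model = choose_model("multimodal", region="eu")
```

The design point is that the routing decision lives in one place, so swapping providers for cost or compliance reasons never touches the agents that consume the models.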
Hyperforce
The cloud-native infrastructure layer that underpins the entire platform. Multi-region, compliance-ready, and scalable.

What Comes Next

We will leave it here for now. In subsequent posts in this series, we will drill into each of the major architectural components — starting with Agentforce and the Atlas Reasoning Engine, then working down through Einstein AI, Data Cloud, the Trust Layer, and the developer APIs. Each article will cover what the component does, how it works, how it fits into the broader architecture, and where we think it’s headed.

The pace of change in the last 18 months has been remarkable. Salesforce has not been sleeping — it has been accelerating. Whether the market fully adopts agentic AI at the speed Benioff envisions (one billion agents was the stated goal) or whether adoption follows a more measured path, the architectural investment is real, substantial, and — based on what we are seeing — unlikely to slow down.

Arthur Coleman

Arthur Coleman is a fractional Chief Product Officer, data scientist and builder of AI products. He is both deeply technical and a seasoned business leader who believes building a great product that uniquely fills an essential customer need requires attention to the tiniest details. With a special appreciation for elegant user interfaces, Arthur is a DataGroomr superfan.