
Salesforce AI is transforming how businesses manage customer relationships, automate workflows, and unlock data-driven insights—but it can also feel complex at first glance. This FAQ breaks down the most common questions about Salesforce AI, from what it is and how it works to how companies can use it to boost productivity, improve customer experiences, and drive smarter decision-making. Whether you’re just exploring AI capabilities or looking to deepen your Salesforce expertise, this guide will give you clear, practical answers.


General Salesforce AI Questions

What is Salesforce AI?

Salesforce AI is the set of artificial intelligence capabilities that are embedded in the Salesforce platform. It includes predictive machine learning (lead scoring, opportunity insights), generative AI (Einstein GPT for content creation and summarization), and agentic AI (Agentforce for autonomous task execution).

These capabilities sit under the Agentforce 360 umbrella and are powered by a layered architecture that includes the Atlas Reasoning Engine, the Einstein Trust Layer, Data Cloud, and support for multiple LLM providers.

Importantly, “Salesforce AI” is not a single product. It is shorthand for all of the data-science-driven technologies behind the platform, from the simple machine learning models that have been running since 2017 to the latest autonomous agents.

What is the difference between Salesforce AI and ChatGPT?

ChatGPT is a general-purpose conversational AI product from OpenAI. Salesforce AI is a set of capabilities designed specifically for CRM and business operations.

The major differences are context and action: Salesforce AI is grounded in your real customer data (via Data Cloud), governed by enterprise security policies (the Einstein Trust Layer), and able to take real business actions, like resolving a service case, updating records, or executing marketing campaigns. ChatGPT has no access to your CRM data and cannot take actions in your business systems.

That said, Salesforce actually uses OpenAI’s models as one of several LLM providers within its multi-model architecture. And as of late 2025, the Agentforce Sales App is available as a beta plugin within ChatGPT, letting users research and update Salesforce records from a conversation within ChatGPT.

What are the building blocks of Salesforce AI?

Salesforce AI is designed as a layered architecture with eight components, from top to bottom:
1. Agentforce (Agentic AI Layer) – Pre-built and custom autonomous agents for sales, service, marketing, and commerce, Agent Builder, Agent Script, and AgentExchange, and Command Center for agent observability.
2. Atlas Reasoning Engine – The “brain” behind Agentforce. System 2 reasoning using ReAct evaluation loops and multi-agent orchestration.
3. Einstein AI (Generative & Predictive) – Einstein GPT, Prediction Builder, Lead/Opportunity Scoring, Conversation Insights, Einstein Search, and Einstein Bots.
4. Builder Tools – Formerly Einstein 1 Studio. Includes Prompt Builder, Model Builder (BYOM), and Copilot Builder for low-code AI development.
5. Cloud AI Functions & ML-Enabled APIs – AI functions built into every Salesforce cloud (Sales AI, Service AI, Marketing AI, Commerce AI) along with developer-facing APIs.
6. Einstein Trust Layer – Security and governance: PII masking, toxicity detection, zero-data-retention, audit logging, and dynamic grounding.
7. Data Cloud & Integration Layer – Real-time data unification engine including a vector database, zero-copy architecture, and MuleSoft and Informatica for enterprise integration.
8. Multi-Model AI Layer & Hyperforce – Support for Multiple LLM providers (OpenAI, Anthropic Claude, Google Gemini, IBM Granite, Salesforce models) running on the cloud native Hyperforce infrastructure.

For the full architecture diagram, see our “What is Salesforce AI?” article (https://datagroomr.com/what-is-salesforce-ai).

What types of organizations is Salesforce AI designed for?

Salesforce AI spans the entire range of organization sizes and industries, but the fit differs from capability to capability:

Enterprise organizations get the most value from the full stack – Agentforce agents, Data Cloud unification, the multi-model layer, and MuleSoft/Informatica integrations. If you have complex data across multiple systems and a high volume of customer interactions, the ROI on autonomous agents is most evident.

Mid-market companies benefit most from Einstein’s predictive capabilities (lead scoring, opportunity insights) and pre-built Agentforce agents for sales and service, which require minimal configuration.

Smaller organizations that use Salesforce can also take advantage of Einstein AI capabilities included with standard editions of Salesforce: predictive scoring, Einstein Search, and basic AI-powered recommendations, without having to license additional capabilities.

Industry-wise, Salesforce has released industry-specific Agentforce agents for financial services, healthcare, retail, nonprofit, and manufacturing, among others. Any organization already running on Salesforce that wants to automate customer-facing or internal processes is a candidate.

Who in my organization should be using Salesforce AI?

Salesforce AI is not intended for technical teams only:
Salesforce Administrators are the primary builders. Agent Builder and Prompt Builder are low-code tools that admins can use to create, configure, and deploy agents without writing Apex or Python. The new Agentforce in Setup (beta as of January 2026) even uses AI to help with configuration tasks.

Developers get more advanced customization with Agent Script (a new scripting language for fine-grained control of agents), Copilot Builder (extend agent capabilities using Apex, Flow, and MuleSoft APIs), Model Builder (BYOM), and the Agentforce Vibes IDE extension for AI-assisted coding.

Business users – sales reps, service agents, marketers – are the end consumers. They experience AI through embedded capabilities such as predictive lead scores, AI-drafted emails, call summaries, and autonomous agents that handle routine tasks on their behalf.

Data teams oversee Data Cloud configuration, data harmonization, and the integration layer that feeds AI capabilities with clean, unified customer data.

IT/Security teams control the Einstein Trust Layer, the MCP server registry, and agent permissions & audit logging.

Agentforce

What is Agentforce?

Agentforce is Salesforce’s platform for autonomous AI agents, launched in October 2024. It lets organizations deploy AI agents that can autonomously plan, reason, and execute tasks across sales, service, marketing, and commerce.

Unlike its predecessor Einstein Copilot (which assisted humans), Agentforce agents can operate autonomously. The platform consists of Agent Builder for low-code agent creation, Agent Script for hybrid reasoning, the AgentExchange marketplace for pre-built agents, and Command Center for agent performance monitoring.

Since launch, three major releases have shipped in quick succession: Agentforce 2.0 (December 2024), 2DX (March 2025), and 3.0 (June 2025). More than 8,000 deals were closed in the first year.

What happened to Einstein Copilot?

Einstein Copilot was Salesforce’s conversational AI assistant, which became generally available for Sales and Service Cloud in April 2024. In October 2024, Salesforce rebranded Einstein Copilot as Agentforce to mark the platform’s shift from assistive AI (copilots that help humans) to agentic AI (agents that act autonomously).

The core capabilities were retained and expanded, but the branding and the underlying reasoning architecture were changed significantly with the introduction of the Atlas Reasoning Engine.

Is Salesforce Einstein being replaced by Agentforce?

No. Einstein is not being replaced – it is being complemented.

The predictive and generative AI capabilities which Einstein offers (lead scoring, opportunity insights, Einstein GPT, Prediction Builder, Einstein Bots, Einstein Search) continue to function and are still actively maintained. Agentforce adds a new agentic layer on top of these existing capabilities.

Think of it as an evolution rather than a replacement: Einstein deals with intelligence (predictions and content generation) and Agentforce deals with autonomy (planning, reasoning, and independent action). Both are simultaneously present in the Agentforce 360 Platform architecture.

What tools does Salesforce provide to build agents quickly?

Salesforce provides tools for a range of skill levels:
Agent Builder is the main workspace for building, testing, and fine-tuning agents in a conversational environment. You can begin by describing what you want in natural language, then refine it in a document-like editor, low-code canvas, or pro-code script view. One-click simulations with real-time debugging let you iterate quickly.

Agent Script is a new scripting language (introduced with Agentforce 360) that gives you programmatic control over agent behavior. It combines the creativity of AI with deterministic logic – if/then rules, conditional tool use, and guided handoffs – all in a human-readable, portable format.

Prompt Builder lets you create reusable, grounded LLM prompts that merge CRM data into structured prompt templates.
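The grounded-template idea can be sketched in a few lines. This is purely illustrative: the template syntax and field names below are invented for the example and are not Prompt Builder’s actual merge-field syntax.

```python
from string import Template

# Illustrative sketch of a grounded prompt: a reusable template whose
# merge fields are filled from a CRM record at run time. Syntax and
# field names are invented, not Prompt Builder's actual format.

template = Template(
    "Draft a renewal email to $contact_name at $account_name. "
    "Their current plan is $plan and it expires on $expiry."
)

record = {  # stand-in for a CRM record fetched at run time
    "contact_name": "Jane Doe",
    "account_name": "Acme Corp",
    "plan": "Enterprise",
    "expiry": "2026-03-31",
}

print(template.substitute(record))
```

The key design point is that the template is reusable while the data is fetched fresh per record, so the output stays grounded in current CRM values.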

Model Builder (BYOM) allows you to bring your own large language models into the Salesforce environment.

AgentExchange provides a marketplace where you can find and deploy pre-built agents, actions, and MCP servers from Salesforce partners – no custom development needed.

Agentforce Vibes is an AI coding assistant for developers, integrated into VS Code and Code Builder, that uses Salesforce’s proprietary LLMs to help with Apex, LWC, and Flow development.

How much does Agentforce cost?

When Agentforce launched in October 2024, Salesforce priced it at $2 per conversation. However, pricing may vary by edition, volume, and use case. For the most up-to-date pricing, see Salesforce’s official pricing page or contact a Salesforce representative.

Note that many Einstein AI features, such as predictive lead scoring, opportunity insights, and Einstein Search, are included in standard Salesforce editions and do not require a separate Agentforce license. Data Cloud has its own pricing structure, typically based on the amount of data credits consumed.

What types of projects are best for getting started with Agentforce?

Based on Salesforce’s early Agentforce deployments, the most successful first projects share a few characteristics: high volume, well-defined scope, and clear success metrics. Here are the most common starting points:

– Customer service case deflection – Implement a Service Agent to manage common and repetitive customer service inquiries (password resets, order status, returns policy). This is the most popular first project by far because of the immediately measurable ROI and low risk associated with it.
– Lead qualification and routing – Use a Sales Agent to not only score and qualify inbound leads 24/7, but also route them to the appropriate rep. Works especially well if you already have clean lead data in Salesforce.
– Meeting preparation and account research – Agentforce can automatically assemble account briefs, recent interactions and pipeline data before sales calls. Low risk, high time savings.
– Knowledge article generation – Use Einstein GPT to draft knowledge articles from resolved case data, then have agents surface those articles during future interactions.

The key is to begin with a use case where the data is already in Salesforce, the process is well-understood, and a human could easily review the agent’s work during the initial rollout.
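To make the lead qualification and routing idea concrete, here is a minimal sketch of the kind of scoring-and-routing logic such an agent might apply. The fields, weights, thresholds, and queue names are all invented for illustration, not Salesforce’s actual scoring model.

```python
# Minimal sketch of lead scoring and routing. Fields, weights,
# thresholds, and queue names are invented for illustration.

def score_lead(lead):
    """Additive score from a few illustrative signals."""
    score = 0
    if lead.get("industry") == "software":
        score += 30
    if lead.get("employees", 0) > 500:
        score += 40
    if lead.get("visited_pricing_page"):
        score += 30
    return score

def route_lead(lead):
    """Route high-scoring leads to a rep; nurture the rest."""
    return "enterprise_rep" if score_lead(lead) >= 70 else "nurture_queue"

lead = {"industry": "software", "employees": 1200, "visited_pricing_page": False}
print(route_lead(lead))
```

In a real deployment the score would come from a trained model (Einstein Lead Scoring) rather than hand-set weights, but the score-then-route structure is the same.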

Architecture & Components

What is the Atlas Reasoning Engine?

The Atlas Reasoning Engine is the intelligence behind Agentforce agents. Introduced in Agentforce 2.0 (December 2024), it implements “System 2” reasoning, a deliberative process in which agents evaluate data, formulate plans, and self-correct through feedback loops before taking action.

Unlike simple chain-of-thought prompting, Atlas is based on a ReAct (Reasoning and Acting) approach, which delivers much higher accuracy and reduces the hallucination rate. It also supports multi-agent orchestration, allowing multiple specialized agents to work together on complex tasks.
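To make the ReAct pattern concrete, here is a minimal, purely illustrative reason-act-observe loop. The tool, the order data, and the hard-coded “reasoning” policy are all invented stand-ins; Atlas itself uses an LLM to choose actions and is far more sophisticated.

```python
# Minimal illustrative ReAct (Reason + Act) loop. All names and data are
# hypothetical stand-ins; the real Atlas Reasoning Engine uses an LLM to
# choose actions, not hard-coded rules.

def lookup_order(order_id):
    """Hypothetical tool: a pretend CRM lookup."""
    return {"id": order_id, "status": "shipped"}

TOOLS = {"lookup_order": lookup_order}

def react_loop(question, max_steps=3):
    """Alternate between reasoning (choosing an action) and acting
    (executing a tool), accumulating observations until an answer
    can be produced."""
    observations = []
    for _ in range(max_steps):
        if not observations:
            # Reason: no facts gathered yet, so act by calling a tool.
            action, arg = "lookup_order", "A-1001"
            observations.append(TOOLS[action](arg))
        else:
            # Reason: enough observations gathered; answer and stop.
            last = observations[-1]
            return f"Order {last['id']} is {last['status']}."
    return "Unable to answer within the step budget."

print(react_loop("What is the status of order A-1001?"))
```

The point of the loop structure is that each action’s observation feeds back into the next reasoning step, which is what allows self-correction before a final answer.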

What is the Einstein Trust Layer?

The Einstein Trust Layer is the security and governance component of the Salesforce AI architecture that operationalizes Salesforce’s Trusted AI Principles. It masks PII (personally identifiable information), scores AI outputs for toxicity, enforces zero-data-retention policies with LLM partners, logs every agent action for audit, and performs dynamic grounding to improve accuracy.

The Trust Layer became architecturally critical with the shift to agentic AI: when autonomous agents act without human supervision, hallucinations become a liability rather than an inconvenience.
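To illustrate just one of these functions, here is a toy PII-masking pass of the kind run on a prompt before it reaches an LLM. The patterns and placeholder tokens are invented for the example; the real Trust Layer covers far more (names, addresses, account numbers, and so on).

```python
import re

# Toy sketch of a PII-masking pass applied to a prompt before it is sent
# to an LLM. Patterns and placeholder tokens are invented; the real
# Einstein Trust Layer is far more comprehensive.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def mask_pii(text):
    """Replace detected PII with typed placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}_MASKED>", text)
    return text

print(mask_pii("Contact jane.doe@example.com or 555-123-4567 about the renewal."))
```

Typed placeholders (rather than blanket redaction) matter because the LLM can still reason about “an email address” or “a phone number” without ever seeing the real value.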

What is Salesforce Data Cloud?

Salesforce Data Cloud is the real-time, hyperscale data engine that unifies and harmonizes customer data across systems. It is the underlying data layer for all Salesforce AI capabilities.

Since 2024, Data Cloud has changed dramatically with the addition of vector database capabilities for reasoning over unstructured data (emails, PDFs, images) and a “zero-copy” architecture that integrates with external data lakes without copying data. Data Cloud is what grounds Agentforce agents in actual customer data, making it central to the agentic AI strategy.
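The vector database idea can be illustrated with a tiny semantic-search sketch. The three-dimensional “embeddings” below are invented toy values; real embedding models (such as SFR-Embedding) produce vectors with hundreds of dimensions.

```python
import math

# Toy semantic search over invented 3-D "embeddings". Real vectors come
# from an embedding model and have hundreds of dimensions.

DOCS = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def search(query_vec):
    """Return the document whose embedding is closest to the query."""
    return max(DOCS, key=lambda name: cosine(query_vec, DOCS[name]))

print(search([0.85, 0.15, 0.05]))
```

This is why a vector store lets agents find relevant unstructured content by meaning rather than by exact keyword match.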

Combined with Agentforce, Data Cloud generates more than $1.2 billion in annual recurring revenue.

What is the Agentforce 360 Platform?

The Agentforce 360 Platform is the current top-level branding for the Salesforce platform’s AI capabilities. It has undergone a few name changes: it was introduced as the “Einstein 1 Platform” in September 2023, was briefly known as simply the “Salesforce Platform,” and was moved under the Agentforce 360 umbrella at Dreamforce 2025.

The platform now consists of four core components: the Agentforce 360 Platform itself, Data 360, Customer 360 Apps, and Slack. The name reflects Salesforce’s strategic bet on agentic AI as the organizing principle for its entire product suite.

What is Einstein GPT?

Einstein GPT is Salesforce’s generative AI capability, launched in 2023. It handles content generation, conversation summarization, and personalized communications across Salesforce clouds.

Einstein GPT uses large language models to draft emails, summarize call transcripts, author knowledge articles, and help generate personalized marketing content. It continues to operate as part of the Einstein AI layer in the broader architecture, in parallel with the newer Agentforce agentic capabilities.

Under the Hood: Technology & LLMs

What LLMs does Salesforce AI use?

Salesforce follows a multi-model strategy that lets customers choose among large language model providers based on use case, compliance requirements, or cost. As of the Agentforce 360 release, supported providers include OpenAI (GPT models), Anthropic Claude (via Amazon Bedrock), Google Gemini (added as an Atlas Reasoning Engine option with Agentforce 360), and IBM Granite models, as well as Salesforce’s own proprietary models.

This “bring your own model” flexibility is managed through the Model Builder tool and is intended to give enterprises the freedom to choose the right model for each use case while avoiding vendor lock-in.

Does Salesforce build its own LLMs? What are they?

Yes – and this is one of the more interesting elements of Salesforce’s AI strategy. Salesforce AI Research develops and maintains several proprietary model families:
– CodeGen is a family of code generation models, first made available as open source in early 2022. 
– CodeGen 2.5 (July 2023) is optimized for production use and fine-tuned specifically for Apex (Salesforce’s programming language). It handles fast, low-latency operations such as inline code completion for the Agentforce Vibes developer tool. Internally at Salesforce, developers using the CodeGen-powered “CodeGenie” tool have accepted more than 2M lines of generated code.
– xGen is Salesforce’s base, general-purpose LLM family, trained on 2+ trillion tokens. It serves as the foundational model that other Salesforce AI teams fine-tune for specific domains. xGen was designed for three specific advantages over external LLMs: supporting unique Salesforce customer use cases, keeping data within Salesforce’s secured environment (important for regulated industries such as banking), and reducing cost to serve by using right-sized models instead of massive general-purpose ones.
– xGen-Sales is a variant of xGen that is fine-tuned for sales tasks: generating customer insights, enriching contact lists, summarizing calls, tracking pipeline, etc. This is used to power Agentforce sales agents.
– xLAM (Large Action Models) is a newer family designed specifically for function calling – the ability to trigger actions in other systems. Unlike traditional LLMs, which mainly produce text, LAMs specialize in executing actions. Remarkably, the xLAM-1B model (only 1 billion parameters) has outperformed much larger and more expensive models on function-calling benchmarks. The open-source version is on Hugging Face, while a more advanced proprietary version powers Agentforce.
– SFR-Embedding is a text-embedding model family that transforms text into vectors for semantic search and retrieval. SFR-Embedding topped the MTEB benchmark at release and powers the vector search capabilities within Data Cloud.
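To illustrate what function calling means in practice, here is a minimal sketch of the pattern: instead of free text, the model emits a structured call that a dispatcher parses and executes. The “model output” is hard-coded here, and the tool name and fields are invented for the example.

```python
import json

# Sketch of the function-calling pattern Large Action Models specialize
# in: the model emits a structured call, a dispatcher executes it.
# The "model output" is hard-coded for illustration; tool names and
# fields are invented.

def create_task(subject, due_date):
    return f"Task '{subject}' created, due {due_date}"

REGISTRY = {"create_task": create_task}

# What a LAM might emit instead of free text:
model_output = json.dumps({
    "name": "create_task",
    "arguments": {"subject": "Follow up with Acme", "due_date": "2026-02-01"},
})

def dispatch(raw_call):
    """Parse the model's structured call and invoke the named tool."""
    call = json.loads(raw_call)
    return REGISTRY[call["name"]](**call["arguments"])

print(dispatch(model_output))
```

The benchmark story above is about how reliably a model produces valid, correctly-argued calls like this one, which is a different skill from fluent text generation.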

How do Salesforce’s proprietary LLMs differ from models like GPT or Claude?

The key difference is focus. GPT-4 and Claude are large general-purpose models designed to be good at everything. Salesforce’s models are intentionally smaller, more specialized, and optimized for specific CRM tasks.

This design philosophy has three practical advantages:
1. Accuracy on CRM tasks: Because models such as xGen-Sales are fine-tuned on Salesforce-specific data and workflows, they can outperform much larger general-purpose models on tasks such as sales call summarization or Apex code generation, despite having far fewer parameters.
2. Data privacy: Salesforce’s proprietary models run entirely inside the Salesforce trust boundary. Customer data never leaves the platform, which is a hard requirement for regulated industries. External API-based models (even with zero-data-retention agreements) introduce a data transit step that some compliance regimes do not allow.
3. Cost and latency: Small, task-specific models are cheaper to run and respond faster. For real-time use cases such as inline code completion or live agent assistance, latency is critical. CodeGen 2.5 was optimized specifically for low-latency production serving.

Salesforce takes a “horses for courses” approach: proprietary models for tasks where they excel (code generation, CRM-specific reasoning, function calling), and third-party models for tasks where general knowledge and broad reasoning are more important.

Where is Salesforce with Model Context Protocol (MCP)?

MCP is a major strategic priority for Salesforce. Originally developed by Anthropic, MCP is an open standard that allows AI agents to interact with external tools, data sources, and applications via a single, standardized interface – often described as “USB-C for AI.”

Here’s where Salesforce is in early 2026:
Agentforce 3.0 (June 2025) introduced Agentforce’s native MCP client in pilot, allowing agents to connect to any MCP-compliant server without custom code. This was a landmark move, in effect opening Salesforce’s historically “walled garden” ecosystem to standardized external integrations.

Salesforce Hosted MCP Servers make it possible to expose Salesforce APIs and data to external AI agents (such as Claude Desktop, Cursor, or third-party systems) as fully managed MCP endpoints, without requiring any code changes.

An enterprise-grade MCP Server Registry within AgentExchange offers centralized governance: admins can discover, approve, and manage all the MCP servers their agents can connect to, with granular security policies, authentication, and rate limiting.

Developer options include pre-built MCP servers for Salesforce DX (integrated into Agentforce Vibes), MuleSoft (for enterprise integration) and Heroku (for fully custom MCP servers with managed infrastructure).

MCP support is still maturing. Hosted servers are in beta, and the wider ecosystem of third-party MCP servers on AgentExchange is still developing. But the direction is clear: Salesforce is embracing MCP as the standard for agentic interoperability.
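For a sense of what MCP looks like on the wire, here is a sketch of a tool invocation message. MCP is built on JSON-RPC 2.0 and uses a `tools/call` method for tool invocations; the tool name and arguments below are hypothetical, not actual tools exposed by Salesforce-hosted MCP servers.

```python
import json

# Sketch of an MCP tool invocation message. MCP is JSON-RPC 2.0 based;
# the tool name and arguments here are hypothetical, not actual
# Salesforce-hosted MCP server tools.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_records",  # hypothetical tool exposed by a server
        "arguments": {"object": "Account", "limit": 5},
    },
}

wire = json.dumps(request)
print(wire)
```

Because every MCP server speaks this same envelope, a client written once can talk to any compliant server – which is the whole point of the “USB-C for AI” analogy.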

Will Salesforce AI be available as a plugin for ChatGPT, Claude, or other LLMs?

It already is, in limited form. As of late 2025, the Agentforce Sales App in ChatGPT is in beta, letting users research and update Salesforce records within a ChatGPT conversation.

More broadly, Salesforce’s embrace of MCP means outside AI tools will gain increasing access to Salesforce data and capabilities. Salesforce Hosted MCP Servers (in beta) let tools such as Claude Desktop, Cursor, and other MCP-compatible AI applications securely query Salesforce data, access metadata, and trigger actions – all without writing custom integration code.

That said, this is still early days. The intent is two-way: Agentforce agents can consume outside MCP servers, and outside AI agents can consume Salesforce’s MCP servers. Whether this eventually takes the form of formal “plugins” for Claude or ChatGPT (similar to how those platforms support other integrations) will depend on how the MCP ecosystem develops.

How does Salesforce handle long-term memory for its AI agents?

Agent memory in Salesforce is managed through Data Cloud rather than the LLM itself. This is a deliberate architectural choice.

Most LLMs (including the ones Agentforce uses) have no inherent ability to retain memory between conversations. Salesforce solves this by anchoring agents to Data Cloud, which maintains a unified, real-time customer profile across all interactions. When an agent begins a new session, it dynamically loads relevant context – past interactions, case history, purchase records, account data – from Data Cloud and passes it into the agent’s reasoning context.

The “Intelligent Context” feature introduced with Agentforce 360 takes this further, allowing agents to reason over complex unstructured data (documents, emails, PDFs) in combination with structured CRM records. Data 360’s zero-copy architecture also lets agents access data from outside systems without making copies.

This approach is architecturally different from storing memory inside the AI model itself. The upside is that it’s auditable (every piece of context is traceable), governable (the Trust Layer controls what data agents can access), and shared (multiple agents working on the same account see the same customer profile).
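The session-start context assembly can be sketched in a few lines. The in-memory store below stands in for Data Cloud, and all field names and values are invented for illustration.

```python
# Sketch of session-start context assembly: instead of the model
# "remembering", relevant history is fetched and injected into the
# reasoning context. The dict stands in for Data Cloud; field names
# and values are invented.

PROFILE_STORE = {
    "acct-42": {
        "name": "Acme Corp",
        "last_interaction": "2026-01-10 renewal call",
        "open_cases": ["Case-7: billing dispute"],
    },
}

def build_agent_context(account_id):
    """Assemble a text context block from the unified profile."""
    profile = PROFILE_STORE[account_id]
    lines = [
        f"Customer: {profile['name']}",
        f"Last interaction: {profile['last_interaction']}",
    ]
    lines += [f"Open case: {case}" for case in profile["open_cases"]]
    return "\n".join(lines)

print(build_agent_context("acct-42"))
```

Because the context is assembled from a governed store at session start, every fact the agent reasons from is traceable back to a record, which is what makes the approach auditable.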

How does Salesforce operationalize its AI models at scale?

The Machine Learning Services team (part of Einstein engineering) at Salesforce is responsible for taking research models and making them production-ready. The process includes several phases:

– Model storage: Once research scientists have trained a model, the weights and metadata are stored in a central repository for later access.
– Code integration: The archived model is wrapped with serving code that translates inputs, orchestrates the necessary processing, and delivers formatted results to customers.
– Registration and routing: Models are registered with attributes and access points for customer use. An intelligent routing layer ensures each tenant’s requests reach the appropriate model version – essential when running different model versions for different organizations simultaneously.
– Execution and scaling: Models are scaled and distributed across shared containers through infrastructure such as AWS SageMaker and NVIDIA Triton. Dedicating an endpoint to each model would be prohibitively expensive at the scale of thousands of models, so Salesforce spreads models across shared GPU instances and routes each request to the appropriate container. SageMaker inference components allow multiple models to share GPU resources efficiently on the same endpoint.
– Streaming: For generative AI use cases where responses can take 20+ seconds, Salesforce has built response streaming so users see results being generated line by line in real time rather than waiting for the complete response.
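The streaming idea in the last step can be sketched with a generator. The token source below is simulated; a real system would stream chunks from the model’s inference endpoint as they are produced.

```python
# Sketch of response streaming: yield chunks as they are produced so a
# UI can render partial output instead of waiting for the full response.
# The token source is simulated; a real system streams from the model's
# inference endpoint.

def generate_tokens():
    """Simulated token-by-token generation."""
    for token in ["Thanks", " for", " reaching", " out!"]:
        yield token

def stream_response():
    chunks = []
    for token in generate_tokens():
        chunks.append(token)  # a UI would render each chunk immediately
    return "".join(chunks)

print(stream_response())
```

The generator shape matters: the consumer can act on each chunk the moment it arrives, which is what turns a 20-second wait into immediately visible progress.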

The engineering challenge is huge. Salesforce runs thousands of AI models at once across its multi-tenant platform, each with different SLA requirements for latency, throughput, and accuracy.

Data & Data Quality

What data does Salesforce AI use?

Salesforce AI draws on data from various sources, all unified via Data Cloud:

– Structured CRM data – Accounts, contacts, leads, opportunities, cases, activities, and custom objects. This is the basis for predictive AI (lead scoring, opportunity insights) and the primary grounding data for Agentforce agents.
– Unstructured data – With the addition of vector database capabilities, Data Cloud can now ingest and reason over emails, PDFs, images, knowledge articles, call transcripts, and other unstructured data. This is what makes the “Intelligent Context” feature in Agentforce 360 possible.
– External data – Through MuleSoft, Informatica, MCP, and zero-copy integrations, Salesforce AI can access data from ERP systems, data warehouses, marketing platforms, and third-party databases without duplicating it.
– Real-time behavioral data – Website interactions, app usage, email engagement and any other behavioral data that Data Cloud unifies into real-time customer profiles.

All of this data passes through the Einstein Trust Layer before it reaches any AI model, ensuring PII masking, access control, and auditability.

Why is clean data so important for Salesforce AI?

This is a topic close to our hearts at DataGroomr, and one of the most underappreciated factors in AI success. The reality is simple: AI is only as good as the data it reasons from.

When Agentforce agents make decisions – qualifying a lead, prioritizing an opportunity, resolving a service case – they’re drawing on data in your CRM. If that data is full of duplicates, outdated values, inconsistent formats, or missing fields, the agent’s reasoning degrades. Not gradually, but dramatically.

Take a concrete example: a sales agent is configured to prioritize leads based on account size and engagement history. If your account records contain three duplicate entries for the same company with different revenue figures, the agent’s output will be inconsistent and unreliable. It could prioritize the wrong lead, miss a cross-sell opportunity, or assign a high-value prospect to the wrong rep.
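The duplicate-account problem above can be made tangible with a toy fuzzy-matching pass. This uses Python’s standard-library `difflib` for simplicity; dedicated deduplication tools use far stronger matching. The records and the similarity threshold are invented for the example.

```python
from difflib import SequenceMatcher

# Toy illustration of the duplicate-account problem: one company
# recorded three times with different revenue figures. A simple fuzzy
# name match flags the likely duplicates. Data and threshold invented.

accounts = [
    {"name": "Acme Corp", "revenue": 5_000_000},
    {"name": "ACME Corporation", "revenue": 12_000_000},
    {"name": "Acme Corp.", "revenue": 8_000_000},
    {"name": "Globex Inc", "revenue": 3_000_000},
]

def likely_duplicates(records, threshold=0.65):
    """Return name pairs whose case-insensitive similarity meets the threshold."""
    pairs = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            score = SequenceMatcher(
                None, a["name"].lower(), b["name"].lower()
            ).ratio()
            if score >= threshold:
                pairs.append((a["name"], b["name"]))
    return pairs

print(likely_duplicates(accounts))
```

An agent averaging, summing, or picking one of the three “Acme” revenue figures at random will produce a different answer each time, which is exactly the inconsistency described above.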

With predictive models (lead scoring, opportunity insights), dirty data means inaccurate training data, which means inaccurate predictions. With generative AI, dirty data produces hallucination-like artifacts – not because the LLM is hallucinating, but because it’s faithfully working from bad data.

With agentic AI, the stakes are even higher, since agents take autonomous action based on what they “see.” A hallucination in a draft email is an inconvenience. A wrong action on a real customer record is a business problem.

Investing in data quality before rolling out AI isn’t just good hygiene; it’s the single highest-ROI preparation step you can take. Tools like DataGroomr that identify and merge duplicates, flag incomplete data, and standardize formats directly improve the accuracy and reliability of every AI capability in the Salesforce stack.

How does Data Cloud feed Salesforce AI capabilities?

Data Cloud is the glue that connects your raw data to every AI capability in the platform. Here’s how it works:

– For predictive AI (Einstein): Data Cloud unifies customer data from various sources into unified profiles that serve as training data and scoring inputs for models such as lead scoring and opportunity insights.
– For generative AI (Einstein GPT): Data Cloud is what gives generative outputs specificity to your business. When Einstein GPT drafts an email, it pulls context from the unified customer profile – recent interactions, open cases, purchase history – instead of producing generic content.
– For agentic AI (Agentforce): Data Cloud is the context agents reason from in real time. The Atlas Reasoning Engine queries Data Cloud during its ReAct evaluation loops to retrieve relevant information, check facts, and validate decisions before taking action. The vector database capabilities let agents reason over unstructured documents in addition to structured records.

Data Cloud also enforces the data governance rules the Trust Layer depends on, ensuring agents only access data they’re authorized to see.

What happens when my Salesforce data is dirty or incomplete?

The consequences cascade through every layer of AI:

1. Predictive accuracy drops. Lead scoring models trained on duplicated or inconsistent datasets produce unreliable scores. Salesforce research has found that even modest data quality improvements can improve prediction accuracy by 15-25%.
2. Generative outputs become unreliable. If Einstein GPT summarizes a call transcript while cross-referencing an outdated account record, the summary can carry misleading context – not because the AI got it wrong, but because the data was wrong.
3. Agent decisions go sideways. An Agentforce agent resolving a service case may retrieve the wrong customer profile (duplicate records), reference an expired contract (outdated records), or recommend an irrelevant upsell (inconsistent product data).
4. Trust erodes. When stakeholders see AI generating unreliable results, they lose faith in the technology. That makes future AI adoption harder – even after the data quality problems are solved.

The solution is not to deploy AI and then clean up the data. It’s to establish ongoing data quality processes – deduplication, standardization, enrichment, and validation – as a prerequisite for AI deployment. The Einstein Trust Layer can prevent some types of harm, but it cannot fix fundamentally bad data.
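As one small illustration of the standardization step, here is a single toy rule: normalizing US phone numbers to one format before data feeds AI. Real pipelines combine many such rules with validation and enrichment; the format chosen here is just an example.

```python
import re

# One illustrative standardization rule: normalize 10-digit US phone
# numbers to a single format. Real pipelines combine many such rules
# with validation and enrichment.

def standardize_phone(raw):
    """Format 10-digit US numbers as (XXX) XXX-XXXX; pass anything
    else through unchanged for human review rather than guessing."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 10:
        return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
    return raw

print(standardize_phone("555.123.4567"))
```

Passing non-conforming values through untouched (instead of force-fitting them) is a deliberate choice: silent “corrections” are themselves a data quality risk.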

Getting Started & Training

How does my organization get started with Salesforce AI?

A practical starting path:

1. Assess your data readiness. Before touching any AI feature, audit your CRM data quality. Deduplicate records, fill critical field gaps, and standardize formats. This is the highest-ROI step on this list.
2. Enable what you already have. Many Einstein features (lead scoring, opportunity insights, Einstein Search) come standard in Enterprise and Unlimited Editions. Turn them on, monitor the results, and build organizational familiarity with AI-assisted workflows.
3. Build your first agent in a sandbox. Spin up a Salesforce Developer org or sandbox, pick a small, well-scoped use case (service case deflection is the most common starting point), and use Agent Builder to create a prototype. Salesforce’s Agentblazer learning path guides you through this step by step.
4. Invest in training. Designate 1-2 admins to take the Agentforce Specialist certification path on Trailhead. The certification is free of charge until the end of 2025 and covers Topics, Instructions, Actions, and Testing – the core building blocks of agent development.
5. Pilot with guardrails. Deploy your first agent with human supervision – let it assist reps rather than act fully autonomously. Monitor results in the Command Center, then tune and expand.

What training resources are available for Salesforce AI?

There’s a large and (for the most part) free ecosystem of learning resources:

– Trailhead (free) – Salesforce’s self-paced learning platform offers many AI and Agentforce trails. The Agentblazer Status program is a structured three-level learning journey: Pathfinder (fundamentals), Innovator (building agents), and Legend (advanced customization and certification prep).
– Salesforce Agentforce Specialist Certification Exam – A 60-question exam covering prompt engineering, agent configuration, Data Cloud, the Einstein Trust Layer, and multi-agent interoperability. It normally costs $200, but the first attempt is free during a promotional period. Trailhead provides comprehensive certification prep.
– Salesforce AI Associate Certification – A more foundational certification covering AI concepts, how AI is used in Salesforce, and AI ethics. Also offered free during promotional periods.
– Trailhead Academy (paid) – Instructor-led training courses from Salesforce’s official training arm, including separate Agentforce courses for administrators and developers.
– Salesforce Developers site – Articles, tutorials, videos, and hands-on guides organized by topic. Useful for more than just developers, despite the name.
– Agentblazer Community on Slack – A peer-learning community for asking questions and sharing experiences.
– Dreamforce and TDX conferences – Salesforce’s annual conferences feature hundreds of AI sessions, workshops, and hands-on labs. Dreamforce 2026 is September 15-17 in San Francisco.

How much does Salesforce AI training cost?

The good news: the basic materials are free.
Trailhead is completely free – all the AI and Agentforce modules, trails, and Agentblazer Status learning path.

Certification costs vary: the AI Associate exam is normally $75 (it has been offered free for first attempts through promotions), and the Agentforce Specialist exam is $200 ($100 for retakes). Salesforce regularly runs promotions making the first attempt free – check the Trailhead Academy site for current offers.

Trailhead Academy instructor-led courses are paid, with pricing varying by course. They’re optional; many people prepare successfully using only the free Trailhead content.

Salesforce Developer orgs are free, giving you an environment to practice building agents at no cost.

Many employers reimburse Salesforce certification fees. Always ask.

Where can I find more information about Salesforce AI?

Fundamental resources outside DataGroomr:

– Salesforce official: Salesforce.com/agentforce for product information, developer.salesforce.com for technical documentation and tutorials, and the Salesforce Release Notes for feature updates.
– Salesforce AI Research: blog.salesforceairesearch.com for technical deep dives into the models and algorithms behind the platform.
– Community: Salesforce Ben for practical guides and analysis, the Trailblazer Community for peer discussion, and the Agentblazer Slack community for AI-specific conversations.
– Open source: Salesforce’s AI Research models (CodeGen, xGen, xLAM, SFR-Embedding) can be accessed on Hugging Face for research purposes.
– Events: Dreamforce (annual, September), TDX (developer-focused), and World Tour events throughout the year.

Roadmap & Future

What is on the roadmap for Salesforce AI?

Salesforce doesn’t publish a single public roadmap document, but several directions are apparent from the Agentforce 360 and Spring ’26 release announcements, recent acquisitions, and the Dreamforce 2025 keynotes:

– Agentforce Voice – Human-like, low-latency voice interactions integrated across the contact center stack (Amazon Connect, Five9, Genesys, NiCE, Vonage). Already available for financial services; expanding to other industries.
– Agentic Enterprise Search – A unified search experience spanning 200+ external sources and multiple AI agents, combining discovery, collaboration, and action in a single interface.
– Agentforce in Setup – A beta feature (January 2026) that embeds AI assistance directly into Salesforce administration, helping admins with configuration, troubleshooting, and optimization on every Setup page.
– Broader MCP ecosystem – MCP-hosted servers moving from beta to GA, more partners listing MCP servers on AgentExchange, and deeper integration with external AI tools.
– ChatGPT and consumer AI channel integrations – The Agentforce Sales App in ChatGPT is in beta. Commerce integrations with ChatGPT (via Stripe and the Agentic Commerce Protocol) are on the way, as is support for Google’s Agent Payments Protocol.
– Industry-specific agents – Further expansion into healthcare, financial services, retail (where the Cimulate acquisition adds AI-powered product discovery), nonprofits, and manufacturing.

Dreamforce 2026 (September 15-17, San Francisco) will host the next major round of announcements.

Acquisitions & Strategy

What were Salesforce’s major AI acquisitions in 2024 and 2025?

In 2024: Tenyx (AI agents for voice), Own Company ($1.9B, data management and protection), and Zoomin (approximately $450M, unstructured data processing).

In 2025: Informatica ($8B, enterprise data management – the largest acquisition since Slack), Convergence.ai (autonomous agent navigation), Moonhub (AI-driven hiring), Waii (natural language to SQL), Qualified (agentic marketing), Cimulate (AI-powered product discovery and agentic commerce), and Regrello (process automation, pending).

The Informatica deal stands out in particular because it significantly strengthens the data integration layer that underpins all of Salesforce’s AI capabilities. Salesforce Ventures also fully deployed its $1 billion AI fund during FY26, with investments in firms such as Black Forest Labs, Lovable, and LiveKit, among others.

Security, Trust & Governance

How does Salesforce prevent AI hallucinations?

Salesforce has a multi-layered strategy:

1. The Atlas Reasoning Engine implements System 2 reasoning using ReAct evaluation loops. Rather than producing a single response, agents think through steps, check their results, and self-correct before taking an action. This deliberative approach greatly limits hallucinations compared with simple chain-of-thought prompting.
2. Dynamic grounding through the Einstein Trust Layer anchors AI responses to actual CRM data in Data Cloud rather than relying solely on the knowledge learned by the LLM.
3. Agent Script enables organizations to specify deterministic guardrails (conditional logic, explicit rules, and required validation steps) that constrain agent behavior in high-value use cases.
4. The Command Center offers real-time observability into what agents are doing, so teams can monitor for anomalous behavior and intervene when necessary.

No system eliminates hallucinations entirely. But the combination of grounding, structured reasoning, deterministic guardrails, and monitoring substantially reduces the risk – which is why Salesforce calls Agentforce “low-hallucination” rather than “zero-hallucination.”
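The grounded, self-checking loop described in points 1 and 2 can be sketched in a few lines. This is a deliberately toy illustration of the ReAct pattern with a grounding check – the knowledge store, lookup, and validation functions are all invented here and have nothing to do with the actual Atlas Reasoning Engine internals:

```python
# Hypothetical sketch of a grounded, self-checking ReAct-style loop:
# act (query grounded data), observe, validate, and refuse rather than guess.
# Illustrative only; not the Atlas Reasoning Engine's implementation.

kb = {"contract status": ["active through 2026"]}  # toy grounded data store

def lookup(question, evidence):
    """Act: fetch the next grounded fact for this question, if any remain."""
    facts = kb.get(question, [])
    return facts[len(evidence)] if len(evidence) < len(facts) else None

def validate(draft, evidence):
    """Self-check: the draft answer must cite every retrieved fact."""
    return bool(evidence) and all(fact in draft for fact in evidence)

def answer_with_grounding(question, lookup, validate, max_steps=3):
    evidence = []
    for _ in range(max_steps):
        fact = lookup(question, evidence)   # act against grounded data
        if fact is None:
            break
        evidence.append(fact)               # observe
        draft = f"{question}: " + "; ".join(evidence)
        if validate(draft, evidence):       # deliberate check before answering
            return draft
    return "ESCALATE_TO_HUMAN"              # no grounded answer: don't guess

print(answer_with_grounding("contract status", lookup, validate))
# contract status: active through 2026
```

The design point is the fallback: when no grounded evidence supports an answer, the loop escalates instead of fabricating one, which is the behavior the guardrail and monitoring layers are meant to enforce.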

What is zero-data-retention and how does it work?

Zero-data-retention (ZDR) is a policy that is enforced by the Einstein Trust Layer when Salesforce sends data to third-party LLM providers (OpenAI, Anthropic, Google, etc.) for processing.

Under ZDR, the external LLM provider processes the request and returns a response, but does not store, log, or use the customer’s data for any purpose, including model training. The data exists in the provider’s systems only for the duration of the inference request and is deleted as soon as the request completes.

This is enforced contractually with each LLM partner and is a fundamental reason Salesforce can claim enterprise-grade data protection even when using external models. It’s also why Salesforce’s own proprietary models (CodeGen, xGen) remain important: for the most sensitive use cases, organizations can avoid external LLM providers altogether and keep all their data within the Salesforce trust boundary.

How is personally identifiable information (PII) handled?

The Einstein Trust Layer automatically masks PII before any customer data is sent to an LLM – whether an external provider or a Salesforce proprietary model.

The flow works like this: when an AI request is generated (say, an agent needs to summarize a customer case), the Trust Layer scans the outgoing data, identifies PII fields (names, email addresses, phone numbers, social security numbers, etc.), replaces the values with anonymized tokens, sends the masked data to the LLM for processing, receives the response, re-hydrates the anonymized tokens with the original values, and returns the result to the user.

This means the LLM never sees real PII. Combined with zero-data-retention policies, this creates two layers of protection: even if data were retained (which it isn’t under ZDR), it would be anonymized data rather than real customer data.
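The tokenize-and-rehydrate pattern described above can be sketched as follows. This is a hypothetical illustration that masks only email addresses with a single regex – the Trust Layer’s actual detection and masking are far more comprehensive:

```python
import re

# Hypothetical sketch of tokenize-and-rehydrate PII masking.
# Only email addresses are masked here, for brevity; not the Trust
# Layer's actual implementation.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text):
    """Replace each email with a token; return masked text and the mapping."""
    mapping = {}
    def sub(match):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return EMAIL.sub(sub, text), mapping

def rehydrate(text, mapping):
    """Restore the original values after the LLM responds."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

masked, mapping = mask("Contact ada@example.com about the renewal.")
print(masked)  # Contact <PII_0> about the renewal.

# The LLM only ever sees tokens; its response is re-hydrated locally.
print(rehydrate("Reply to <PII_0> by Friday.", mapping))
# Reply to ada@example.com by Friday.
```

Because the token-to-value mapping never leaves the trust boundary, the model can reason about the record while the real identifiers stay local.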

All of this is logged for audit purposes, giving compliance teams a full record of what data was processed, when, and by which model.

Arthur Coleman

Arthur Coleman is a fractional Chief Product Officer, data scientist and builder of AI products. He is both deeply technical and a seasoned business leader who believes building a great product that uniquely fills an essential customer need requires attention to the tiniest details. With a special appreciation for elegant user interfaces, Arthur is a DataGroomr superfan.