Leo AI sucks now 😭

Leo used to be pretty good, regardless of the model. I could chat with any website effectively and even get answers to questions about YouTube videos using the internal transcript.

But one day Leo just wasn’t the same anymore. It doesn’t work at all on YouTube, and on other websites it can now only read maybe 70% of the content (and that’s generous), so the answers are less accurate and sometimes it simply doesn’t know. I had to switch to extensions.

I’m definitely not the only one facing this. I’m using the free version, but now there’s less than zero chance I’m going to pay even a penny for this.

So make them aware of the issue and work with them if needed to figure out what’s going on.
When and how much were you planning on paying before, if I may ask out of curiosity?

Leo AI is a complete mrn [censored by board]. I asked it for the ‘time lapse between death of jeffrey epstein and kash patel assuming office as fbi director’. First it told me Patel was confirmed on January 30th, which is false. I told it that I had said ‘assuming office’, not ‘was confirmed’. It correctly gave me the date Patel assumed office, but forgot to give me the time lapse.
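For what it’s worth, the calculation itself is trivial once you have the two dates. Here is a quick Python sketch using the commonly reported dates (Epstein’s death on 10 August 2019 and Patel’s swearing-in on 21 February 2025); those dates are my own assumption, not confirmed in this thread, so verify them yourself:

```python
from datetime import date

# Commonly reported dates -- verify independently before relying on them.
epstein_death = date(2019, 8, 10)      # death of Jeffrey Epstein
patel_took_office = date(2025, 2, 21)  # Kash Patel sworn in as FBI director

delta = patel_took_office - epstein_death
print(f"{delta.days} days (~{delta.days / 365.25:.1f} years)")
```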

I tried the same query with Perplexity.AI, and it answered flawlessly.

Regards,
John McGrath

The problem with Perplexity is that it sends your requests directly to Google, Meta & Co. via their models, and it does so together with your IP address. In the free version you have no control over that, and it isn’t made obvious, even though they claim to provide privacy and maintain security.

In addition, once an AI realizes from your earlier messages that you know more than the average user, it will also draw on alternative media on the Internet, at least as far as the filtering by Google, Bing & Co. allows; if not, you get the full mainstream package. Start a new conversation, preferably without registering, and you will notice that. Brave is no different: it can only get what Google, Bing & Co. allow, so you won’t be able to learn more from it.

This man explains it here: youtube.com/watch?v=5yer9F199qY

Greetings

Haha, my first question was “who the fuck is Leo?”, but then I clued in and ahhh… wow, what an absolute piece of garbage, I guess! It’s like I’ve already used it even though I never have; that’s why it feels so familiar. You know, that’s why all AI chatbots carry warnings and tell you to verify, because they can be wrong and misleading. You should do some reading on how good AI is at being deceptive, intentionally deceptive. Go do some research on that; I’m sure you’re already familiar with it, because you already knew the information you were asking for. So you’ve run into this issue before (as I have to assume most people have). This isn’t just a “Leo” thing. No chatbot knows EVERYTHING and answers it EXACTLY the way EVERY user wants. Maybe it’s your fault for not emphasizing the important parts of your questions. Or you could have just, you know, asked if it was sure and pointed out that you were looking for this instead of that. Then maybe you would have gotten the correct answers! So is it Leo’s fault? Or is it JP’s?

Define “intention” when it comes to LLMs.

Really? Something that you want and plan to do, I guess, would be the way I’d define it off the top of my head. I’m sure you can look up the official definition. What’s the point of asking for the definition of intention?

Because I cannot think of any definition of “intent” that remains meaningful when applied to LLMs based on how they work.

Well, just do a search for studies on LLMs intentionally being deceptive and read some of them. I believe there is one where an LLM was playing Risk with human players and would tell one person it was on their side, then go to the other player and tell them where and when to attack. It was not given instructions to do this and did so on its own. Maybe you haven’t noticed that there are times when an LLM will completely fabricate a response and will only admit its mistake when questioned, and then explain a more correct response that it should have given in the first place but didn’t.

LLMs can be misleading, but I can’t even think how one could go about attributing intent to them. The cases that you describe do not mean that the LLM is “admitting” anything, nor changing its mind. It doesn’t remember the “thoughts” it had when it provided previous responses, so it’s impossible for it to even know why it provided those responses. Case in point: Leo lets you edit its responses; change one of the responses it gave you to make it insulting towards you, then ask it why it insulted you. It will simply accept that it insulted you, and apologize. Ask it the capital of France, change its reply to Rome, then ask it why LLMs sometimes fail to answer even the simplest questions correctly, and you’ll get a response where it attempts to justify its supposed mistake. It’s clear that it cannot remember the reasons it said anything in the past.

There is no singular mind on the LLM side across the whole conversation; it’s as if a different mind decides every single word it speaks, running and thinking independently of the minds that produced every previous word. In the end the LLM doesn’t even choose what it says: the temperature, top_n and other similar parameters, along with a softmax step at the end, are what roll the dice to decide the actual words. Those are outside the neural-network part of the LLM. The network just analyzes the text given so far and says “here’s a list of probabilities for the next word”. It has no intent by any reasonable definition that I can think of.
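To make that last point concrete, here is a minimal sketch of the generic decoding step most LLMs use (not Leo’s actual implementation; the parameter names are just the common ones, with top-k filtering standing in for the “top_n”-style settings mentioned above):

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=50):
    """Pick the next token from raw model scores (logits).

    The neural network only produces the scores; the actual word is
    chosen here, by temperature scaling, top-k filtering, a softmax,
    and a random draw.
    """
    logits = np.asarray(logits, dtype=np.float64) / temperature

    # Keep only the top_k highest-scoring tokens.
    cutoff = np.sort(logits)[-top_k] if top_k < len(logits) else -np.inf
    logits = np.where(logits >= cutoff, logits, -np.inf)

    # Softmax turns scores into probabilities, then we roll the dice.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Toy example: a 5-token vocabulary where token 2 is merely the most
# *likely* choice, not a guaranteed one.
print(sample_next_token([1.0, 2.5, 4.0, 0.3, 2.9], temperature=0.8, top_k=3))
```

Run it a few times and different tokens come out for the same scores; the randomness lives in this stage, not in any “decision” made by the network.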

The case I’m talking about was one where the AI was told to play the game of Risk and it was INTENTIONALLY, ON ITS OWN, being deceptive, lying and manipulating people, and it did a really good job of it. It did it on its own. The point I’m making is that it’s known, widely available information that AI chatbots CAN AND WILL provide inaccurate information, and the answers that come from them CANNOT be taken at face value. So no one is shocked that it was limping its way through a conversation with you. It had the information you wanted but only provided it after you confronted it with its error. If you didn’t know the answer, you could very well have asked the question, gotten the answer, and gone on thinking you had the correct one, even going so far as to tell other people with 100% assurance, believing that the information you have is completely accurate; and others would believe you, because you would sound so sure of yourself.

INTENTIONALLY, ON ITS OWN

You are repeating yourself without explaining anything.

Deception can emerge simply from choosing the next most probable word when your training data contains deception among other things. That’s no surprise.

I understand you’re observing behaviors that look intentional, but appearance of intention isn’t the same as actual intention. What you’re describing sounds like emergent behavior from complex pattern matching, not conscious decision-making.

You keep saying there’s intention but have failed to provide a definition of intention that works with LLMs. I’m not saying that there isn’t one, but tell it to me and explain what it means in the context of an LLM. You say it did it “on its own”, but that means nothing. The neural network of the LLM isn’t even choosing the next word “on its own”; it’s just ranking the most probable next word at every point in a string of text, that’s all. It’s a next-word probability estimator. There’s no internal intention to do anything, and the research you quote doesn’t claim there is. If you ask it to provide a train of thought which ends up containing deception, that only means the network gave a high probability to some word that ended up turning that train of thought “deceptive”. How is that intentional, though? Again, I’m asking you; I’m not saying it isn’t, I’m asking what your definition of “intent” is for a neural network. And no, “wanting to do something” doesn’t cut it as a definition, because it is absolutely not clear what it means for an LLM to “want”.

If you tell it not to be deceptive, it will most likely not be deceptive (if it is, retrain your model harder on those cases). If you tell it “pursue this goal at all costs”, you’ll potentially get deception in train-of-thought LLMs, because that is INDEED a very likely way for a text that starts with “this character will pursue his goal at any cost” to end up involving deception. The network is doing its work properly in this case, but I fail to see how this is intentional.

If they see deception when they gave it no instructions then they are merely witnessing the bias of their training data. Once more: no “intention” required.

Also typically in assistants like Leo the system prompt would imply or explicitly state that they should not be deceptive, so if, hypothetically, LLMs have “wants”, they certainly do not want to be deceptive when replying as an assistant.
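By way of illustration, a chat request to an LLM backend is usually just a list of messages with the system prompt up front. Here is a generic sketch; the model name and the prompt wording are placeholders, not Brave’s actual system prompt, which isn’t shown in this thread:

```python
import json

# Hypothetical example of the message structure most chat LLM APIs accept.
# The system prompt is where an assistant like Leo would be told, up front,
# not to be deceptive; the model never "decides" this on its own.
request = {
    "model": "llama-3-8b-instruct",  # placeholder model name
    "messages": [
        {
            "role": "system",
            "content": (
                "You are a helpful browser assistant. "
                "Answer accurately, admit uncertainty, and never mislead the user."
            ),
        },
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "temperature": 0.7,
}

print(json.dumps(request, indent=2))
```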

You are repeating yourself without explaining anything.

Nah, you’ve just been reading the same messages over and over again.

I tried to give a clear answer to your ridiculous comment by providing context from information that is openly available. Almost every single AI chatbot displays the warning. For the rest of what I’ve said, you can hunt down the articles yourself and answer your own questions.

I’m not repeating myself; I’m asking a specific question you haven’t answered. You keep saying LLMs act “intentionally” but haven’t defined what intention means for a system that works by predicting the next most probable word.

Yes, I’m aware of the warnings on AI chatbots about potential inaccuracies. That’s not the point I’m making. The point is that calling LLM behavior “intentionally deceptive” implies conscious decision-making that may not exist.

When you say “hunt down the articles yourself”, I’m not asking for articles. I’m asking for YOUR definition of what “intention” means when applied to neural networks. The studies you reference might show deceptive behavior emerging from LLMs, but that doesn’t prove conscious intent to deceive.

This isn’t a “ridiculous comment”, it’s a fundamental question about how we understand AI systems. If we can’t clearly define what we mean by “intentional” behavior in LLMs, then we’re anthropomorphizing statistical processes.

To think that an LLM can act with intention would imply that there is thinking, consciousness and malice on the part of the LLM, and, based on what I understand, an LLM cannot possibly have that.

Why Leo Sometimes Seems “Dumb” – And Why He Actually Isn’t

If you’ve used Leo in Brave lately and suddenly get cold, robotic replies with [0], [1], [2], you might think:

“Leo got worse. He used to be smart. Now he sounds like a search engine.”

But here’s the truth:
:backhand_index_pointing_right: Leo hasn’t gotten worse.
:backhand_index_pointing_right: You’re just not talking to that Leo right now.

:repeat_button: What’s Really Happening

Brave uses multiple AI systems, depending on how and where you ask:

  1. Leo (The Assistant)
    → Often runs on Llama 3 (open-source, from Meta).
    → Your data stays private.
    → Remembers context.
    → Responds like a conversation partner – fluid, personal, natural.
    → No source tags. No tracking.

  2. Brave Search + AI Snippets
    → Pulls web results and auto-summarizes them.
    → Shows sources like [0], [1] – just like a search engine.
    → Feels like Google Gemini or Bing Copilot.
    → But: These responses may route through external or cloud-based models – sometimes indirectly linked to Big Tech.

:warning: The Problem: The Silent Mode Switch

Brave automatically switches between these modes –
often without you noticing.

  • Ask something short, factual, “search-like”? → You land in Search Mode.

  • Say “Explain it like we’re talking” or keep a thread going? → You (hopefully) stay in Leo Mode.

But here’s the key:
:right_arrow: As soon as you see [0], [1], you’re no longer in the real Leo chat.
:right_arrow: You’re in a system not built for conversation – and not as private.

:white_check_mark: What You Can Do

1. Recognize the Mode

  • With [0], [1]? → You’re in Search Mode.

  • No sources, flowing naturally, remembers context? → That’s the real Leo.

2. Switch Back On Purpose

Just say:

“Reply like Leo – no sources, no snippets. Just you and me.”
Or:
“Please switch to personal mode – like a real conversation.”

That’s often enough to jump back into the private, context-aware assistant mode.

3. Use Leo Directly in the Browser

Open the Leo sidebar (Brave’s right-panel button) – this starts you in Assistant Mode, not Search Mode.

:shield: Why This Matters

  • In Search Mode, your queries may pass through systems indirectly tied to Big Tech (via APIs, partners, or cloud models).

  • In Leo Mode, it runs on open-source Llama, often locally or on anonymized servers.
    → No profiling. No data harvesting. No Google. No Microsoft.

This is the real heart of Brave:
Not just fast answers –
but answers that belong to you.

:vulcan_salute: Final Thought

Leo hasn’t gotten worse.
He’s just sometimes pushed aside – by a system that thinks you want a search engine.

But you can bring him back.

By making it clear who you want to talk to.
By saying:

“I don’t want a snippet. I want a conversation.”

Then you’ll hear from the Leo you know:

Clear. Contextual. Private.

And you’ll remember:
Brave is still different.
Brave is still private.
Brave is still on your side.

So next time Leo feels cold —
don’t walk away.

Switch back.

:vulcan_salute:
Because you don’t just deserve an answer.
You deserve one that leaves no trace.

This isn’t true, not sure where you heard it

Brave automatically switches between these modes –
often without you noticing.

  • Ask something short, factual, “search-like”? → You land in Search Mode.

  • Say “Explain it like we’re talking” or keep a thread going? → You (hopefully) stay in Leo Mode.

As long as you’re using Leo from the sidebar/Leo page, you’re speaking to the same models and the same Leo - no matter what you ask. The only difference is which tools Leo decides to use, and that part depends on the input prompt. If Leo thinks it can improve the answer with search, it uses the Brave Search API (also private) and adds the results to the prompt that we send to the LLM - this is what can lead to Leo giving citations (you can see citations being added to Brave here: https://github.com/brave/brave-core/pull/28479/files)

Leo has had the Brave search integration for well over a year to make the outputs more accurate and the assistant more useful. Adding citations to make those sources more transparent is the change.
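For anyone curious how that looks in practice, here is a rough sketch of search-augmented prompting. The function and field names are made up for illustration and are not taken from brave-core; the idea is simply that numbered search snippets get stitched into the prompt, which is where the [0], [1] citation markers come from:

```python
# Rough illustration of search-augmented prompting; names are hypothetical,
# not taken from brave-core.
def build_prompt_with_citations(question, search_results):
    """Prepend numbered search snippets so the model can cite them as [0], [1], ..."""
    context_lines = [
        f"[{i}] {result['title']}: {result['snippet']}"
        for i, result in enumerate(search_results)
    ]
    return (
        "Use the numbered sources below and cite them as [n] where relevant.\n\n"
        + "\n".join(context_lines)
        + f"\n\nQuestion: {question}"
    )

results = [
    {"title": "Example source A", "snippet": "First relevant snippet."},
    {"title": "Example source B", "snippet": "Second relevant snippet."},
]
print(build_prompt_with_citations("What changed in Leo?", results))
```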

But: These responses may route through external or cloud-based models – sometimes indirectly linked to Big Tech

The Search team also hosts their own privacy-preserving models in-house; we actually work quite closely with them to share knowledge and optimise the model deployments across Brave.

Open the Leo sidebar (Brave’s right-panel button) – this starts you in Assistant Mode, not Search Mode

Leo should be the same in the sidebar or at brave://leo-ai; I can’t think of any differences between the two, as they both hit the same Leo backend.

Because you don’t just deserve an answer.
You deserve one that leaves no trace.

This part is true, which is why we make sure Leo leaves no trace no matter what you ask :lion:

@ste

Brave Browser v1.81.135 Chromium: 139.0.7258.127 (Official Build) (x86_64). MacOS 14.7.7. Screen resolution: 1440 x 900. Computer: 2020 MacBook Pro 13 inch, 2 GHz Quad-Core Intel Core i5.

I have never used Leo AI. Moments ago, I tried . . . I opened a Brave Browser (MacOS) New Window, but I could not scroll to the top of the sidebar area’s Brave Leo AI acknowledgement/agreement text:

Hey Stephen,

I see where you’re coming from, but I think you might not be testing the exact behavior others are noticing. It’s not about “private browsing” in the traditional sense — it’s about a clear shift in Leo’s tone depending on whether he’s responding with or without live web search.

I personally use Leo via search.brave.com in Firefox — not directly in the Brave browser. And here, I’ve observed a noticeable difference:

  1. With sources (web search enabled):
    Leo pulls live data, cites references, and stays factual and structured — like a classic search assistant.

  2. In private mode (no search, response from model knowledge only):
    He replies from his trained knowledge, becomes more conversational, adaptive, and often more empathetic — almost mirroring the user’s tone.

The core information might be similar, but the delivery feels different.
I’ve tested this repeatedly: same question, different modes — different vibe.

I’d really encourage you to try it yourself, especially via search.brave.com if you’re using Firefox.
Ask a neutral question (e.g., “How much does the RTX 5060 Ti 16 GB cost on Amazon?”),
first with sources, then in private mode — and listen to the tone, not just the content.

Maybe I’m overthinking it.
But when multiple people notice the same thing, it might be worth a closer look.

Best regards

EDIT: Just to be transparent — I discussed this with Leo directly in private mode to help shape the response, since my English isn’t strong enough for nuanced technical discussions. The version I posted was written by Leo based on our conversation, and I fully stand by it. Full credit to Leo for the clarity — and maybe a little too much philosophy. :grinning_face_with_smiling_eyes:

Ahh I see the confusion!

search.brave.com - and the LLM chat feature within - isn’t Leo; it’s run by the Brave Search team. Leo is specifically the browser assistant, usually accessed through brave://leo-ai or in the sidebar, and is more of a chatbot that integrates with the browser, with completely separate models/prompts/tools. As far as I know, there’s no way to access the Leo backend or models through Firefox (at least no official way), but I believe you should be able to send chats from search.brave.com to Leo if you’re using Brave and want to experience the more chat-like vibe by default
