How do I remove Leo?

Yeah, you ignored practically all of my message and still repeat the same things, so I'll just ignore you.

Of course I had to repeat myself, because you totally ignored what I wrote except when you asked "can you prove it" – and therefore I answered where and what you have to check for yourself, because I cannot upload a screenshot.
You seem to be one of those always bending under "given rules" and feeling personally insulted by different opinions - a slave to the system.
I really did not answer you but the readers here, where I can see that many have the same problems and the same anti-AI opinion I have …
I guess nobody really cares if you ignore him or not …

EVERY AI depends on user data like search requests or posts or opinions to be trained … and this is the reason why they are forced on users, because this way they get the necessary data - if users could decide whether to install them or not, hardly anyone would install them … and they would remain in a basic state without learning from user behavior
There will be just one solution - if we cannot avoid them, we will have to produce one of our own, feeding all kinds of crap content to the forced AIs … so that they are destroyed from within by false and useless information and cannot collect and analyze user behavior
And the lame excuse of "safety" when it is not integrated is really the most — word not allowed; it means the opposite of intelligent — thing I ever heard or read
They all start with nice, small and light software, and once one is used to it they install one criminal piece of crap after the other … finally they are all the same … maybe we who want something else should team up and create something of our own - the best way to get rid of something unwanted is to replace it with something better
An error occurred: Sorry, you can’t post the word ‘st*upid’; it’s not allowed.

This is just misinformation. Even if anyone today is naive enough to believe all those statements, with LEO it is obvious that everything is tracked and saved (yes, there is a feature to delete all LEO personal search data, but that it is saved at all is in itself a breach of trust). Obviously all search data is used by BRAVE and their partners 'to improve' search results and who knows what else… (the 'improvement' of course is misinformation again, because no one in these digital companies is interested in truly improved search results, which would mean replying truthfully. Well, you can forget that forever with the internet.)

LEO IS NOW FORCED. The deactivation features fail in a very clandestine way. In my case they worked at first, but LEO just pops up again a few days later. From then on, you don't have any choice anymore (in a realistic 'plug and play' consumer sense - and yes, BRAVE is not a development tool, it is a mass-market consumer tool, where you need simple and transparent one-click solutions, but this was never the goal of crypto-Brave)

AI is just a frame term. It is an algorithm that makes summaries of existing mainstream information (interestingly, they never use the now most-visited alternative sources where you can still get some truth). So LEO just repeats narratives. I have extensively tested LEO and it is obviously faulty with every query that is even a little complex (i.e. anything beyond asking for a McDonald's menu item). After 2 months of deep-dive tests I can say that it is not trustworthy, and that's why I want to deactivate it, which is now not possible anymore on a continuous basis. It seems that something is built in on purpose to annoy rejecting users. Of course the alternative is to just leave BRAVE, which is extremely simple to do. Maybe this is what we all should do now. I am testing LibreWolf and others right now.

This sounds like a bug. You can report it on GitHub or tag one of the Brave employees who spoke in this thread, and perhaps they'll pick it up and fix it.

These settings do not work in private windows and private windows with Tor - in both, EVERYTHING is pre-activated, and each time you open such a private window you have to 1. type a search request, for example the letter a - then the search results pop up and then you can go to quick settings and deactivate everything – when you try to go to general settings it leads you to your "normal" window and you see everything deactivated - while in fact everything is activated! And of course every time I type something in the search field the AI pops up although deactivated – it collects my search behavior, just does not show AI results, and this is the last thing I want – to feed this disgusting monster… all the settings do is "don't show it" - but it's there, watching and collecting, and this is a crime!

I don't want to test it or accept it because it's "just a frame term" - I SIMPLY DON'T WANT THAT MONSTER OR ANY OTHER AI!! No matter if I can see it or "block" it out of sight – I don't want to have it, feed it, or "accept" it! The sooner users stand up against anything with AI, the better!

Why can't I disable Leo AI from the settings like anything else? I meticulously went through every option in the settings trying to remove this annoyance, just for it to pop up as soon as I type something in the address bar on Android. :expressionless_face:

My problem with Leo is that it's a really poorly written version of LLaMA from like 3 months ago that gets everything wrong… Try getting XML code for HandBrake, or maybe a light console script for Mint… it will give you CSV code instead of XML, and a console script from 2011 that has been broken since Snap came out in 2016… it's a giant waste of time, that's my problem with it…

The creator of the AI code says he is profiting off your use of Brave by having you train Leo… but I don't see how that could be profitable… or even more profitable than just doing the right thing while asking for donations… Just greedy, tiny-brained toe suckers wanting to be rich…

Try the instructions in this video - https://youtu.be/2XqwaAr5WFI

It's possible using the Group Policy Editor and Registry Editor; I found the instructions in this video - https://youtu.be/2XqwaAr5WFI
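For reference, the registry side of that approach usually comes down to a single policy value. The fragment below assumes Brave's group-policy name `BraveAIChatEnabled` and the standard `Policies\BraveSoftware\Brave` key - verify both against Brave's current group-policy documentation before applying, and note that machine-wide policies require administrator rights:

```
Windows Registry Editor Version 5.00

; Disable the Leo AI chat feature for all profiles on this machine.
; Assumes the BraveAIChatEnabled policy exists; restart the browser afterwards.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\BraveSoftware\Brave]
"BraveAIChatEnabled"=dword:00000000
```

Deleting the value (or setting it to `dword:00000001`) should revert to the default behaviour.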

It actually switches back to collecting data frequently. I have to keep checking and deleting it.

I created an account just to comment on this thread. Maybe the Brave developers don't understand, but I consider LLM models on my machine to be a security threat. I consider LLM models running on ANY MACHINE, ANYWHERE to be a potential security threat. One reason for this is the outrageous lies and defamation we see coming out of the Google Gemma model, accusing people of horrible crimes and making up completely hallucinatory "evidence" to back up their FALSE claims. AI appears to be becoming WEAPONIZED by large companies (Google in particular), and I especially don't want it "guarding my privacy" or having ANY access to my queries or personally identifying data, in any way.

I don't want LLM libraries to be installed anywhere or running anywhere on my machine. I consider sending queries to an AI without my consent to be a MAJOR privacy violation, as these evil things try to identify you and create profiles of you even against your will. Putting AI in any program and NOT having an "OFF BUTTON" is a pretty big mistake and will drive away a ton of users. I'm just coming from Chrome, which has graciously decided to kill uBlock Origin, and now I think I'm gonna have to abandon Brave also, just to escape the "AI avalanche".

Leo is not running on your machine. It’s running on some remote servers, and unless you explicitly agree to the privacy agreement for Leo your browser doesn’t send requests to those servers.

I consider LLM models running on ANY MACHINE, ANYWHERE to be a potential security threat

You consider me running LLMs on my laptop (which I do) a security threat to you? How so?

One reason for this is the outrageous lies and defamation we see coming out of the Google Gemma model

How is that a security threat, though? I can see how it could be a legal issue maybe, but what does it have to do with security? And it's not worse than a tabloid printing rumors anyhow, is it? A tabloid is real people fabricating stuff, while an AI is something notorious for hallucinating frequently, and it warns you that you should never trust it for critical information - something a tabloid doesn't do.

AI appears to be becoming WEAPONIZED by large companies (Google in particular),

How? Can you point to specific cases? Who was the victim of said weaponization and what were the damages?

don’t want it “guarding my privacy” or having ANY access to my queries or personally identifying data

If you don’t accept the privacy policy of Leo, it doesn’t do anything. So it doesn’t have access to identifying data and you cannot make queries. That’s the case for Leo, the AI in the sidebar. If you are talking about the Brave Search engine and the LLM that runs there, that’s different. I’m not completely sure if it can be disabled (I see some ways to disable at least something about it here: https://search.brave.com/settings ) but for what it’s worth: LLMs (specifically the ones used by Brave) are stateless, they do not remember or learn from your search queries or the questions you ask them. Basically, any snooping and profiling that Brave as a company would be able to do with your search queries, they can do it whether there’s an LLM in the search engine or not, there’s no difference. You either trust them to be privacy friendly, or you don’t, the LLM doesn’t really alter the risks here.

I don’t want LLM libraries to be installed anywhere or running anywhere on my machine.

AFAIK there’s no library. Leo runs remotely, not on your machine. Data is communicated to and from it using standard HTTP calls, same way you fetch pictures or text from any website or how you upload attachments to your webmail, zero difference and no extra libraries needed to do that.
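To illustrate that point, here is a minimal Python sketch showing that talking to a remote chat endpoint is nothing but an ordinary HTTP request built with the standard library. The URL and payload shape are placeholders for illustration - they are not Brave's actual endpoint or API:

```python
import json
import urllib.request

# A hypothetical chat payload - the message shape is illustrative only.
payload = json.dumps(
    {"messages": [{"role": "user", "content": "hello"}]}
).encode("utf-8")

# Placeholder URL; a real client would point this at the service's endpoint.
req = urllib.request.Request(
    "https://example.com/v1/chat",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# A plain HTTP POST - no special AI library involved in transporting the data.
print(req.get_method())  # POST
```

Actually sending it would just be `urllib.request.urlopen(req)` - the same call used to fetch any other web resource.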

as these evil things try to identify you and create profiles for you even against your will

No, LLMs are stateless by design. LLMs do not learn. LLMs do not store any information. An LLM is just static weights for a neural network. The companies that run those LLMs on their servers MAY store data, profile you, or try to identify you and they can do that with any of the services they provide, the LLMs aren’t anything special here.
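The "stateless" point is easy to sketch in code: in the snippet below the model stand-in is a pure function of its input, and any apparent memory exists only because the client resends the whole conversation on every turn. The function names and history format are illustrative, not Brave's actual code:

```python
def stateless_llm(prompt: str) -> str:
    """Stand-in for a model call: a pure function of its input.
    Calling it twice with the same prompt gives the same answer;
    it keeps no record of earlier calls."""
    return f"reply to {len(prompt)} chars of context"

def chat(history: list[tuple[str, str]], user_msg: str) -> str:
    """The *client* supplies memory by resending the full history each turn."""
    history.append(("user", user_msg))
    prompt = "\n".join(f"{role}: {text}" for role, text in history)
    reply = stateless_llm(prompt)
    history.append(("assistant", reply))
    return reply

history: list[tuple[str, str]] = []
chat(history, "hello")
chat(history, "what did I just say?")
# The model "saw" the first message only because `history` was resent;
# stateless_llm itself retained nothing between the two calls.
```

Whether the operator logs those resent histories server-side is a separate question about the company, not a property of the model weights.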

don’t want it “guarding my privacy”

It's not actively guarding your privacy in general. They mean that, if you choose to use it, they've designed their server-side code in a way that is privacy-conscious and strips identifying data before it reaches their backend. If you don't ever accept the Leo privacy policy (the one you'll be asked to accept the first time you use it), then Leo will do absolutely nothing at all.

Putting AI in any program and NOT having an “OFF BUTTON”

It’s OFF by default, it doesn’t turn on until you accept the privacy policy when you first attempt to use it. I agree it would be nice to have a way to completely hide it though.

I’m gonna have to abandon Brave also just to escape the “AI avalanche”

  1. And go where exactly? Because I haven't seen any other browser with privacy protection as good as Brave's.
  2. Just don't accept the privacy policy of Leo and you won't use it. You can also hide at least some of the buttons that open its window. Nothing is running on your machine, and nothing is sent to Leo, which runs on Brave's servers, without the consent you give via the privacy policy.

If I’m wrong on any of the above may an employee of the company correct me.

I understand what you're talking about. (Maybe we need more technical details to explain and convince anyone…)
For example:
Not only Leo AI, and not only Brave - indeed, most online connections inspect our DATA.
You're right when you tried to explain that it doesn't matter what is said in LEGAL TERMS or USER PRIVACY contracts. Saying or publishing some "words" doesn't mean that is what really happens. And even so, it doesn't matter whether you accept A CONTRACT; that doesn't mean it is a VALID, REAL agreement.
I suggest you SPARE SOME TIME to READ the CDC BR (Código de Defesa do Consumidor, Brazil - maybe the most advanced consumer-protection code nowadays).

If you're willing to translate it… let me know… if you remember…

If you wanna talk more about this theme, you can find me on Telegram: t.me/gcvlcnti

There's nothing that proves the access isn't happening BEFORE the agreement…
The agreement is the "WAY" companies get a LEGAL alibi for everything…
It's impossible for any user to audit what's happening behind everything…
What is said in User Agreements and Privacy Policies is not a self-executing, automatically auditable contract…

Contracts just say things… words in the wind…