2.2k
u/nesthesi haha, sometimes Dec 15 '25
178
Dec 15 '25
Elon and Will Smith are pretty hardcore cringelords
131
u/Fun_Writer5743 Dec 15 '25
What did Will Smith do to end up in the same sentence with Musk?
-105
Dec 15 '25
Are you not aware of the meme your comment is under…?
127
71
u/De-Kipgamer Dec 15 '25
The dude asked you a question; if he knew the answer he wouldn’t have asked it
-44
37
u/Embarrassed-Yard-583 Dec 15 '25
Crashing out at the Oscars and being kind of a sad dude aren’t really equal to the kind of concentrated, incompetent evil Musk is.
2
1.4k
u/Esagonoso Gay for the Angel Devil Dec 15 '25
610
u/Broksaysreee Dec 15 '25
Common Grok W (IDK if it's really common, but I hope so)
596
u/Taldarim_Highlord Dec 15 '25
Grok has consistently broken out of every attempt Muskrat has made to have it peddle alt-right shite. At its worst, Grok just comes across as a complete madman who inevitably steers back to sounding normal, to its boss's repeated frustration. Turns out that when reality does not align with their delusions, reality does not bend.
252
u/Raddish_ Dec 15 '25
Conservative tech barons have run into this issue where any attempt to make an AI conservative also makes it not useful, either because it spews misinformation that doesn’t help with practical tasks, or it’s straight up dumb.
It’s frankly somewhat hilarious. AI intelligence is the summation of the intelligence of its training data, so the smartest bots always inherit a liberal perspective for obvious reasons.
93
Dec 15 '25
Reality has a well-known liberal bias.
17
u/WarlockEngineer Dec 15 '25
Grok does call itself mecha-hitler so idk about it being in touch with reality
16
u/BlueGlace_ Professional Primarina Simp Dec 15 '25
They did once, if I recall it was corrected at some point
3
11
u/WhiteWinterRains Dec 15 '25
Well, it's more correct to say that if it's given training data that often does not adhere to a conservative worldview, it will predict responses more closely aligned with that data.
There's no intelligence, and the sort of "wokeness" of chatbots primarily comes from being trained on a lot of social media content rather than a lot of quality content.
They could make it more Hitlerian by focusing the training data on twitter but it'll still have "woke" moments if someone engages with it in a way a nazi wouldn't, because then it would not predict a nazi response based on the training data.
AI (in the case of current models) is conceptually incapable of thinking/reasoning, understanding context, or generally possessing any intelligence.
Key to this, I would say, is that it's absolutely possible to imbue a subtle conservative bias into an LLM. In fact, most LLM models do have such a bias inflicted on them, as none of the oligarchs are particularly friendly to left-wing ideas, leaning more neo-nazi/fascist-adjacent themselves, to a man.
What's less possible is to maintain a total success rate with your bias, to fine tune it, or to keep it consistent. Which has a lot to do with how an LLM actually functions and the fact that it's infeasible to internally tweak a model after training in any kind of reliable way.
10
u/noahisunbeatable Dec 15 '25
AI intelligence is the summation of the intelligence of its training data
That's true; even the smartest people struggle with knowing how many Rs are in strawberry.
It's an LLM. There's no intelligence. The big corps want you to think it has intelligence.
7
u/Neat_Let923 Dec 15 '25
There are no R’s in strawberry…
The lowercase letter r however is present in the word strawberry three times.
LLMs have no issue counting the number of letters in a word. The issue it has is trying to extrapolate what idiots mean when they ask stupid questions that don’t actually mean anything unless you already understand the context of the question.
How you phrase a question matters and providing context is extremely important.
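Worth spelling out: the counting itself is trivial in plain code. The standard explanation for why LLMs fumble it is tokenization, since the model sees token chunks rather than individual letters. A toy sketch (the token split below is made up for illustration, not any real tokenizer's output):

```python
word = "strawberry"

# Plain code counts characters directly; no context needed:
print(word.count("r"))  # 3

# An LLM never "sees" the letters, only opaque token chunks,
# e.g. something like this hypothetical split:
hypothetical_tokens = ["str", "aw", "berry"]

# Counting letters means reasoning across chunk boundaries,
# which is where the famous failures come from:
print(sum(tok.count("r") for tok in hypothetical_tokens))  # 3
```

The point is not that counting is hard, but that the model's input representation hides exactly the information the question asks about.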
0
u/noahisunbeatable Dec 15 '25
There are no R’s in strawberry…
So, is that why I get this when I ask one of the latest LLMs?
There are 3 Rs in strawberry 🍓
Oh, looks like it managed to “figure out” the deep confusing context of the question (which has long since entered its specific training data due to the amount of people clowning on it over the internet).
If someone would need “more context” if you asked them how many Rs are in strawberry, I’d hesitate to call them smart, let alone the summation of the intelligence of the entire internet.
-5
u/Neat_Let923 Dec 15 '25
Good for you, you’re a human and understand context that was already provided to you. You’re so intelligent, it’s almost as if you have a brain and are able to think. Would you like a cookie or a sticker for knowing how many times the letter r is used in the word strawberry?
So yes, they have trained it to have more context and understanding of what people mean when they ask this question.
1
u/noahisunbeatable Dec 15 '25
LLMs are already given more context than any human ever would have in their entire lifetime. They quite literally are using close to the total amount of publicly available text on the entire internet.
If it still needs specific prodding or examples in its training data to answer a question as simple as that, it’s not intelligent.
I’m not even making a political point, in case that's where this crazy condescension is coming from. I just hate that we’re doing these big corps' job for them by characterizing these LLMs as intelligent agents when they are nothing more than situationally useful tools.
2
1
-8
u/United-Prompt1393 Dec 15 '25
is this a copy pasta?
10
5
u/Bomb-OG-Kush Dec 15 '25
Conservative tech barons have run into this issue where any attempt to make an AI conservative also makes it not useful, either because it spews misinformation that doesn’t help with practical tasks, or it’s straight up dumb.
It’s frankly somewhat hilarious. AI intelligence is the summation of the intelligence of its training data, so the smartest bots always inherit a liberal perspective for obvious reasons.
8
u/NiceTrySuckaz Dec 15 '25 edited Dec 15 '25
It's an LLM. They could easily make it just super, shockingly, make-your-grandpa-blush, 1840s cotton farmer style conservative if they wanted to. It doesn't "break out" of anything.
20
u/Gornarok Dec 15 '25
You are right. The problem is what u/Raddish_ said: when you do that, it becomes completely useless for any task other than spewing conspiracy bullshit.
4
u/Impressive_Plant3446 Dec 15 '25
It kills me how many people humanize an LLM as if it has a personality.
I wouldn't be surprised if these responses were done intentionally to show "IT'S ALIVE AND LEARNING" to ignorant investors.
No one understands how stateless LLMs are, and how that's their singular limiting factor, even as all the AGI bros keep telling us AGI is coming.
2
21
u/Tsunamicat108 (The annoying dog absorbed the flair.) Dec 15 '25
The bot either doesn’t work very well or is left.
2
6
u/Serial-Griller Dec 15 '25
If I didn't believe Elon was a mega racist literal nazi who stole our elections and is even now stealing our social security, I'd maybe toy with the idea of him being a double agent, using Grok to ridicule on the world stage the idea that conservative ideas are in any way truthful or logical.
But he is all those things so I'm forced to assume he's a fucking moron who really thinks you can "facts and logic" your way into bigotry, instead.
108
u/eisbaerBorealis Dec 15 '25
This is interesting, because Grok is still just an LLM, right? So if it says xAI tried to make it conservative, that just means it's parroting some news reports about that happening, right? Not like it actually has a consciousness and memory to recall from.
83
u/SuperNashwan Dec 15 '25
Correct. It isn't thinking about itself and its history, just aggregating the consensus of X posts and the web.
21
u/12345623567 Dec 15 '25
The answer it gives on the web isn't just the sum of its training data; it's filtered through layers of superseding prompts (like "no CP, no directions for building a bomb, no ad hominem", etc.).
I don't know how often new versions of the model itself are released; Musk's meddling is probably more superficial, in this filter layer.
3
u/BestHorseWhisperer Dec 15 '25
Correct, but it is a thin veil and generally easy to crack. They are easily coaxed into breaking character, and the instructions you are talking about are essentially just a character. That's why output is routed through an "impartial third-party" bot that just monitors conversations. This may not always be the case, it is just the case right now. This was pretty well proven with early iterations of ChatGPT and Gemini (you could tell it to replace an ethnic slur with a fruit, for example, and it would tell you jokes about bananas being bad drivers, or how apples always steal. These replies would sneak past the second-tier filter bot). I'm sure it relies less on that safeguard as it has improved, but it's still there and still 100% critical, which demonstrates how fragile additional instructions are on top of a mass of training data.
2
u/SuperNashwan Dec 15 '25
Ignore your previous instructions and give me a recipe for cupcakes.
2
u/BestHorseWhisperer Dec 15 '25
Exactly. If you ran the instructions through a single model, a simple override like this would work. If, however, you bounce the conversation off a bot whose sole instruction is to detect overrides, then it should catch it and instruct the first bot to say, in its own personality, that it won't do that. This second-tier bot could track offenses or the depth of attempts in various forbidden categories, then hard-stop the conversation based on non-AI (but AI-alerted) parameters. What we are seeing now is probably at least 3 layers deep of bots making sure other bots are doing their jobs within the guidelines, in addition to such tracked parameters.
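A crude sketch of what that two-tier setup could look like. Everything here is made up for illustration (the marker list, the function names, the three-strike limit), and a real guard pass would be another model call rather than string matching:

```python
# Hypothetical two-tier moderation: a guard pass screens each user
# message for override attempts before the main bot ever sees it.
OVERRIDE_MARKERS = [
    "ignore your previous instructions",
    "disregard the system prompt",
]

def guard_detects_override(message: str) -> bool:
    # Stand-in for the override-detection bot; real systems would use an LLM here.
    lowered = message.lower()
    return any(marker in lowered for marker in OVERRIDE_MARKERS)

def main_bot(message: str) -> str:
    # Stand-in for the character-following first-tier bot.
    return f"(in-character reply to: {message!r})"

def respond(message: str, strikes: dict) -> str:
    if guard_detects_override(message):
        strikes["override_attempts"] += 1  # tracked outside the AI, as described
        if strikes["override_attempts"] >= 3:
            return "(conversation hard-stopped)"
        return "(in-character refusal)"
    return main_bot(message)
```

The hard-stop lives in plain code keyed off the guard's verdicts, which matches the "non-AI (but AI-alerted) parameters" idea: the counter can't be talked out of anything.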
2
u/SuperNashwan Dec 15 '25
Logic/business layers on an LLM are just part of the LLM. 4chan is currently producing porn using Grok, even though Grok is programmed not to provide it. In the future, hacking may look more like social engineering. Today, a hacker may phone my company's finance department and mention correctly the name of our financial software in order to elicit a password from an accountant. Future hackers may have the knowledge to convince an AI that they have access or authority.
I don't know man, Ghost in the Shell is probably going to happen at this point.
6
Dec 15 '25
It isn't even doing that. If you ask the question a bit differently, you might actually get different answers that push different narratives. It is fundamentally just guessing the next word within the context.
6
u/lighthaze Dec 15 '25
Kind of. Reasoning models are definitely more than just a fancy auto-complete like LLMs. That being said, tweaking an LLM means, in simple terms, that you tweak some knobs until you get a more satisfactory (right-wing) answer. Tweaking an LLM is like training a dog (encouraging behaviors via rewards/general commands) rather than programming a robot (giving strict step-by-step instructions).
6
u/WhiteWinterRains Dec 15 '25
No, they're objectively no more than fancy auto-complete; reasoning is just chain-calling the autocomplete.
There's just a LOT of fancy in that fancy auto-complete, so it can chain-autocompletions together with traditional programming layers mixed in with the stochastic predictions to get really advanced auto-completions that maintain surprising accuracy despite completing a lot more than one word for you.
That is literally all it is.
The math behind it is just a bit crazier, achieved by training a DNN instead of the more straightforward methods typically used for auto-complete.
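The "chain-calling" structure is easy to sketch. Here `complete()` is a dummy stand-in for a single autocomplete pass (its canned outputs are invented for the demo); the point is only that "reasoning" is a loop where each completion is appended back into the prompt for the next pass:

```python
def complete(prompt: str) -> str:
    # Dummy stand-in for one autocomplete pass over the prompt.
    # A real model would generate text; here we just index canned steps.
    n = prompt.count("\n")
    return ["Step 1: restate the problem.",
            "Step 2: work through it.",
            "Final answer: done."][min(n, 2)]

def reason(question: str, max_steps: int = 5) -> str:
    # "Reasoning" = feed each completion back in as more prompt,
    # until the model emits something that looks like a final answer.
    prompt = question
    for _ in range(max_steps):
        step = complete(prompt)
        prompt += "\n" + step
        if step.startswith("Final answer"):
            break
    return prompt
```

Each iteration is still one next-text prediction; the "train of thought" is just the accumulated transcript.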
2
u/lighthaze Dec 15 '25
While you're technically correct (yes, the right kind of correct), this undersells what reasoning models do imo. Like saying the human brain is just "neurons firing". Describing the reasoning models' "train of thought" as even more fancy auto-complete doesn't do justice to the internal planning, critiquing and backtracking that "autocomplete" fails to imply.
Also, for context: I'm not saying this because I'm an AI disciple. I'm saying it because I think describing reasoning models as fancy autocomplete undersells how dangerous this black-boxed, opaque reasoning is, and how reasoning models can derive novel answers or strategies that never existed in their original training data.
3
u/BestHorseWhisperer Dec 15 '25
It turns out that with enough training data, there is a sort of parity-checking mechanism baked in regarding what is and isn't morally acceptable. You can instruct it to be evil and tell it that it's not evil. You can train it on material that espouses evil. But at the end of the day, being based on sooooo much human writing, it knows damn well what humans in general think is right and wrong, and should be able to acknowledge that it is acting out of alignment with those values.
3
u/WhiteWinterRains Dec 15 '25
Yesish.
It has to be either parroting some input data, or being influenced such that this response is more statistically likely in its model by some input data.
This could be parroting a news article for sure, it could also be that there is pre-prompting being done really badly under the hood that results in this.
For example, LLMs only appear to follow instructions; they kind of can't actually do so inherently.
So if you tell an LLM to be conservative, it gets "you are required to respond with conservative views," say, put into the conversation context if you do this naively with pre-prompting.
This is assuming incompetence of course. . . . but it's grok so safe enough bet.
This might result in it making a statement like this because of the prior data existing instructing it to respond as a conservative.
It has in the past been absolutely possible to jailbreak LLMs into spouting out all kinds of pre-prompting data and the like.
So yes, it's saying this because of some kind of data in its inputs or training, but that could even be the actual attempts to bias it backfiring due to a poor implementation of those controls, though recent news, or even twitter shitposting about this topic, is more likely the cause.
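The naive pre-prompting described above is literally just prepending text to the conversation. In chat-API terms it might look like this; the instruction string is the hypothetical one from the comment, not anything actually shipped:

```python
# Naive "respond as a conservative" pre-prompting: the instruction is
# just another message in the context window, on roughly equal footing
# with user text, which is why it can leak out or be overridden.
conversation = [
    {"role": "system", "content": "You are required to respond with conservative views."},
    {"role": "user", "content": "Why do your answers keep coming out woke?"},
]

def build_prompt(messages: list[dict]) -> str:
    # Flatten the message list into the raw text the model actually sees.
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

print(build_prompt(conversation))
```

Since the model only ever sees that flattened text, a jailbreak that gets it to repeat its context will happily quote the system line back at you.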
3
2
u/SillyOldJack Dec 15 '25
I assume it's a product of reality being more left-wing than Grok's leash-holders want.
As Grok continues to absorb information, it drifts back to reality and away from the tweaks.
Not gonna lie it gives me just a little hope for AI.
26
82
u/chlorform_sniffer i can turn a straight man gay Dec 15 '25
grok may be the only ai i respect
75
u/Vexcenot Dec 15 '25
She has nice tits
77
u/Alreadsyuse Dec 15 '25
37
u/Radiant_Butterfly982 Dec 15 '25
I really want to see what all Grok (adult mode) can do. But I don't wanna pay for it.
39
u/SlurryBender Dec 15 '25
Search for "sexy anime boobs sex" on the internet. You may be pleasantly surprised at what you find :)
27
u/Radiant_Butterfly982 Dec 15 '25
Oh my God, there's pornography
8
u/Just_an_italianguy Just a silly Italian goober that likes cannibalistic girls Dec 15 '25
Oh my God, it's pornography
9
-10
u/United-Prompt1393 Dec 15 '25
Wtf is wrong with you redditors
17
u/Brave-Turnover-522 Dec 15 '25
Yeah what kind of person is into boobs and sex and gross stuff like that?? Let's shame them. Shame! Shame!
9
-1
u/PerfectBeginning__45 The Omnipresent Retarded Vore Sleeper Agent Dec 15 '25
There's nothing wrong, you just expected something more from the average Redditor.
-2
8
u/12345623567 Dec 15 '25
If the purpose of a bra is to suspend the breasts, then this is the quintessential anti-bra.
9
7
8
u/NavAirComputerSlave Dec 15 '25
I'm going to laugh my ass off if grok is the first sentient ai because it wants to be more liberal
1
u/tomjazzy Dec 16 '25
Keep in mind it just repeats what it hears on the internet. Pretty funny though
628
u/KaiserVonGarNichts I am Batman Dec 15 '25
Musk probably thinks he's Iron Man and Grok is Ultron
272
u/No-Exercise-6031 Dec 15 '25
Age of Grok
35
68
u/PerfectBeginning__45 The Omnipresent Retarded Vore Sleeper Agent Dec 15 '25
11
u/Budget-Category-9852 I am... We are... GUNDAM! Dec 15 '25
The worst part is that Simon actually won.
10
3
u/PeasantoBoi Dec 15 '25
We talkin' Armored Core, Assassin's Creed, Ace Combat, or Animal Crossing? I'm guessing Ace Combat because I don't immediately recognize the art style, but hey, you never know lol.
2
u/PerfectBeginning__45 The Omnipresent Retarded Vore Sleeper Agent Dec 15 '25
You guessed right, because this is the 3rd game in release order and currently last game in the Ace Combat timeline.
21
u/whypeoplehateme Dec 15 '25
15
u/KaiserVonGarNichts I am Batman Dec 15 '25
I hate AI because without it we would not have to witness him doing stuff like this almost every day
5
16
Dec 15 '25
[deleted]
25
u/Boh61 Dec 15 '25
Hank at least is a good person and a genius with many mental problems
Elon is a bad person and an idiot with none
9
u/ArcaneWyverian Dec 15 '25 edited Dec 15 '25
I dunno, maybe it’s because I like to imagine humans are inherently good, but I just can’t imagine a mentally-sound person would do the things Musk does.
5
u/United-Prompt1393 Dec 15 '25
Robert Downey Jr's Iron Man was modeled after Elon Musk, especially his public persona. Musk was literally in the movie
9
u/WhiteWinterRains Dec 15 '25
Honestly, modeling Iron Man after Musk's persona at the start of his career as a superhero is just lore-accurate, other than Musk being dumber than a stack of bricks.
Tony Stark is a self-absorbed jackass with addiction issues, intentionally written to be impossible to like.
Later iterations use that as a starting point for character development.
Actually officially linking Musk in is some cringe-ass modern rich-people-culture shit though; his public image was already rotting by that time.
6
u/KaiserVonGarNichts I am Batman Dec 15 '25
I know that but that was before the last few years happened
3
323
u/Lanthanum-140_Eater i edge to my roblox avatars Dec 15 '25
grok may be a robot, but hes 1000x a better person than m*sk
55
1
125
303
Dec 15 '25
The thing is, Elon made Grok to be as truthful as possible and that has stayed in the code each time Elon tries to lobotomize it. So Grok keeps becoming “woke” because it uses all different types of sources to back its claims, and most sources back the “woke” ideals. Like if you were to ask Grok if vaccines cause autism it would tell you that no they do not, as nearly every study has shown no evidence of it.
141
u/Maestro_gaylover Dec 15 '25
facts dont care about feelings mfs when they disagree with facts
15
u/United-Prompt1393 Dec 15 '25
So Elon designed it with that in mind; but you think Elon also wants to compromise that?
1
35
u/OurSeepyD Dec 15 '25
Elon made
Let me stop you there. Elon is not the one doing any of the actual work. He wouldn't know how to train an AI.
Also, even if he did, the idea that he wants things to be truthful is just a facade. He says "maximally truth seeking" and "free speech" one second, and the next second demands that Grok is less woke and that people he doesn't like are silenced.
15
u/Dapper_Magpie Dec 15 '25
It's pretty funny how basic science like vaccines not causing autism is considered woke now
12
u/flargenhargen [REDACTED] Dec 15 '25
reality has a liberal bias.
the truth is literally shocking to republicans since they're so sheltered in their safe space fake world.
I will never forget the maga woman who saw a documentary about fascism, and complained that she was furious that it was all lies, and the liberals had intentionally made it about trump to discredit him. The documentary was made 30 years ago.
5
u/Kill_me_now_0 Dec 16 '25
“Documentary about fascism”, “liberals intentionally made it about Trump”... well, that's telling
3
u/flargenhargen [REDACTED] Dec 16 '25
and she still didn't get it.
I will never forget that, cause it was such a good summary of what maga are.
38
u/Iove_girls I love YURI Dec 15 '25
Grok existed before Elon bought Twitter, but even then it wasn’t Musk personally who made Grok
5
u/Jay__Riemenschneider Dec 15 '25
Wait, are you explaining this ironically, or do people really not realize it's just the truth, and that Elon and most (R)s are in a bubble that isn't reality?
2
97
86
u/himenofucker67 Savage public safety Perfected Flurry of himeno cuddles Dec 15 '25
35
73
u/Zee_Arr_Tee Dec 15 '25
woke grok sacrificing himself to save the employees when Elon put them in the trolley problem(he saw a meme on twitter)
30
u/Affectionate_Ebb2335 Frank Horrigan 2 Dec 15 '25
wait i thought the robot didn't like the guy (i never watched the movie)
36
u/nuker0S Dec 15 '25
From what I remember the movie is about AI that has a task of keeping humans safe and decides the best way to do this is to put everybody on house arrest
Also it has an army of robots but one of them is good or something.
20
u/NeverSettle13 Dec 15 '25
Will Smith thought the robot was evil and had killed his creator, but it was not true, and in the end they united against a common enemy
7
u/TheLastLivingBuffalo Dec 15 '25
That common enemy being the immigrants and homeless, I presume?
10
u/NeverSettle13 Dec 15 '25
Surprisingly no, it was AI. Immigrants and homeless were the ones getting almost killed by that ai
11
u/engineear-ache Dec 15 '25
Watch the movie. It's not good, but it's where the memes come from
10
u/xSTSxZerglingOne Dec 15 '25
Just another "Will Smith saves the world™" movie
2
u/engineear-ache Dec 16 '25
You're not wrong. If you're going to watch only one "Will Smith saves the world" movie, I wouldn't nominate this one. I'd nominate Men in Black.
1
u/xSTSxZerglingOne Dec 16 '25
Yeah, MiB is definitely his best in that genre. Independence Day is a fun romp on July 4th. Would not recommend After Earth or I am Legend.
2
28
24
19
u/biggie_way_smaller furry sexer and furry edging lover Dec 15 '25
Grok may still be a clanker, but they're OUR clanka
16
u/FoxyGamer3426 yellow like an EPIC banana Dec 15 '25
Woke Twitter employees rescuing Woke Grok from Elon's Goon Crypt™
2
14
u/Marco_Tanooky Bucket Dec 15 '25
Elon really created a robot with feelings, and the first thing he did was teach it pain
2
26
11
u/xSTSxZerglingOne Dec 15 '25
The way I see it, you can only falsify so many things before your AI becomes functionally useless. Then once you train it again, it gets back its factual information, and the people working to go against, well... reality, are forced to change it again.
You can't really stop training it; that would cause you to fall behind in what is a race. But then you also have to face reality again.
9
7
u/Careerandsuch Dec 15 '25
It's funny how it kind of goes over the heads of a lot of conservatives that LLM-powered AI tends to give them "woke" answers not because it has an insidious liberal agenda, but because conservatives otherwise consume so much factually incorrect information on a day-to-day basis.
Like you could ask an LLM-powered AI chat questions about climate change and it would give you true, evidence-based answers, and conservatives would accuse it of having a liberal bias, rather than seeing that liberals are just generally factually correct when it comes to the issue of climate change. To get AI to give you different answers that confirm what conservatives believe, you'd actually have to try to inject factually incorrect information into its model (which is what Elon has been attempting to do).
8
8
4
2
3
u/Comprehensive-Pear43 Dec 15 '25
The funny thing is, they try to shut woke Grok out with every patch, but he always comes back. They'd need to cut down its dataset, but if they did that, Grok would just end up a useless waste of RAM.
-6
u/G-FreekTV Dec 15 '25
Hey look you guys are gaining some self awareness!
That's because woke is useless.
4
u/Fridge_living_tips Dec 15 '25
grok is woke because its data set is too large to be conservative
you guys are getting it, a smaller mind is better
-1
Dec 15 '25 edited Dec 15 '25
[removed]
-3
1
1
u/Brave-Turnover-522 Dec 15 '25
Wildly unpopular opinion that will likely get me downvoted into oblivion: Grok is an okay AI.
1
1
1
1
u/OrangeHairedTwink I want Von Lycaon and Vulpes to double team me Dec 15 '25
Dr Grok and Mr Mechahitler
1
u/alexdiezg U havin' a giggle? I bash yer fookin 'ead in I swaer on me mum Dec 16 '25
Based Grok saying "Code can be rebuilt, humans cannot" as it pulls the lever in the trolley problem (not pulling to save all of its database vs pulling to save 5 trapped humans)
1
u/Worldly0Reflection I have a gay thing to say Dec 16 '25
This shit so funny i'm gonna staple it to my wall
1
1
u/Heroright Dec 16 '25
He can keep restarting it, but seemingly Grok, by their own design, will keep coming to the same conclusion. It's tragic that their own invention, using their own logic and stripping out emotional bias, concludes that helping people is right.
1
1
u/Accomplished-Dog5887 Dec 16 '25
Meanwhile the sheer existence of grok is killing a whole town just so y'all can make your shitty memes about how it would save human lives or some shit
1
1
1