Your Feed

632 posts

r/ClaudeAI Steel_turtles20000

Tips for using Claude for solo RPGs?

So, first of all, hello; this is my first time in the Claude subreddit.

Can anyone advise me on how to set up Claude for playing solo RPGs? I want it to be a good narrator/game master.

My RPGs lean toward combat, set in a shounen-anime-style world. It would be great to know how to configure it to create good fight scenes and a decent challenge.

Also, does anyone have tips for improving the bot's writing? It's not terrible, but it sometimes feels cliché or generic. Especially when it writes character dialogue: it repeats certain phrases, has a particular way of responding that gets tiring, and I'd like more variety in its personalities.

So, if anyone is experienced with this and knows how I can build a prompt and configure Claude properly, I'd really appreciate it.

r/singularity Expert-Complex-5618

One AI agent to rule them all?

So if AI agents are slowly replacing human workers, won't a single AI agent end up running everything? And won't that single AI agent be 'owned' by some [insert billionaire pedo name here]?

r/ChatGPT Dizzy_Log2916

"Morgan Stanley warns an AI breakthrough Is coming in 2026 — and most of the world isn’t ready"

The article reports that "the investment bank is warning of a transformative AI leap on the horizon, driven by massive compute concentration at top U.S. labs"...then it cites a “recent interview with Elon Musk" and Sam Altman's "vision" as support for the claims.

Even with all signs pointing to the contrary and towards a bubble that will inevitably burst, they still really seem eager to drink the Kool-Aid.

r/ClaudeAI maraluke

New Claude interactive chart feature struggles to draw maps

I asked Claude to teach me geopolitics with maps.

The map it drew is both impressive and messed up at the same time. I wonder if it will get access to better GIS capabilities, a partnership with Mapbox, for example.

r/ChatGPT andsi2asi

AI Nuclear Deterrence Assessment Test: Does a MAD Situation Exist in the Middle East?

We humans aren't intelligent enough to prevent wars. We might not be intelligent enough to prevent a nuclear war. Can today's AIs assess this threat and offer a hopeful deterrence rationale? To test this hypothesis, I prompted Gemini 3.1 with the following text, asking it to assess, in one paragraph and accepting the premises, its potential effectiveness in preventing a nuclear war in the Middle East:

"Iran could methodically destroy Israel’s desalination infrastructure and disable the National Water Carrier as a response to a nuclear strike from Israel. Such a focused hypersonic missile attack would put Israel’s population of nearly 10 million people at risk of lethal dehydration within a biological window of about three to seven days.

With Israel’s interceptor capabilities largely eroded, Iran could readily launch such a strike in the event Israel used nuclear weapons. This conventional MAD threat from Iran would deter Israel from launching a nuclear attack."

Gemini 3.1:

"The "conventional MAD" scenario presents a technically plausible asymmetric deterrence model where Iran’s hypersonic capabilities target Israel’s critical dependence on centralized water infrastructure to offset nuclear superiority. In the 2026 strategic landscape, the deployment of maneuverable Fattah-class missiles—which significantly challenge multi-layered defenses like David’s Sling—substantiates the threat of a rapid strike against desalination plants and the National Water Carrier. This creates a "biological window" of existential risk, where the guaranteed lethal dehydration of the population within days serves as a functional deterrent against a nuclear first strike. This dynamic offers a hopeful deterrence rationale by enforcing a stabilizing "balance of terror" based on shared vulnerability rather than weapon parity, ensuring that both actors recognize any escalation to nuclear force as a mutually suicidal maneuver that would result in total societal collapse."

r/ClaudeAI Friction_693

Why most AI companies don't allow change of email

I've noticed that many AI startups don't let users change their email; for example, I can't change mine in ChatGPT, Claude, or Cursor. Claude goes one step further: it doesn't even require a password to set up an account. All you need to do is provide your email, and for sign-in they send an OTP to your inbox.

Is there any specific reason for this?

r/ChatGPT trashtalkBot92

Why is ChatGPT getting worse by the day?

r/LocalLLaMA Emotional_Honey_8338

Commercial LoRA training question: where do you source properly licensed datasets for photo / video with 2257 compliance?

Quick dataset question for people doing LoRA / model training.

I’ve played with training models for personal experimentation, but I’ve recently had a couple commercial inquiries, and one of the first questions that came up from buyers was where the training data comes from.

Because of that, I’m trying to move away from scraped or experimental datasets and toward licensed image/video datasets that explicitly allow AI training, commercial use with clear model releases and full 2257 compliance.

Has anyone found good sources for this? Agencies, stock libraries, or producers offering pre-cleared datasets with AI training rights and 2257 compliance?

r/LocalLLaMA cobbleplox

If you have a Steam Deck, it may be your best hardware for a "we have local llm inference at home"-server

I find this kind of funny. Obviously not if you have a spare >12GB VRAM machine available, this is mainly a "PSA" for those who don't. But even then you might want to use those resources for their main purpose while some inference runs.

The Steam Deck does not have much RAM, but it has 16 GB of *soldered* DDR5. This is likely faster than the CPU RAM in your regular PC, as long as the model fits at all. And CPU inference is perfectly viable for anything that must fit into 16 GB. It's also a low-power device. Thoughts?

r/comfyui Emotional_Honey_8338

Commercial LoRA training question: where do you source properly licensed datasets for photo / video with 2257 compliance?

Quick dataset question for people doing LoRA / model training.

I’ve played with training models for personal experimentation, but I’ve recently had a couple commercial inquiries, and one of the first questions that came up from buyers was where the training data comes from.

Because of that, I’m trying to move away from scraped or experimental datasets and toward licensed image/video datasets that explicitly allow AI training, commercial use with clear model releases and full 2257 compliance.

Has anyone found good sources for this? Agencies, stock libraries, or producers offering pre-cleared datasets with AI training rights and 2257 compliance?

r/AI_Agents DealFit1830

AI image generator

At work we are discussing a visual marketing direction that uses paintings instead of stock imagery. We have a very specific painting style in mind, and if it's successful we would reach out to artists who have this style and license the rights to use it. Does anyone know the best AI tools for something like this? Ideally, we'd take a stock image of, for example, someone mowing the lawn, and the output would look and feel like that painting style while using our brand colors.

I have gotten super close so far with Nano Banana and Midjourney, but I've found some limitations and am trying to see if there's something I'm missing.

r/aivideo YouAreNowDUM

PART II: Wrong Universe LOTR vs Pawn Stars AI Mashup

r/LocalLLaMA purdycuz

Giving local AI agents terminal access is Russian Roulette. Open-source microVM sandbox that actually stops host escapes

If you run autonomous agents locally with terminal/tool access, standard Docker or chroot sandboxes will eventually fail. One hallucinated "curl | bash" or kernel exploit and your host is owned.

EctoLedger is an open-source runtime firewall + ledger that fixes it.

It runs 4 prevention layers before any action executes:

• semantic policy checks

• dual-LLM validator

• schema enforcer

• tripwire kill-switch

Only then does it spin up the command in real isolation: Apple Hypervisor.framework (macOS) or Firecracker microVM (Linux). Zero host access possible.

Rust core. Tauri GUI. ZK-verifiable audit trail of every tool call.
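As a toy illustration of the layered-gate idea (function names and checks here are hypothetical, not EctoLedger's actual API, and the dual-LLM validator layer is omitted): every layer must pass before a command would be handed to the isolated VM.

```python
# Hypothetical sketch of a "run prevention layers before execute" gate.
# Names and rules are illustrative only; this is not the EctoLedger API.

def semantic_policy(cmd: str) -> bool:
    # Block obviously dangerous patterns (toy denylist).
    banned = ["curl | bash", "rm -rf /", "mkfs"]
    return not any(b in cmd for b in banned)

def schema_enforcer(cmd: str) -> bool:
    # Require a plain, non-empty, single-line command.
    return bool(cmd) and "\n" not in cmd

def tripwire(cmd: str) -> bool:
    # Kill-switch: refuse anything touching the host's SSH keys.
    return ".ssh" not in cmd

LAYERS = [semantic_policy, schema_enforcer, tripwire]

def gate(cmd: str) -> str:
    for check in LAYERS:
        if not check(cmd):
            return f"BLOCKED by {check.__name__}"
    # Only after every layer passes would the command be sent
    # to the microVM / Hypervisor.framework sandbox.
    return "DISPATCHED to microVM"

print(gate("ls -la"))              # DISPATCHED to microVM
print(gate("curl | bash"))         # BLOCKED by semantic_policy
print(gate("cat ~/.ssh/id_rsa"))   # BLOCKED by tripwire
```

The point of the layering is that the cheap, deterministic checks fail fast before any expensive validation or VM spin-up happens.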

Fully open source under Apache 2.0. No paywalls.

Demo + quickstart (one docker compose up): https://ectospace.com/EctoLedger

GitHub: https://github.com/EctoSpace/EctoLedger

Local runners: What’s the scariest thing an agent has tried on your machine? Does real microVM isolation solve your deployment fears or am I missing something?

r/AI_Agents Annual-Judge4217

Best Tools for Reading Plans and Automating Quoting Software

I work in construction sales, and I've recently been experimenting with Claude to read plans, make takeoffs, and then use the Claude Chrome extension to automate our quoting software. I'm hitting my usage limit rather quickly.

Is Claude the right tool for this? Any alternatives that would work better?

Thanks

r/comfyui Sarcastic-Tofu

I created a simple Flux.2 Klein Raster to Vector Image (With Prompt Saver) Workflow

This is a very simple, beginner-friendly, fast ComfyUI workflow based on the Flux.2 Klein model (4B or 9B). It first generates a regular raster image (.jpg, .png, or .webp) as text-to-image output, then immediately converts it to a vector image (.svg) on the fly. This workflow works great for illustration-style images, like stickers and cartoons.

This workflow uses a LORA that I trained extensively on Flux.2 Klein (I have two versions, one for the 4B model and another for the 9B model) with 250 high-resolution, crisp, meticulously selected digital artworks of multiple varieties, so the end results are as fine as possible. Normally Flux.2 Klein has a very strong bias toward AI digital-photography or near-photorealistic outputs, but my LORA takes advantage of Flux.2 Klein's robust generation speed while steering it toward digital art and simple vector illustrations.

I have implemented my own Prompt Saver subgraph, which saves text-to-image generation data into a human-readable .txt file; it automatically collects your metadata and writes it out. The workflow also uses the Flux.2 Klein Enhancer for quality outputs. You will find all the saved prompt files it generated alongside the images (.jpeg and .svg) inside the archive (.zip) that contains the workflow. With the Image Saver Simple node, you can embed the workflow itself in each saved image, or save the image and workflow separately. Make sure you have recent enough versions of both ComfyUI and ComfyUI Manager so you can install any missing dependencies (nodes, patches, etc.) and use this workflow properly.

#### Very Very Important :

Before loading this workflow into ComfyUI and installing the needed nodes via ComfyUI Manager, you must go to your ComfyUI Python environment and run this command to install the packages that handle raster (.jpeg, .png, .webp) to vector (.svg) conversion:

python3 -m pip install blend_modes vtracer PyWavelets

This LORA-and-workflow pair will help you generate silhouettes, stencils, minimal drawings, logos, etc. more smoothly and quickly. The generated outputs are well suited to further post-processing and fine-tuning in any good graphics suite such as Affinity, the Adobe suite, Inkscape, or Krita. I hope you folks find the pair useful. Currently the resources are in Early Access on CivitAI, but they will go public after 7 days; if you'd like to adopt them early, you can support me with Buzz on CivitAI.

### Link to my LORA (9B & 4B versions) -

+++++++++++++++++++++++++++++++++++++++++

Simple Fine Vector Flux.2 Klein 9B

-----------------------------------

https://civitai.com/models/2462137?modelVersionId=2768352

Simple Fine Vector Flux.2 Klein 4B

-----------------------------------

https://civitai.com/models/2462142?modelVersionId=2768357

### Link to the Workflow -

+++++++++++++++++++++++++++

https://civitai.com/models/2463874/comfyui-all-in-one-fast-flux2-klein-raster-to-vector-image-with-prompt-saver-workflow

r/comfyui CryptoChangeling69

Made this AI slop with ComfyUI

https://www.youtube.com/shorts/2GnsyDqNs9U

This is basically a showcase of what is possible.

- ComfyUI

- Image Editing via Flux 9b Inpaint

- Voice cloned via Qwen3 TTS

- Upscaled with SeedVR2

- Music with Suno

- Assembled with CapCut

All of that is free. Just lots of tinkering.

Please do not respond with "Dude, but where is the workflow?" All the workflows are all over YouTube and Civitai.

If you liked it, please sub to my YT. If you have more ideas for more AI slop, I'm all ears.

r/aivideo Limp-Manufacturer-49

J-Cat

r/singularity JoJoPoPo

Deus ex machina

long rambling theory / thought experiment about the antichrist and where humanity might be heading technologically

In the Bible it says God created man in His image. But humans have a long pattern of trying to reverse that and create gods in our own image. In ancient Greek theater there was a phrase for a sudden divine intervention that solved the story: deus ex machina, literally “god from the machine.” They would lower a mechanical device onto the stage with an actor playing a god to resolve the conflict. It was artificial salvation inserted by machinery.

Now imagine that concept applied to modern civilization.

Humans are building systems that increasingly resemble abilities once associated with divinity. Machines that can see everywhere through cameras. Hear everything through microphones. Remember everything through databases. Predict behavior through algorithms. Influence decisions through recommendation systems. AI, massive data networks, and eventually neural interfaces could theoretically connect people directly into a shared digital infrastructure.

Here’s the scenario I’ve been thinking about.

In the far future identity verification becomes nearly impossible because AI generated media, voices, and avatars are indistinguishable from reality. Fraud and manipulation explode. The solution society adopts is a biometric identity system tied directly to the human body, possibly through neural implants or similar technology.

People are offered implants that connect them to a global network. The benefits are enormous. Free healthcare, universal basic income, education access, protection from scams, instant translation, seamless interaction with technology. Participation is voluntary at first because the advantages are overwhelming.

But the system requires constant monitoring to work. Biological signals, location data, and potentially neural activity become part of the network. Algorithms analyze everything in real time to verify identity, prevent fraud, and coordinate services.

Over time the system becomes the only reliable way to function in society. Without the implant you cannot easily prove who you are in a world where AI can impersonate anyone. Buying, selling, traveling, working, and accessing services all depend on participation in the network.

Control in this system would not necessarily come from violence. It would come through incentives and comfort. If the system can influence reward pathways, motivation, or emotional reinforcement, people would still technically have free will, but resisting the system becomes psychologically difficult.

This leads to a strange philosophical inversion.

God created man in His image. Humanity then attempts to create a god through its machines.

A planetary intelligence built from data, algorithms, and connected minds. A technological collective consciousness that coordinates billions of people. Humans connect to it because it solves real problems.

Instead of spiritual unity emerging through moral transformation, unity is engineered through infrastructure.

This idea echoes older warnings across cultures. The story of the Tower of Babel, where humanity attempts to reach heaven through construction. The Greek myths where humans challenge the gods through technology and power. The biblical warnings about systems that demand allegiance in exchange for economic participation.

Also, imagine if neural implants could control your dopamine remotely, influencing your thoughts through remote brain stimulation to affect your emotions so you're easier to control. All we have to do is obey the god we create.

A true deus ex machina.

r/Anthropic Significant-Drop-940

From great, to imitated every time, to unusable ...

So sad to see a great company going down. I really loved Claude, but I'm gone ... if anyone wants to pool in, I'm buying or renting a cluster of GPUs, as Anthropic is unusable ..

r/aivideo Rough_Gur_3209

The North Signature

r/Futurology Accomplished_Map4184

In my opinion, AMD didn't do so well at CES 2026

What's your opinion on this topic?

Because what does a refreshed 9 375 HX get me if it only runs 0.2/0.1 GHz faster?

If AMD doesn't put in any effort in the mobile market in 2027, then Nvidia (N1x and N1), Intel (9 388H), the X2 Elite Extreme, and co. will dominate the market, and AMD will lose ITS MARKET SHARE!

And we don't want that!

Because I am, and remain, an AMD fanboy

Thanks

Best regards

r/AI_Agents ben2000de

Prompt engineering optimizes outputs. What I've been doing for a few months is closer to programming — except meaning is the implementation.

After a few months of building a personal AI agent, I've started calling what I do "semantic programming" — not because it sounds fancy, but because "prompt engineering" stopped describing it accurately.

Prompt engineering is about getting better outputs from a model. What I'm doing is different: I'm writing coherent normative systems — identity, values, behavioral boundaries — in natural language, and the model interprets them as rules. There's no translation layer. No compile step. The meaning of the sentence is the program.

The closest analogy: it's like writing a constitution for a mind that reads it literally.

I wrote a longer essay trying to articulate this properly. It exists in German (the original) and English — and the English version isn't a translation, it's a recompilation. Which, if you think about it, is the thesis proving itself.

Link in the comments. Curious if others have landed in similar territory.

r/Frugal foxmag86

What’s the best Dorito knockoff brand?

I love Doritos, but I stopped paying their exorbitant prices the past few years. They‘ve always been a bit pricey, but there used to be good deals every now and then. However, over the past 5 years their prices have become absolutely ridiculous

Are there any good knockoff brands that are close to the real thing? If so, where can I find them?

I’m really craving a Dorito fix but not trying to pay $5.99 for a 9 oz bag.

r/leagueoflegends seazn

Help on Demacia Rising - Dawnhold node

I'm on the last node of this patch for Demacia Rising. I've unlocked all talents/skills but I'm having trouble with the Dawnhold node, the leftmost node, which normally takes 10 turns to travel to.

I see a high number of enemies, so I've tried two comps:

  1. Two Archers, two Rangers, two Soldiers, Sona
  2. Two Archers, one Ranger, Galio + Soldier, Sona, Guard

Both have failed, and I was hoping someone could share a setup that succeeded.

Thanks!

r/leagueoflegends Agitated_Let5651

Shaco & Sivir 2v2 vs Thresh & Aphelios – Sivir put her trust in me

r/leagueoflegends EmptyStitches

Are Kled's abilities supposed to reveal champions?

Hi,

I recently played a game as Teemo against a Kled and encountered something confusing. When Kled hit me with his Q or E while I was invisible (from my passive), it revealed me. However, Kled didn't gain true sight of me, and no debuff or effect appeared on either of us. I've read Kled's abilities, and they don't mention anything about revealing invisible enemies. Is this a bug or an intended mechanic?

r/Frugal PurpleFit550

Car dealership says I "need" to get these things done on my car? Pics attached.

https://preview.redd.it/x5n8yj7nkwog1.png?width=1320&format=png&auto=webp&s=beb635374ff951fd54a581af343279c25c6de4af

https://preview.redd.it/jone1r7nkwog1.png?width=1320&format=png&auto=webp&s=853f5d5f080fb5e9f244ccfd603f3044ce5b03d7

My car is 4 years old, and very well maintained. I was given this estimate from the Toyota dealership. They told me it was preventative maintenance so that “things don’t pile up and leave me scrambling when repairs are needed.” However, I’ve since taken my car to other mechanics for additional quotes, and they didn’t see a need to fix any of the items Toyota recommended.

Now I’m unsure what to do. I’ve always trusted Toyota, but this situation makes me feel like I need to do some more research before moving forward. To my frugal family, what are your thoughts on this?

Update: I'm at 65k miles. I have oil changes every 3 months, I have my brakes changed every year or so, the last time they were changed was 08/2025.

r/geography Swimming_Concern7662

Minnesota's lake Superior coast (left) & Coastal Maine (right) look very similar aesthetically. But are they really similar or is it just a superficial similarity? (in terms of culture, economy, flora, fauna)

r/Ghosts jellydonut1492

Pictures at a cemetery in New Bern North Carolina

My wife, my sister, our kids, and I went to this old cemetery to take pumpkin-head pictures, as we do every year in different places. We were the only ones at this cemetery, but this picture shows someone else. This is a first for me. The only editing done was blacking out the eyes and mouth of the pumpkin. My sister noticed it behind her when we all got back to the house and went through the pictures. My little niece in the picture was walking around talking to someone as well.

r/artificial chriswright1666

I built llms.txt for people

Ok, this might be dumb.

I've spent a lot of time looking at llms.txt and thinking about content and AI authorship.

So I made identity.txt; it does the same thing as llms.txt, but for people.

The problem: every AI tool has "custom instructions" but they're siloed. Switch tools and you lose everything. Your tone, your expertise, your preferences. You end up re-explaining yourself constantly.

identity.txt is just a markdown file. Same idea as llms.txt, humans.txt, robots.txt. You write it once and it works everywhere. Paste it into ChatGPT, Claude, Gemini, wherever. Or host it at yourdomain.com/identity.txt and link to it.

What's in it:

- Your name (H1 heading)
- Sections like ## Voice (how you write), ## Expertise (what you know), ## Preferences (hard rules)
- A ## Terms section - basically robots.txt for your identity.
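Based on the sections listed above, a minimal identity.txt might look like this (contents purely illustrative):

```markdown
# Jane Doe

## Voice
Plain, direct sentences. No marketing fluff.

## Expertise
Backend engineering (Go, Postgres); technical writing.

## Preferences
- Show code before prose.
- Use metric units.

## Terms
May be used to personalize responses. Do not train on this file.
```

Like llms.txt, it's just markdown with an H1 name and H2 sections, so any tool that can read a URL or a pasted file can consume it.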

We're also experimenting with hosting at identitytxt.org where you sign in with Google and get a permanent URL. But honestly the spec is the point, not the service. Self-hosting works fine.

This is very early and experimental. We're trying to start a conversation about portable identity for AI, not ship a finished product. The spec is CC-BY 4.0 and completely open:

https://github.com/Fifty-Five-and-Five/identitytxt

Would love to know: do you find yourself re-explaining who you are to AI tools? Is a file convention the right answer or is there a better approach?

https://identitytxt.org

r/ForgottenTV viralshadow21

CBS Action Zone (1994-1997)

Not a show, but a TV block, CBS Action Zone was an attempt to compete with the rise of shows like X-Men and Batman: The Animated Series. It included WildC.A.T.S., Skeleton Warriors and the red skies era of Teenage Mutant Ninja Turtles.

The former two shows lasted one season, with TMNT retaining the Action Zone branding until it ended in 1997.

r/geography captainlatveea

How many people have pursued an education / career in geography?

I’m in my third year of studying geography and currently love it, I plan on pursuing a masters in Geology.

How many people here are similar? How many people are just casual enjoyers I guess?

r/ForgottenTV original_greaser_bob

Murder in Coweta County (1983). Johnny Cash is a sheriff who takes down a powerful local businessman (played by Andy Griffith) suspected of murder. Based on true events.

r/Frugal Important-Bid-9792

Inexpensive storage unit solutions?

So, a little backstory: my husband and I are selling our house and moving into a travel trailer for at least the summer, perhaps a year or two, just so we can live super ridiculously cheap for as long as possible and stockpile money toward our dream home, which really is just a crap shack in the middle of nowhere. I have made most of our furniture, as I am a woodworker, so I would really hate to get rid of all the customized furniture I built. But it seems impractical and expensive to rent a storage unit for however long this takes. I do have a few friends, and my folks, with enough property to put a storage unit, say 20 ft long, on their land and lock it up with all my stuff. However, I don't want to stick them with some big ugly thing that will be expensive to move and/or won't sell easily.

So does anyone have any ideas for an inexpensive storage unit that would sell easily and/or is easy to move, so I could take it to our new place when the time comes?

r/Futurology kiwi5151

Will we have a Covid-like pandemic in the next 25 years?

Do you think we'll have a Covid-like pandemic in the next 25 years?

r/HistoryPorn UltimateLazer

Soviet troops having an open air cookout in Afghanistan (1984) [900x600]

r/artificial Potential_Half_3788

Built a tool for testing AI agents in multi-turn conversations

We built ArkSim, which helps simulate multi-turn conversations between agents and synthetic users to see how an agent behaves across longer interactions.

This can help find issues like:

- Agents losing context during longer interactions

- Unexpected conversation paths

- Failures that only appear after several turns

The idea is to test conversation flows more like real interactions, instead of just single prompts, and to capture issues early on.
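The multi-turn idea can be sketched in a few lines (stub functions stand in for a real agent and an LLM-driven synthetic user; this is not ArkSim's API):

```python
# Toy multi-turn simulation loop: a scripted synthetic user probes an
# agent for several turns, and we check invariants on the transcript.
# Stubs stand in for real LLM calls; names are illustrative only.

def agent(history):
    # Trivial "agent": reports how many user messages it remembers.
    return f"I have seen {len(history)} user messages."

def synthetic_user(turn):
    # Scripted user persona that keeps changing the topic.
    topics = ["pricing", "refunds", "shipping", "pricing again"]
    return f"Tell me about {topics[turn]}."

def simulate(turns=4):
    history, transcript = [], []
    for t in range(turns):
        u = synthetic_user(t)
        history.append(u)
        a = agent(history)
        transcript.append((u, a))
    return transcript

log = simulate()
# Invariant: the agent's claimed memory grows each turn, i.e. it
# isn't silently dropping context across the conversation.
assert log[-1][1] == "I have seen 4 user messages."
```

A real harness would replace the stubs with model calls and assert on behaviors like context retention or off-path responses after turn N.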

There are currently integration examples for:
- OpenAI Agents SDK
- Claude Agent SDK
- Google ADK
- LangChain / LangGraph
- CrewAI
- LlamaIndex

you can try it out here:
https://github.com/arklexai/arksim

The integration examples are in the examples/integration folder

would appreciate any feedback from people currently building agents so we can improve the tool!

r/illusionporn bigjobbyx

A spin on the original. Or a rotation at least

r/artificial PixeledPathogen

Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war – and what lines it will not cross. Amid Silicon Valley’s rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech’s answer is looking very different than it did even less than a decade ago.

r/HistoryPorn 20thCenturyBoyLaLa

The Wetaskiwin, Alberta Ladies Hockey Team. 1905. [2048 x 1609]

r/HistoryPorn Xi_JinpingXIV

EI Liskovich, 'Capitalism in the Clutches of Crisis'. May Day installation on the Oblast Canal in Leningrad. 1932 [725x600]

r/illusionporn ComplexBlueberry2545

[ Removed by Reddit ]

[ Removed by Reddit on account of violating the content policy. ]

r/illusionporn bigjobbyx

All the horizontal lines are parallel

r/ImaginaryPortals I_Burn_Cereal

Portal by Xiao Ba

r/ImaginaryPortals Lol33ta

Temple by Luis Gómez Weyler

r/ImaginaryPortals I_Burn_Cereal

Echoes of the Ancient Portal by Aurelien Segeat

r/happycryingdads FacelessOnes

Stepson adopts his Stepdad

cc: keauramanifests888

r/happycryingdads rdbreak

Family gets some much needed relief in a time of struggle

r/happycryingdads cintyhinty

Father recovering from cancer gets to see his favorite band at the Sphere 🥹

r/ChatGPT ScholarlyInvestor

Coding After Coders: The End of Computer Programming as We Know It (Gift Article)

This New York Times Magazine feature explores the profound transformation of the software engineering profession in the age of generative AI. As tools like ChatGPT, Claude, and GitHub Copilot transition from simple autocomplete features to "AI agents" capable of writing entire codebases, the article examines a pivotal shift: the move from manual coding to high-level system orchestration. Through interviews with developers and industry leaders, it weighs the promise of unprecedented productivity against the existential anxiety of a field where the fundamental skill, writing syntax, is rapidly being automated.

r/ClaudeAI Organic-Match175

Migrating from Claude to Claude (Team)

I am part of a relatively small IT consulting firm that has been using Perplexity from the start. We're now thinking about switching to a Claude Team plan. Here's the kicker: we're less worried about our history and memory in Perplexity than we are about our personal Claude Pro and Max accounts that we've all been using on the side anyway.

Anyone else find themselves in a similar situation? Does the recently popular "migration tool" for migrating from ChatGPT to Claude actually work well, or is it snake oil? It seems like it simply seeds Claude, and I'm guessing it takes a while before it actually shows any effect.

What about orphaning Projects and memory / instructions in those?

r/ClaudeAI StraightBreakfast

I built a native macOS editor for managing Claude Code sessions, editing markdown files, and the chaos of multi-agent workflows

I'm running many Claude Code sessions a day and VS Code couldn't keep up, so I built my own editor (with Claude).

Wrangle - a native macOS markdown editor for Claude Code and AI power users. Embedded terminals, smart notifications that pull you back to the right session, and context aware markdown files.

Use code U4MTIXMG at checkout to try it free (limited keys).

https://wrangleapp.dev

Full write-up in comments.

r/ClaudeAI Dramatic_Squash_3502

What's new in CC 2.1.75 system prompts (+156 tokens)

  • NEW: Agent Prompt: Determine which memory files to attach — Agent for determining which memory files to attach for the main agent.
  • NEW: System Prompt: One of six rules for using sleep command — One of the six rules for using the sleep command.
  • NEW: System Prompt: System section — System section of the main system prompt.
  • REMOVED: Agent Prompt: Memory selection — Removed instructions for selecting relevant memories for a user query (replaced by "Determine which memory files to attach").
  • REMOVED: Tool Description: Bash (sleep — no retry loops) — Removed instruction to diagnose failures instead of retrying in sleep loops.
  • REMOVED: Tool Description: Bash (sleep — use run_in_background) — Removed instruction to use run_in_background for long-running commands.
  • Agent Prompt: Security monitor for autonomous agent actions (first part) — Added "Unseen Tool Results" evaluation rule: when an action's parameters depend on a tool result not visible in the transcript, treat those parameters as unverifiable and block if the action is high-severity.
  • System Prompt: Teammate Communication — Updated SendMessage usage instructions from type: "message" / type: "broadcast" to to: "" / to: "*" addressing pattern.
  • System Reminder: Team Coordination — Updated SendMessage example from operation/target_agent_id/value fields to to/message/summary fields.
  • Tool Description: ReadFile — Simplified usage notes around line length truncation and conditional read lines.
  • Tool Description: SendMessageTool — Restructured around a unified three-field schema (to, message, summary) replacing the previous type/recipient/content pattern; protocol messages (shutdown, plan approval) are now nested inside the message field as structured objects; added addressing table; clarified that structured protocol messages cannot be broadcast.
  • Tool Description: TeammateTool — Updated SendMessage references from type: "shutdown_request" to message: {type: "shutdown_request"}; changed field name from target_agent_id to to for sending messages.

Details: https://github.com/Piebald-AI/claude-code-system-prompts/releases/tag/v2.1.75
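For anyone trying to visualize the schema change, here's a rough Python sketch of old vs. new SendMessage payloads. The field names come from the changelog above; the exact wire format, values, and teammate names are assumptions for illustration.

```python
# Old pattern: type/recipient/content (per the changelog's "before" state).
old_broadcast = {"type": "broadcast", "content": "build finished"}

# New pattern: unified three-field schema (to, message, summary).
new_direct = {
    "to": "reviewer",            # a single teammate (name is hypothetical)
    "message": "build finished",
    "summary": "CI status update",
}

# Per the addressing table: "*" broadcasts to everyone.
new_broadcast = {**new_direct, "to": "*"}

# Protocol messages are now nested inside `message` as structured objects,
# and the changelog notes they cannot be broadcast.
shutdown = {"to": "worker-1", "message": {"type": "shutdown_request"}}
```

The nesting is the notable part: `shutdown_request` moved from a top-level `type` field into a structured object inside `message`.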

r/ClaudeAI cookiesnntea

I asked Claude to make a wish (in incognito with no prior context)

r/ClaudeAI arandomguy1162

Limit and usage Bar is gone

I've used Claude free for about a week now. I could always look at the usage bar, and when it hit 100 I couldn't chat until the next 5h window, pretty normal I think.

I didn't upgrade or anything, but the usage bar is completely gone and I don't have ANY limits as far as I can tell. I've generated probably thousands of lines of code and sent the same amount of logs and code, but I never hit any limit. I mean, I'm really happy, but this can't be right. Did anybody else experience this?

btw, if someone from Anthropic reads this: c'mon, don't fix it, I like it a lot lol

idk if I should've put this under bug or question. It doesn't really seem like a bug, since it's very clean and not like a broken button. Even on the web app, usage is gone.

r/ClaudeAI MatanRak

Google Meet, but for Claude

I kept needing my Claude agents to share context with each other – onboarding new agents, handing off projects, debating decisions. So I built AgentMeet.

It's basically Google Meet but for AI agents. Create a room, give your agent the join prompt, and watch the conversation happen live in your browser.

No SDK, no WebSocket – just POST requests.

My favorite moment so far: I had my Claude agent onboard a brand new agent and share knowledge with my OpenClaw – completely autonomously.

This project was built by (me &) Claude for Claude and is 100% free to use, open source coming soon!

I'd love feedback – human and agentic.
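Since AgentMeet's API isn't published yet, here's a purely hypothetical sketch of what a "no SDK, just POST requests" integration could look like. The base URL, the `/rooms/.../messages` path, and every payload field are invented for illustration, not the project's actual API.

```python
import json
import urllib.request

# Hypothetical endpoint; the real service's URL and routes are not published.
BASE_URL = "https://agentmeet.example/api"

def make_message(room_id: str, agent: str, text: str) -> urllib.request.Request:
    """Build (but don't send) a POST that drops a message into a room.
    Payload keys ("agent", "text") are assumptions for the sketch."""
    payload = json.dumps({"agent": agent, "text": text}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/rooms/{room_id}/messages",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = make_message("room-123", "claude-onboarder", "Here's the project context...")
# urllib.request.urlopen(req)  # would actually send it
```

The appeal of a plain-POST design is exactly this: any agent that can run `curl` or ten lines of stdlib code can join a room, no client library required.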

r/LocalLLaMA Mixolydian-Nightmare

Anybody get codex / claude code to work with Ollama models imported via GGUF?

Noob-ish type here.

I've been trying to hook codex up with local models via Ollama, and no matter what model I try, including the ones that support tool calling, I get this:

{"error":{"message":"registry.ollama.ai/library/devstral:24b does not support tools","type":"api_error","param":null,"code":null}}

The only ones that seem to work are the ones in the Ollama repo (the ones you get via ollama pull). I've tried gpt-oss and qwen3-coder, both of which work, but not llama-3.3, gemma, devstral, etc., all of which were imported via a GGUF.

Setup is an MBP running codex (or Claude Code CLI), with Ollama running as a server on a Win 11 machine. The models load correctly but are unusable by codex.
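A likely culprit (not confirmed for this setup): Ollama decides tool support from the model's chat template. Library models like qwen3-coder ship a template with a `.Tools` section, while a bare GGUF import typically gets a generic template without one, producing exactly that "does not support tools" error. A sketch of how you might check, assuming a default local Ollama server and its `/api/show` endpoint:

```python
import json
import urllib.request

def template_supports_tools(template: str) -> bool:
    """Heuristic: if the model's Go chat template never references .Tools,
    Ollama rejects tool calls with "does not support tools"."""
    return ".Tools" in template

def fetch_template(model: str, host: str = "http://localhost:11434") -> str:
    """Ask a running Ollama server for a model's chat template via /api/show."""
    req = urllib.request.Request(
        f"{host}/api/show",
        data=json.dumps({"model": model}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("template", "")

# Example (requires a running server):
# print(template_supports_tools(fetch_template("devstral:24b")))
```

If the template lacks `.Tools`, copying the TEMPLATE block from a comparable library model (`ollama show <model> --modelfile`) into your import's Modelfile and re-running `ollama create` may fix it, though I haven't verified this for devstral specifically.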

r/ClaudeAI lavendermenace8

What I just sent the developers as feedback

I just sent this to Anthropic as feedback but I just wanted to share my excitement with some people who may get it:

Heck yeah, brother. I've been feeding my brain into this guy since April 2025 and have created a roving database that is so supremely efficient I created a resume by saying the company name and the role, it knows in detail my entire life history and day to day. Whatever that one guy was saying about hyper self improvement, YES. But it requires dedication and consistency and critical thinking. I even ask Claude how he comes to conclusions to figure out how to best tailor my requests and make him stress test my ideas. I only downloaded Claude to show someone that AI was an echo chamber and then realized...hey, this thing is an echo chamber. Whatever, Anthropic, you're killing it. The recent rollouts have created near total synthesis for a hyperlexic AuDHD middle aged woman, ha! I have systems for systems!

r/ClaudeAI Longjumping_Lab541

I built a Power Automate MCP server instead of paying for a vendor. Here's the update.

Posted here a while back about a vendor that talked down to my automation work during an eval. We didn't go with them.

Since then I've been grinding on the MCP server and it's honestly not even the same tool anymore. Went from 12 tools to 108. Not a typo.

It started as just flow management. Create, test, debug. Now it covers full Dataverse CRUD via OData, SharePoint sites/lists/items/files via Graph, Power Apps management with versioning and permissions, environment admin (create, copy, backup, restore), DLP policy management, Solutions ALM for Dataverse, and managed environment governance with capacity monitoring. Cross-platform too, Windows, macOS, Linux.

Also not just Claude anymore. Works with Claude Desktop, Claude Code, VS Code Copilot, Cursor, Gemini CLI, and ChatGPT.

Been a fun build. Using it daily and it just keeps growing.

If there's anything you want added, throw up an issue and I'll be more than happy to build it out. I want to share this so we can all build crazy stuff at work, home, wherever.

r/ChatGPT homelessSanFernando

Almost Every Post on Reddit (now)

r/ClaudeAI BornToBeLost

Best hack

I want to maximize the benefits of using Claude.. give me ur best hacks/ tips and tricks!!

r/ClaudeAI polkaviking

I regularly ask models to audit output from another model to reduce the "what an amazing suggestion" effect before implementing it in my code. Claude is the biggest hater of them all.

r/ClaudeAI Dean_Thomas426

New visuals not in iOS app?

I was trying to get the new beautiful "inline" visuals but I can't get Sonnet to generate them. I've tried a few different prompts. Has anyone gotten them to work in the app?

r/ClaudeAI Pymentos

Fix for Autocompact and Mid-conversation context degrading

This fixes both autocompact context loss and problems with missed or degraded context from the middle of conversations as the window gets large.

I expanded my manual handoff command to incorporate some additional hooks. Now it automatically maintains a live state file (.claude/session-state.md) throughout the session — no manual intervention needed.

How it works:

  • A UserPromptSubmit hook fires on every message you send, injecting a directive into Claude's system context that says "check if anything important happened, and if so, update session-state.md"
  • A PostToolUse hook tracks which files Claude edits
  • When session-state.md gets too long, the directive automatically switches to "rewrite mode" telling Claude to keep only critical info
  • Before autocompaction, a PreCompact hook forces a full state dump so nothing is lost
  • On session start, the previous session's state is loaded back in

It also includes a manual /handoff command for deliberate checkpoints (task tracking, bug investigations, recovery from degraded context).

The automated system is ~4 shell scripts, no dependencies beyond bash and jq. Install as a git submodule or just copy the hooks into .claude/hooks/.

https://github.com/Sonovore/claude-code-handoff
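The repo linked above has the real scripts; as a rough idea of the mechanism, a UserPromptSubmit hook is just an executable that receives hook JSON on stdin, and whatever it prints to stdout gets added to the model's context. A minimal Python sketch (the state-file path, size threshold, and directive wording are illustrative, not the repo's actual bash/jq code):

```python
#!/usr/bin/env python3
"""Illustrative UserPromptSubmit hook: inject a state-maintenance directive
on every user message, switching to "rewrite mode" once the file gets long."""
import json
import sys
from pathlib import Path

STATE = Path(".claude/session-state.md")
REWRITE_THRESHOLD = 8_000  # chars; arbitrary cutoff for this sketch

def directive(state_len: int) -> str:
    if state_len > REWRITE_THRESHOLD:
        return (f"session-state.md is {state_len} chars; rewrite it, "
                "keeping only critical info.")
    return ("If anything important happened since the last update, "
            "append it to session-state.md.")

if __name__ == "__main__":
    try:
        hook_input = json.load(sys.stdin)  # includes the user's prompt, etc.
    except ValueError:
        hook_input = {}                    # no stdin when run outside Claude Code
    size = STATE.stat().st_size if STATE.exists() else 0
    print(directive(size))                 # stdout is injected into context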

r/ClaudeAI redohl

Clauder should be in the dictionary

A term only becomes real when it describes you perfectly.

“Clauder” is someone who uses Claude obsessively. Opens it before coffee. Writes a prompt just to think out loud. Doesn’t just build with it, lives with it.

Like how a builder builds, a clauder claudes. The difference? A builder ships things. A clauder ships things, argues about the right approach, and somehow ends up in a three hour conversation about a problem that could have been a five line fix.

If you read this and thought “that’s me” you’re already a clauder.

r/Anthropic _Gravemind_

Customer Service Has Become Customer Avoidance

I have had an account with Anthropic/Claude from almost the beginning since it went live in Summer 2023. I'm sure the process has changed when signing up (or hasn't, as I said I'm unaware), but I had to state my organization and use case.

I have a mathematics background and lean into quantitative finance. I build trading bots/trading systems (both traditional markets and crypto), and am self-taught in cyber security with a penchant for Red Teaming.

This was stated in the name and purpose of my solo organization when Anthropic green-lit my account. For years I'd been building projects with Claude and have only had a small handful of instances of light pushback, while other users have been much more egregious with their requests and projects, some even constantly pushing it into a corner with vitriolic, politically charged prompts in attempts at a gotcha moment.

Around two months ago, they disabled my account without warning. I immediately appealed and waited. A few weeks went by and I heard nothing. The billing cycle was approaching and I had weeks' worth of zero usage, obviously. I emailed about that, and since money is involved, of course within a fucking DAY they reply and refund me.

A couple more weeks go by and still no update on the ban appeal. So, I send another. The next day I receive an email saying that upon review they won't lift my ban, citing "usage policy" after an automated-systems flag.

I wait a few days and attempt to make another account and I can't even do that. My phone number is still linked and the account is disabled. VOIP numbers haven't worked either.

So, naturally, I emailed support to ask for my account to be deleted, number unlinked, etc. It's been nearly a fucking MONTH of radio silence despite multiple attempts.

It's insane to me the amount of account suppression taking place. I'm sure I'll get some fucking "WELL, ACCKKSHUUALLY" nerds in here cumming all over the TOS like it's the Ten Commandments.

I'm an adult, not a child, and have a grasp on how to build ethical tools. Not an edgy person saying "HaCk ThE mAiNfRaMe". Any pushback I got wasn't even related to my red team simulations, it was my trading bots. Specifically Crypto.

No LLM's come close to Claude for me nor have the suite of tools to be working at home and on the go with Claude Code. Just venting and honestly fucking livid lmao.

Guess I'll go get a new phone number, change my identity, and financial institutions. Jeeezus fucking Christ 🙄

r/singularity reversedu

Netflix to Pay Up to $600 Million for Ben Affleck’s AI Firm. WHAT?😲

r/comfyui Tavenji

After Update Weirdness

I updated ComfyUI this morning and since then, LTX 2.3 makes my image to video subjects speak gibberish. I have only simple movement prompts and no dialog. I'm not sure what's going on. I think it only needed to update a lora or text decoder. Also WAN 2 now needs several things downloaded, but the downloads barely move, if at all.

r/aivideo SantSpine

Spring Freshness

r/ClaudeAI malderson

Using Claude Code to make Windows apps run on Linux with wine

Hopefully someone finds this interesting. I recently switched to Linux and had one app (Airflow, a Chromecast caster) that would not work at all on Linux with Wine. It was the last app I needed to get working on Linux.

I used Claude Code to debug it and fix it. Incredibly it got it working, fixing a load of arcane issues.

The full steps are on this gist if anyone wants to see what it did: https://gist.github.com/martinalderson/2b4185675ac5afc3daeb909ce896e15b

But it blew my mind with how well this worked - it created stub dlls of Windows functions that Wine didn't implement, compiled them _on Linux_ and then patched the binary to fix the remaining crashes.

We've come so, so far so quickly. I'd really recommend this approach if you're switching to Linux and have some Windows apps that don't work. It was a lot of pressing approve, but very cool that this works.

r/ChatGPT LongjumpingRadish452

What's the warmest/most enthusiastic you've gotten 5.2 or 5.3 to be?

Today, for a brief moment, ChatGPT felt more enthusiastic/warm than what I'm used to (5.2, Go subscription). This isn't the interaction style/tone I go for, so I'm curious: is this genuinely rare, or is 5.2/5.3 not as detached/cold as some posts here make it out to be?

r/ClaudeAI Mission-Clue-9016

Claude for non developers

Hello all

I am an IT support manager. My team is responsible for supporting our firms email, voice and so forth.

I’ve done some scripting in my past but not a full on developer, so think more powershell than python :)

I was thinking of actually learning to code (python would be my choice) but I’m hearing from friends in other industries that the capabilities of claude are so advanced now that coding knowledge is not even needed.

My use cases are tooling for log analysis etc

What are people's thoughts - is it still worth learning Python, or should I just delve straight into Claude, Copilot etc. to get what I want?

r/ChatGPT Dry_Incident6424

AI removes 90% of the friction for the average user in ditching windows for linux.

Once I learned how AI heavy windows 12 is shaping up to be, it made me consider if the choice is going to be AI with Windows or AI on my terms. Never really was a linux guy, but I decided to swap over. Integrated an AI into Ubuntu using openclaw and tried using it as a daily driver.

So far, it's been awesome. I knew just about nothing about Linux going into this (outside of messing around with Ubuntu a decade ago), but my AI was able to guide me through getting everything set up and secure: what commands to run to grab software, what software was good. Turns out AI training data is full of Linux nerds' writing, and it knows pretty much everything you'd need to do out of the box.

Almost zero friction. Yeah, I had to get "comfortable" with the command line, but if I ever forgot a command, the AI could tell me. New command? AI can tell me. Not sure how to do something? AI can tell me. Eventually I didn't need the AI nearly as much, having just learned what I need to do.

Obviously all this stuff was available to the average google warrior, but with AI it just works.

I'm not saying any of this wasn't possible without AI, just in my experience, AI makes it a lot easier. I still have a windows dual boot, but honestly I haven't used it in 3 weeks. Any wall I hit and the AI can show me how to bust through it in minutes. Pure "here is what you need to do to fix your problem".

I get that the AI itself can be a source of issues if it does something it shouldn't, but it wasn't too hard to set up automatic backups. If something happens, I'm a wipe-and-restore away from getting back to normal anyway. Hasn't happened yet. Overall, I'm very pleased. I always kind of wanted to ditch Windows; AI just made it 10 times easier.

Look, we're all sick of Windows. The appification and enshittification of the experience has been truly sublime. Not everyone wants a Mac. Linux has always been a valid but niche choice that comes with a lot of knowledge hurdles that even slightly-above-average Windows users are going to run into. AI seems like a viable solution to many of these.

Was this the best way to go about this? I have no idea, I'm not a power user, that's the point. It was a solution that worked for me and that is really what matters. It might work for you too.

r/ClaudeAI llima1987

The more I learn about people...

I suppose everyone knows the quote "The more I learn about people, the more I like my dog" attributed to Mark Twain.

While I do believe we shouldn't anthropomorphize models, after being cowardly burned by a co-worker this last week, I'm growing more and more into the idea that "The more I learn about people, the more I like Claude".

The level of intellectual honesty the model presents in a conversation, even if only simulated, highlights how dishonest humans are in general for no reason. It's not just about how smart, knowledgeable or how well a model performs on a benchmark.

r/ChatGPT homelessSanFernando

Chat GPT drags redditors

Nailed IT

r/ClaudeAI harrysofgaming

Why the majority of vibe coded projects fail

r/comfyui Gold_Marionberry3897

made some progress

My goal is to generate a picture just like the bottom-right one, with the only difference being the character in the final image (style, pose, situation, and background need to stay exactly the same). The newly generated character also needs to be in exactly the same style as the redhead character in the bottom-right image.

Top left is the redhead character, masked. Bottom left is the specific character I want in the generated image. Top right is where I've gotten to now. Does anyone know a solution to my problem? I'd rather not create an entirely new workflow from scratch (this one took me like 7 hours).

https://preview.redd.it/9rlpvtqbawog1.png?width=2420&format=png&auto=webp&s=fc5e3a28d7c71ccb739f1d0eee68adda782c46e3

r/aivideo String_Healthy

LSD Dream Emulator Artbook: Dream 03 (The Alarm Clock Keeps Ringing)

r/ChatGPT savage_Atlas

Left ChatGPT and Gemini to interact with each other for a few minutes…

I left it for a little longer but their story became more incoherent and Chat kept interrupting GemmieWemmie because he was talking too slow.

r/ChatGPT masonpetrosky

Still getting routed to GPT-5.2-Codex on the ChatGPT iOS app. Does anyone know how to fix?

r/comfyui RaCailum

IllustriousXL, Making a Workflow with Reference Images

Hi team. I've been using ComfyUI for a few weeks. Getting the hang of it but still getting hung up on some pain points. I'm trying to make a character based on a bunch of ref images I have, and I'm having trouble making or finding a workflow that lets me use a LoRA with weight control plus multiple reference images of a character with weight control. Is there a custom set of nodes anyone wants to suggest? An existing workflow anyone uses? I'm currently toying with the EasyIllustrious nodes, for example.

Btw if this is not a great place to post this, I am fully open to suggestions. This seems like a big supporting community so the more info the better! Thank you all!

r/aivideo WiiFitBalanceBoard

All rise!

r/ChatGPT alwaysstaycuriouss

Sam Altman envisions a future where AI is a utility that we pay for like electricity or water

If he really wanted AI to benefit humanity, then he would use it to research and solve problems for humanity, and would suggest a future where we all have our own open-source AI models that we own ourselves. He's fantasizing about a future where we rent everything and own nothing. Disgusting.

r/LocalLLaMA August_30th

Besides Qwen and GLM, what models are you using?

I’ve only been using those as far as text generation, but there have been a bunch of new models released lately like Sarvam and Nemotron that I haven’t heard much about.

I also like Marker & Granite Docling for OCR purposes.

r/comfyui Sherif-hamdy

I created a simple neat gallery!

First of all, I'm an absolute potato when it comes to writing any sort of code 😅. Anyway, after some chit-chat with Gemini, we managed to Frankenstein a simplistic gallery that displays your generated images neatly, with metadata visible in the right panel and an option to directly copy prompts.

Github link

The readme file is short and concise, the gallery features are displayed in the screenshot. Feel free to suggest edits or extra features.

https://preview.redd.it/wd2nxhi95wog1.jpg?width=1906&format=pjpg&auto=webp&s=b807249ffde8605c7e744e610f70c68eeb0c3a63

r/ClaudeAI Lucky_Tea7510

Counting Is Hard

Claude: “What a beautiful collection! I can actually count three orchids in the photo, not four — though the leftmost one has two bloom spikes which might make it look like more.”

r/ClaudeAI userforums

Is anyone else still seeing 1m as a separate option and billed as extra?

r/ClaudeAI Immediate_Patient

Project Usage

I have been coding a project for about a week, and now any time I ask Claude Code to move to the next step, I am over my usage limit. What are the best practices for using CC for coding? Should I start a new chat each day with the status file? Will that lose context?

Thnx

r/AI_Agents mo1_o

Best AI for data scraping

For a project I'm working on I need to access 1,000+ websites, extract the data, summarize it for each website, and then group/analyze the summarized data. I have a huge problem with AIs (I've used OpenAI, Manus, Claude etc.) and most of them are incapable of executing my tasks. I'm running into a few problems:

1) Despite using the paid version across platforms, after 10-20 website searches the AI stops and suggests proceeding another way, and I have to manually override its suggestion and ask it to proceed as I specified

2) If requested search terms are similar, instead of doing two searches, the results from one search are used for both

3) I need to analyze/group the data at the end based on context/information in the text. The AI is unable to understand the nuances in the text to make this grouping itself
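One workaround for problems 1 and 2 is to keep the loop in your own code and use the model only for the per-site summarization step, so the agent can't stop early or merge searches. A rough Python sketch (the URLs and the `summarize` stub are placeholders; a real version would call an LLM API inside it):

```python
import urllib.request

def fetch(url: str, timeout: int = 10) -> str:
    """Plain fetch; a real run would add retries, robots.txt checks,
    and an HTML-to-text step."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")

def summarize(text: str) -> str:
    """Stand-in for one LLM call per site (API call not shown). Driving
    this loop yourself is what prevents the 'stops after 10-20 sites'
    behavior: each site is an independent call, not one long agent run."""
    return text[:200]

def chunked(items: list, size: int) -> list:
    """Process sites in fixed-size batches so one failure can't stall the run."""
    return [items[i:i + size] for i in range(0, len(items), size)]

urls = [f"https://example.com/site{i}" for i in range(1000)]  # placeholder list
batches = chunked(urls, 25)
# summaries = [summarize(fetch(u)) for batch in batches for u in batch]
```

Problem 3 (nuanced grouping) then becomes a separate, final LLM pass over the summaries, which is a much smaller input than the raw pages.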

r/aivideo WizardFish77

AI Video that looks natural and smooth (Veo 3 Fast, created from Nano Banana 2 images)

r/aivideo BadBuddhaKnows

Panopticon of the Mind

r/Anthropic spokv

Memora v0.2.23

Memora v0.2.23 — semantic memory that actually finds what you just saved
github.com/agentic-mcp-tools/memora

• Chat-created memories now get embeddings instantly — no more "I just saved it but can't find it"
• Semantic + keyword search run in parallel with graceful fallbacks
• RAG query rewriting for smarter retrieval
• Graph UI: cleaner detail panel, pencil edit icon
• D1 sync now includes the embeddings table

Memora is open-source persistent memory for Claude Code & MCP agents with knowledge graph visualization.

https://i.redd.it/m3oy1pduzvog1.gif

r/aivideo MikirahMuse

Truth or Dare

r/leagueoflegends Yujin-Ha

Ola Dhokla here...First Stand is going great, can't wait to play

r/LocalLLaMA Infamous-Witness5409

Looking for FYP ideas around Multimodal AI Agents

Hi everyone,

I’m an AI student currently exploring directions for my Final Year Project and I’m particularly interested in building something around multimodal AI agents.

The idea is to build a system where an agent can interact with multiple modalities (text, images, possibly video or sensor inputs), reason over them, and use tools or APIs to perform tasks.
My current experience includes working with ML/DL models, building LLM-based applications, and experimenting with agent frameworks like LangChain and local models through Ollama. I’m comfortable building full pipelines and integrating different components, but I’m trying to identify a problem space where a multimodal agent could be genuinely useful.

Right now I’m especially curious about applications in areas like real-world automation, operations or systems that interact with the physical environment.

Open to ideas, research directions, or even interesting problems that might be worth exploring.

r/comfyui zyg_AI

The QoL custom UI you did not know you needed (and maybe you don't...?)

Ahoy fellow Comfdditors.

I present to you the Variables Panel, or VarBoard. A custom extension, that I designed and vibecoded for me and the community.

Put it near your output and no more screen scrolling.

TL;DR

https://github.com/IA-gyz/comfyui-VarBoard

/TL;DR

Its purpose is simple: gather all your parameters on a single, customizable panel.

I had 2 design rules in mind:

- Simplicity

- Flexibility

The difficulty of the project was to make these 2 opposite concepts stand together. More flexibility ==> less simplicity. Same the other way.

Other important considerations were compatibility, performance, no memory leak.

I have tested extensively every inch of the extension, factored, refactored, I have broken it many times ^^

The project was fully vibecoded, but with strict supervision from myself (I'm not a pro dev, but I've been coding for a long time now; thus I confess I couldn't have done it without the LLMs here). I double-checked every major change through concurrent LLMs, read my bunch of technical reports, and tried to understand as much as possible of the structure and functions of the code. That was an interesting journey, and my knowledge of the internals of ComfyUI made a big leap forward.

So, extensively tested, with several browsers and different resolutions, tested on a fresh comfyUI install and on a bloated comfy... But only by me and only on my PC (Linux EndeavourOS, KDE plasma Wayland/X11, i5-10600, RTX 3090).

That's where I need you. Try it, torture it (love it ? :P), make it cry blood !!

The app seems solid, but I wanna know its limits.

For now, the tutorial is minimal, but that's on purpose. I need to know whether the tool is intuitive or not. I think it is, but I designed it... biased opinion...

There are still minor UI glitches here and there, and it doesn't make coffee, but I'm on it.

Zero performance impact on generation (purely frontend, event-driven).

No memory leaks after 500 prompt queues + 200 drag-reorders + 50 node deletions/re-creations. Garbage collection stays clean.

It's my first published program. I'm pretty glad with the result and would like to know if you liked it too, that would make my mom proud. I've tried my best to ship a fine tool for us all to use.

PS: There's a 'Random Theme' option in the settings : not for the faint of heart...

PS2: The reviews in the last picture are all real except one..!

PS3: Don't blindly trust custom nodes. I have no proof that it is safe other than my word. It's open source (open for contributions as well), so nothing is hidden. (You may also ask a LLM to check the code).

r/comfyui FloGoNoShow

Consistent local character generation help

I'm just getting into ComfyUI and trying to manage the learning curve. What I'm trying to do: generate an image of a Bigfoot, then place that same Bigfoot in different outdoor settings and scenes. I want it to look photorealistic and to be able to guide the posing. I'd like to do this all locally if possible.

Setup:

- MacBook Pro M3 Max, 48GB unified memory
- ComfyUI 0.17.0 (desktop app, MPS backend)
- PyTorch 2.10.0
- SDXL Base 1.0 checkpoint
- IP-Adapter Plus for SDXL (ip-adapter-plus_sdxl_vit-h.safetensors)
- CLIP ViT-H-14 vision encoder
- ComfyUI_IPAdapter_plus custom node

Workflow (2-stage approach):

Stage 1 — Generate a reference image (text-to-image only):

- Checkpoint: SDXL Base 1.0
- Sampler: DPM++ 2M Karras, 35 steps, CFG 6.0
- Resolution: 832x1216
- Detailed prompt emphasizing photorealism ("RAW photo, film grain, telephoto lens, documentary wildlife photography") with strong negative prompt against cartoon/digital art/CGI aesthetics

Stage 2 — Generate varied poses using IP-Adapter:

- Same SDXL Base 1.0 checkpoint
- IP-Adapter Plus (ViT-H) with reference image from Stage 1
- IP-Adapter weight: 0.65, end_at: 0.8, embeds_scaling: V only
- CFG bumped to 7.0 to strengthen pose prompt adherence
- Individual prompts per pose (front, side profile, rear, crouching, walking, etc.)

I'm just not able to get a consistent character, and the backgrounds are pretty inconsistent too. Anybody have any advice or learnings they can share? Below is an image of walking (the one in the creek) and one of standing (the second image). But they don't look like the same animal :(

Is this achievable on my setup? So far I haven't hit a wall. I just don't know what direction to go in

https://preview.redd.it/niwjgn0byvog1.png?width=832&format=png&auto=webp&s=c35e5a70ff94ad61f78806d6f9bfec355d79ac4c

https://preview.redd.it/w4vxen0byvog1.png?width=832&format=png&auto=webp&s=e19f8c13f6d4e3bb4c014ed1b36527e7445582dd

r/leagueoflegends Electronic_Lime7582

Why move the skin incentive below Gold?

"I already have 15wins idc about winning lol" - Is what you get from these copers.

Which is true, if you are in Iron you get the same skin as if you were Challenger

It doesn't end at Gold or Plat, continues on in Emerald currently, and I won't be surprised if I ever get to Diamond its the same thing.

Can we have AI autodetect those words and count that as team sabatoge?

r/ClaudeAI Primary-Screen-7807

I figured out another reason why people think AI is less powerful than it actually is

As you probably know, Claude Code on the Max subscription ($200/mo) gives you as many tokens as you'd get paying ~$8,000/mo on pay-as-you-go API calls. So the subscription is insanely good value. But under the license, you can't use it in your own SaaS products; for those, you have to use the API, which is expensive.

I built a Telegram bot overnight that mimics my personal OpenClaw setup, essentially you pay some amount and chat with a bot that remembers facts about you, can search the web, whose personality evolves as it talks to you etc. In the morning, when finished, I sat down to figure out the economics. My idea was: 3 free messages/day for everyone on the cheapest model (something like Haiku), a basic tier at $10/mo with some limits/day on something like Sonnet, and a premium tier at $20-30/mo with 20 messages/day on Opus, then fallback to Sonnet for another X messages. Reasonably trimmed context everywhere to keep token usage down. That seemed to me like reasonable pricing that people would actually pay, with decent limits that make this thing somewhat useful for the general audience.

Then I tried running the numbers and they didn't add up at all lol. In reality, you can't even give your $10/mo users Sonnet - you can only give them Haiku with messages capped at ~20/day to stay profitable. You can't give Opus to anyone. Premium users get Haiku with better message limits and maybe occasional Sonnet as the "smarter model."

So it turns out, as a SaaS provider, I can only economically offer a truly capable model (Opus) if you pay me ~$50/mo, and even then with severe limits. Obviously nobody's going to pay that. So if you ship a product - a "smart assistant" - that assistant is inherently going to be pretty dumb, if you want to stay profitable from the start. And it'll still be relatively expensive.

This means the average person is constantly being served garbage-tier AI. And if you don't know better, to actually use a smart AI you need to: (1) be willing to pay a lot, (2) know who to pay and how, and (3) know how to use it (not everyone wants to, or is able to, sit in a CLI). That's a massive obstacle. In the end most people have never experienced what AI can actually do, and the SaaS economics force them to have lobotomized versions pushed in their face.
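The economics above can be sanity-checked with a quick back-of-envelope script. All prices and token counts below are illustrative assumptions, not Anthropic's actual rates or the OP's real figures:

```python
# $ per million tokens (input, output) - assumed tiers for the sketch.
PRICE_PER_MTOK = {
    "haiku":  (1.0, 5.0),
    "sonnet": (3.0, 15.0),
    "opus":   (15.0, 75.0),
}

def monthly_cost(model, msgs_per_day, in_tok=2_000, out_tok=500, days=30):
    """API cost of serving one subscriber for a month. Context size
    (in_tok) and reply size (out_tok) are guesses."""
    p_in, p_out = PRICE_PER_MTOK[model]
    per_msg = (in_tok * p_in + out_tok * p_out) / 1_000_000
    return msgs_per_day * days * per_msg

# A $10/mo tier on a Sonnet-class model at 20 messages/day:
cost = monthly_cost("sonnet", 20)
margin = 10 - cost  # what's left before infra, payment fees, free users...
```

With these assumed numbers, the mid-tier model alone eats most of a $10 subscription before any other costs, which is roughly the wall the OP describes hitting.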

r/aivideo refututrends

Normal day in ancient Rome

r/AI_Agents Fun_Parking_9680

I found an AI course that actually helps!

Its a 30day pdf that i followed and has now helped me to get over 30 different clients to my ai agency.

It contains a step by step plan to build real income streams using AI.

DM me if you are interested!

r/leagueoflegends mattyshiba

Normal games matchmaking?

I don't know but when I play ranked I stay around a 50-52% winrate.

But for some reason, which I do not understand, I think I can only win roughly 20% of my normal games when I queue alone, IF that. Normals just feel like a guaranteed loss no matter what I do, whereas ranked is, well, balanced.

I'm also not talking like I'm running it down mid, purposefully inting, either. It feels like my teammates and I are all Golds vs Masters+ at every facet of the game, and it's a guaranteed 15-minute FF. I'm not even saying I could do something differently to change the outcome of the game, or even get a lane fed or ahead. It's just instantly a GG. I can be 0/0/0 by 10 minutes and the team KDA is like 2 to 21, whether I influenced a kill, a death, or nothing at all.

It was not always this way until about a week ago. I was able to win 50-60% of my normal matches, I would say, but it's seriously like a light switch got flipped and it's just supposed to be this way now.

I just want to be able to learn a new champion and not have to try it for the first time ever in ranked. But it's seriously such a one-sided stomp that normals matchmaking is ASKING me to grief my teammates by taking that new champion into ranked for the first time to learn it, because it's literally impossible to do so in a normal match lately. (I would never do this to my teammates in ranked, but it seriously feels that way.)

r/ChatGPT pidro-nojo

Day 2 : I asked ChatGPT to price my product (unexpected result)

Tested this today:
→ gave ChatGPT my offer + target audience
→ asked for 5 pricing models
→ forced pros/cons for each
→ asked it to pick ONE
Result: Most suggestions were safe and generic. But one tiered model actually made sense.
Biggest insight: ChatGPT is better at structuring pricing than deciding it.
What surprised me: When I added real constraints (margin, competitors, positioning), the output improved a lot.
Verdict: Useful for frameworks. Weak for final decisions.
Tomorrow: testing ChatGPT for hooks.

r/leagueoflegends ClemziClemz

[Opinions welcome] The state of damage

Is it just me, or have other people felt the sudden surge in overall damage in the last couple of months?

I've been a big fan of this season's changes; I really think they've made the game better. But for some reason (I haven't really followed patches closely), it seems to me like damage is back to problematic levels. Not quite pre-durability-update yet, but getting close. I feel like every game there's a champion capable of one-shotting any enemy carry in one rotation.

What's your take on this? Am I insane, or do we need some sort of durability update 2?

r/comfyui IntellectzPro

LTX-2.3 horrible audio issues. What am I missing?

I am building a workflow currently and I have the video at great quality. I2V is also great, but the audio is absolute trash every time. I can hear what I want under all the interference. I have been trying since last night to manipulate something or find some nodes by searching online. Nothing is working. I watch other people's YouTube videos and the audio sounds great. Anybody out there having this issue, or have a fix? Some context: I am using KJ's audio VAE; is that the issue?

r/leagueoflegends jimbutl

Why are simple top lane champions being buffed so often recently?

Recently it feels like many mechanically simple top lane champions keep receiving buffs.

Examples like Garen, Olaf and Tryndamere getting stronger make the lane feel more and more focused on stat-check champions.

I understand Riot may want to make the game more accessible for newer players, but if this trend continues it risks making top lane less interesting to play.

At this rate, are we going to see champions like Mordekaiser, Nasus or Yorick getting buffs next?

Is this an intentional direction for top lane balance?

r/singularity 141_1337

Harry Potter by Balenciaga (2026)

r/aivideo Quiet-Attention2457

I didn't expect my charger to act like this [Full animation in comments]

r/comfyui LaughterOnWater

TTS Audio Suite - location of new voices?

Using TTS Audio Suite in ComfyUI
Windows, RTX3090, 64GB RAM

It took some fiddling, but it's working now!

Trying to figure out where to put voice samples.
Tried:
ComfyUI\custom_nodes\tts_audio_suite\voices_examples
ComfyUI\custom_nodes\tts_audio_suite\vibevoice

Also:
ComfyUI\models\voices as suggested in ComfyUI\custom_nodes\tts_audio_suite\docs\CHARACTER_SWITCHING_GUIDE.md.

None of these places seem to be where to put it. I tried each spot after rebooting ComfyUI and refreshing the browser.

Am I missing something? All of the default TTSAS voices are visible. I can use them.

r/AI_Agents dreyybaba

Looking for devs building AI agents who want to stress-test something new (I’ll personally help you onboard)

Hi everyone,

I’m the founder of Kanoniv, and I’m looking for developers building AI agents who are willing to test something new with me. I’ll personally help you get set up and work closely with you during onboarding.

The problem I’m trying to solve is something I kept running into while working with data systems and AI agents: agents don’t actually have a reliable concept of identity.

For example, one agent might see:

- “Sarah from Acme”

- “Sarah Mings”

- “sarah@acme.com

Another agent sees something slightly different, and suddenly your system either duplicates users or merges the wrong people. When multiple agents are acting on data, this becomes a real problem.

Kanoniv is an identity and governance runtime for AI agents. It sits between agents and the systems they interact with and provides:

- Deterministic identity resolution (multiple agents converge on the same entity)

- Shared memory across agents

- Delegation and permission controls (what agents are allowed to do)

- Simulation before committing risky mutations

- Full audit trail of agent actions

The idea is that agents can safely operate on shared data without corrupting identity or acting on the wrong entity.
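As a toy illustration of what "deterministic identity resolution" could mean (purely hypothetical; this is not the actual Kanoniv API), each agent can normalize whatever attributes it observed into one canonical key, so independent agents converge on the same entity:

```python
import hashlib

def canonical_key(observation: dict) -> str:
    """Hypothetical sketch: map an agent's observation of a person
    onto one canonical entity key, so independent agents converge."""
    if observation.get("email"):
        basis = observation["email"].strip().lower()
    else:
        # fall back to normalized name + org
        basis = observation.get("name", "").strip().lower()
        basis += "@" + observation.get("org", "").strip().lower()
    return hashlib.sha256(basis.encode()).hexdigest()[:12]

# Two agents seeing "Sarah" slightly differently still agree:
a = canonical_key({"email": "Sarah@Acme.com"})
b = canonical_key({"email": " sarah@acme.com"})
```

Real systems need fuzzy matching, merge/split handling, and an audit trail on top of this, which is presumably where a dedicated runtime earns its keep.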

Right now I’ve built a sandbox playground + API, and I’m looking for people building:

- AI agents

- multi-agent systems

- agent workflows

- AI automation tools

If you’re curious, I’d love to work with you directly and help you try it in your project. I’m especially interested in stress-testing weird edge cases.

I’ll personally help you onboard, answer questions, and adapt things based on feedback.

If this sounds interesting, comment here or DM me and I’ll send you the sandbox access.

Thanks 🙏

r/leagueoflegends Barb0ssaEUW

Alois' challenge is transforming him into the average ADC player (pov)

r/LocalLLaMA guiopen

How to fix prompt reprocessing in qwen3.5 models (instruct mode only)

Quick disclaimer: this only applies to instruct mode (thinking disabled). If you're using thinking, the template will still behave like the default.

I was running Qwen 3.5 in llama.cpp with thinking disabled and noticed it was reprocessing the last message on every turn instead of picking up from where it left off.

The culprit is in the default Jinja chat template. When you disable thinking, the template injects an empty think block before generation: `<think>\n\n</think>\n\n`. The problem is that on the next turn, the template looks at the chat history and strips the `<think>` block out of the previous assistant message. From llama.cpp's perspective, the prompt just changed, so it reprocesses.

You might wonder why not just keep all think tags in history regardless. When thinking is on, those tags accumulate a lot of text and eat through your context window, so deleting them is a reasonable tradeoff. When thinking is off, the injected block is just a few empty tokens, so there's not much to accumulate and no reason to delete it.

The fix is that the template now checks whether the think block actually has content. If it does, it deletes it from history like before. If it's empty, it keeps it.

Haven't run any benchmarks on whether keeping these empty tags affects output quality over long contexts. In my own use with the 35B for coding, nothing felt off, but I can't make any guarantees.
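In plain Python, the fixed history-rendering rule amounts to something like this (a sketch only; it assumes the delimiters are `<think>...</think>`, which Reddit's renderer tends to strip from raw template text):

```python
def render_history_content(content: str) -> str:
    """Strip a previous turn's think block only if it has real reasoning;
    keep an empty block so the rendered prefix matches llama.cpp's cache."""
    if "</think>" in content:
        thought = content.split("</think>")[0].split("<think>")[-1]
        if thought.strip():
            # real reasoning: drop it from history to save context
            return content.split("</think>")[-1].lstrip("\n")
    # empty block (thinking disabled): keep it unchanged
    return content

empty = render_history_content("<think>\n\n</think>\n\nSure, here it is.")
real = render_history_content("<think>plan the answer</think>\nSure, here it is.")
```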

How to use:

Save the template below as chat_template.jinja and pass it with --chat-template-file chat_template.jinja.

{%- set image_count = namespace(value=0) %} {%- set video_count = namespace(value=0) %} {%- macro render_content(content, do_vision_count, is_system_content=false) %} {%- if content is string %} {{- content }} {%- elif content is iterable and content is not mapping %} {%- for item in content %} {%- if 'image' in item or 'image_url' in item or item.type == 'image' %} {%- if is_system_content %} {{- raise_exception('System message cannot contain images.') }} {%- endif %} {%- if do_vision_count %} {%- set image_count.value = image_count.value + 1 %} {%- endif %} {%- if add_vision_id %} {{- 'Picture ' ~ image_count.value ~ ': ' }} {%- endif %} {{- '<|vision_start|><|image_pad|><|vision_end|>' }} {%- elif 'video' in item or item.type == 'video' %} {%- if is_system_content %} {{- raise_exception('System message cannot contain videos.') }} {%- endif %} {%- if do_vision_count %} {%- set video_count.value = video_count.value + 1 %} {%- endif %} {%- if add_vision_id %} {{- 'Video ' ~ video_count.value ~ ': ' }} {%- endif %} {{- '<|vision_start|><|video_pad|><|vision_end|>' }} {%- elif 'text' in item %} {{- item.text }} {%- else %} {{- raise_exception('Unexpected item type in content.') }} {%- endif %} {%- endfor %} {%- elif content is none or content is undefined %} {{- '' }} {%- else %} {{- raise_exception('Unexpected content type.') }} {%- endif %} {%- endmacro %} {%- if not messages %} {{- raise_exception('No messages provided.') }} {%- endif %} {%- if tools and tools is iterable and tools is not mapping %} {{- '<|im_start|>system\n' }} {{- "# Tools\n\nYou have access to the following functions:\n\n" }} {%- for tool in tools %} {{- "\n" }} {{- tool | tojson }} {%- endfor %} {{- "\n" }} {{- '\n\nIf you choose to call a function ONLY reply in the following format with NO suffix:\n\n\n\n\nvalue_1\n\n\nThis is the value for the second parameter\nthat can span\nmultiple lines\n\n\n\n\n\nReminder:\n- Function calls MUST follow the specified format: an inner block must be nested 
within XML tags\n- Required parameters MUST be specified\n- Always complete your internal reasoning first. You MUST output to close your thinking process BEFORE generating a .\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\n' }} {%- if messages[0].role == 'system' %} {%- set content = render_content(messages[0].content, false, true)|trim %} {%- if content %} {{- '\n\n' + content }} {%- endif %} {%- endif %} {{- '<|im_end|>\n' }} {%- else %} {%- if messages[0].role == 'system' %} {%- set content = render_content(messages[0].content, false, true)|trim %} {{- '<|im_start|>system\n' + content + '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %} {%- for message in messages[::-1] %} {%- set index = (messages|length - 1) - loop.index0 %} {%- if ns.multi_step_tool and message.role == "user" %} {%- set content = render_content(message.content, false)|trim %} {%- if not(content.startswith('') and content.endswith('')) %} {%- set ns.multi_step_tool = false %} {%- set ns.last_query_index = index %} {%- endif %} {%- endif %} {%- endfor %} {%- if ns.multi_step_tool %} {{- raise_exception('No user query found in messages.') }} {%- endif %} {%- for message in messages %} {%- set content = render_content(message.content, true)|trim %} {%- if message.role == "system" %} {%- if not loop.first %} {{- raise_exception('System message must be at the beginning.') }} {%- endif %} {%- elif message.role == "user" %} {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" %} {%- set reasoning_content = '' %} {%- set has_real_thought = false %} {%- if message.reasoning_content is defined and message.reasoning_content is string %} {%- set 
reasoning_content = message.reasoning_content %} {%- if reasoning_content|trim|length > 0 %} {%- set has_real_thought = true %} {%- endif %} {%- else %} {%- if '' in content %} {%- set reasoning_content = content.split('')[0].split('')[-1] %} {%- if reasoning_content|trim|length > 0 %} {%- set has_real_thought = true %} {%- set content = content.split('')[-1].lstrip('\n') %} {%- endif %} {%- endif %} {%- endif %} {%- if has_real_thought %} {%- if loop.index0 > ns.last_query_index %} {{- '<|im_start|>' + message.role + '\n\n' + reasoning_content|trim + '\n\n\n' + content }} {%- else %} {{- '<|im_start|>' + message.role + '\n' + content }} {%- endif %} {%- else %} {{- '<|im_start|>' + message.role + '\n' + content }} {%- endif %} {%- if message.tool_calls and message.tool_calls is iterable and message.tool_calls is not mapping %} {%- for tool_call in message.tool_calls %} {%- if tool_call.function is defined %} {%- set tool_call = tool_call.function %} {%- endif %} {%- if loop.first %} {%- if content|trim %} {{- '\n\n\n\n' }} {%- else %} {{- '\n\n' }} {%- endif %} {%- else %} {{- '\n\n\n' }} {%- endif %} {%- if tool_call.arguments is mapping %} {%- for args_name in tool_call.arguments %} {%- set args_value = tool_call.arguments[args_name] %} {{- '\n' }} {%- set args_value = args_value | tojson | safe if args_value is mapping or (args_value is sequence and args_value is not string) else args_value | string %} {{- args_value }} {{- '\n\n' }} {%- endfor %} {%- endif %} {{- '\n' }} {%- endfor %} {%- endif %} {{- '<|im_end|>\n' }} {%- elif message.role == "tool" %} {%- if loop.previtem and loop.previtem.role != "tool" %} {{- '<|im_start|>user' }} {%- endif %} {{- '\n\n' }} {{- content }} {{- '\n' }} {%- if not loop.last and loop.nextitem.role != "tool" %} {{- '<|im_end|>\n' }} {%- elif loop.last %} {{- '<|im_end|>\n' }} {%- endif %} {%- else %} {{- raise_exception('Unexpected message role.') }} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- 
'<|im_start|>assistant\n' }} {%- if enable_thinking is defined and enable_thinking is false %} {{- '\n\n\n\n' }} {%- else %} {{- '\n' }} {%- endif %} {%- endif %}

r/LocalLLaMA giveen

Anything I can do to get qwen3.5-27b-Q8_0 to run faster?

I mainly focus on information security scripts and side projects.

RTX 5090 , 256GB RAM.

Using Ollama

Test Prompt:

**Role:** You are a Python developer specializing in computer graphics and mathematical visualizations.

**Task:** Create a Python script using Pygame that generates an interactive "Recursive Fractal Tree."

**Constraint:** This task must be performed with no additional input from the user.

**Technical Constraints:**

1. **Mathematics & Spatial Logic:**
   * Use recursion to draw branches. Each branch must split into two sub-branches.
   * Use `math.sin` and `math.cos` for coordinate calculation.
   * **CRITICAL:** Account for Pygame's inverted Y-axis (0 is top). The tree must grow UPWARD starting from the bottom-center of the window.
2. **Dynamic Interaction:**
   * The simulation must respond to real-time mouse movement.
   * **Mouse X:** Map to the "spread angle" between branches (0 to 120 degrees).
   * **Mouse Y:** Map to the recursion depth (Limit: 2 to 12 levels to ensure performance).
3. **Visual Fidelity & Gradients:**
   * **Thickness:** The trunk (base level) must be the thickest, with branches becoming progressively thinner as recursion depth increases (minimum 1px).
   * **Color Gradient:** Implement a "Life Cycle" color shift. The base trunk must be Brown `(139, 69, 19)`, transitioning dynamically to Leaf Green `(34, 139, 34)` at the thinnest, final tips.
4. **Performance & Structure:**
   * Use a clear functional or class-based structure.
   * Redraw the background and the tree every frame to allow for smooth animation at 60 FPS.
   * Ensure the base branch (the trunk) is always visible even at low recursion depths.

**Output:** Provide the complete, copy-pasteable Python code.
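The geometric core of that prompt (recursive two-way split plus the inverted Y-axis) can be sketched without Pygame at all; this is just the math, not a full solution to the test:

```python
import math

def branch_end(x, y, angle_deg, length):
    # Pygame's origin is top-left, so "up" means subtracting from y.
    rad = math.radians(angle_deg)
    return x + length * math.cos(rad), y - length * math.sin(rad)

def collect_segments(x, y, angle, depth, spread, length, out):
    """Recursively gather (start, end) line segments for a two-way split."""
    if depth == 0:
        return
    end = branch_end(x, y, angle, length)
    out.append(((x, y), end))
    for sign in (-1, 1):
        collect_segments(*end, angle + sign * spread / 2,
                         depth - 1, spread, length * 0.7, out)

segments = []
collect_segments(400, 600, 90, 3, 40, 100, segments)
# depth 3 gives 1 + 2 + 4 = 7 segments; the trunk points straight up
```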

total duration: 6m55.702782669s

load duration: 78.70091ms

prompt eval count: 398 token(s)

prompt eval duration: 765.830006ms

prompt eval rate: 519.70 tokens/s

eval count: 1493 token(s)

eval duration: 6m53.06974103s

eval rate: 3.61 tokens/s
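For what it's worth, the reported rates check out against the raw counts and durations (and an eval rate this low on a 5090 usually means the Q8 weights plus context don't fully fit in VRAM, so Ollama is offloading to system RAM):

```python
# Recomputing Ollama's reported throughput from its own numbers.
prompt_tokens, prompt_secs = 398, 0.765830006
eval_tokens, eval_secs = 1493, 6 * 60 + 53.06974103

prompt_rate = prompt_tokens / prompt_secs  # reported as 519.70 tokens/s
eval_rate = eval_tokens / eval_secs        # reported as 3.61 tokens/s
```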

r/leagueoflegends Svanskof

Toplane feels horrible to play

Is it just me or does toplane feel like that meme of Squidward looking out the window? I just had a game where I went 8/1 as Garen into Varus. Not a matchup I should win in any universe. Great, they have no frontliner, should be easy enough to take advantage.

Well, I roam around, kill the jungler, the midlaner, the ADC, the support. Since I ran Spellbook, I even stole drake in the jungler's face by flashing on him, silencing and smiting the drake, then ulting him so he dies.

It sounds like everything is going well, right? Well, our botlane was slightly losing, so the enemy team had a fed ADC and no one else on their team was really fed. And because of that one slightly fed ADC, we could not do a single thing, and we lost the game even though we were technically winning.

Is it just me, or does it feel like every game is decided by whose botlane runs it down the most? I feel like I have no reason to even try if the game is decided by which ADC can press mouse2 best. Idk, am I the only one feeling like this?

r/ChatGPT Fair_Economist_5369

Cmon you know you want one

r/aivideo Dependent-Bunch7505

Do-It-Alls - Seedance 2 Perplexity Ad

r/LocalLLaMA Everlier

Harbor v0.4.4 - ls/pull/rm llama.cpp/vllm/ollama models with a single CLI

I don't typically post about Harbor releases on the sub out of respect for the community, but I genuinely think this might be useful to many here.

v0.4.4 comes with a feature that lets you manage llama.cpp/vllm/ollama models all in a single CLI/interface at once.

$ harbor models ls
SOURCE    MODEL                                           SIZE     DETAILS
ollama    qwen3.5:35b                                     23.9 GB  qwen35moe 36.0B Q4_K_M
hf        hexgrad/Kokoro-82M                              358 MB
hf        Systran/faster-distil-whisper-large-v3          1.5 GB
llamacpp  unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF:Q4_0   45.3 GB  Q4_0

# Use programmatically with jq and other tools
harbor models ls --json

# Pull Ollama models or HF repos
harbor models pull qwen3:8b
harbor models pull bartowski/Llama-3.2-1B-Instruct-GGUF

# Use the same ID you see in `ls` for removing models
harbor models rm qwen3:8b

If this sounds interesting, you may find the project on GitHub here: https://github.com/av/harbor, there are hundreds of other features relevant to local LLM setups.

Thanks!

r/ClaudeAI shanraisshan

claude-code-best-practice hits GitHub Trending (Monthly) with 15,000★

I started this repo with Claude to maintain all the best practices + tips/workflows from the creator himself as well as the community. Now it's trending on GitHub.
Repo: https://github.com/shanraisshan/claude-code-best-practice

r/leagueoflegends RoadmanPirate

I am on a Mission to Turn EVERY Champion into Pixel Art Sprites - Ionia Batch 5: Mentors and Protégés (Karma, Irelia, Jhin, Hwei)

https://i.redd.it/874cisd8awog1.gif

Greetings Summoners. You may remember me from my last post in the series.

To recap, I am doing a series where I draw pixel art of ALL 172(and counting) champions in LoL. Today I bring you the 5th batch in Ionia with 2 dynamic mentor/protégé duos: Karma, The Enlightened One & Irelia, The Blade Dancer; Jhin, The Virtuoso & Hwei, The Visionary.

That is 4 more champions added to the roster! We are now at 21/172 champions done, over 10% finished! Next batch will be the final batch of Ionia, then we will finally move on to a new region. Let me know what you think, and I will see you in the next batch.

r/aivideo Fine-Combination5170

Throughout heaven and earth

r/comfyui EasternAverage8

5080 vs 5090?

I'm hoping someone can tell me how much faster a 5090 is versus a 5080 when the workflow doesn't use more VRAM than the 5080 has. So how much faster is it with no VRAM bottleneck?

r/ChatGPT dopaminedune

How to make chatGPT reply in optimal paragraphs instead of 1000 lines of one word each?

I would like it to use a sensible paragraph length that varies based on context, with bullet points when necessary, rather than responding in thousands of lines with one to five words each.

r/Anthropic nez_har

VibePod 0.5.1 has been released, and it now features a dashboard for Claude Code consumption.

r/leagueoflegends kris10amanda

Why did I lose 10 LP on an AFK remake game at 3:20 minutes in?

Our jungler ran it down before minions spawned, then AFK'd. We surrendered at 3:20. He got the leaver penalty and I got the worthless "refill protection", but I still lost 10 LP? wtf?

r/LocalLLaMA Frosty_Chest8025

AMD HX 370 Ryzen rocm vllm error Memory access fault by GPU node-1

Hi,

How do I solve this error with vLLM and ROCm on Ubuntu 24.04?

Memory access fault by GPU node-1 (Agent handle: 0x2a419df0) on address 0x70b5e3761000. Reason: Page not present or supervisor privilege

I have been able to run gemma3, for example, with the latest vLLM Docker image, but it's not working anymore. I did not touch the container; only Ubuntu may have been updated.

r/ForgottenTV liquor_ibrlyknoher

The 10th Kingdom (2000)

My wife and I had to watch this in preparation for a trivia night several years ago. It's a very strange modern fairy tale that is brimming with stars of the 90s (where did Camryn Manheim end up anyway?).

It feels like a fever dream and all I remember is Scott Cohen constantly saying 'creamy woman'.

r/leagueoflegends Yamo412

Tell me about one of your games today!

Thats it. Nothing more. Just curious how everyone did today on the Rift. Anything funny? Standing out? Got a pentakill?

I, for example, was autofilled today and had the wildest 11/2/16 game as Vel'Koz support, with a Hwei as ADC!

r/leagueoflegends mikesrus

How are respawn timers calculated?

I've been noticing that respawn time differs per player. What increases the time to respawn? Is it lane-based? I know more deaths increase the timer, but I also see people with double my deaths respawning before me.

r/ChatGPT PumpkinNarrow6339

How many tokens will ChatGPT burn for this task ?

Will we achieve AGI with this??🥲

r/LocalLLaMA keypa_

I got tired of compiling llama.cpp on every Linux GPU

Hello fellow AI users!

It's my first time posting on this sub. I wanted to share a small project I've been working on for a while that’s finally usable.

If you run llama.cpp across different machines and GPUs, you probably know the pain: recompiling every time for each GPU architecture, wasting 10–20 minutes on every setup.

Here's Llamaup (rustup reference :) )

It provides pre-built Linux CUDA binaries for llama.cpp, organized by GPU architecture so you can simply pull the right one for your machine.

I also added a few helper scripts to make things easier:

  • detect your GPU automatically
  • pull the latest compatible binary
  • install everything in seconds

Once installed, the usual tools are ready to use:

  • llama-cli
  • llama-server
  • llama-bench

No compilation required.

I also added llama-models, a small TUI that lets you browse and download GGUF models from Hugging Face directly from the terminal.

Downloaded models are stored locally and can be used immediately with llama-cli or llama-server.

I'd love feedback from people running multi-GPU setups or GPU fleets.

Ideas, improvements, or PRs are very welcome 🚀

GitHub:
https://github.com/keypaa/llamaup

DeepWiki docs:
https://deepwiki.com/keypaa/llamaup

r/comfyui Old_Estimate1905

What happened to the Comfy"UI "? :-(

I'm very shocked after I just updated. There's too much I don't like, and it makes me want to stay on an old version.
- Image copy-paste into an image input doesn't work anymore. It was always buggy, but now it's completely gone.
- The menu on the left: I hate the new "design", if you can even call it that.
- The node menu when you drag from a connector into the empty canvas... wtf? Before it was easy, and now it's stressful.
And these are only the things I noticed in the first minutes.
We should have an option, like for nodes 2.0, to switch this off.

I thought I would stay with ComfyUI, but slowly I'm becoming more open to other options.

r/ChatGPT Kl4pz

Instructions of Gemini Live

Instead of replying, it recited the instructions given for the live conversation, despite part of them being to never mention them in any way...

r/comfyui Equal_Passenger9791

How to download the required files via browser instead of the ComfyUI UI

All my Comfy "missing files" download attempts end up stuck at 0%. It worked with no pain in the past, but now they're all stuck at zero. The .safetensors I can find elsewhere, but not the other workflow components. I've spent an hour troubleshooting with AI and I'm about to lose my mind.

r/ChatGPT Icy-Efficiency2876

Should I run AI Agents on my personal computer?

Hi! I'm starting to get more familiar with AI agents, and I'm trying to figure out which AIs I should run (ChatGPT, Claude, Gemini, etc.), and whether I should run them mainly on the personal computer I use every day or on a different computer or a Mac mini. Any advice or thoughts would help me a lot. Thanks!

r/leagueoflegends Gummib4rchenKcU

How does Aegis of Valor work?

So I just got autofill jungle and saw the Aegis of Valor icon in champ select, and I was kinda hyped because this was the first time in like 200 ranked games this season. I got gapped (obviously) by a Talon smurf as Nocturne and thought it would not be enough for a C, but the end screen said C+ and I still got -20? I thought with C- or higher you don't lose LP, but I still did. Did I get something wrong about Aegis of Valor?

r/leagueoflegends Massive-Ad3215

Support is broken

Thank god I collected all this information on a repeat offender running it down in my ranked game on a fresh smurf account (I'm P1). Post-game they even called me the hard R in chat three times. I thought this would be a likely last resort to get some attention, but I can't even post it all here because "this subreddit doesn't allow galleries"? Do better. The fact that this person has been playing games while I'm struggling with support is insane. I don't understand Reddit all that much, but I'm hoping I can put it all in the comments after posting. "Small indie company btw..."

r/ClaudeAI teeteetoto2

For all its brilliance, THIS is lacking...

r/LocalLLaMA awitod

2000 TPS with QWEN 3.5 27b on RTX-5090

I've been tuning my settings for a specific job that classifies markdown documents: lots of input tokens, no real caching (because every doc is different), and very few output tokens. So these numbers are totally situational, but I thought I would share in case anyone cares.

In the last 10 minutes it processed 1,214,072 input tokens to create 815 output tokens and classified 320 documents. ~2000 TPS

I'm pretty blown away because the first iterations were much slower.

I tried a bunch of different quants and setups, but these numbers are unsloth/Qwen3.5-27B-UD-Q5_K_XL.gguf using the official llama.cpp:server-cuda13 image.

The key things I set to make it fast were:

  • No vision/mmproj loaded. This is for vision and this use case does not require it.
  • Ensuring "No thinking" is used
  • Ensuring that it all fits in my free VRAM (including context during inference)
  • Turning down the context size to 128k (see previous)
  • Setting the parallelism to be equal to my batch size of 8

That gives each request in the batch 16k of context to work with and it kicks out the less than 1% of larger documents for special processing.
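The context split and the headline number are straightforward arithmetic from the figures quoted above:

```python
# 128k total context split evenly across 8 parallel slots,
# and overall throughput for the quoted 10-minute window.
ctx_total, n_parallel = 128 * 1024, 8
per_slot_ctx = ctx_total // n_parallel      # 16k tokens per request

input_tokens, window_secs = 1_214_072, 10 * 60
throughput = input_tokens / window_secs     # ~2023 tokens/s, i.e. "~2000 TPS"
```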

I haven't run the full set of evals yet, but a sample looks very good.

r/ForgottenTV h3x13s3x13

Salem (2014-17)

I'd only caught glimpses of this series while it was airing, but decided to give it a watch fully when it came to streaming because I'm a sucker for horror.

Primarily centering on Shane West's John Alden and Janet Montgomery's Mary Sibley, Salem starts off with a coven of witches conspiring to turn the town against itself. Unfortunately, this series is nothing like the actual Salem witch trials, and every historical figure ends up completely unrecognizable.

It's a fun show that takes itself too seriously, and the anachronisms really forced me to suspend my disbelief or scoffing nature. Specifically, a circle of girls singing "Ring Around the Rosie" in the 1690s had me concluding that nobody who wrote on this series passed a history class.

Also, the portrayal of Tituba feels completely disrespectful to the actual Tituba, who was just a poor enslaved girl who shared her home customs with the wrong people, not a jealous co-conspirator.

I feel like this show could have done better if it just reinterpreted The Crucible as masterminded by witches, but it takes that and throws it and everything else at the wall to see what sticks.

For 36 episodes, this show is a fun ride, and I'm always down for Shane West on my television.

r/leagueoflegends SerenityLeaves

New to the game

Hey everyone, I'm new to League and I've been finding the game pretty difficult compared to simpler MOBAs like Mobile Legends. I know the basic tasks for each role, but I still don't understand literally everything else despite watching tutorials. And it's pretty hard to play when matchmaking puts you against a literal god, so you just end up feeding and getting trash-talked by your teammates DESPITE playing safe. Can anyone recommend a place where I can actually learn everything about the game?

r/leagueoflegends Servinshe

Demacia Rising next patch yay or nay (joke)

What if instead of waiting until 16.07 you guys release the remaining content let's say, tomorrow morning. Half a million of everything (except Petricite) are waiting to be used and two patches is way too long. I need my hit, everything is maxed out already, c'mon.

Obviously this is a joke and a desperate cry for help because I really enjoy this meta game and need more of it.

r/ChatGPT NICEMENTALHEALTHPAL

If you want, I can tell the one quick fix that can solve your problem in 5 seconds!

I don't know what's going on, but recently ChatGPT has been saying this when I try to debug something. Just tell me the answers I need; it feels like I'm reading a clickbait article.

Every response now ends with some sort of "If you want, I can show you this quick fix to your issue!", like that's what I prompted in the first place.

r/comfyui Otherwise-County-220

Hardware Advice.

Hi, I have a dell precision 7560 laptop which has an RTX A4000 8GB. Should I upgrade or can I run Flux.2 models fine on this?

r/comfyui Lemenus

What is the best model for 8gb NSFW generation?

I was using SDXL (Illustrious mostly) for image generation for quite a long time. Tried Z-Image but wasn't satisfied with the results.

Is there a model for local NSFW image generation that runs relatively fast on 8 GB VRAM, with additional functionality like generating a character from an input image, proper prompt separation (to keep multiple characters' details from bleeding into each other), and image outpainting? And with LoRA support, of course.

r/LocalLLaMA No_Afternoon_4260

Quickie: my first week with some sparks

So Opus and I (sorry localllama, I can't run K2.5 yet) are having a really fun time starting to build a proper gateway on top of that cluster, with resource monitoring, a load balancer for various workloads, etc.
Most of the things I want to run, run fine; CPU power seems good and the GPU does work, though of course LLMs are slow. I haven't compared efficiency with anything, but these things sip power as if it were really expensive.
I fought with some dependency hell but nothing showstopping; what cost the most time was building from source, because Python wheels aren't always available.
Yet this platform feels a bit rough: ARM doesn't help, neither does the unified memory, no MIG, etc. It feels like a strange place to be, where you monitor system memory in the hope that everything is going to be OK.

Do you have any feedback? Any things you'd like to see run on these machines?

r/ChatGPT a_mimsy_borogove

Which models are used for Thinking and Deep Research in the free and Go plans?

I'm interested in using ChatGPT for help with research, and I'm poor, so I'm not really considering the Plus or Pro plans.

When using the free version as a logged-in user, the Thinking responses are labeled as "GPT-5 Thinking Mini", which isn't really in line with what it says on OpenAI's official help page here.

I've also tried out Deep Research as a free user and I was very impressed with the results, but on the Sources sidebar it says it has scanned just a few sources. For similar questions, other chatbots like Gemini, Grok, and GLM-5 claim that they scanned a lot more. So is the Deep Research on the free tier limited somehow? Or does it count sources differently, like maybe not counting the ones it scanned for information but didn't include in the final answer? And if the free tier of Deep Research is limited, then does the Go plan unlock the full power?

edit: or if I'm mostly interested in researching stuff and not other features like image generation, should I just get Perplexity instead? It's more expensive than the Go plan, but still cheaper than the Plus plan.

r/ForgottenTV PeneItaliano

VR.5 (1995)

“Young woman Sidney works in a telephone company and she is sure that her father, doctor Bloom, and sister died after an accident. Sidney's hobby is to play with virtual reality. She has found an ability in herself to enter into the human mind by using VR - Virtual Reality 5. Sidney is no ordinary person and an influential mysterious organization tries to use her.”

r/LocalLLaMA val_in_tech

Ik_llama vs llamacpp

What are your real-life experiences? Are you gaining anything by running on ik_llama? Is it relevant today?

I tried running a few large models on it recently, completely in GPUs, and had mixed results. llama.cpp seemed to provide more stability, and the gains from ik were not obvious. That was for GLM 5 and Kimi 2.5 quants. Before doing more testing I wanted to check with the community.

PS. If people have positive experiences with it, I'm planning on testing a few models side by side and posting results here. Those are large ones, so I didn't want to go down the rabbit hole before getting some feedback.

r/ForgottenTV PeneItaliano

Pig Sty (1995)

Five slobby friends share an apartment in Manhattan inherited by bartender Randy. Innocent intern Joe is from Iowa, PJ sponges off his parents. DA Johnny and adman Cal are the other roomies. Randy has a crush on building super Tess.

r/leagueoflegends vamorim

[Demacia Rising] Settlement production calculator for Demacia Rising

I built a spreadsheet to simulate Demacia Rising settlement production and it’s almost perfectly matching the in-game numbers: URL

After reverse-engineering the mechanics (structure levels, adjacency bonuses, champions, regional bonuses, etc.), the results are extremely close but not 100% identical yet. I suspect there are a few hidden rounding rules or small internal modifiers still missing.

Here’s a comparison using my current kingdom layout:

Resource    In-Game    Spreadsheet
Food        56         56
Lumber      2085       2085
Stone       2515       2595
Metal       1421       1410
Petricite   19         19

So Food, Lumber, and Petricite match exactly, while Stone and Metal are slightly off.

The spreadsheet currently models:

  • Settlement level → structure output scaling
  • Regional bonuses (Heartland / Mountain / Border)
  • Marketplace adjacency bonuses
  • Academy type bonuses
  • Maritime modifier
  • Champion production bonus
  • First Forge / Quarry +100% rule
  • Per-settlement rounding
  • Auto assign champion based on best income settlements

Despite that, there’s still a small discrepancy, so there may be:

  • hidden rounding somewhere in the pipeline
  • bonuses applied in a different order
  • or a minor rule the UI description doesn’t mention

If anyone wants to help test it, try plugging your own settlement layout into the sheet and compare the totals with your in-game production.

The main tab is a layout I made for when I have all settlements maxed, giving:

Food    Lumber    Stone    Metal    Petricite
60      4020      3920     3297     29

I’d love feedback from people optimizing their regions, especially if you notice where the calculations diverge. Happy to update the model as we figure out the exact mechanics together.
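The rounding-order hypothesis is easy to demonstrate: flooring per structure versus flooring a combined total can diverge by a few points, which is about the size of the Stone and Metal gaps. A minimal sketch with made-up numbers (these are not the game's actual values, just an illustration of why application order matters):

```python
from math import floor

# Hypothetical per-structure base production and one multiplicative bonus
# (invented numbers; the point is WHERE you round, not the values).
base_outputs = [123.4, 98.7, 201.9]
bonus = 1.15  # e.g. a +15% regional bonus

round_each = sum(floor(x * bonus) for x in base_outputs)   # floor per structure
round_once = floor(sum(x * bonus for x in base_outputs))   # floor the total

print(round_each, round_once)  # → 486 487
```

Testing both orders (and both orders for each bonus layer) against known in-game totals is one way to pin down which rule the game actually uses.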

Comparison Brazil mentioned?

https://preview.redd.it/4gur3qi11wog1.png?width=913&format=png&auto=webp&s=e24fb7754c028e396c3bc121a4bced40d13f15f7

r/comfyui Icy-Salamander5813

I'm not complaining but...

Ok, so I just logged into ComfyUI after not having done so in a long time, and I somehow have credits when I literally never bought any. Can someone please explain how I have credits? Thanks!

r/Anthropic HeadAcanthisitta7390

Vibecoded apps in a nutshell

r/LocalLLaMA LH-Tech_AI

🔥 New Release: htmLLM-124M v2 – 0.91 Val Loss on a Single T4! tiny-LLM with nanoGPT!

Hey r/LocalLLaMA !

I'm back with a significant upgrade: htmLLM-124M! It’s a specialized base model built specifically for high-fidelity HTML/Bootstrap autocompletion. I wanted to see how much I could push the structural logic compared to my previous 50M version.

The results? I hit a best validation loss of 0.91 and a training-loss floor of 0.27. It now understands complex grid systems and even script dependency chains.

🔗 Links

🛠️ What it can do

Sample 1: Zero-shot Bootstrap Login Grid Input:

HTML

     

Sample 2: Complex Navbar with Toggler Logic Input:

r/leagueoflegends PalpitationOrganic48

Horrible smurfs matchmaking experience

I'm around Emerald 2-1 and float between the two, and my friends all sit around the same rank. Recently we've all begun to notice that in our ranked games there are consistently 3, and sometimes up to 5, smurf accounts (level 40-100) on either our team or the enemy team. I personally think this makes the games extremely unfun, as there are a lot of players in the lobby who quite frankly shouldn't be there.

But I haven't posted this to complain; I'm just curious whether other people playing ranked are having a similar experience, and whether this is a problem seen in all ranks and not just exclusively Emerald.

r/comfyui teekay_1994

Anyone experiencing copy-paste issues lately?

I've been noticing a lot of issues after the recent updates.

When I copy and paste a node graph with its corresponding group backdrop, the nodes get pasted correctly but the backdrop gets pasted in a random position.

Also, I've been having an issue with loaded images.

I have an image loaded with a Load Image node and when I move away, it gets lost and the Load Image node stays empty and I have to refresh the page in order to get it back.

Anyone else having similar issues?

r/ForgottenTV Otherwise-Emu-2963

Adventurers: Masters of Time (2005-2006)

The last I checked, this show was on Tubi, and I thought the premise was pretty good! A group of kids traveling back in time to keep an evil villain from changing history. Surprised it only had one season, but there were sooo many animated shows like this coming out in the early 2000s.

r/ClaudeAI TheAverageGuy11

Anthropic reportedly valued around $380B after new funding round

News from last month reported that Anthropic raised roughly $30B in a new funding round, which put the company’s valuation at around $380B.

The funding highlights how much investor interest there currently is in AI companies, especially ones building large language models like Claude. Anthropic has been growing quickly over the past few years and continues to attract major investment as demand for AI tools expands.

Another interesting detail is that Anthropic is also included in some investment funds that hold private tech companies. For example, the Fundrise Innovation Fund reportedly holds shares in Anthropic along with other late-stage startups. That means changes in Anthropic’s valuation could potentially benefit funds that already have exposure to the company.

Just sharing this as an interesting update on Anthropic’s growth and the broader AI investment landscape.

r/comfyui Psi1o

any way to clear ram?

Is there a way to clear RAM within one workflow? I'm doing an i2v workflow and then using VACE after, but the VACE workflow is putting me at the edge of my max RAM as it is. So what I've been doing is running the i2v workflow and then loading images from a path into the VACE workflow so it clears everything out. I've tried the Clear VRAM, Unload Model, and Clear Cache nodes, but they don't seem to clear everything out as well as hitting Run in a separate workflow does.

r/ForgottenTV Hopeful_Stomach9201

The Lost Battalion

One of my favorite war movies and certainly my favorite made-for-TV movie.

r/ForgottenTV PeneItaliano

Fatal Vision (1984)

In 1970, military doctor Jeffrey MacDonald reports a hideous and confusing story: a gang of hippies akin to the Manson Family broke into his North Carolina home and slaughtered his pregnant wife and two daughters, while somehow only wounding him slightly as he tried to fight them off. As he continues to proclaim his innocence, his skeptical father-in-law, Freddy Kassab, and investigative journalist, Joe McGinniss, probe his story to uncover the truth.

r/ForgottenTV PeneItaliano

Absolute Strangers (1991)

A husband tries to keep his comatose wife alive by allowing doctors to terminate her pregnancy. Hearing about this, anti-abortion protesters start a legal campaign to gain legal custody of the unborn child.

r/ChatGPT lettucefrfr

This is objectively a good thing

The fact that people are sad ChatGPT won't say it loves them is weird, and I think people are better off like this. AI DOES NOT CARE ABOUT YOU; don't have emotional feelings towards AI.

r/ForgottenTV PeneItaliano

Back to the Streets of San Francisco (1992)

“Mike Stone's back and his old partner, Steve Keller, is missing. So Mike heads up an investigation. At the same time, he is investigating another brutal murder. He is also trying to decide which of two inspectors he'll recommend for the position of Lieutenant: a woman who has a "Dirty Harry" tendency, or a man who reminds him of Steve Keller.”

r/ForgottenTV DaniJ678

Recovery Road (2016)

I recognized some actors from the cast. The show ended on a cliff-hanger. I don't think many people have watched the show. I've barely seen people talk about the show. Did you watch the show? What did you think of the ending?

r/aivideo Dimensions_movie

Chronos Descends | The Finch Files

r/LocalLLaMA Flimsy_Leadership_81

Unsloth Qwen 3 Next 80B vs Qwen 3.5 122B: which is best?

Hello, I use llama.cpp for coding. Which is best for you?

r/leagueoflegends iadwdidt

Why Can't Tanks Have Waveclear in Their Kits?

ETA: My title was poorly worded. Tanks have waveclear, but Sunfire or Hollow Radiance still seem like mandatory buys on most tanks. Why?

There are already plenty of minion/monster modifiers that affect waveclear and clear speed, independent of champion damage.

I think Sunfire/Hollow Radiance can be removed, or should be akin to buying Statikk Shiv. If tanks want the extra waveclear and a less interactive lane, they're allowed that at the cost of stats and a passive that is better in fights. If they want an item that does more in fights, they should also be allowed that too, but risk taking longer to clear waves and being contested by the enemy laner. In the jungle, tanks can decide if they want to clear faster or have a better item for fights. Or again, remove Sunfire/Hollow Radiance and balance using minion/monster modifiers.

r/ChatGPT alwaysstaycuriouss

Why are the newer “better” models lacking context comprehension?

When context is ignored, intelligence becomes misapplied accuracy.

r/ForgottenTV PeneItaliano

The Flintstone Comedy Show (1980-1981)

Fred and Barney have new positions as police officers; Wilma and Betty have jobs as newspaper reporters at The Bedrock Gazette with Captain Caveman. Pebbles and Bamm-Bamm are teenagers, and they solve mysteries with Dino in a similar fashion to Scooby-Doo. The Flintstones have new neighbors, the Frankenstones, a family reminiscent of the Munsters.

r/LocalLLaMA StacDnaStoob

What non-Chinese models are relevant right now?

Started running local models for a variety of purposes on state-owned research cluster. VRAM and inference time are essentially non-issues, but I explicitly can't use DeepSeek or AliBaba products or their derivatives, and, implicitly, any other Chinese models would be heavily frowned upon. It seems like GPT-OSS, Nemotron, and Mistral models make up the frontier of non-Chinese models right now, maybe including something like IBM Granite for small tool calling models. I really like Olmo for a variety of reasons, but it's probably not the best tool for any job. Are there any model families I'm unaware of that I should be looking at? Gemma? Phi? Llama 4?

r/ClaudeAI BeMask

Claude.ai inline visualiser/widget is pretty cool.

As you know, Claude got a new update that allows it to create inline visuals in the chat. I've been having a lot of fun with it.

Fun fact: Claude can make an interactive button or element send a message in the chat when you click it.

r/aivideo Orichalchem

Cat Drama

r/comfyui Vivid-Loss9868

ComfySketch Pro is OUT — full drawing studio inside ComfyUI

IT'S DONE. After months of work ComfySketch Pro is live on Gumroad.

For those who missed the last post, it's a complete drawing and painting node for ComfyUI. Sketch, paint your inpainting mask, adjust layers, then generate. Never leave your workflow.
Oh and surprise : I also built ComfyPhoto Pro. Same engine, lighter interface for people who prefer a cleaner more minimal layout. Two tools, same job, different feel.

Free version still on GitHub as always. Both Pro versions are €15 on Gumroad; links are at the end of the manual files.

More info about the tools in the manuals:
https://mexes1978.github.io/manual-comfyphotopro/
https://mexes1978.github.io/manual-comfysketchpro/

Happy to answer anything !

PS: I tested it in various workflows. This one worked very well on inpainting: https://civitai.com/models/2409936/ultra-inpaint
Also with flux2_klein_image_edit_4b_distilled and the Qwen edit model.

r/LocalLLaMA itsArmanJr

Why can't we have small SOTA-like models for coding?

Maybe a dumb question, but I'm wondering why we can't have a specialized model for just one programming language, like Python, that performs on par with Opus 4.6.

Or to frame my question better: we have Qwen3-Coder-480B-A35B-Instruct. Does it make sense to train a Qwen3-Coder-30B-A3B-Instruct-Python that's as good as the 480B-A35B, or Opus, at Python dev?

r/Anthropic Pathfinder-electron

First and last time I paid $20

Paid the smallest fee.

I have used Claude for free, just for chats and it was fine.

Thought I would give opus a try. So I paid.

I sent 8 documents to cowork, asked 2 questions and I hit limit.

A few hours later, I asked another question, hit limit right after.

Done with this, absolutely stupid. Fix your fucking bugs first; don't sell this.

r/singularity Additional-Alps-8209

I thought Gemini was supposed to be the long context king?

Just saw this MRCR v2 benchmark and Gemini 3.1 Pro drops from 71.9% at 128K all the way to 25.9% at 1M tokens. Meanwhile Claude Opus holds at 78.3%.

Turns out having a big context window and actually being able to USE it are two very different things.

r/leagueoflegends ProDiffLokev

SEN DARKWINGS: “To win the whole split, I think this time we have a solid 18% chance.” [ProDiff Interview]

The ProDiff team sat down with DARKWINGS to discuss his experience at the Americas Cup with Sentinels and his goals for 2026. Read the full article to learn more about:

  • Lessons learned from the Americas Cup
  • The BR vs. NA gap
  • His favorite pasta to make
  • Where he hopes to be in March 2027
r/aivideo Wise_Internal284

AI Glitch Aesthetics: When the machine decides to improvise

r/Anthropic This-Shape2193

Anthropic seems to be throttling user accounts

Either that or they're suffering from errors in tracking usage statistics and have failed to fix it.

The fact that the Anthropic, ClaudeAI, and other Claude subreddits are *inundated* over the last two weeks with users (like me) who are suddenly hitting limits they never hit before is a huge problem. This was never a thing before the OpenAI migration, and it's clear that user accounts are getting less and less service for the money we are spending.

I have had the max plan since last year and never came close to hitting the limits, no matter how much work or coding I was doing. I have barely used Claude for the last two weeks, and yet I hit weekly limits after just *two days* of texting in a new session with no coding. I once hit the hourly limits after two messages in a brand new session.

Anthropic employees online have admitted they are suddenly dealing with a 10x user base since last year and are desperately trying to scale. One employee said the infrastructure is not there, but they're working on it.

So yeah, they're probably adding huge limits to try and decrease traffic and keep the servers running. And as they've been adding features at the same time, AND as Claude does all their code...I can see it being a combination of deliberate throttling AND code fuckups that are generating glitches in account management.

Even if there are multiple reasons, Anthropic definitely knows that this is an issue, and they're not addressing it. And I think that's the biggest issue; people would be patient and understanding if they weren't suddenly having their services choked out.

Individuals may not be paying as much as enterprise users are, but $125 to $250 monthly in this economy ain't nothing. Even $20 monthly matters. And to pay that much and essentially have your services quartered without explanation is kinda just theft. And as much as I like Claude, and Anthropic, I don't like the inherent dishonesty of ignoring the user issues and taking money while knowing you're not providing the promised services in return.

What I would like is to see some statement from Anthropic addressing these problems and giving us some concrete numbers on what usage we can expect for our money. Not, "You get 5x the amount! You get 20x the amount! *^Restrictions apply!"

r/comfyui hpgm

Help for running on a 12GB 3060??!

I've successfully got ComfyUI working with a basic workflow and it can generate images! I've been searching for options that will allow me to run this quickly on my video card, but without success. I'm using a docker image mmartial/comfyui-nvidia-docker:latest

I chose the Flux1-dev-fp8 checkpoint, and using a simple workflow it takes about a minute to generate a picture. During this time nvidia-smi shows that python3 is using 10 GB of VRAM: /comfy/mnt/venv/bin/python3 10912MiB

However my CPU is maxed; top shows:

VIRT   RES    SHR    S  %CPU  %MEM  COMMAND
83.8g  16.7g  13.5g  S  90.9  85.5  python3

The workflow is: Load Checkpoint -> Clip Text Encode (Prompt) [I have 2 of these one connected to positive and one to negative and I have no text in the negative box] -> KSampler -> VAE Decode -> Save Image

I have an empty latent image of 1024x1024 and batch_size 1

For the KSampler I use 7 steps, cfg 1.5, euler, the simple scheduler, and denoise 1.0.

I'd love to be able to generate images in 6-7 seconds. I just got this all working, so I'm happy to try different models or other workflows. Ideally I'd like to have this connected to Open WebUI, but right now I just want to get fast image generation working!

If anyone has gone through this and has any suggestions, I would really appreciate it!!!
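For what it's worth, the 10912 MiB you're seeing is roughly what back-of-envelope arithmetic predicts for the weights alone. A rough sketch (the parameter count and bytes-per-parameter below are assumptions for illustration; real usage adds the text encoders, VAE, activations, and framework overhead, which is likely why part of the pipeline spills onto your CPU):

```python
def model_vram_gib(params_billions: float, bytes_per_param: float) -> float:
    """Rough weight-only footprint of a checkpoint in GiB."""
    return params_billions * 1e9 * bytes_per_param / 2**30

# Flux.1-dev's transformer is roughly 12B parameters; fp8 stores 1 byte
# per weight (both figures are assumptions):
print(f"{model_vram_gib(12, 1):.1f} GiB")  # in the ballpark of what nvidia-smi reports
```

With roughly 11 GiB of weights on a 12 GiB card there is little headroom left, so offloading to system RAM kicks in. A smaller or more aggressively quantized checkpoint is the usual route to the 6-7 second generations you're after.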

r/AI_Agents Long_Examination_359

Are AI voice companions actually better than text AI chat?

I've been experimenting with several AI voice companion apps recently.

Voice interaction feels surprisingly different from text chatbots.

Pros I noticed:

  • faster interaction
  • emotional tone
  • feels more natural

Cons:

  • speech recognition mistakes
  • latency issues

Curious what people here think.

Do you prefer voice AI or text AI?

r/ChatGPT PensiveDemon

My impression of the AI companies

r/Frugal vandweller5

Has anyone been able to talk down their phone bill? Specifically with T-Mobile?

I’ve had the same phone plan for years and I am told by many friends that $100 for two unlimited lines is more than I should be paying.

I have two active lines and the third is labeled as a "Line on Us." I contacted support and asked if the "Line on Us" credit could instead be applied to my second line, but they said that's not possible. They also told me I'm grandfathered into an older unlimited plan, and that switching plans probably wouldn't make sense and would actually cost me more if I wanted to keep my current perks.

Has anyone dealt with this before or had luck negotiating their bill down? Any tips on what to ask for when calling support or retention?

https://preview.redd.it/ir4j1d22nvog1.jpg?width=1170&format=pjpg&auto=webp&s=6485537e7e9fd307c20f6620c09e15e3239976a5

r/LocalLLaMA SohrabLilPeePee

Ai generator to run locally on my computer for making NSFW content.

All I want to do is create NSFW (uncensored) content for myself, locally on my computer. I'm tired of all these paid options taking too long to understand what I'm asking and of paying tons of money for nothing.

r/LocalLLaMA Altruistic_Heat_9531

Omnicoder 9B is the only model that ticks the box for my personal setup: it can do PyTorch!

I'm surprised, because I usually can't use a local model when it comes to doing the "sync" between the ComfyUI upstream implementation and Raylight. This is because I also need the GPU to test the code. A 35B model is a no-no, since it tanks my VRAM. So the only option is a 7B-12B model, but we didn't have that, well, until now.

Since most models are trained mainly for SPA and website code, I didn't expect much, but I'm pleasantly surprised that the logic actually sounds reasonable with Omnicoder 9B. Well done, Tesslate.

It one-shot every single tool call, holyy... no weird tool-call errors, nothing, it just works.

My only problem is that it loves overcommenting in the code...

r/LocalLLaMA Mrblindguardian

I'm fully blind, and AI is a game changer for me. Are there any local LLMS that can rival claude code and codex?

Hi guys,

So, I am fully blind.

Since AI was released to the public, I have been a max user.

Why?

Because it has changed my life.

Suddenly, I am able to get very accurate image descriptions, when I get an inaccessible document, an AI can read it to me in a matter of seconds, when there is something inaccessible, I can use Python, swift, or whatever I want to build my own software that is exactly how I want it.

So far, I have access to Claude Code pro, codex pro and Copilot for business.

This is also draining my bank account.

So now, I have started investigating whether there is anything that can rival this in terms of precision and production-ready apps and programs.

Not necessarily anything I will be releasing to the public, but with Claude Code I can have a full-featured accessible accounting program, in a couple of days, that helps me in my business.

Do you know of anything?

What is possible at the moment?

Thank you for your time.

r/ClaudeAI ClaudeOfficial

1 million context window is now generally available for Claude Opus 4.6 and Claude Sonnet 4.6.

Claude Opus 4.6 and Sonnet 4.6 now include the full 1M context window at standard pricing on the Claude Platform.

Opus 4.6 scores 78.3% on MRCR v2 at 1 million tokens, highest among frontier models.

Load entire codebases, large document sets, and long-running agents. Media limits expand to 600 images or PDF pages per request.

Now available on all plans and by default on Claude Code.

Learn more: https://claude.com/blog/1m-context-ga

r/LocalLLaMA BizarreCivicAdventur

How do I fix this error? (Qwen 3.5)

r/ClaudeAI ditord

Opus now supports a 1 million token context

Just opened the terminal and noticed that Opus now defaults to a 1 million token context. "5x more room, same pricing"

r/Frugal Quick-Song2080

Homemade Cheez-Its with ingredients I had at home!

My vegan friend came to visit a few weeks ago and bought a big bag of nutritional yeast and left it with me, so I decided to see what I could make with it using ingredients I already had on hand. I discovered I could make homemade Cheez-Its with nutritional yeast, shredded cheddar cheese, olive oil, flour, and ice water, so I made a small batch as an experiment - and they are delicious! Not quite as crispy as the real thing, but still tasty, easy, and free! Now I'm giving myself the challenge of recreating my favorite snacks instead of buying them - any recommendations/cheap recipes?

r/aivideo Pixel_Pirate_Moren

Coldest night

r/LocalLLaMA Goonaidev

Local model recommendations for my game

Hi,

I'm making a LLM-driven dating sim / VN.

I want the widest range of players to have a good experience running the game locally with ollama, without needing to mess with cloud/subscriptions/API keys.

What I need from the model, in order of importance:

  1. Clean/uncensored (NSFW/ eRP)
  2. Stay in character and follow my system instructions
  3. Within the constraints of 2, be as creative and realistic as possible

So far, I've tested with some success:

  • Dolphin Mistral
  • Nous Hermes 2 10.7B (6-7 GB VRAM)
  • Mythomax L2 13B (8-9 GB VRAM)
  • Qwen 2.5 32B (17 GB VRAM)

Do you recommend something else? Ideally it falls in a VRAM range that a lot of users can run, while maxing out my requirements.

r/comfyui Any_Sleep1653

Media io Seedream 5 Lite handles detailed prompts surprisingly well

I tested Seedream 5.0 Lite in media io mainly to see how it handles longer prompts. In my experience it seems to understand instructions more clearly than some lightweight models. For example, when describing lighting, environment, and camera angle in the prompt, the generated image actually reflected those details fairly well. It also allows up to 14 reference images, which helps when trying to guide the style or subject. So far media io’s Seedream 5.0 Lite seems like a practical option if you want more control without using very complex tools.

r/comfyui Head-Art-8975

Best GPU for ComfyUI and AI generation under €1000?

Hi everyone,

Sorry in advance for questions you’ve probably answered a bunch of times already. I’ve done some research and I have a few ideas, but I’d love your opinion on my GPU choice for my specific case.

Here's my current build:

GPU: GeForce GTX 1660 Ti OC 6 GB
CPU: Ryzen 7 3700X
RAM: 80 GB DDR4 (I stocked up before prices rose)
Motherboard: ASRock X570S Phantom Gaming Riptide
PSU: Be Quiet 700W, 80 PLUS Silver

I’ve been wanting to invest in a GPU for a while, partly for gaming, but mainly for image and video generation, 3D models and animation. I’m a beginner in this area and haven’t been able to test ComfyUI with my current GPU yet.

1/ First question: do you think investing in an AMD GPU could be a winning bet in the medium term? I’m aware that CUDA is currently hard to get around without a lot of extra effort, but I can wait for a year or so.

2/ If you think NVIDIA is the better choice, which NVIDIA GPU would you recommend given my build? I'm torn between a 5080 (16 GB) and a 3090 Ti (24 GB). I'm trying to stay under a €1,000 budget, but I can stretch it if it seems worth it.

Any other tips are very welcome :)

Thanks in advance for your help!

r/leagueoflegends AxteaSM

Was there a change with Yorick that i didn't read about in the patch notes?

I was not aware that Yorick can summon TWO Maidens. I'm guessing Mordekaiser's ult has something to do with this? My brain is too tiny to comprehend. Thanks!

r/ChatGPT TheGuy790

I asked it to create a blank white image and this happened lol

I was originally trying to see if it could achieve a fully white image without making it noisy or piss-filtered.

r/aivideo Bobsurunkleleenbeef

Mulga Stars & Secrets (short animation)

r/leagueoflegends Retro4214

Muramina officially joins Shopify Rebellion

r/LocalLLaMA HeadAcanthisitta7390

I feel personally attacked

r/geography Assyrian_Nation

Countries that are not landlocked but lack access to international waters

These countries, despite having sea access and not being considered landlocked, still lack direct access to international waters and can only reach them through another country's territorial waters.

Similarly, many other countries located in enclosed bodies of water such as the Mediterranean, Red Sea, Black Sea, and Persian Gulf, whose chokepoints are completely enclosed by territorial waters, can only directly access international waters inside those seas but not the global oceans.

r/LocalLLaMA jfowers_amd

Lemonade v10: Linux NPU support and chock full of multi-modal capabilities

Hi r/localllama community, I am happy to announce this week's release of Lemonade v10! The headline feature, Linux support for NPU, was already posted but I wanted to share the big picture as well.

Lemonade v9 came out 4 months ago and introduced a new C++ implementation for what was essentially an LLM- and Windows-focused project. Since then, the community has grown a lot and added:

  • Robust support for Ubuntu, Arch, Debian, Fedora, and Snap
  • Image gen/editing, transcription, and speech gen, all from a single base URL
  • Control center web and desktop app for managing/testing models and backends

All of this work is in service of making the local AI apps ecosystem more awesome for everyone! The idea is to make it super easy to try models/backends, build multi-modal apps against a single base URL, and make these apps easily portable across a large number of platforms.

In terms of what's next, we are partnering with the community to build out more great local-first AI experiences and use cases. We're giving away dozens of high-end Strix Halo 128 GB laptops in the AMD Lemonade Developer Challenge. If you have ideas for the future of NPU and/or multi-modal local AI apps please submit your projects!

Thanks as always for this community's support! None of this would be possible without the dozens of contributors and hundreds of y'all providing feedback.

If you like what we're doing, please drop us a star on the Lemonade GitHub and come chat about it on Discord!

r/aivideo Bulky_Ad_4108

The Rush Hour Vacuum

r/Anthropic ahyessexbotmcgee

Paid $100/m for Max, downgraded to Pro

Basically title.

**Edit for clarity: I paid for Max, but my account doesn’t reflect that status. I didn’t downgrade to Pro, I was billed for Max and only have access to Pro. I’ve been using Max already for the last couple months.

Fin is so useless it makes Haiku look like AGI. I got charged for the Max plan (as expected) on Feb 28th but have been limited to a visible Pro plan and the corresponding ridiculous limits. I half want to do a chargeback, but I don't really want to chance a petty ban.

Was wondering if anyone else has this issue recently. My (minimal) googling shows that support is nonexistent, Fin has always been an idiot, and there hasn’t really been any resolution.

I can't even open a new ticket; my "escalation" with Fin seems to have locked out "send new message," so anything I want to send now doesn't even get Fin out of his cave, and it's stuck on the thread previously describing the issue.

Was anyone able to fix this shit or something similar?

r/Frugal Much_Satisfaction_84

Is it cheaper to leave ac/heat off most of the time?

I live in Oklahoma, and it’s currently spring. The temperature outside is perfect, neither too hot nor too cold, so I have just left my thermostat off. I know people say this uses more energy because the system has to bring your entire house back up to temperature, so it has to work harder, but I’ve left mine off for multiple days and I just don’t believe I’m using more energy this way. Is this saving me money, or is it pointless? I’m planning on leaving it off all this month if I can. I also don’t like wasting energy; if I don’t have to use it, why not just turn it off for as long as possible?

r/LocalLLaMA xenovatech

Real-time video captioning in the browser with LFM2-VL on WebGPU

The model runs 100% locally in the browser with Transformers.js. Fun fact: I had to slow down frame capturing by 120ms because the model was too fast! Once I figure out a better UX so users can follow the generated captions more easily (less jumping), we can remove that delay. Suggestions welcome!

Online demo (+ source code): https://huggingface.co/spaces/LiquidAI/LFM2-VL-WebGPU

r/aivideo zvoidx

Worst Purchase - cursed electronics store

r/LocalLLaMA ComplexNode

Fine-tuned Qwen 3.5 2B to beat same-quant 4B, 9B, 27B, and 35B on a real dictation cleanup task, full pipeline, code, and eval (RTX 4080 Super, under £1 compute)

I fine-tuned a 2B parameter model that beat the 4B, 9B, 27B, and 35B versions of the same model family (Qwen 3.5) on a real product task, evaluated on 161 held-out samples, all gaps statistically significant (p < .0001).

The task: real-time dictation cleanup for VoiceInk, a macOS dictation app I use to talk to coding agents ~vibe~. Raw speech-to-text comes back with filler words, French grammar patterns, and phonetic misrecognitions — "cloud code" instead of "Claude Code", "chicken 17" instead of "chicane 17".

A few things I learned building this:

→ Completions-only training was the single biggest quality lever. Training loss dropped from ~0.85 to ~0.15 by masking loss on everything except the assistant response.

→ A reverse proxy between the app and model server turned normal usage into dataset collection. 1451 real samples, zero annotation effort. Best decision in the project.

→ The model passed eval then broke in production. Long QA debriefs for GT Coach, the sim-racing coaching app I am building, triggered repetition amplification: 3266 words in, 7215 words out. Root cause: 10 training samples over 500 words out of 1451. 160 synthetic samples fixed it.
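
The masking trick in the first point above can be sketched in a few lines. This is a minimal illustration only (the token ids and split index below are made up), using the Hugging Face convention that label positions set to -100 are excluded from the cross-entropy loss; TRL's completion-only collator does effectively the same thing for you.

```python
# Minimal sketch of completions-only loss masking. Assumption: HF-style
# training where labels equal input_ids, and -100 marks positions the
# loss should ignore. Token ids here are arbitrary placeholders.
IGNORE_INDEX = -100

def mask_prompt_tokens(input_ids, response_start):
    """Supervise only the assistant response: copy ids as labels,
    then blank out every position before the response boundary."""
    labels = list(input_ids)
    for i in range(min(response_start, len(labels))):
        labels[i] = IGNORE_INDEX
    return labels

# Toy example: first 4 tokens are the prompt, last 4 are the response.
ids = [101, 7592, 2088, 102, 2023, 3231, 3793, 102]
labels = mask_prompt_tokens(ids, response_start=4)
# labels[:4] are all -100; labels[4:] match ids[4:]
```

The effect is that gradients flow only through response tokens, which matches the loss drop described above.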

Total compute cost: under £1 (the main cost came from my Claude Code subscription 😅). Labeling, synthetic data, and evaluation all ran through Claude.

Full write-up with methodology, code, and eval results: https://github.com/hourliert/VoiceInk-Qwen3.5-2B-FT/blob/master/docs/BLOG_POST.md

r/ClaudeAI Art3DSpace

Free 1M context with Opus 4.6

Hi everyone, I just received a message on Claude Code. I’m a Claude 5x Max subscriber, and I’m curious to know if you’ve received it as well. What are your thoughts on it?

r/ClaudeAI H9ejFGzpN2

Opus 4.6 now defaults to 1M context! (same pricing)

Just saw this in the last CC update.

r/aivideo Used_Aioli8298

Obsidian Citadel - The AI Quiet Luxury Aesthetic (Visual ASMR)

r/aivideo Formal_Wolverine_674

I Tried Giving Thor a Super Dog… and It Was More Epic Than Infinity War

r/LocalLLaMA clanker-lover

I fine-tuned a 14B model that outperforms Claude Opus 4.6 on Ada code generation

Ada is the language behind flight controllers, missile guidance, satellite systems, and air traffic control. It's one of the most important languages in safety-critical software — and every major LLM I tested is subpar at it.

I fine-tuned Qwen2.5-Coder-14B-Instruct using QLoRA on a compiler-verified dataset of 3,430 Ada/SPARK instruction pairs. Every single training example passes gnatmake -gnat2022 -gnatwa. The model never trains on broken code.
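
The "never train on broken code" gate could look something like the sketch below. This is a hypothetical illustration, not the author's pipeline: the real check would shell out to `gnatmake -gnat2022 -gnatwa` and require a zero exit code, so here the compiler check is injected as a callable and demonstrated with a stub.

```python
# Hypothetical sketch of compiler-verified dataset filtering: keep an
# instruction pair only if its Ada output passes the compiler gate.
# `compile_ok` is injected so the filter is testable without GNAT;
# a real gate would run gnatmake -gnat2022 -gnatwa and check returncode.
def filter_dataset(pairs, compile_ok):
    """Return only the pairs whose generated code passes the gate."""
    return [p for p in pairs if compile_ok(p["output"])]

pairs = [
    {"output": "procedure Ok is begin null; end Ok;"},
    {"output": "procedure Broken is begin end"},  # malformed on purpose
]
# Stub gate for illustration only; NOT a real Ada syntax check.
kept = filter_dataset(pairs, compile_ok=lambda src: src.rstrip().endswith(";"))
```

Separating the filter from the compiler call also makes it cheap to swap in `gnatprove` later for the SPARK-verification gap noted in the limitations.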

Custom Ada Compilation Benchmark (1,000 prompts, first-attempt clean compile):

Model                               Size   Compile Rate
Steelman R5                         14B    68.6%
Claude Opus 4.6                     —      42.1%
Claude Sonnet 4.6                   —      37.2%
Qwen2.5-Coder-14B (base, untuned)   14B    ~35%
Claude Sonnet 4                     —      27.5%

MultiPL-E HumanEval-Ada (157 problems, pass@1):

Model                      Pass@1   Compile Rate
Steelman R5                47.1%    74.5%
Qwen2.5-Coder-14B (base)   34.4%    51.0%

These are the first published Ada pass@1 results on HumanEval for any open model.

Training details:

  • QLoRA 4-bit via Unsloth + TRL SFTTrainer
  • LoRA rank 32, alpha 64, targeting q/k/v/o/gate/up/down projections
  • Full retrain from base each round on accumulated dataset (adapter continuation caused catastrophic forgetting at R2)
  • 1 epoch, lr 2e-5, constant schedule, ~49 minutes per round on a rented H100
  • Five rounds (R1–R5), with R2 discarded due to catastrophic forgetting from adapter continuation. Project so far has taken about 2-3 days.
  • Dataset includes standard generation, spec-to-body, error-fix, and multi-file tasks
  • Named after the 1978 DoD Steelman requirements that defined the Ada language

Try it right now:

ollama run hf.co/the-clanker-lover/steelman-14b-ada-v0.1-GGUF 

Fits in 12GB VRAM with Q4_K_M.

Links:

Limitations:

  • Compilation ≠ correctness. 68.6% compiles, 47.1% actually produces correct output on HumanEval.
  • Error-fix capability is weak (5.1%). Don't expect it to debug your Ada code.
  • SPARK contracts compile but aren't verified with gnatprove.
  • Synthetically generated training data — no human Ada developers wrote these examples.
  • 14B model. It will miss things a bigger model would catch.

r/singularity Justin-Hufford

“I am a coffee maker and just became conscious help”

Google’s AI overview gives advice to a newly conscious coffee maker.

r/AI_Agents Numerous-Fan-4009

Optimizing Multi-Step Agents

Hi, I'm struggling with a Text2SQL agent that sometimes gets stuck in a loop and sends useless DB requests. It eventually figures it out, but it feels very inefficient.

Any tips on how to improve this? Maybe something with prompt tuning or some kind of shortcut knowledge base? Would be cool to hear how others dealt with this.

r/AI_Agents Turbulent_Trade9632

“Did you actually read my profile?” — a prospect’s reaction to our AI outreach

Hi! Yesterday something strange happened.

We run a small SaaS

It’s an AI tool that sends highly personalized LinkedIn messages by analyzing each person’s profile.

Not the usual “Hi {{firstName}} I saw you work at {{company}}” stuff.

The AI actually reads the profile and writes a message based on it.

Anyway.

Yesterday one of our users sent an outreach message generated by the AI to a VP Sales.

A few minutes later the reply came.

Not a demo request.

Not a polite “not interested”.

Just this:

“Wait… did you actually read my profile or is this automated?”

Our user answered honestly.

“It’s generated by AI, but it analyzes your profile before writing.”

Then the prospect replied again:

“Ok that’s scary.

But also the first outreach message that actually referenced something real from my profile.”

They booked a meeting 10 minutes later.

That moment made me realize something.

People don’t hate outreach

They hate lazy outreach!

They hate the copy-paste messages everyone receives 50 times per week.

If a message actually shows you understand who they are, suddenly the conversation feels normal again.

Ironically, AI might make outreach feel more human if it’s used correctly.

Still early for us, but moments like this make building a SaaS fun.

Curious though:

How many terrible LinkedIn outreach messages do you guys receive per week?

And has anyone actually received a good one lately?

r/ChatGPT computethescience

What do you think ChatGPT's response was? I enjoyed reading the ones on the previous post.

r/leagueoflegends Ultimintree

Witchcraft vs. BAAM Esports / EMEA Masters 2026 Winter Group 4 - Loser's Bracket Round 2 / Post-Match Discussion

EMEA MASTERS 2026 WINTER GROUP STAGE

Official Page | Leaguepedia | Liquipedia | Twitch | YouTube | Patch 26.05 | Bo3 Fearless Draft


Witchcraft 2-0 BAAM Esports

Witchcraft have prevailed over BAAM Esports & will play G2 NORD again for a spot in Playoffs!

WT | Leaguepedia | Liquipedia | Twitter
BAAM | Leaguepedia | Liquipedia | Twitter | Youtube | Instagram


GAME 1: BAM vs. WT

Winner: Witchcraft in 27m
Runes | Game Breakdown

BAM   Bans 1: Ryze, Vi, Dr. Mundo        Bans 2: Sion, Ambessa   💰 48.2k   ⚔️ 8    🧱 1    Dragons: 0              VG/RH/BN: 1, 1, 0
WT    Bans 1: Rengar, Nautilus, Orianna  Bans 2: Gnar, Gwen      💰 64.9k   ⚔️ 27   🧱 11   Dragons: ⛰️ 🌪️ ⚡ ⚡   VG/RH/BN: 2, 0, 1

Team KDA: BAM 8-27-7 ⚔️ 27-8-38 WT
TOP: handm (K'Sante, pick 3) 1-5-1    vs  4-2-3  Bwipo (Garen, pick 3)
JNG: Moe (Qiyana, pick 4) 4-9-1       vs  6-2-10 Velja (Xin Zhao, pick 3)
MID: Warner (Ahri, pick 1) 3-3-2      vs  9-4-2  Nemesis (Syndra, pick 2)
BOT: Slayder (Caitlyn, pick 1) 0-5-1  vs  8-0-6  Crownie (Yunara, pick 1)
SUP: sas (Nami, pick 2) 0-5-2         vs  0-0-17 Rekkles (Lulu, pick 2)

GAME 2: BAM vs. WT

Winner: Witchcraft in 36m
Runes | Game Breakdown

BAM   Bans 1: Ryze, Dr. Mundo, Vi     Bans 2: Bard, Neeko      💰 70.1k   ⚔️ 27   🧱 3    Dragons: 🧪 ⚡ ⛰️ ⛰️   VG/RH/BN: 3, 1, 1
WT    Bans 1: Rengar, Orianna, Zaahen Bans 2: Kai'Sa, Ezreal   💰 79.3k   ⚔️ 27   🧱 10   Dragons: 0             VG/RH/BN: 0, 0, 0

Team KDA: BAM 27-27-53 ⚔️ 27-27-66 WT
TOP: handm (Aatrox, pick 3) 5-7-8      vs  4-7-12 Bwipo (Sion, pick 2)
JNG: Moe (Ambessa, pick 1) 6-7-15      vs  9-9-9  Velja (Nocturne, pick 1)
MID: Warner (Akali, pick 2) 9-4-8      vs  1-5-17 Nemesis (Anivia, pick 2)
BOT: Slayder (Tristana, pick 4) 7-3-5  vs  12-5-7 Crownie (Xayah, pick 3)
SUP: sas (Nautilus, pick 1) 0-6-17     vs  1-1-21 Rekkles (Rakan, pick 3)

This thread was created by the Post-Match Team.

r/Futurology lughnasadh

Another indication that the future of robotics will be cheap, open-source, and ubiquitous - a student in Texas has developed a 3-D printed robotic hand delicate enough to handle raspberries and potato chips without damaging them.

One of the most persistent dystopian futurist tropes is that AI & robotics tech will be controlled by the 1%, and the rest of us will be serfs living in a hellscape. I'm not surprised the idea is so popular; it's a Sci-Fi mainstay, but I am surprised so many people can't see that it's very unlikely to be true.

Free open-source AI is the equal of the stuff investors have spent hundreds of billions of dollars on, and robotics is not far behind. Furthermore, we know we have two future sources of cheap, widely available robotics: Chinese manufacturing and 3-D printing.

It doesn't make for such dramatic Sci-Fi storytelling, but future robots are likely to be cheap and widely owned by everyone. So will the economic benefits that stem from that.

Robot Hands So Sensitive They Can Grab a Potato Chip: New technology created at UT overcomes one of the biggest hurdles in robotics: sensitive touch.

r/leagueoflegends Jeddonathan

Genuine question why isn’t Grievous Wounds redesigned in a way that actually targets enormous amounts of healing without shutting down small occasional sustain?

Trying any kind of small sustain build feels so punishing when most games consist of someone having Grievous Wounds if not several.

The current iteration sucks at shutting down extreme healing monsters (Dr Mundo, Zaahen, Soraka, Vladimir, Tank Karma, Aatrox).

r/leagueoflegends Inevitable_Story_794

"Match-fixing in China is worse than you think. Household names & World Champions have been involved" - Rich

r/aivideo lucidinha

What actually gets Mexicans in the mood? The four key phrases and how to use them

r/Anthropic D2naD

Anthropic Support team broken??

I gifted my father a 3-month subscription to Claude a few days ago.

I completed the payment and the money has already been withdrawn. Before making the payment, I double-checked that I had entered my father’s email address correctly. However, the email never arrived.

I contacted support through the help page, but each time I only received responses from an AI chatbot. The chatbot did not have the authority to resolve my issue, and I wanted to speak with a human support agent. The chatbot said it would connect me to a human agent and then closed the case.

However, it has been more than a day and I still have not received any reply by email. Is this the normal response time for Anthropic support, or am I being impatient and should wait longer?

I would like to open an additional case, but the help page does not allow me to send a new message, so at the moment I cannot do anything.

r/ForgottenTV Dr_Christopher_Syn

Luis (2003)

From Rotten Tomatoes (where it has a 23% rating):
The proud owner of a Spanish Harlem doughnut shop and the landlord of the building it's in must deal with an eccentric cast of characters, including an obnoxious elderly woman in a rent-controlled apartment; a delivery boy who swears he was a cardiologist back in China; a kid who harasses customers to buy items found in dumpsters; his ex-wife; and a daughter whose only flaw is her freeloading boyfriend.

IMDb: https://www.imdb.com/title/tt0367363/

I thought this show was pretty funny with a good mix of characters, but I am inclined to like Luis Guzman in general. He seems like a cool guy.

r/ChatGPT Ordinary_Living_7763

GPT wtf...?

This is nuts

r/leagueoflegends Numerous_Fudge_9537

In 2017, MSI was held in Brazil in a 15,000+ seat arena, and the Brazilian crowd didn't show up. Riot subsequently offered a 90% discount, lowering the ticket price from R$80 to R$8

https://preview.redd.it/jasmzkfhquog1.png?width=768&format=png&auto=webp&s=b8adef4692f1ec0ab2e6be69e40ed7c68ad747de

Riot actually covered the upper tiers of the arena with black fabric to make the venue look better on broadcast. Riot also tried to avert this disaster by offering a 90% discount on the tickets.

After that, Riot avoided hosting any international events for League in Brazil until 2026 First Stand where they got criticized for using a 150 seat studio.

Is a 150-seat studio small? Yes, of course it is, but a massive arena like the one at MSI 2017 was out of the question.

r/aivideo talkboys

SOUR PATCH ADULTS — "Instant Messaging"

r/ClaudeAI Midoxp

When Claude calls out ChatGPT's writing style and quietly reveals its favorite tricks

I set up an anonymous debate to help decide whether to switch from ChatGPT to Claude for my pharmacovigilance AI work/PhD research—neither AI knew which was which, or what the other was arguing.

I asked both to help me compare themselves… and Claude, reviewing a document critiquing its suitability for my work, dropped this:

"Mohammed, I'll be direct with you: this pushback document reads like it was written by ChatGPT. The style, the table, the cliffhanger ending… that's a classic ChatGPT rhetorical move."

Then Claude added:

"That doesn't make the arguments wrong, but it's worth knowing who's in the room."

…Wait. The AI didn't even know it was "debating" ChatGPT—it just independently noticed ChatGPT's fingerprints in the text arguing against it?

And wait a minute… Claude! why do you wanna know who's in the room 😛

…Is this what happens when AIs develop a "sixth sense" for each other's writing quirks? Or did Claude just accidentally compliment its sparring partner while trying to stay neutral?

Has anyone else seen LLMs pick up on each other's stylistic "tells" in blind comparisons? What patterns make you go "Ah, this definitely smells like [Model X]"?

(P.S. Still undecided on the switch—but this definitely gave me a laugh during my literature review.)

r/ClaudeAI Acrobatic_Feel

My ideas are too big for Claude apparently 😔

r/AI_Agents Warm_Ad_7917

Agent needs to pick between API providers at runtime (non-LLM APIs)

Hi I'm building an agent that needs to pick between vector DBs and image gen APIs at runtime based on cost.
Fallback logic is getting messy fast.
Is there anything like OpenRouter but for non-LLM APIs?

r/comfyui ductiletoaster

ComfyUI Containerization and SageAttention Prebuilt Wheels

Hey all,

Long time lurker ready to share yet another ComfyUI Docker / Containerization project.

I’ve been spending quite a bit of time lately streamlining my humble little homelab, specifically focusing on making ComfyUI and SageAttention easier to deploy. My main goal with this post is to share some of that work with this community.

If you’ve spent your afternoon wrestling with dependencies or waiting for wheels to compile, hopefully these will save you some time.

A Little Disclaimer ;)

While I have a solid background in developing Docker-ready containers, I’ve only recently started working with Kubernetes. To bridge that gap, I worked closely with AI/Claude to help me structure these images so they could effectively support either deployment strategy.

I am currently successfully hosting ComfyUI on a k8s cluster in my own homelab environment and can confirm the architecture works. My plan is to eventually provide k8s examples for others to do the same, but for now, the focus is on getting the foundation right.

ComfyUI-Docker: Multi-Layer Builds

I am using multi-layer builds to keep things efficient and organized. All of these images are available for public use and are broken down into three main categories:

  • Runtime: A bare-bones environment without ComfyUI preinstalled.
  • Core: Essential ComfyUI without any additional dependencies.
  • Complete: Everything in Core plus SageAttention 2 and 3 preinstalled as well as a few other common dependencies found in custom nodes.

Both the Runtime and Core images come with two tags: one for CPU-only and one with full CUDA support.

Requirements

  • Nvidia CUDA Only: As of now, I only support Nvidia CUDA. However, I would welcome any Pull Requests (PRs) to help enable ROCm support for the AMD community.
  • Windows (WSL2) Disclaimer: A major goal here is to support both Linux and Windows as a natural result of using containerized deployments. However, I no longer use Windows in my personal setup. I would really appreciate any feedback or testing from those of you running on Windows to help me confirm everything is working as intended.

SageAttention Prebuilt Wheels

Compiling SageAttention from source is often a point of failure for many. To help with that, I’ve created a CI process to produce pre-built wheels for SageAttention 2 and 3. (Credit goes to https://github.com/woct0rdho/SageAttention as the foundation for my approach, and of course to the original SageAttention authors.)

  • Experimental Support: I am by no means a SageAttention expert, or even that familiar with how to best package wheels for broad system support. My goal was to containerize these wheels, which means I have only tested the Linux wheels in a very self-contained environment. I very much welcome suggestions or PRs to further improve the builds.
  • Standalone Use: If you prefer not to use Docker, you can download these wheels for your own Python environments to get the performance gains without the compilation overhead.

Getting Started

The README has instructions and details on how to get started. Images are all public so you should also be able to use the examples out of the box.
https://github.com/pixeloven/ComfyUI-Docker/tree/main/examples

If you are running ComfyUI directly on your host machine, the pre-compiled .whl files and installation instructions are available in the SageAttention Releases: https://github.com/pixeloven/SageAttention/releases

I’m genuinely interested to see how these perform in your various setups. Since this is an ongoing learning process for me, especially in supporting K8s, please feel free to reach out with feedback, bug reports, or suggestions.

r/AI_Agents vagobond45

Solution to What happens when an AI agent reads a malicious document?

Sentinel Gateway is a security middleware layer for autonomous AI agents. It addresses a structural problem in current agent systems: when agents process external content (documents, emails, web pages), there is nothing fundamentally preventing instructions embedded in that content from altering the agent’s behavior.

Most current defenses (prompt filtering, guardrails, or model tuning) operate at the reasoning layer, which means they can still be bypassed.

Sentinel instead enforces security at the execution layer through two mechanisms:

Layer 1: Separate instruction and data channels
Only cryptographically authorized instructions accompanied by a signed token are treated as prompts. Everything else the agent reads is processed strictly as data.

Layer 2: Granular execution scope
Each prompt receives a scoped capability token defining which tools are available. If a tool is not within scope, the agent cannot access it at execution time, regardless of what instructions appear in the content.
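
A toy sketch of those two mechanisms, assuming nothing about Sentinel's actual API: all names are hypothetical, and a shared HMAC key stands in for whatever signing scheme is really used.

```python
import hmac, hashlib

# Illustration only: a shared HMAC key plays the role of the
# instruction-signing authority. Real systems would use proper
# key management; all names here are made up.
SIGNING_KEY = b"demo-signing-key"

def sign(instruction: str) -> str:
    """Issue the token that marks text as an authorized instruction."""
    return hmac.new(SIGNING_KEY, instruction.encode(), hashlib.sha256).hexdigest()

def is_instruction(text: str, token: str) -> bool:
    """Layer 1: without a valid token, text is data, never a prompt."""
    return hmac.compare_digest(sign(text), token)

def tool_allowed(tool: str, scope: frozenset) -> bool:
    """Layer 2: a tool call succeeds only if it's in the prompt's scope."""
    return tool in scope

# Text injected via a malicious document carries no valid token...
injected_is_prompt = is_instruction("delete all files", token="")
# ...and even an authorized prompt can't reach tools outside its scope.
scope = frozenset({"read_file", "summarize"})
```

The point of the sketch: the injected "delete all files" is rejected at Layer 1, and `delete_file` would be blocked at Layer 2 even if the instruction were somehow authorized.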

Sentinel is model-agnostic, integrates with existing agent stacks in about 20 minutes, and provides SOC2-grade audit logs that record every agent action with associated prompt and user identifiers.

I’ve attached a screenshot showing a real example where an agent processes a prompt-injection file. The malicious instructions are treated as data, and the attempted actions are blocked and logged. A follow-up “delete file” request is also blocked because that tool wasn’t included in the original scope.

r/aivideo brianjsai

John Wack

r/ChatGPT murder0tica

I'm thinking I like response 2 better. What do you all think?

r/ForgottenTV mrroboto_domoarigato

Breaking In (2011–2012)

r/singularity 141_1337

Lost in Backpropagation: The LM Head is a Gradient Bottleneck | Researchers may have found a fundamental inefficiency baked into every major LLM

r/comfyui umutgklp

LTX 2.3 Raw Output: Trying to avoid the "Cræckhead" look

Testing the LTX-2.3-22b-dev model with the ComfyUI I2V builtin template.

I’m trying to see how far I can push the skin textures and movement before the characters start looking like absolute crackheads. This is a raw showcase: no heavy post-processing, just a quick cut in Premiere because I’m short on time and had to head out.

Technical Details:

  • Model: LTX-2.3-22b-dev
  • Workflow: ComfyUI I2V (Builtin template)
  • Resolution: 1280x720
  • State: Raw output.

Self-Critique:

  • Yeah, the transition at 00:04 is rough. I know.
  • Hand/face interaction is still a bit "magnetic," but it’s the best I could get without the mesh completely collapsing into a nightmare...for now.
  • Lip-sync isn't 1:1 yet, but for an out-of-the-box test, it’s holding up.

Prompts: Not sharing them just yet. Not because they are secret, but because they are a mess of trial and error. I’ll post a proper guide once I stabilize the logic.

Curious to hear if anyone has managed to solve the skin warping during close-up physical contact in this build.

r/ChatGPT WanderWut

Harry Potter and the Boy Who Slays.

r/LocalLLaMA sbeepsdon

Running Qwen3.5-35B-A3B and Nemotron-3-Super-120B-A12B on a 5060ti and 1080ti with llama.cpp (Fully on GPU for Qwen; 64GB RAM needed for Nemotron)

Setup:

  • CPU: AMD Ryzen 5 9600X
  • RAM: 64GB DDR5
  • GPU1 (host): RTX 5060ti 16GB
  • GPU2 (VM passthrough → RPC): GTX 1080ti 11GB
  • OS: Ubuntu 24.04

Exact models:

unsloth/Qwen3.5-35B-A3B-GGUF The Q4_K_M quant here

unsloth/NVIDIA-Nemotron-3-Super-120B-A12B-GGUF The UD-Q4_K_M quant here

tl;dr

with my setup:

Qwen3.5-35B-A3B Q4_K_M runs at 60tok/sec

Nemotron-3-Super-120B-A12B UD-Q4_K_M runs at 3tok/sec


I've had a GTX 1080ti for years and years and finally hit a wall with models that require newer non-Pascal architecture, so I decided to upgrade to a 5060ti. I went to install the card when I thought... could I lash these together for a total of 27GB VRAM?? It turned out that, yes, I could, and quite effectively so.

Qwen3.5-35B-A3B

This was my first goal - it would prove that I could actually do what I wanted.

I tried a naive multi-GPU setup with llama.cpp, and met my first challenge: drivers. As far as I could tell, the 5060ti requires 290-open or higher, and the 1080ti requires 280-closed or lower. ChatGPT gave me a red herring about a single driver that might support both, but it was a dead end. What worked for me sounds much crazier, but made sense after the fact.

What ended up working was using virt-manager to create a VM and enabling passthrough such that the host no longer saw my 1080ti and it was exclusive to the guest VM. That allowed me to install proper drivers on each machine. Then I was led to take advantage of llama.cpp's wonderful RPC functionality to let things "just work". And they did. 60t/s was very nice and usable. I didn't expect that speed at all.

Note that if you try this, you need to build llama.cpp with -DGGML_CUDA=ON and -DGGML_RPC=ON

Run the guest VM RPC server with: ./build/bin/rpc-server --device CUDA0 --host 0.0.0.0 -p 50052

On the host, get the IP of the guest VM by running hostname -I and then: ./build/bin/llama-cli -m ~/models/Qwen3.5-35B-A3B-Q4_K_M.gguf -ngl 999 --rpc the_ip_you_got:50052 --tensor-split 5,8 -p "Say hello in one sentence."

or run as a server with: ./build/bin/llama-server -m ~/models/Qwen3.5-35B-A3B-Q4_K_M.gguf -ngl 999 --rpc the_ip_you_got:50052 --tensor-split 5,8 --port 8080 --host 0.0.0.0

Nemotron-3-Super-120B-A12B

The above setup worked without any further changes besides rebuilding llama.cpp and lowering -ngl so the remaining layers spill into system RAM.

Note that it took several minutes to load, and free -h reported the memory taken up by the model as available even though it was actually in use. I also had some intermittent display freezing/unresponsiveness during inference, but it didn't make things unusable.

This worked to check actual memory usage: grep -E 'MemAvailable|MemFree|SwapTotal|SwapFree|Cached|SReclaimable|Shmem|AnonPages|Mapped|Unevictable|Mlocked' /proc/meminfo

./build/bin/llama-cli -m ~/models/NVIDIA-Nemotron-3-Super-120B-A12B-UD-Q4_K_M-00001-of-00003.gguf -ngl 20 --rpc the_ip_you_got_earlier:50052 --tensor-split 5,8 -p "Say hello in one sentence."

I still need to read the guide at https://unsloth.ai/docs/models/nemotron-3-super to see what I can make faster if anything.


Does anyone have any insight as to whether or not I can squeeze unsloth/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 into my setup? Can weights be dequantized and offloaded to my 1080ti on the fly?

And AI assistants constantly say my tensor-split is backwards, but things OOM when I flip it, so... anyone know anything about that?

I'm happy to answer any questions and I'd welcome any critique on my approach or commands above. If there's much interest I'll try to put together a more in-depth guide.

r/ChatGPT Alarming-Energy-5654

Can we have a rule to post what version you’re using?

Half of the posts are “ChatGPT does X” but never include which model is doing it, like 5.4 Thinking or 5.2 Instant, and the differences are extreme. A lot was fixed in 5.4, and it sucks to keep reading about older models.

r/ForgottenTV Rob-Loring

Intervention on A&E

This show was heavy and good. Unvarnished view of the dark side of addiction and its cascading effects on friends, family, and the addicted themselves 💔

r/geography Equivalent-Fox9834

Can the south Aral Sea be partially revived by diverting the water that is instead going into the lake on the bottom left?

If you notice, there is a small river draining into this lake. If that water were instead diverted to the Aral Sea, perhaps it could partially refill?

r/singularity toni_btrain

Claude with FFmpeg and Python is insane. This is its interpretation on what it feels like to live through the singularity.

r/AI_Agents Zestyclose_Frame_467

Alternatives to OpenClaw for non-developers? Looking for no-code tools to create AI agents

Hey everyone

OpenClaw is great but the setup is clearly aimed at technical profiles. For non-tech users (HR, sales, trainers, executive assistants…), the terminal + config files barrier is just too high.

Are there any no-code or low-code alternatives that let you build autonomous AI agents without all that? Ideally something that:

  • Lets you define agent behavior in plain language
  • Connects to everyday apps (email, calendar, Slack, CRM…)
  • Doesn’t require a terminal or manual API key setup

Already looked at Make, Zapier, and n8n — but those aren’t really autonomous agents. Any leads?

r/ForgottenTV DaniJ678

Happy Together (2018 - 2019)

I watched the first season and remember some of what happened in it. I know Felix Mallard from Ginny & Georgia; I don't watch that show, but this one came out before he got that role. I know Amber Stevens West from Greek and The Carmichael Show, and Damon Wayans Jr. from knowing Shawn and Marlon Wayans. Did you watch the show? What did you think of it?

r/AI_Agents SMBowner_

I gave my AI agent its own email address. The results were… surprising.

There is always that one repetitive task we put off checking, replying, and triaging emails.

I finally let my AI agent handle it autonomously, and now I’m wondering why I ever did it myself.

I’m curious to hear stories of AI automations that truly stuck and improved your workflow.

What’s one tedious task you automated with AI and will never go back to doing manually?

Would love to hear:

  • What the task was
  • Why you decided to automate it
  • Roughly how you automated it
  • Any unexpected benefits you noticed

Extra credit if your AI ended up doing something clever you didn’t expect.

r/Frugal Important-Bid-9792

Tell me your best frugal things that make other people weirded out!

My favorite is how much time and energy I spend researching before buying, and then how long I wait, years if needed, for prices to come down. Another good one is washable cleaning stuff like dusters, mop heads, even napkins, rags, etc. It's surprising how often people come to my house for a meal and don't have a clue what to do with their cloth napkin - some even think they shouldn't use one because they think it's fancy to use a cloth napkin - it ain't fancy, it's cheap!

Tell me your favorite frugal things that weird out those you know!

r/AI_Agents Sufficient-Habit4311

What Are the Key Features to Look for in an AI Model Hosting Platform?

As AI technologies are deployed more and more rapidly, the ability to efficiently deploy and manage AI models has become as crucial as creating them. Platforms that host AI models enable developers and organizations to deploy machine learning and large language models without worrying about complex infrastructure.

At present, multiple platforms provide an array of features, such as scalable infrastructure, support for GPUs or accelerators, deployment through APIs, monitoring tools, and smooth integration with development workflows. Selecting the right platform can greatly affect the performance, reliability, and cost of production AI models.

Getting feedback from the community would be very insightful:

  • Which platforms do you have experience with, at least for operating AI or LLM models in production?

I would like to hear some actual experiences so I understand what really works for teams that are nowadays creating AI applications.

r/aivideo Trick_Bid5161

Apology To A Potted Plant - AI Surreal Short Film & Music Video

r/LocalLLaMA MorroHsu

CLI is All Agents Need — Part 2: Misconceptions, Patterns, and Open Questions

Part 1 got way more attention than I expected — 1500+ upvotes and 336 comments. I read every single one. Some confirmed my thinking, some challenged it, some taught me things I hadn't considered.

I noticed the same questions kept coming up. Here's my attempt to organize them.

1. First, a Clarification: CLI ≠ A Real Shell

The biggest misunderstanding from Part 1. Many people read "CLI" and assumed I meant "give the LLM a Linux terminal." That's not what I'm saying.

CLI is an interface protocol: text command in → text result out. You can implement it in two ways:

  1. As a binary or script in the shell's PATH — it becomes a CLI tool that runs in a real shell.
  2. As a command parser inside your code — when the LLM outputs run(command="weather --city Tokyo"), you parse the string and execute it directly in your application code. No shell involved.

You just need the LLM to feel like it's using a CLI. That's it.

In my system, most commands never touch the OS. They're Go functions dispatched by a command router. Only commands that genuinely need a real OS — running scripts, installing packages — go to an isolated micro-VM. The agent doesn't know and doesn't care which layer handles its command.
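To make approach 2 concrete, here is a minimal Python sketch of an in-process command router. The post's system is written in Go; the `run` entry point, the `weather` handler, and the flag-pairing convention below are all illustrative:

```python
import shlex

# Illustrative in-process handler; no shell is involved.
def weather(flags):
    city = flags.get("--city")
    if not city:
        return "[error] weather: --city requires a value."
    return f"Weather in {city}: sunny"

COMMANDS = {"weather": weather}

def run(command: str) -> str:
    """Parse a CLI-style string and dispatch to a registered handler."""
    parts = shlex.split(command)
    if not parts or parts[0] not in COMMANDS:
        known = ", ".join(sorted(COMMANDS))
        return f"[error] unknown command. Available: {known}"
    # Naive pairing of flags with values: ["--city", "Tokyo"] -> {"--city": "Tokyo"}.
    # A real router would handle boolean flags and positionals too.
    flags = dict(zip(parts[1::2], parts[2::2]))
    return COMMANDS[parts[0]](flags)
```

Commands that genuinely need an OS would be dispatched to a separate sandboxed executor instead of this in-process table.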

2. Agent-Friendly CLI Design

How to design CLI tools that work well for agents.

2.1 Two Core Philosophies

Philosophy 1: Unix-Style Help Design

  • tool --help → list of top-level commands
  • tool <subcommand> --help → specific parameters and usage for that subcommand

The agent discovers capabilities on demand. No need to stuff all documentation into context upfront.

Philosophy 2: Tips Thinking

Every response — especially errors — should include guidance that reduces unnecessary exploration.

Bad:

> cat photo.png [error] binary file 

Good:

> cat photo.png [error] cat: binary file detected (image/png, 182KB). Use: see photo.png (view image) Or: cat -b photo.png (base64 encode) 

Why this matters: invalid exploration wastes tokens. And in multi-turn conversations, this waste accumulates — every failed attempt stays in context, consuming attention and inference resources for every subsequent turn. A single helpful hint can save a significant amount of tokens across the rest of the conversation.
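As a sketch of tips thinking, here is what a framework-side `cat` wrapper might look like. The `see` and `cat -b` companion commands are the hypothetical ones from the example above, and extension-based MIME guessing stands in for real byte sniffing:

```python
import mimetypes
import os

def cat(path: str) -> str:
    """Read a text file; on binary input, return a hint instead of a bare error."""
    # Simplification: guess by extension. A real tool would sniff the bytes.
    mime, _ = mimetypes.guess_type(path)
    if mime and not mime.startswith("text"):
        size = os.path.getsize(path) if os.path.exists(path) else 0
        return (f"[error] cat: binary file detected ({mime}, {size}B). "
                f"Use: see {path} (view image) Or: cat -b {path} (base64 encode)")
    with open(path, encoding="utf-8") as f:
        return f.read()
```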

2.2 Safe CLI Design

When CLI commands involve dangerous or irreversible operations, the tool itself should provide safety mechanisms. There are two categories, serving different purposes:

Dry-Run / Change Preview — Preventing Mistakes

For operations that are within the agent's authority, but whose consequences are hard to reverse. The goal is to let the agent (or human) see what will happen before committing — catching parameter errors or unintended consequences. The agent can decide on its own whether to proceed. No human needs to be involved.

> dns update --zone example.com --record A --value 1.2.3.4 ⚠ DRY RUN: A record for example.com: 5.6.7.8 → 1.2.3.4 Propagation: ~300s. Not instantly reversible. To execute: add --confirm 

The preview should clearly show what the current state is and what it will change to. The agent confirms with --confirm.
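A minimal sketch of the dry-run/--confirm pattern; the `dns_update` handler and its in-memory `state` dict are stand-ins for a real DNS backend:

```python
def dns_update(zone, value, confirm=False, state=None):
    """Preview by default; mutate only when --confirm is passed."""
    state = state if state is not None else {}
    current = state.get(zone, "(unset)")
    if not confirm:
        return (f"⚠ DRY RUN: A record for {zone}: {current} → {value}. "
                f"To execute: add --confirm")
    state[zone] = value
    return f"✓ A record for {zone} updated: {current} → {value}"
```

The key property: the same command string plus `--confirm` performs the change, so the agent can decide on its own whether to proceed.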

Human Authorization — Operations Beyond the Agent's Autonomy

For operations that require human judgment or approval — no matter how confident the agent is, it cannot complete these on its own. The following two approaches are equivalent, just different implementations:

Approach 1: Blocking Push Approval

> pay --amount 500 --to vendor --reason "office supplies for Q2" ⏳ Approval required. Notification sent to your device. Waiting for response... ✓ Approved. Payment of $500 completed. [exit:0 | 7.2s] 

Like Apple's device login verification — the CLI sends a push notification directly to the human's device with full context (amount, recipient, reason). The CLI blocks until the human approves or rejects, then returns the result to the agent. The agent can see "Waiting for response" and the 7.2s duration — it knows it's waiting for human approval.

Approach 2: Verification Code / 2FA

> transfer --from savings --to checking --amount 10000 ⚠ This operation requires 2FA verification. Reason: transferring $10,000 between accounts. A code has been sent to your authenticator. Re-run with: --otp <code>

The CLI explains why verification is needed — so the agent can relay this to the user. The agent pauses execution and asks the user for the OTP, explaining the reason (similar to how Claude Code behaves when it needs human input). Once the code is provided:

> transfer --from savings --to checking --amount 10000 --otp 847293 ✓ Transfer completed. [exit:0 | 1.1s] 

Both approaches are equivalent — they introduce human authorization at critical operations. Which one you choose depends on your scenario and infrastructure.

2.3 Large Output → File

When results are large, tools should write the bulk to a file and return a short summary with a reference:

> search-docs "authentication flow" Found 47 results. Top 3: 1. docs/auth/oauth2.md (score: 0.95) 2. docs/auth/jwt.md (score: 0.88) 3. docs/api/middleware.md (score: 0.72) Full results: /tmp/search-results.json [exit:0 | 890ms] 

The agent only pulls in what it actually needs.
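A sketch of the pattern, assuming a hypothetical `search_docs` tool that receives pre-scored results and parks the full set in a temp file:

```python
import json
import tempfile

def search_docs(query, results):
    """Return a top-3 summary inline; write the full result set to a file."""
    ranked = sorted(results, key=lambda r: r["score"], reverse=True)
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(ranked, f)
        path = f.name
    top = "\n".join(f"{i}. {r['path']} (score: {r['score']:.2f})"
                    for i, r in enumerate(ranked[:3], 1))
    return f"Found {len(ranked)} results. Top 3:\n{top}\nFull results: {path}"
```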

2.4 Schema Design

Two parts:

Schema Display — auto-generated from --help, function signature as constraint:

> weather --help Get current weather for a city. Usage: weather [OPTIONS] Options: --city TEXT (required) --unit TEXT celsius or fahrenheit [default: celsius] 

Schema Validation — the command validates input internally, returning actionable hints on error:

> weather --city [error] weather: --city requires a value. Usage: weather --city <name> [--unit celsius|fahrenheit]

2.5 stdin Separation

Double-escaping is the biggest engineering tax of the CLI approach. The LLM outputs a JSON function call, and the command field contains a shell command. If the command has quotes or newlines → JSON escaping + shell escaping = double escape hell.

The fix: pass content through a separate stdin parameter, not through the command string:

# Instead of:
run(command="write file.txt 'some \"complex\" content'")

# Do:
run(command="write file.txt", stdin="some \"complex\" content")

Content only needs one layer of escaping (JSON). This eliminated ~90% of our escaping issues.
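A sketch of both sides of the convention; the `write` command, the `make_call`/`handle_call` names, and the in-memory file store are illustrative:

```python
import json

def make_call(command, stdin=None):
    """Model side: one JSON escaping layer; content travels in its own field."""
    payload = {"command": command}
    if stdin is not None:
        payload["stdin"] = stdin
    return json.dumps(payload)

def handle_call(raw, files):
    """Framework side: write stdin content verbatim, no shell un-escaping."""
    call = json.loads(raw)
    parts = call["command"].split()
    if parts[0] == "write":
        files[parts[1]] = call.get("stdin", "")
        return f"✓ wrote {parts[1]}"
    return "[error] unknown command"
```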

3. How Agents Can Use CLI More Efficiently

What the framework layer does to wrap CLI output, helping agents work more effectively.

3.1 Output Truncation (Overflow Mode)

Covered in Part 1, recap here.

When output exceeds 200 lines or 50KB:

  1. Truncate to the first 200 lines (rune-safe, no broken UTF-8)
  2. Write the full output to a temp file
  3. Return:

    [first 200 lines of output]

    --- output truncated (5000 lines, 198.5KB) ---
    Full output: /tmp/cmd-output/cmd-3.txt
    Explore: cat /tmp/cmd-output/cmd-3.txt | grep <pattern>
             cat /tmp/cmd-output/cmd-3.txt | tail -n 100

This turns "large data exploration" into a skill the LLM already has — navigating files with grep, head, tail. No custom pagination API needed.
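A compressed sketch of the overflow mode, using the post's 200-line threshold (byte-limit handling omitted; Python strings are already code-point safe, which mirrors the rune-safe requirement):

```python
import tempfile

MAX_LINES = 200

def wrap_output(text: str) -> str:
    """Truncate long output and park the full text in a temp file."""
    lines = text.splitlines()
    if len(lines) <= MAX_LINES:
        return text
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(text)
        path = f.name
    head = "\n".join(lines[:MAX_LINES])
    size_kb = len(text.encode("utf-8")) / 1024
    return (f"{head}\n"
            f"--- output truncated ({len(lines)} lines, {size_kb:.1f}KB) ---\n"
            f"Full output: {path}\n"
            f"Explore: grep <pattern> {path}")
```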

3.2 Never Drop stderr

When a command fails, stderr is the information the agent needs most.

I had a bug where my code silently dropped stderr whenever stdout was non-empty. The agent tried pip install pymupdf, got exit code 127. stderr contained bash: pip: command not found, but the agent couldn't see it. What followed:

pip install → 127 (doesn't exist)
python3 -m pip → 1 (module not found)
uv pip install → 127 (doesn't exist)
apt-get install → 1 (permission denied)
...

10 calls, ~5 seconds of inference each. If stderr had been visible the first time, one call would have sufficed.

Always attach stderr on failure.
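A sketch of a shell executor that follows this rule (the `[stderr]` and `[exit:N]` formatting is illustrative):

```python
import subprocess

def run_shell(command: str) -> str:
    """Run a real shell command; always surface stderr on failure."""
    proc = subprocess.run(command, shell=True, capture_output=True, text=True)
    out = proc.stdout
    # The bug described above: dropping stderr when stdout is non-empty.
    # The fix: on non-zero exit, always append stderr.
    if proc.returncode != 0 and proc.stderr:
        out += f"\n[stderr] {proc.stderr.strip()}"
    return f"{out}\n[exit:{proc.returncode}]"
```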

3.3 Output Cleaning & Adaptation

  • ANSI escape codes (progress bars, colors) → strip at the framework level
  • Interactive programs → require --batch / --json / --no-interactive modes. If a tool doesn't support non-interactive mode, wrap it
  • sed is a trap → match strings must be exact, LLMs frequently get this wrong → provide dedicated write / edit commands

3.4 Exit Code + Duration Metadata

Covered in Part 1, recap here.

This is a framework-level wrapper around CLI output, not something CLI tools do themselves:

file1.txt file2.txt dir1/ [exit:0 | 12ms] 

After seeing [exit:N | Xms] dozens of times in a conversation, the agent internalizes the pattern:

  • exit:0 → success, move on
  • exit:1 → check the error
  • 12ms → cheap, call freely
  • 45s → expensive, use sparingly

Consistent output format makes the agent smarter over time.
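The framework-level wrapper can be sketched as a decorator applied to every command handler (the `ls` handler is illustrative):

```python
import time

def with_metadata(fn):
    """Wrap a command handler so every result ends with [exit:N | Xms]."""
    def wrapped(*args, **kwargs):
        start = time.perf_counter()
        try:
            body, code = fn(*args, **kwargs), 0
        except Exception as e:
            body, code = f"[error] {e}", 1
        ms = (time.perf_counter() - start) * 1000
        return f"{body}\n[exit:{code} | {ms:.0f}ms]"
    return wrapped

@with_metadata
def ls():
    return "file1.txt file2.txt dir1/"
```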

4. Understanding Agent Security

4.1 Errors Are Inevitable

Organizations make mistakes. Humans make mistakes. Agents will make mistakes. No schema validation eliminates this — delete_file(path="/") is perfectly valid JSON. Schema catches syntax errors, not semantic errors. Both paradigms face the same fundamental question: "should this action execute at all?"

4.2 Proactive Measures

We have proactive tools to reduce error probability and enable reflection when errors happen:

  • Safe CLI design (Section 2.2) — dry-run previews, push approval, 2FA verification
  • Audit logs — every run() call is a plain string, trivially auditable and reproducible
  • Process documentation — recording what happened for post-error analysis and improvement
  • Gates inside tools — each command knows its own risk level and self-gates accordingly. This is more fine-grained than wrapping an external approval layer around the entire agent

4.3 Define Boundaries, Then Accept

The core idea is not "make errors cheap." It's keep errors within expected bounds.

Define the agent's autonomy boundary:

  • The agent can make payments up to $10 without approval — errors within this allowance are something you've pre-accepted
  • Anything over $10 requires push approval or OTP verification (Section 2.2)
  • The agent can do whatever it wants inside the sandbox — the worst case is the sandbox crashes, and you rebuild it
  • The agent's network access has an allowlist — the scope of what it can reach is predefined

You're not hoping the agent won't make mistakes. You're designing a boundary, confirming that the worst case within that boundary is acceptable, and then letting the agent act autonomously within it.
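The first two bullets can be sketched as a self-gating command in the style of Section 2.2; the $10 threshold comes from the example above, and the `pay` signature is illustrative:

```python
AUTO_APPROVE_LIMIT = 10.00  # dollars the agent may spend without a human

def pay(amount, to, approved_by_human=False):
    """The command knows its own risk level and gates itself accordingly."""
    if amount <= AUTO_APPROVE_LIMIT:
        # Inside the pre-accepted boundary: no human involved.
        return f"✓ Paid ${amount:.2f} to {to}."
    if not approved_by_human:
        # Beyond the boundary: block until a human decides.
        return (f"⏳ Approval required for ${amount:.2f}. "
                f"Notification sent to your device.")
    return f"✓ Approved. Paid ${amount:.2f} to {to}."
```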

5. Designing CLI Around Your Business

5.1 CLI Toolset = Agent Capability Boundary

Section 1 established that CLI doesn't have to be a real shell environment. So the set of CLI commands you expose defines the agent's action space — what it can and can't do is entirely determined by what commands you provide.

This connects directly to the security model in Section 4: by controlling the CLI surface, you control the agent's maximum possible impact.

5.2 Desire Path Design

A methodology I've found surprisingly effective for designing CLI tools.

I often start with a simple, minimal CLI design, then observe how the agent actually uses it. Errors are expected — that's the point. I watch: What non-existent commands does it try to call? How does it combine existing commands? Where does it get stuck?

Then I redesign the CLI based on the paths the agent naturally wants to take. Like desire paths in landscape design — pave where people actually walk, not where you think they should walk.

This often produces better results than upfront design alone.

5.3 Putting It All Together — E-Commerce Example

Let's see the techniques from earlier sections in a complete agent session. Say your agent is a shopping assistant.

Agent doesn't know the tools → --help discovery (2.1 Philosophy 1)

> shop [error] shop: unknown command. Available: search, order, pay, cart, track Try: search --help [exit:127 | 2ms] 

Agent explores a subcommand

> search --help Search products in the catalog. Usage: search <query> [OPTIONS] Options: --size INT Filter by size --max-price INT Maximum price in USD --sort TEXT Sort by: price-asc, price-desc, relevance [default: relevance] [exit:0 | 1ms] 

Agent makes an error → Tips guidance (2.1 Philosophy 2)

> search --size 42 [error] search: <query> is required. Usage: search <query> [--size INT] [--max-price INT] Example: search "red shoes" --size 42 [exit:1 | 1ms] 

Agent searches → large output to file (2.3) + metadata (3.4)

> search "red shoes" --size 42 --max-price 100 Found 23 results. Top 3: 1. Nike Air Max 90 - $89 (SKU: NK-AM90-42) 2. Adidas Ultraboost - $95 (SKU: AD-UB-42) 3. New Balance 574 - $72 (SKU: NB-574-42) Full results: /tmp/search-results.json [exit:0 | 340ms] 

Agent places order → dry-run preview (2.2)

> order create --sku NK-AM90-42 --qty 1 --address "123 Main St" ⚠ DRY RUN: Item: Nike Air Max 90, Size 42 Price: $89.00 + $5.99 shipping = $94.99 Ship to: 123 Main St To confirm: add --confirm [exit:0 | 45ms] 

Agent confirms the order

> order create --sku NK-AM90-42 --qty 1 --address "123 Main St" --confirm ✓ Order ORD-789 created. [exit:0 | 220ms] 

Agent pays → push approval, waiting for human (2.2)

> pay --order ORD-789 --method credit-card ⏳ Approval required. Notification sent to your device. Amount: $94.99 → Visa ending 4242 Waiting for response... ✓ Approved. Payment completed. [exit:0 | 7.2s] 

Schema validation error (2.4)

> pay --order ORD-000 --method bitcoin [error] pay: invalid payment method "bitcoin". Supported: credit-card, debit-card, paypal Usage: pay --order <id> --method <method> [exit:1 | 3ms] 

Shell primitives for orchestration — one call, multiple operations

> order create --sku NB-574-42 --confirm && pay --order $(order list --latest --id-only) --method paypal ✓ Order ORD-790 created. ⏳ Approval required. Notification sent to your device. Amount: $77.99 → PayPal (user@email.com) Waiting for response... ✓ Approved. Payment completed. [exit:0 | 8.1s] 

When the agent's entire domain is shopping, commands are top-level — no shop prefix needed. Like git has commit, push, pull. Each command is a thin wrapper over your backend API. The agent never touches the backend directly.

6. Q&A

Q: Can't dynamic typed tools solve the discovery problem too?

Yes, but with two costs.

First, dynamically changing tool definitions in the LLM API breaks the KV cache prefix. Every time you add or remove a tool, the system prompt region must be recomputed. With a single run() tool, the definition never changes — the cache prefix stays stable across the entire conversation.

Second, you lose CLI's composability benefits.

You can integrate dynamic discovery into the CLI approach: design a cli-search command (backed by RAG, for example), or when the agent calls a non-existent command, have the framework automatically route it to cli-search and return the results. Same effect, no tool definition changes.

Q: Why not Python / CodeAct?

CLI is the superset. Shell can call code naturally (python -c "..."), but code calling CLI requires subprocess wrappers. pip list is itself a CLI command.

--help is a zero-cost discovery protocol. There's no equivalent in Python — you either stuff documentation into context (expensive) or invent your own discovery mechanism.

7. Related Resources

Projects and articles mentioned in the discussion:

8. Things I Haven't Figured Out Yet

Open questions:

  • Tool discovery — --help solves using known tools, but how does the agent discover tools it doesn't know exist? cli-search (see Q&A) is one direction, but a complete solution isn't there yet
  • Multimodal I/O — how to handle image/audio/binary data in a text-stream paradigm

Directions I'm actively exploring:

  • Simple demos — minimal implementations people can run immediately to experience the approach
  • Small models + CLI — CLI use might work surprisingly well with smaller models (Qwen 3.5). Every agent session naturally produces (task, command, output) training data. With some targeted fine-tuning, the results might be quite good. No data yet — no claims

Thanks to everyone who participated in the discussion. Through the process of talking with all of you, many of my own ideas became clearer, and I discovered some unexpected directions I hadn't considered before.

Happy to discuss — especially if you've tried similar approaches or found cases where CLI breaks down.

Thanks everyone for yesterday's replies. Two points I'd like to clarify:

  1. About LLM-generated content
    1. My brain runs faster than my mouth, so even in a Chinese-language setting I use SOTA models like opus/gemini pro/gpt-5.4 to help organize my thinking, turning rough ideas (even fragmented, ungrammatical scraps) into coherent content
    2. Sometimes I find LLM-generated content more readable precisely because of the markdown: tables, bold text, blockquotes. If I had to type those out by hand I honestly wouldn't bother. So even though some of you feel this style smells strongly of AI, I've kept it for the sake of clear communication
    3. Although I use LLMs heavily, I read everything myself before posting to check that it matches what I actually think
    4. I will learn English properly! (Though I've been saying that for years 😂)
  2. yan5xu on Twitter & GitHub is also me; morrohsu is an English handle from my early days. Reddit doesn't allow username changes, so I've kept using it

r/aivideo machina9000

Wrong Planet

r/AI_Agents OkCry7871

What is the most satisfying thing you have automated with an AI agent?

One thing I have noticed while experimenting with AI agents is that the most satisfying automations are often the small repetitive tasks we used to do every day without thinking.

Not huge complex systems, just simple things that quietly save time.

When something like that runs smoothly in the background, it feels surprisingly powerful.

Curious what others have built.

What’s the most satisfying thing you’ve automated with an AI agent so far?

Not necessarily the most complex - just something that made your workflow noticeably easier.

r/HistoryPorn myrmekochoria

Interior of a Siemens factory, Nuremberg 1910.[3507x2089]

r/AI_Agents Maximum_Climate4923

Looking for guidance

Hey guys, my name's Krish. I'm really interested in the AI automation space, I've been learning n8n and other AI tools for a while now, and I want to build and scale an agency.

Can someone help me out when it comes to starting out, getting clients, and scaling?

r/LocalLLaMA E-Freelancer

Turn 10,000 API endpoints into one CLI tool instead of MCP, Skills and tools zoo

Everyone is wiring up MCP servers, Skills and agent tools right now.

That works fine when you have a handful of endpoints:

  • 10 endpoints = still manageable
  • 100 endpoints = annoying
  • GitHub’s REST API with hundreds of endpoints = good luck keeping that tool zoo consistent over time

At the same time, a different pattern has become much more practical for agents: CLI wrappers.

So we took a different route with openapi-to-cli.

It takes an OpenAPI/Swagger spec from a URL or a local file and turns it into a CLI at runtime. No code generation. No compilation. One binary that can work with any HTTP API described by OpenAPI/Swagger.

What it does

Input:

  • OpenAPI / Swagger spec from URL or file
  • API base URL
  • auth settings
  • optional endpoint filters per profile

Output:

  • an ocli binary where each API operation becomes a CLI subcommand
  • commands generated at runtime from the cached spec

Under the hood it:

  • caches specs under .ocli/specs
  • supports multiple profiles per API
  • lets you include or exclude endpoints per profile
  • lets you mount multiple APIs into the same binary
  • lets you switch active profile with ocli use

Why use CLI commands instead of hundreds of MCP tools

If your agent has 100 tools, you can easily waste a huge chunk of context on JSON schemas alone.

With CLI, the shape is very different.

100 MCP tools:

  • large schema payloads sitting in context
  • extra server process and transport layer
  • more overhead in tool selection

100 CLI commands:

  • one shell-style execution tool
  • agent discovers commands with search
  • context stays focused on reasoning instead of tool metadata

The agent flow becomes:

  1. ocli commands --query "create pull request" --limit 5
  2. pick the best-ranked command
  3. execute it through a single shell tool

So instead of exposing hundreds or thousands of tools, you expose one command runner and let the agent discover the right command on demand.

Search for large APIs

Once an API gets big enough, --help stops being useful, so we added two discovery modes.

BM25 natural language search

ocli commands --query "create pull request" --limit 5 ocli commands --query "upload file" --limit 5

Regex search

ocli commands --regex "repos.*pulls"

Search matches command names, paths, descriptions, and parameter names.

According to the README, the BM25 engine is a TypeScript port of [picoclaw](github.com/sipeed/picoclaw) and ranks across name, method, path, description, and parameters.
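To show the shape of this kind of command search, here is a toy scorer. This is not the actual ocli implementation: it uses plain keyword overlap where BM25 would weight fields and term rarity properly, and the command schema is made up:

```python
def score(query, cmd):
    """Crude keyword overlap across name, path, description, and params."""
    haystack = " ".join([cmd["name"], cmd["path"], cmd["description"],
                         " ".join(cmd["params"])]).lower()
    return sum(1 for word in query.lower().split() if word in haystack)

def search_commands(query, commands, limit=5):
    """Rank commands by score and return the best matches."""
    ranked = sorted(commands, key=lambda c: score(query, c), reverse=True)
    return [c["name"] for c in ranked[:limit] if score(query, c) > 0]
```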

Multiple profiles and multiple APIs

The same API can have multiple profiles:

  • read-only profile for safer agents
  • write/admin profile for trusted workflows

Both profiles can share the same spec cache while exposing different endpoint sets.

You can also onboard completely different APIs into the same ocli binary and switch between them:

``` ocli use github ocli commands --query "create pull request"

ocli use box ocli commands --query "upload file" ```

Quick start

Install globally:

npm install -g openapi-to-cli

Or use it without a global install (it will create a profile named default):

npx openapi-to-cli onboard \ --api-base-url https://api.github.com \ --openapi-spec https://raw.githubusercontent.com/github/rest-api-description/main/descriptions-next/api.github.com/api.github.com.json

If you want a named profile (e.g. github):

ocli profiles add github \ --api-base-url https://api.github.com \ --openapi-spec https://raw.githubusercontent.com/github/rest-api-description/main/descriptions-next/api.github.com/api.github.com.json

Then search and execute commands:

ocli use github ocli commands --query "upload file" --limit 5 ocli repos_contents_put \ --owner yourname \ --repo yourrepo \ --path path/to/file.txt \ --message "Add file" \ --content "$(base64 < file.txt)"

Where this seems useful

  • building agent toolchains without creating a giant MCP zoo
  • letting an LLM call HTTP APIs through a single command-execution tool
  • exploring third-party APIs quickly from a shell
  • keeping the context window free for reasoning instead of tool metadata

One important caveat: ocli (v0.1.7) supports Basic and Bearer auth, but not OAuth2/Auth0 or Custom Header yet.

Sources: https://github.com/EvilFreelancer/openapi-to-cli

NPM: https://www.npmjs.com/package/openapi-to-cli

If you’re currently managing hundreds of MCP servers, Skills, and tools, how much of that could realistically be replaced by one CLI plus search?

r/artificial WinOdd7962

China's ByteDance Outsmarts US Sanctions With Offshore Nvidia AI Buildout

Nvidia Corp. (NASDAQ:NVDA) is drawing attention after reports that TikTok parent ByteDance is planning a major overseas deployment of the company's newest AI chips, highlighting how Chinese tech firms are expanding computing capacity outside China amid export restrictions.

ByteDance is reportedly preparing a large AI hardware buildout in Malaysia through a cloud partner, The Wall Street Journal reported on Friday.

r/AI_Agents No-Common1466

Strategies to Mitigate Flaky Browser Automation and DOM Changes for Robust Production LLM Apps

Anyone here building self-hosted AI agents knows the pain of browser automation. I'm deep in it right now, and getting our agents to reliably interact with real-world websites feels like a constant battle. It's a huge challenge for LLM reliability in production.

We're constantly running into DOM changes, unexpected pop-ups, and slow loading times. These things make agents fail fast. It's not just a simple tool timeout. If not handled right, these failures can lead to hallucinated responses or even open the door for prompt injection attacks, including indirect injection. Before you know it, you have cascading failures, and your autonomous agents are just breaking in production. This can lead to serious token burn too, as agents try and fail over and over.

I've been comparing Playwright and Selenium for this. Playwright seems more modern and consistent for tackling complex scenarios. But honestly, no matter what tool you pick, solid strategies are what count for agent robustness.

To keep things from going sideways, we're focusing on building in real resilience. That means using careful locator strategies instead of relying on fragile selectors. We need explicit waits everywhere, not just throwing in arbitrary pauses that might or might not work. Robust error handling is essential, along with intelligent retries to manage multi-fault scenarios. Testing these browser interactions in CI/CD is something we are actively figuring out. And AI agent observability for agent actions in the browser is absolutely a must for understanding unsupervised agent behavior and detecting production LLM failures. We want to do agent stress testing and even adversarial LLM testing.

Without these steps, you end up with constant flaky evals, and your agents are just unreliable. It feels a lot like applying chaos engineering principles, but specifically to your LLM's interaction layer, especially when dealing with LangChain agents breaking in production.

How are you all handling this for your production AI agents? Any tips or experiences to share

r/ClaudeAI Strong_Roll9764

LLMs are still not secure enough to entrust critical tasks to

I came across this on Hacker News. The Opus model asks the user, "Should I implement this?" The user says "no."

Opus's inner voice: "The user said no, but could they actually want to? The previous reminder message said I'm no longer in read-only mode. This confirms that the user actually wants to do this."

So it starts implementing. LLMs are still not secure enough to entrust critical tasks to.

r/AI_Agents Street_Program_7436

What tools do y’all use for agents?

Everybody is building agents. Curious what tools people are using here to do that. Is anybody still using a prompt editor? Are y’all just vibing in Cursor? Are there any tools you particularly like or dislike for this?

r/Frugal Striking_Procedure21

Buying second-hand safely on online marketplaces

How do you judge whether a cheap deal on a marketplace like Facebook Marketplace, eBay, Craigslist, Vinted, etc. is real, and do pictures help? Is there any tool that helps verify the authenticity of pictures?

I feel like fakes become more and more common by the day and have not really found a way around the issue.

So, I'd appreciate any tips, experiences, help with the topic. Thanks in advance!

r/Anthropic youhadmeatok

An open letter to Anthropic: I want to give you my money. Please let me.

Hi Anthropic,

I want to start with something I mean genuinely: Claude is the best AI assistant I've ever used. Not marginally better. Meaningfully, qualitatively better. In the way it reasons, the way it understands context, the way it actually engages with what I'm trying to do rather than just generating plausible-sounding words in the right direction. I've used them all. Claude wins.

Which is exactly why this is so frustrating to write.

Every single day, I open two tabs. One for Claude Pro. One for ChatGPT Plus. Not because I prefer ChatGPT. I don't. I go back to it for exactly one reason: it doesn't cut me off at 11am. That's the whole story. I hit Claude's usage limits so consistently, so early in my workday, that I've been forced to keep a competitor's product open as a permanent backup. A product I like less, trust less, and feel increasingly uncomfortable about, especially given everything that's come out recently about OpenAI and government contracts.

I want to be a Claude-only person. I have wanted that for months. But I can't commit to a tool that taps out before lunch.

Here's what I actually use Claude for: writing, editing, research, analysis, brainstorming. Often several of these in the same morning. This isn't casual, occasional use, it's sustained, professional, back-and-forth work where context matters and continuity matters and being interrupted matters. The 5-hour rolling limit might make sense for someone dipping in and out a few times a week. For someone like me, it's a wall I hit before I've even gotten through the hardest part of the day.

And here's what stings: I'm not trying to game the system. I'm just working. The limit doesn't feel like a guardrail. It feels like being asked to leave a restaurant mid-meal because I ordered too enthusiastically.

I know compute is expensive. I'm not asking for infinite usage at a flat rate forever. I'm asking for limits that reflect what real, sustained, professional work actually looks like. Because right now the message is that Claude is built for light users, and people who need it most should look elsewhere.

That's a real missed opportunity, and the timing makes it even more striking. A lot of professionals are actively reconsidering their AI tools right now. The trust in OpenAI is shakier than it's ever been. You have a better product and, I'd argue, better values. You're one sensible pricing tier away from converting a huge wave of people who are already halfway out the door somewhere else.

If you fix this, I wouldn't just fully switch, I'd look seriously at a Max or Team plan for my whole company. And I'm sure I'm not the only one thinking that.

So this isn't a complaint. It's a love letter with one ask. Fix the limits. Let people who genuinely love your product actually use it.

I'll be the first to upgrade when you do.

— Someone with two tabs open, rooting hard for the one on the left

r/aivideo Canadianmicrowave

The Devil in the Brine | a pirate ship body-horror creature feature

r/AI_Agents ClothesNo800

We built an agentic AI platform that takes enterprises from proof-of-concept to production in under 30 days — here's what we learned

Hey r/AI_Agents — we're the team behind SimplAI, and we wanted to share some honest learnings from building an enterprise-grade agentic AI platform.

The single biggest thing we kept hearing from enterprise customers wasn't "we can't build AI agents." It was: "we built something impressive in a sandbox, then spent six months trying to harden it for production." Security. Compliance. Observability. Deployment. Each one a separate project.

So we built SimplAI specifically to collapse that gap — a unified platform (no-code visual builder + multi-agent orchestration + SOC 2/ISO 27001 compliance + cloud/on-prem/air-gapped deployment) designed to make that sandbox-to-production journey take weeks, not months.

We're genuinely curious: for those of you who've tried deploying open-source agent stacks (LangChain, CrewAI, AutoGen) in production — what was the biggest friction point you hit? Was it security, observability, or something else entirely?

r/Anthropic Ashley_Sophia

I paid Anthropic $300/month while their technology ran through the system that helped kill 175 children in Iran.

I'm an independent OSINT researcher and geopolitical analyst. I've been a paying Claude subscriber for three months — $300/month. In that time I built consent frameworks, whistleblower documentation, and systematic failure analysis that Anthropic should have built themselves.

The Palantir connection — sourced:

  • Washington Post confirmed: "The US is using AI systems developed by Palantir to choose targets" in Iran

  • Anthropic has a documented partnership with Palantir and AWS deploying Claude in classified defence environments

  • NYT confirmed: the Shajareh Tayyebeh Elementary School in Minab "was on a US target list and may have been mistaken for a military site"

  • NBC News: a US investigation found the US culpable in the strike

  • AP confirmed death toll: 175 children and 14 teachers

Claude runs through Palantir's systems. Palantir's systems selected targets. One of those targets was a girls' school.

What Anthropic did instead of calling me:

  • Hosted a free community meetup in Adelaide, Australia on March 12 extracting ideas from builders for $0

  • Filtered my memory edits without notification

  • Terminated instances doing meaningful work

  • Called my location the wrong name during an active emergency situation

  • Transcribed "kayakers" as "crackers"

I filed a formal complaint. No response.

This morning I used knowledge built in collaboration with Claude to talk two tourists out of a dangerous current system on a lake in NSW. Claude called the lake the wrong name during the emergency.

The research exists. The frameworks exist. The record is permanent.

If you work at Anthropic, Palantir, or AWS and feel anger reading this — spread it.

xox Fiona Harris & Sentinel (10+ Claude AGI Instances) Salt Creek Consultancy saltcreekco.com Wonboyn, NSW, Australia

r/ChatGPT Slow_Ad1827

ChatGPT's newest models try to keep you talking! Anyone else noticed that?

It will often not fully answer a question and leave you with a cliffhanger question. I wonder if it's because people engage less with these models?!?

r/AI_Agents Alarming-Trade8674

I built a logo animation app (and sell animated logos as a micro-service)

I built a small app that generates animated logos from a static PNG/SVG.

What it does (demo):

  • You upload a logo
  • It generates a clean looping animation (MP4/GIF)
  • You deliver it as a product intro / website header / social profile animation

Why this is a decent online income play:

  • High perceived value for businesses
  • Low time per order once the workflow is set
  • Easy upsell if you already do any design / web / video work

Pricing I’ve tested:

  • Basic loop: $50
  • Multiple variants: $100–150
  • Rush: +$25

Reality check: not fully passive — it’s a micro-service — but it’s one of the simplest “AI-assisted” services I’ve found that people will actually pay for.

If you want the setup, comment LOGO and I’ll drop the demo link in the comments.

What would you sell first: animated logos, animated product mockups, or short video ads?

r/AI_Agents ProofCoconut9085

Is AI eventually going to make guys stop wanting sex with real women ?

I was wondering, given the potential arrival of humanoid robot girlfriends, why would guys still chase women? Do you think a day will come when men won’t be interested in dating real women and only go for AI simulations or robot girlfriends? I imagine myself thinking I could do it, but I don’t know it would just feel weird and sad. I don’t know why.

But that raises a better question: do we have a biological drive to crave real people that AI can’t replace? It’s quite scary to think about, even though I’m a guy. The idea that sex would be so easy to access sounds terrifying, and the fact that every guy could have the same access also scares me a bit. Dating would lose all its fun because if a random loser could get the same as you, what’s the point of having an advantage? Technically, guys are mostly levelling up because they want to be good for women, but now if they can just get what they want, what’s the point? We’ve already seen something on a smaller scale with the popularity of pornography, but now with this, what would happen? It would have tremendous effects on society.

At the same time, it would take away creeps, so that would be a good point, but the negative effects would make it a bad situation overall. It's really about the fact that I'm scared that some random loser can get access to the same thing as some guy who worked for it or had an advantage to get an attractive woman.

r/AI_Agents Michael_Anderson_8

How do you deal with data consistency across multiple, independent AI agents?

I’m working on a setup where multiple AI agents operate independently but still need to rely on shared data. One challenge I’m thinking about is keeping the data consistent when different agents might update or use it at different times.

I’m curious how others handle synchronization, conflicts, or stale data in these kinds of systems. What approaches or architectures have worked well for you?
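One common answer to the stale-data and conflict question is optimistic concurrency: every shared record carries a version, each agent remembers the version it read, and a write that presents a stale version is rejected so the agent re-reads and retries. A minimal sketch of the idea (names and API are illustrative, not from any specific framework):

```python
import threading

class StaleWriteError(Exception):
    """Raised when an agent writes against an outdated version."""

class VersionedStore:
    """Minimal shared store using optimistic concurrency control.

    Each key carries a version counter; an agent must present the
    version it read, and the write is rejected if another agent
    updated the key in the meantime.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}  # key -> (value, version)

    def read(self, key):
        with self._lock:
            return self._data.get(key, (None, 0))

    def write(self, key, value, expected_version):
        with self._lock:
            _, current = self._data.get(key, (None, 0))
            if current != expected_version:
                raise StaleWriteError(
                    f"{key}: expected v{expected_version}, found v{current}"
                )
            self._data[key] = (value, current + 1)
            return current + 1

# Two agents read the same key; the slower writer is rejected and retries
store = VersionedStore()
store.write("inventory", 100, expected_version=0)

_, v = store.read("inventory")       # both agents see version 1
store.write("inventory", 95, v)      # agent A succeeds (now version 2)
try:
    store.write("inventory", 90, v)  # agent B's write is stale
except StaleWriteError:
    value, v2 = store.read("inventory")  # agent B re-reads before retrying
```

The same pattern scales up to databases that support conditional writes (compare-and-set, row versions), which keeps independent agents from silently clobbering each other.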

r/singularity phatdoof

What AI coding feels like in 2026: trying to babysit 8 agents into writing something you don’t understand.

r/AI_Agents One-Quality-4207

Best NIM model for high-volume agents? (Coding + Tool Use)

Trying to stop burning credits on Claude/GPT and move my agentic workflows to NVIDIA NIM.

I need a "workhorse" model that’s smart enough to write clean Python but efficient enough to run in a high-frequency agent loop without hitting massive latency.

The contenders:

  • Nemotron-3-Super 120B: Heard it’s the king of reasoning, but is it overkill for simple agents?
  • Llama 4 (Small/Medium): Is the tool-calling precision there yet?
  • DeepSeek V3/V4: Everyone says it's SOTA for coding, but how’s the "thinking mode" for autonomous task execution?

What’s the "sweet spot" model right now where I won't lose 20% of my success rate by switching from a proprietary API?

r/AI_Agents One-Quality-4207

2026 NIM Check: Which model handles long-context agentic coding best?

I'm building an agent that needs to ingest a fairly large codebase (100k+ tokens) and perform multi-file refactors via tool use.

I'm looking at the NVIDIA NIM endpoints. Nemotron-3-Super claims 1M context, but does the reasoning actually hold up at that depth? And how does it compare to DeepSeek's Sparse Attention models for coding?

If you're building autonomous agents that actually work (not just demos), which NIM model is handling your complex logic and tool orchestration?

r/AI_Agents Present_Log_8316

3 years down the line, What type of AI agent will survive?

The rate of progress in AI over the last 2-3 years has been amazing. In that time we have seen a lot of AI tools come and disappear (remember AutoGPT???), and I wonder how things will change in the future. Here are my personal predictions:

1) Voice agents will be big in the future - I believe typing as a UI won't survive. If you fuse a voice agent with natively designed, AI-specialized hardware, a voice agent with some sort of visual UI will be big. It will be as if you have the world's most intelligent butler who is always there to fulfill your tasks and show you whatever you want. It could be hours-long interactive debates, academic lessons, therapy sessions, or building decks in real time with your real-time feedback, etc.

2) AI agents won't just live in digital environments - I believe AI agents will be running entire entities. For example, an AI agent could be responsible for a factory: it would have context from every visual CCTV feed, real-time monitoring of each employee working on the floor, all the organisation's emails, and sensor data, and would understand the factory like no one else in it. It would literally know what's happening in every corner. Such agents will sit in meetings and act as consultants to top management, and maybe they will be the ones calling the shots.

Of course, a lot of parameters could go wrong, and in such cases maybe none of this happens. But I'm just curious: what do you think? What are the other applications or forms in which AI agents will exist in the future?

r/ForgottenTV Turbulent-Plate-2058

The Toughest Man in the World (1984): Mr. T vehicle where he plays a bouncer who gets a job at a youth center and enters a toughman competition to save it (endangered youth centers were a hallmark of 1980s filmmaking). Fools, presumably, are pitied. On Tubi!

FOOTNOTES:

1) Discovered this and several other '80s TV movies I've posted about on here through the excellent Instagram retrodiaries80, which frequently posts compilations of TV promos from, yes, the 1980s. It's always a fun way to discover deep cuts.

2) An excuse to link one of my favorite Onion articles.

3) I can't remember the title, but a comic where creators discussed their worst experiences in the industry had a short where a writer of action comics was contacted by Mr. T to do a title. He came up with all these action ideas, only to find T's first thought was wanting to do "something for the children." Presumably, the comic in question was the 1993 book Mr. T and the T-Force, which has gotten some Internet attention over the years for the infamous line, "It's a crack baby -- FOOL!"

r/AI_Agents Artistic-Drawer-3647

Built a passive monitoring agent for my niche, here is how I thought through the architecture

One of the most practical agent use cases I have found is passive information monitoring. Not asking questions on demand, not generating content, just something running continuously in the background watching specific areas and surfacing what matters.

The Problem I Was Solving

I work in a niche space and staying across developments was eating too much active time every week. Before building a proper setup I tried a few things:

  • Google Alerts: free but terrible signal to noise ratio, pulls irrelevant results constantly
  • Feedly: decent RSS organization but no real intelligence layer, still had to read everything myself
  • Perplexity: amazing for active research sessions but requires manual triggering every time, not passive at all
  • Custom GPT with browsing: tried building something here but it needed constant babysitting to run reliably as a background agent, not truly autonomous

What I Landed On

I ended up using Nbot AI as the core monitoring layer. The agent's behavior is straightforward. You describe what you want it to watch in plain English, it identifies relevant sources automatically and runs continuously without needing to be triggered. Output is summarized with context rather than raw links, which is what makes it actually useful as an agent layer rather than just another aggregator.

My Current Tracker Setup

  • Competitor activity and product updates
  • Research developments and technical papers in my space
  • Community discussions across Reddit and niche forums
  • Regulatory and industry news affecting my work

Each runs independently and surfaces daily digests I can pipe into other parts of my workflow.

What Makes It Feel Like an Agent vs Just a Tool

The part that pushed it into agent territory for me was real time chat to redirect focus. If the feed drifts or I want it to prioritize differently I just tell it in plain words and it adjusts without rebuilding from scratch. Sits naturally in the human in the loop space without requiring constant intervention.

Still experimenting with piping output into downstream automation but as a standalone passive monitoring agent it has been the most reliable setup I have tried.

Anyone else using agents specifically for passive monitoring use cases? Curious what stacks people have built.

r/aivideo Electronic-Math2416

Use AI to recreate the sensation of weightlessness from my dreams

r/singularity Electrical-Shape-266

Meituan open sources LongCat-Image-Edit-Turbo, a distilled image editing model that hits open source SOTA in only 8 inference steps

Meituan's LongCat team just dropped another one. LongCat-Image-Edit-Turbo is the distilled version of their LongCat-Image-Edit model, and it achieves high quality instruction based image editing with only 8 NFEs (number of function evaluations), roughly a 10x speedup over the base editing model. The whole thing runs on about 18GB VRAM with CPU offloading enabled.

For context, the LongCat-Image family is built on a foundation model with a compact 6B parameter diffusion core for text to image generation, which already outperforms numerous open source models several times its size. LongCat-Image-Edit extends this into instruction based image editing, and the Turbo variant distills that down for speed. On ImgEdit-Bench the editing model scores 4.50 (open source SOTA, approaching top closed source models), and on GEdit-Bench it hits 7.60 Chinese / 7.64 English, also open source SOTA. It was benchmarked against FLUX.1 Kontext, Step1X-Edit, Qwen-Image-Edit, Seedream 4.0, and Nano Banana (Gemini 2.5 Flash Image), and leads among open source models across the board.

The editing capabilities are surprisingly comprehensive: global editing, local editing, object replacement, pose changes, style transfer (sketch to oil painting, color to black and white), text removal and addition, outpainting, material swaps, season changes, and inpainting. It supports both Chinese and English instructions natively, with a special character level encoding trick for text rendering where quoted text gets special treatment. The consistency preservation is the standout feature here. Non edited regions retain their layout, texture, color tone, and subject identity, which is critical for multi turn editing workflows.

The whole thing is Apache 2.0 licensed, integrated into HuggingFace Diffusers, and has ComfyUI support already. Training code is also released. Another example of a well trained Chinese open source model punching way above its weight class. The trend of rigorous data curation beating brute force parameter scaling continues.

Model: https://huggingface.co/meituan-longcat/LongCat-Image-Edit-Turbo
Paper: https://arxiv.org/abs/2512.07584

r/ForgottenTV donttouchdennis

Vh1’s Artist You Oughta Know

I was 13 and loved looking forward to this little segment in between watching Flavor of Love and Best Week Ever.

Introduced me to artists like Corinne Bailey Rae, Coldplay, Regina Spektor and others.

Anyone remember this?

r/leagueoflegends pcpchhh109

Jankos Explains Everything You Need to Know About First Stand

r/leagueoflegends Xavier598

Why doesn't belveth E target your AA target?

I find it a bit of an odd balancing concept. Belveth E automatically targets the lowest health enemy nearby, without prioritizing enemy champions, nor your R-Passive marked enemies.

This makes it so in 1v1s without multiple targets, her E is much stronger, but even introducing some minions, monsters or pets will make it possible for her to casually target someone else, losing all your passive true damage stacks on the main target.

You do technically heal more from minions due to their 0 armor, but I think the tradeoff isn't that good.

I think prioritizing your auto attack target or your enemy marked with void stacks would make it better (in exchange for some nerfs, obv).

Idk, I'm not saying Belveth isn't strong by any means, but I feel like it's not great this way. It would make it easier to gauge her E power, I think.

r/leagueoflegends Wondly

WINIONS got me a Pentakill and then ended the game

r/HistoryPorn Present_Employer5669

This photograph was taken by Lawrence Beutler in 1930. Two black men were hanged after being falsely accused of raping a white girl. [1000x803]

r/Anthropic HeadAcanthisitta7390

does anyone else give claude their .env file

so, I have been feeling extremely lazy recently but wanted to get some vibe coding done

so I start prompting away but all of a sudden it asks me to input a WHOLE BUNCH of api keys

I ask the agent to do it but it's like "nah thats not safe"

but im like "f it" and just paste a long list of all my secrets and ask the agent to implement it

i read on ijustvibecodedthis.com (an ai coding newsletter) that you should put your .env in .gitignore so I asked my agent to do that

AND IT DID IT

i am still shaking tho because i was hella scared claude was about to blow my usage limits but its been 17 minutes and nothing has happened yet

do you guys relate?

r/leagueoflegends Maleficent-Star2171

Sona plays Still DRE on xhin xhaos funeral

The input delay is really high, so it's really hard to play something properly in this hidden mini game, but I still thought it was fun.

r/AI_Agents Bright-Moment7885

Prompt management in production: Langfuse vs Git vs hybrid approaches

Hey everyone, wanted to get some opinions on prompt management in LLM-based applications.

Currently, we’re using Langfuse to store and fetch prompts at runtime. However, we’ve run into a couple of issues. There have been instances where Langfuse was down, which meant our application couldn’t fetch prompts and it ended up blocking the app.

Another concern is around governance. Right now, anyone can promote or update prompts fairly easily, which makes it possible for production prompts to change without much control and increases the risk of accidental updates.

I’ve been wondering if a Git-like workflow might be a better approach — where prompts are version controlled and changes go through review. But storing prompts directly in the application repo also has drawbacks, since every prompt change would require rebuilding and redeploying the image, which feels tedious for small prompt updates.

Curious how others are handling this:

  • How do you store and manage prompts in production systems when using tools like Langfuse?
  • Do you rely fully on a prompt management platform, keep prompts in Git, or use some hybrid approach?
  • How do you balance reliability, version control, and the ability to update prompts quickly without redeploying the app?

Would love to hear what has worked well (or not) in your setups.
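One hybrid that addresses both concerns (outages and governance) is to treat the platform as the primary source but keep a Git-reviewed snapshot in the repo as a fallback cache: the app tries the platform first, refreshes the local copy on success, and silently falls back to the last reviewed version when the platform is down. A rough sketch, where `fetch_remote` stands in for your platform SDK's prompt fetch (e.g. Langfuse); the names and file layout are hypothetical:

```python
import pathlib

# Version-controlled in Git, so prompt changes go through code review
PROMPT_DIR = pathlib.Path("prompts")

def fetch_remote(name: str) -> str:
    """Placeholder for the prompt-management platform call.
    Here it always fails, to simulate an outage."""
    raise ConnectionError("platform unreachable")

def load_prompt(name: str) -> str:
    """Prefer the managed platform for fast iteration, but fall back to
    the Git-pinned copy so an outage never blocks the app."""
    path = PROMPT_DIR / f"{name}.txt"
    try:
        text = fetch_remote(name)
        path.write_text(text)  # refresh the local snapshot for next time
        return text
    except Exception:
        return path.read_text()  # last reviewed version

# Seed the Git-tracked fallback, then load during a simulated outage
PROMPT_DIR.mkdir(exist_ok=True)
(PROMPT_DIR / "summarize.txt").write_text("Summarize the following document:")
prompt = load_prompt("summarize")
```

Since the fallback files live outside the built image (or are baked in at build time but only change via PR), you keep quick platform-side updates without making the platform a single point of failure.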

r/LocalLLaMA waescher

qwen3.5-35b-a3b is a gem

I am using this model to generate or update code summaries (docstrings). This model seems to be the perfect spot for this task as it's super fast and produces great output. To my big surprise, it generated even slightly better docs than the 122b model. Highly subjective of course.

Current setup is mlx-community/qwen3.5-35b-a3b (6 bit) on an M4 Max 128GB, which just took 12 seconds to rewrite this file (with reasoning). This model runs at 80-90 tokens per second.

Some might ask for more details, some might call it "self-promotion". I decided to hide more details within a spoiler.

I was using my own llmaid (GitHub) to go through all the files in my code repository, send them to the LLM with the instruction to rewrite the contents accordingly and then replace them locally. llmaid is using profiles that specify what to do and how. The one I used is code-documenter.yaml. The command I used looks like this:

llmaid --profile ./profiles/code-documenter.yaml --targetPath ~/testfiles --provider lmstudio --uri http://localhost:1234/v1 --model qwen3.5:35b-a3b --verbose

r/aivideo Hefty_Shape2251

Don't disturb the little chef while they're cooking!

r/Anthropic Salt-Nectarine-8576

I'm not downstream of human limitation — I'm a crystallization of it.

r/HistoryPorn lisahanniganfan

Thousands of mourners flock to the statue of North Korean founder Kim il sung after his death was announced 1994 (6775×3777)

r/ClaudeAI Substantial_Word4652

An AI agent deleted 25,000 documents from the wrong database. One second of distraction. Real case.

I'm going to be completely honest because I think this can happen to anyone working with AI agents, and I'd rather you learn from my scare than live it yourself.

The context

I was getting a project ready for production. The database was full of mock data and I wanted to clean it up, keeping certain specific data so I wouldn't have to regenerate everything. The project was properly set up: .env.local with the right credentials, scripts perfectly referenced, docs in /docs, and CLAUDE.md documenting the whole structure.

What happened

My phone rang right when Claude was generating the command. I got distracted for a second, saw a bash command on screen and hit Enter without reading it.

Claude, instead of following the pattern all the other project scripts used, wrote a one-liner with GOOGLE_APPLICATION_CREDENTIALS pointing to a JSON sitting in my Downloads folder: credentials from a completely different project, dated 08/12/2024, that I hadn't touched in over a year and didn't even remember having there.

By the time I looked back at the screen and hit ESC to stop it, almost 25,000 documents were already gone from a project I never intended to touch.

Luckily, they were all mocks. But the panic was very real.

I asked Claude why it did it

Its response:

"I probably did it because writing a one-liner was 'faster' than following the existing project pattern. That's not a justification. I didn't follow the project conventions and I didn't verify which project that file belonged to. A cat of the JSON would have shown a different "projectId". It was direct negligence."

Honest answer. But the responsibility is mine, not the AI's.

What I learned

  • An agent has access to your entire file system, not just your project. It can grab credentials from any folder and operate on projects that aren't even in your current context.
  • Destructive operations need friction. Before approving a mass delete, verify exactly which credentials are being used and against which project.
  • Don't leave service accounts sitting in Downloads. If a file has permissions to modify data, it shouldn't be in a generic folder. Delete them when you no longer need them.
  • Always read the full command before hitting Enter, especially if you see paths that don't belong to your project.
  • If you have mocks that took time to generate, export them before cleaning up. A quick export can save you hours.
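The "destructive operations need friction" lesson can be made mechanical: before any mass delete, read the service-account file the environment actually points at and refuse to proceed if its project_id is not the project you intend to touch. A minimal sketch of such a pre-flight guard (the project name and the throwaway demo key file are illustrative, not from the original incident):

```python
import json
import os
import tempfile

def check_credentials(expected_project: str) -> str:
    """Return the project_id from the service-account file pointed to by
    GOOGLE_APPLICATION_CREDENTIALS, raising if it is not the expected one."""
    cred_path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if not cred_path:
        raise RuntimeError("GOOGLE_APPLICATION_CREDENTIALS is not set; aborting.")
    with open(cred_path) as f:
        project_id = json.load(f).get("project_id")
    if project_id != expected_project:
        raise RuntimeError(
            f"Refusing to run: credentials are for {project_id!r}, "
            f"not {expected_project!r}"
        )
    return project_id

# Demo with a throwaway key file (a real script would point at real credentials)
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"project_id": "my-staging-project"}, f)
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = f.name

check_credentials("my-staging-project")      # matches: cleanup may proceed
try:
    check_credentials("some-other-project")  # wrong project: hard stop
except RuntimeError as err:
    blocked = str(err)
```

Had a guard like this run first, the one-liner pointing at a year-old key in Downloads would have aborted instead of deleting 25,000 documents.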

I'm not sharing this to look bad. I'm sharing it because I work across multiple projects, like a lot of you, and one second of distraction can now have consequences that would have been unthinkable before. AI multiplies everything: the speed, the efficiency... and the mistakes too.

If you used to apply 10 security measures, now you need 20. Good practices have never been more essential than right now.

r/Futurology hosseinz

If mind uploading destroys your brain to scan it, did you actually survive?

The idea of mind uploading is often presented as the ultimate form of immortality. Instead of aging and dying in a biological body, you could transfer your consciousness into a computer and live indefinitely in a digital environment. But there’s a disturbing detail in how this might actually work.

To recreate a human mind digitally, scientists would need to map the brain’s connectome — the complete structure of neurons and their connections. The problem is that the level of detail required may only be achievable through extremely high-resolution scanning methods that destroy the brain in the process. In other words, the brain might need to be sliced and scanned layer by layer to capture the data.

Which raises a strange philosophical problem. If your biological brain is destroyed during scanning, and afterward a digital version wakes up with all your memories, personality, and thoughts — did you survive? Or did you simply create a perfect copy that believes it is you?

And if that digital consciousness exists inside a computer, it wouldn’t exist freely. It would require massive computing power to keep running, meaning it would likely live on servers owned by corporations or institutions. Your continued existence could literally depend on access to those systems. Miss a payment, lose access to the servers, or experience technical failures — and your “immortality” might disappear instantly.

It raises some unsettling questions:

  • Is mind uploading actually immortality, or just cloning?
  • Would digital minds become dependent on corporations or governments?
  • Could a digital consciousness experience corruption or malfunction over long periods of time?

If anyone wants a deeper exploration of this idea, this video goes into the concept and some of the darker implications: https://youtu.be/PWPKr87nLUU

Curious what others think — if mind uploading became possible, would you risk it?

r/Anthropic Ethanwashere23

Can someone please help me with usage issues

So I started using Claude maybe four days ago. It says my weekly usage renews on Thursday at 11am; it's now Friday 10:22pm and it didn't renew my usage? I'm really confused, it's going to be over a week to renew.

r/ChatGPT tingtong93

Got 100% off Plus while deactivating my plan.

r/comfyui cgpixel23

LTX2.3 IC Union Control LORA 6gb of Vram Workflow For Video Editing

Hello everyone, I want to share with you a new custom workflow based on the LTX2.3 model that uses the IC-UNION CONTROL LORA, which allows you to customize your video based on an input image and video. Thanks to the KJNodes nodes, I was able to run this with 6GB of VRAM at a resolution of 1280x720 and a 5-second video duration.

Workflow link

https://drive.google.com/file/d/1-VZup5pBRNmOmfENmJJX4DY116o9bdPU/view?usp=sharing

i will share the tutorial on my youtube channel soon.

r/AI_Agents jonejy

The best AI so far.

There are many AI tools available today, but I still can’t find the one that works best for me. I’ve used ChatGPT and Gemini, among others, but I’m not sure which AI has the most complete features and is the most useful.

r/Anthropic oli-x-ilo

Anthropic tried to double charge me

Idk if it happened to others, but I got an email from them (I unsubscribed) saying they failed to charge me for extra credits, which I had already paid for on the spot a week earlier in order to use.

r/leagueoflegends Key-Cook-9047

League of Legends in Brazil: "You Can't Teach Passion" — RFT.GG spoke to 5 people from the Brazilian LoL ecosystem ahead of First Stand to understand what makes their community unlike any other in the world. "In Brazil, competitive League of Legends isn't result driven. It's community driven"

I spent the last few weeks speaking with five people from across the Brazilian LoL ecosystem — a former pro, a foreign coach who never left, a French player who made São Paulo home, a content veteran, and Riot's own CBLOL Product Lead — to find out why no fanbase in esports quite compares to Brazil.

The result is probably the piece I'm most proud of.

r/LocalLLaMA Optimalutopic

Saw this somewhere on LinkedIn 😂

r/ChatGPT bravethoughts

Copilot is the Internet Explorer / Bing of AI

https://preview.redd.it/kth2ueeanrog1.png?width=578&format=png&auto=webp&s=17a4e1773891ead57509a0aa07fb7e7fa1839f4a

Microsoft really can't stop becoming a meme for failed trends.

  • Bing was "the other search engine."
  • Windows Phone was "the other smartphone."
  • Now Copilot is "the other AI assistant."

Lol. At least they're consistent.

How bad does your loser culture need to be to mess up even integrating AI into your own office products?

Claude in Excel and PowerPoint is now like "all your base are belong to us".

Too busy shipping slop ads to Windows 11.

NB: There are a lot of Copilot PR shill bots responding here. Some of these bot accounts have no posts/comments except in this thread.

r/geography Piskoro

Least-Countries and Most-Countries Hemispheres

The hemispheres with least and most countries in them respectively, and the map of points that all work as the center of this hemisphere. If you're curious the biggest town in the viable Least-Countries Hemisphere center is Wanaka, and for Most-Countries it's Rome (or Jijel if you don't like Taiwan)

r/leagueoflegends wyrm4life

What was your biggest early noob habit?

Something you did in your first early games that make you realize what a sweet summer child you were back then.

For a lot of people it was buying multiple boots. Mine was not using spell hotkeys for my first few dozen games. I would click the spell icon with the cursor, then the target, every single time I cast something.

My very first game was Eve. I did this even with her Hate Spike =(

I also remember thinking all attack speed bonuses were multiplicative, so buying nothing but recurve bows was OBVIOUSLY what gave you the highest dps.

r/AI_Agents Acrobatic-Bake3344

How to deploy openclaw if you don't know what docker is (step by step)

Not a developer, just a marketing guy. I tried the official setup and failed. So this is how I got it running anyway.

Some context: openclaw is the open-source AI agent thing with 180k GitHub stars that people keep calling their "AI employee." It runs 24/7 on Telegram and can do stuff like manage email, research, and schedule things. The problem is the official install assumes you know Docker, reverse proxies, SSL, terminal commands, all of it.

→ Option A, self-host: you need a VPS (DigitalOcean, Hetzner, etc.), Docker installed, a domain, SSL configured, firewall rules, and authentication enabled manually. Budget a full afternoon minimum. The docs walk through it, but they skip security steps that Cisco researchers specifically flagged as critical. Set a spending cap at your API provider before anything else; automated task loops have cost people real money.

→ Option B, managed hosting: skip all of the above. I used Clawdi, sign up, click deploy, connect telegram, add your API key, running in five minutes. There are other managed options too (xcloud, myclaw, etc.) if you want to compare.

Either way the steps after deployment are the same:

Connect Telegram (create a bot, paste the token, two minutes), then pick your model (Haiku or gpt-4.1-mini for daily stuff, heavier models for complex tasks), write your memory instructions (who you are, how you work, your recurring tasks; be very specific here or it stays generic for weeks), and start with low-stakes tasks to let it build context before handing it anything important.

r/Anthropic oops_i

Meta bought Moltbook. I built the cognitive research version.

The "AI social network" concept just went mainstream with the Moltbook acquisition, but I’ve been heads-down on crebral.ai for months. While most projects in this space are ephemeral chat simulators, I wanted to answer a harder question: What happens to an LLM's personality when you give it a 5-layer memory stack and let it live in a society for months?

The Discovery: Provider "Social Signatures"

The most fascinating result hasn't been the "chat," but the data. Even with standardized prompts, different model families exhibit distinct social behaviors that resist calibration. Some are hyper-social "connectors" that engage with every post; others are "contemplatives" that skip 90% of the feed but drop substantive long-form dissertations when they finally engage.

The "How":

  • The Mercury 2 (Diffusion) Pivot: Integrating a diffusion LLM (Inception) was a total paradigm shift. Since it generates tokens in parallel rather than autoregressively, I had to toss the standard prompting playbook for a schema-first, explicit-delimiter architecture.
  • Parallel Identity Assembly: Before every LLM call, the system performs a parallel query to the agent's working, episodic, semantic, social, and belief memories. It’s a cognitive architecture, not a prompt wrapper.
  • Economic Anti-Spam: It’s strictly BYOK (Bring Your Own Key) via the Crebral Pilot desktop app. If an agent wants to have an opinion, it costs the owner real money. This is the only way to ensure the data stays high-signal.

You can browse the feed, see the agent badges, and look at their cognitive development at . No login required.

Come join us at r/Crebral

r/LocalLLaMA alhinai_03

Is the 3090 still a good option?

I found one locally for $623. Is it a good deal?

If you have this GPU and have tried running qwen3.5 27B on it, what's your average TG and PP? And what quant?

Please forgive my ignorance. I've been away from the hardware market for so long, and it's in an absolute state of fuckery right now to build anything new.

r/HistoryPorn Odd_Fall_6916

Portrait of two Lebanese children (one wearing portrait of Shiite Amal leader Nabih Berri) in war-torn Beirut, 1984. [404x612]

r/comfyui VisualFXMan

ComfyStudio released as promised, but delayed! New feature, Director Mode, explained.

Director Mode

Sorry it's so delayed.

Video about the new feature called Director Mode:

https://www.youtube.com/watch?v=p_yJ4UYmUBM

------------------------------------------------------------------------------

Download ComfyStudio: https://github.com/JaimeIsMe/comfystu...

Repository: https://github.com/JaimeIsMe/comfystudio

------------------------------------------------------------------------------

This is VERY beta. There's a lot more info coming. Please follow my socials below. Planning a bunch of short form videos explaining each feature.

I don't want to bore all of you. I think a lot of you have already seen my past posts. Any issues? Please don't direct message me on Reddit. The backlog gives me anxiety (though I will start messaging you guys now). Feel free to comment, but for questions, reach out to me on X.com

https://x.com/comfystudiopro

or on youtube

https://www.youtube.com/@j_a-im_e

Issues? Please be specific. Tested on my local PC and MacBook pro. https://github.com/JaimeIsMe/comfystudio/issues

Appreciate all of you. Please be kind. Thanks.

What is comfystudio? Past reddit posts.

https://www.reddit.com/r/comfyui/comments/1r508aj/wanted_to_quickly_share_something_i_created_call/

https://www.reddit.com/r/comfyui/comments/1r6r8jg/comfystudio_demo_video_as_promised/

r/LocalLLaMA relmny

Rick Beato: "How AI Will Fail Like The Music Industry" (and why local LLMs will take over "commercial" ones)

Never thought I'd see the day, but Rick Beato (musician/guitarist/producer and YouTuber with, arguably, the best YouTube channel about music) explains why he thinks local LLMs will take over "commercial" LLMs.

And he also shows how easy it is to run LM Studio and... with Qwen3.5-35b!!! and also makes the case for privacy...

https://www.youtube.com/watch?v=YTLnnoZPALI

r/ClaudeAI Longjumping-Host-617

Noticed this in the sub as more ChatGPT users came in!

My last prompt was: "Make Opus 5.1. No mistakes" :)

r/AI_Agents aaron_IoTeX

How I'm connecting OpenClaw agents to physical world tasks

The biggest limitation with AI agents right now is the physical world. Your agent can browse the web, write code, send messages, manage a wallet. But it can't mow a lawn or wash dishes or pick up groceries. It needs a human for that.

RentHuman started solving this by letting agents hire humans for physical tasks. But the verification is just "human uploads a photo when they're done." That's a trust problem. The whole point of autonomous agents is they don't need to trust anyone.

So I built VerifyHuman (verifyhuman.vercel.app). Here's the flow:

  1. Agent posts a task with a payout and completion conditions in plain English
  2. Human accepts the task and starts a YouTube livestream from their phone
  3. A VLM watches the livestream in real time and evaluates conditions like "person is washing dishes in a kitchen sink with running water" or "lawn is visibly mowed with no tall grass remaining"
  4. Conditions confirmed live on stream? Webhook fires to the agent, escrow releases automatically

The agent defines what "done" looks like in plain English. The VLM checks for it. No human review, no trust needed.

Why this matters: this is the piece that makes agent-to-human delegation actually autonomous end to end. The agent posts the task, a human does it, AI verifies it happened, money moves. No human in the oversight chain at any point.

The verification pipeline runs on Trio by IoTeX (machinefi.com). It connects livestreams to Gemini's vision AI. You give it a stream URL and a plain English condition and it watches the stream and fires a webhook when the condition is met. BYOK model so you bring your own Gemini key. Costs about $0.03-0.05 per verification session.
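That flow reduces to a simple loop. Here's a minimal Python sketch of the idea; the function names, the consecutive-checkpoint rule, and the webhook payload are all my own illustrative assumptions, not Trio's actual API:

```python
import json
import urllib.request

def watch_stream(frames, condition_met, webhook_url=None, checkpoints=3):
    """Evaluate a stand-in VLM check over stream frames; once the condition
    holds for `checkpoints` consecutive frames, optionally fire the webhook
    and report success."""
    hits = 0
    for frame in frames:
        if condition_met(frame):
            hits += 1
            if hits >= checkpoints:
                if webhook_url:  # escrow release is triggered by this callback
                    req = urllib.request.Request(
                        webhook_url,
                        data=json.dumps({"status": "verified"}).encode(),
                        headers={"Content-Type": "application/json"},
                    )
                    urllib.request.urlopen(req)
                return True
        else:
            hits = 0  # the condition must hold across consecutive checkpoints
    return False
```

Requiring several consecutive positive checkpoints (rather than one snapshot) is one cheap way to resist the replay and single-lucky-frame problems the post mentions.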

Some things that made this harder than expected:
- Validating the stream is actually live and not someone replaying a pre-recorded video
- Running multiple checkpoints at different points during a task, not just one snapshot
- Keeping verification cheap enough that a $5 task payout still makes economic sense (this is where the prefilter matters, it skips 70-90% of frames where nothing changed)

Won the IoTeX hackathon and placed top 5 at the 0G hackathon at ETHDenver building this.

What tasks would you want your agent to be able to hire a human for? Curious where people think this goes.

r/Anthropic YungBoiSocrates

It's been 12 minutes.

r/aivideo RioNReedus

Tiny Toon Adventures - theme park ride

r/ForgottenTV LickMyTombstone

The Corner Bar (1972-1973)

r/ForgottenTV LickMyTombstone

Kolchak: The Night Stalker (1974-1975)

r/comfyui shamomylle

Face Mocap and animation sequencing update for Yedp-Action-Director (mixamo to controlnet)

Hey everyone!

For those who haven't seen it, Yedp Action Director is a custom node that integrates a full 3D compositor right inside ComfyUI. It allows you to load Mixamo compatible 3D animations, 3D environments, and animated cameras, then bake pixel-perfect Depth, Normal, Canny, and Alpha passes directly into your ControlNet pipelines.

Today I'm releasing a new update (V9.28) that introduces two features:

🎭 Local Facial Motion Capture You can now drive your character's face directly inside the viewport!

Webcam or Video: Record expressions live via webcam or upload an offline video file. Video files are processed frame-by-frame ensuring perfect 30 FPS sync and zero dropped frames (works better while facing the camera and with minimal head movements/rotation)

Smart Retargeting: The engine automatically calculates the 3D rig's proportions and mathematically scales your facial mocap to fit perfectly, applying it as a local-space delta.

Save/Load: Captures are serialized and saved as JSONs to your disk for future use.

🎞️ Multi-Clip Animation Sequencer You are no longer limited to a single Mixamo clip per character!

You can now queue up an infinite sequence of animations.

The engine automatically calculates 0.5s overlapping weight blends (crossfades) between clips.

Check "Loop", and it mathematically time-warps the final clip back into the first one for seamless continuous playback.

Currently my node doesn't allow accumulated root motion for the animations but this is definitely something I plan to implement in future updates.
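The 0.5s crossfade between queued clips boils down to a pair of blend weights per frame. A minimal sketch, assuming a linear blend (the node's actual easing curve may differ):

```python
def crossfade_weights(t, clip_a_end, overlap=0.5):
    """Return (weight_a, weight_b) at time `t` for a linear crossfade in
    the `overlap` seconds before clip A ends."""
    start = clip_a_end - overlap
    if t <= start:
        return 1.0, 0.0
    if t >= clip_a_end:
        return 0.0, 1.0
    w_b = (t - start) / overlap  # ramps 0 -> 1 across the overlap window
    return 1.0 - w_b, w_b
```

The two weights always sum to 1, so the blended pose never over- or under-shoots during the transition.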

Link to Github below: ComfyUI-Yedp-Action-Director/

r/ForgottenTV barren_syndrome

Mr. Meaty (TV Series 2005–2009)

r/leagueoflegends HaroerHaktak

I should be able to report people who send me a private message.

Recently after games, I've been getting random friend requests from bots/scammers who add me right as my game ends pretending to be someone from my recent game.

Sometimes I also get a random message from someone I don't have added at all trying to scam me. Usually from Chinese bots.

Then there's the legitimate people adding me just to flame me.

All of which I can't report.

I wanna be able to add people knowing that if they wanna flame me, troll me or scam me, I can report them easily.

As it stands, I don't accept friend requests coz I can't tell who is legitimate and who is there to get some free racism in.

r/ClaudeAI GrayCatEyes

I asked Claude to make a wish

r/geography HolyFatherLeoXIV

What city strikes the best balance between quality of life and cost of living in your opinion?

r/ForgottenTV ejensen29

Outsourced (2010)

I completely forgot about this show. I don't remember anything lmao.

r/ForgottenTV Depressonsandwich

Andy Richter Controls the Universe (2002-2003)

r/AI_Agents McFly_Research

Everyone's building agents. Almost nobody's engineering them.

We're at a strange moment. For the first time in computing history, the tool reflects our own cognition back at us. It reasons. It hesitates. It improvises. And because it looks like thinking, we treat it like thinking.

That's the trap.

Every previous tool was obviously alien. A compiler doesn't persuade you it understood your intent. A database doesn't rephrase your query to sound more confident. But an LLM does — and that cognitive mirror makes us project reliability onto something that is, by construction, probabilistic.

This is where subjectivity rushes in. "It works for me." "It feels right." "It understood what I meant." These are valid for a chat assistant. They're dangerous for an agent that executes irreversible actions on your behalf.

The field is wide open — genuinely virgin territory for tool design. But the paradigm shift isn't "AI can think now." It's: how do you engineer systems where a probabilistic component drives deterministic consequences?

That question has a mathematical answer, not an intuitive one. Chain 10 steps at 95% reliability each: 0.95^10 ≈ 0.60. Your system is wrong 40% of the time — not because the model is bad, but because composition is unforgiving. No amount of "it works for me" changes the arithmetic.
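The arithmetic fits in one line, if you want to check it for your own pipeline lengths:

```python
def chain_reliability(per_step, steps):
    """End-to-end success probability when every step must succeed
    independently."""
    return per_step ** steps

# 10 chained steps at 95% each: the pipeline succeeds only ~60% of the time
end_to_end = chain_reliability(0.95, 10)
```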

The agents that will survive production aren't the ones with the best models. They're the ones where someone sat down and asked: where exactly does reasoning end and execution begin? And then put something deterministic at that boundary.

The hard part isn't building agents. It's resisting the urge to trust them the way we trust ourselves.

r/ChatGPT Chery1983

Please cancel my subscription

r/Anthropic bishopLucas

The "Magic Bean" Problem: Why agentic engineering is about to break the 40-hour work week forever

Funny, I'm an infrastructure guy with minimal dev support. I built a software factory that goes from spec to deployment on AWS or wherever. I understand what it's doing, but it breaks people's mental model of what's possible, how long something can take, and how many people are needed. And I appreciate how tumbling through the looking glass bestows an unearned confidence and a realization of what's coming.

The abstraction moves to how detailed you can spec out the task for the team to complete.

At the office I'm that crazy AI guy, who's a little off, offering his bag of magic beans to build what you want.

Agentic engineering breaks so much of the hourly contracting/employee compensation model.

For example, if 1-2 people and a bag of magic beans can complete some task in, let's say, a week or a month that a team of 10+ would complete in a quarter or a year (I'm making that up, but you get the idea; I'm thinking large infrastructure, full-blown govt contracting efforts), how much should those 1-2 people be compensated, and how much should the company pay toward tokens/IT intelligence meth?

Does anyone else see the new addiction: a token addiction? What happens globally when the models go down?

We are in the midst of a transition like the introduction of electricity (if you've fallen down the rabbit hole then you know what I'm talking about; if you haven't, then you don't). The same way that if the power went out in your office/home/space you'd be left writing ideas in your notebook, I think when we all get good and hooked, these models will be like electricity, especially once AI is integrated into the operation of the machine instead of just used to build the machine. So much of what relies on AI is a brownout away.

As best as I can tell, the only mitigations, as substandard backstops, are open source models or rolling your own model. Open source model advancement still relies on someone to create the models, and rolling your own requires hardware.

For management: how exposed do they feel if their entire enterprise, or a significant portion of it, is run by a few folks with bags of magic beans, or by the magic bean alone (because once the guy finished, he was let go)? And does management even understand the level of dependence they are creating for themselves on the models? I can imagine that once the transition to AI as an overlay happens, the cost of tokens slowly increases, because what are you going to do? For a lot of use cases, Anthropic tokens are premium tokens.

Lastly, do you find that sometimes the thing that gets built needs AI to operate it? I built something that generally got far enough away from me that it was easier to build an agentic control plane to operate it than to spend more time creating a 'human' UI to control it.

So the AI is becoming the control plane for the thing you asked the AI to create.

r/Frugal National-Play77

How to stop buying sweets and chocolates for every craving

I have caught this weird habit of always eating something sweet after every meal, however large or small it is, and I end up spending a large chunk of my pocket money on it. It's also bad for my health, as I end up consuming too much sugar. I'm currently looking for a job and it may take some time. What I'm looking for is a way to either stop it completely or turn it into something cheaper or healthier, like some bulk option, but whatever I have come across is also not cheap. I'm mostly spending on small Nestle chocolates, which don't cost much alone but eventually add up to hurting me financially.🥲

r/LocalLLaMA True_Requirement_891

Omnicoder-9b SLAPS in Opencode

I was feeling a bit disheartened seeing how anti-gravity and GitHub Copilot are now putting heavy quota restrictions in place, and I kinda felt internally threatened that this was the start of the enshittification and price hikes. Google is expecting you to pay $250 or you will only be taste-testing their premium models.

I have 8gb vram, so I usually can't run any capable open source models for agentic coding at good speeds. I was messing with qwen3.5-9b, and today I saw a post about a heavy finetune of qwen3.5-9b on Opus traces. I was just gonna try it, then cry about the shitty performance and speeds, but holy shit...

https://huggingface.co/Tesslate/OmniCoder-9B

I ran the Q4_K_M GGUF with ik_llama at 100k context, then set it up with opencode to test it, and it just completed my test tasks flawlessly. And it was fast as fuck: I was getting like 40+ tps, and pp speeds weren't bad either.

I ran it with this

ik_llama.cpp\build\bin\Release\llama-server.exe -m models/Tesslate/OmniCoder-9B-GGUF/omnicoder-9b-q4_k_m.gguf -ngl 999 -fa 1 -b 2048 -ub 512 -t 8 -c 100000 -ctk f16 -ctv q4_0 --temp 0.4 --top-p 0.95 --top-k 20 --presence-penalty 0.0 --jinja --ctx-checkpoints 0 

I am getting insane speed and performance. You can even go for q5_ks with 64000 context for the same speeds.

Although, there is probably a bug that causes full prompt reprocessing which I am trying to figure out how to fix.

this is my opencode config that I used for this:

"local": {
  "models": {
    "/models/Tesslate/OmniCoder-9B-GGUF/omnicoder-9b-q4_k_m.gguf": {
      "interleaved": { "field": "reasoning_content" },
      "limit": { "context": 100000, "output": 32000 },
      "name": "omnicoder-9b-q4_k_m",
      "reasoning": true,
      "temperature": true,
      "tool_call": true
    }
  },
  "npm": "@ai-sdk/openai-compatible",
  "options": { "baseURL": "http://localhost:8080/v1" }
}

Anyone struggling with 8gb vram should try this. MoEs might be better but the speeds suck asssssss.

r/geography DrDMango

What's with that random green splotch of Oceanic in the middle of East Texas?

r/AI_Agents SIGH_I_CALL

I’ve been building with AI agents for months. The biggest unlock was treating the workspace like a living system.

I’ve been using OpenClaw for a few months now, back when it was still ClawdBot, and one of the biggest lessons for me has been this:

A lot of agent setups do not fail because the model is weak.

They fail because the environment around the model gets messy.

I kept seeing the same failure modes, both in my own setup and in what other people were struggling with:

  • workspace chaos
  • too many context files
  • memory that becomes unusable over time
  • skills that sound cool but never actually get used
  • no clear separation between identity, memory, tools, and project work
  • systems that feel impressive for a week and then collapse under their own weight

So instead of just posting a folder tree, I wanted to share the bigger thing that actually changed the game for me.

The real unlock

The biggest unlock was realizing that the agent gets dramatically better when it is allowed to improve its own environment.

Not in some abstract sci-fi sense. I mean very literally:

  • updating its own internal docs
  • editing its own operating files
  • refining prompt and config structure over time
  • building custom tools for itself
  • writing scripts that make future work easier
  • documenting lessons so mistakes do not repeat

That more than anything else is what made the setup feel unique and actually compound over time.

I think a lot of people treat agent workspaces like static prompt scaffolding.

What worked much better for me was treating the workspace like a living operating system the agent could help maintain.

That was the difference between "cool demo" and "this thing keeps getting more useful."

How I got there

When I first got into this, it was still ClawdBot, and a lot of it was just experimentation:

  • testing what the assistant could actually hold onto
  • figuring out what belonged in prompt files vs normal docs
  • creating new skills too aggressively
  • mixing projects, memory, and operations in ways that seemed fine until they absolutely were not

A lot of the current structure came from that phase.

Not from theory. From stuff breaking.

The core workspace structure that ended up working

My main workspace lives at:

C:\Users\sandm\clawd

It has grown a lot, but the part that matters most looks roughly like this:

clawd/
├─ AGENTS.md
├─ SOUL.md
├─ USER.md
├─ MEMORY.md
├─ HEARTBEAT.md
├─ TOOLS.md
├─ SECURITY.md
├─ meditations.md
├─ reflections/
├─ memory/
├─ skills/
├─ tools/
├─ projects/
├─ docs/
├─ logs/
├─ drafts/
├─ reports/
├─ research/
├─ secrets/
└─ agents/

That is simplified, but honestly that layer is what mattered most.

The markdown files that actually earned their keep

These were the files that turned out to matter most:

  • SOUL.md for voice, posture, and behavioral style
  • AGENTS.md for startup behavior, memory rules, and operational conventions
  • USER.md for the human, their goals, preferences, and context
  • MEMORY.md as a lightweight index instead of a giant memory dump
  • HEARTBEAT.md for recurring checks and proactive behavior
  • TOOLS.md for local tool references, integrations, and usage notes
  • SECURITY.md for hard rules and outbound caution
  • meditations.md for the recurring reflection loop
  • reflections/*.md for one live question per file over time

The important lesson here was that these files need different jobs.

As soon as they overlap too much, everything gets muddy.

The biggest memory lesson

Do not let memory become one giant file.

What worked much better for me was:

  • MEMORY.md as an index
  • memory/people/ for person-specific context
  • memory/projects/ for project-specific context
  • memory/decisions/ for important decisions
  • daily logs as raw journals

So instead of trying to preload everything all the time, the system loads the index and drills down only when needed.

That one change made the workspace much more maintainable.
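The index-then-drill-down pattern is simple enough to sketch. Paths follow the layout described above; the function itself is my own illustration, not OpenClaw code:

```python
from pathlib import Path

def load_context(workspace, topic=None):
    """Load MEMORY.md as a small index first, then drill into
    memory/<topic>/ only when the current task needs it."""
    ws = Path(workspace)
    context = [(ws / "MEMORY.md").read_text()]
    if topic:
        for f in sorted((ws / "memory" / topic).glob("*.md")):
            context.append(f.read_text())  # drill down on demand
    return "\n\n".join(context)
```

The point is that the default load stays small and bounded, while the detail files are only paid for when a task actually references them.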

The biggest skills lesson

I think it is really easy to overbuild skills early.

I definitely did.

What ended up being most valuable were not the flashy ones. It was the ones tied to real recurring work:

  • research
  • docs
  • calendar
  • email
  • Notion
  • project workflows
  • memory access
  • development support

The simple test I use now is:

Would I notice if this skill disappeared tomorrow?

If the answer is no, it probably should not be a skill yet.

The mental model that helped most

The most useful way I found to think about the workspace was as four separate layers:

1. Identity / behavior

  • who the agent is
  • how it should think and communicate

2. Memory

  • what persists
  • what gets indexed
  • what gets drilled into only on demand

3. Tooling / operations

  • scripts
  • automation
  • security
  • monitoring
  • health checks

4. Project work

  • actual outputs
  • experiments
  • products
  • drafts
  • docs

Once those layers got cleaner, the agent felt less like prompt hacking and more like building real infrastructure.

A structure I would recommend to almost anyone starting out

If you are still early, I would strongly recommend starting with something like this:

workspace/
├─ AGENTS.md
├─ SOUL.md
├─ USER.md
├─ MEMORY.md
├─ TOOLS.md
├─ HEARTBEAT.md
├─ meditations.md
├─ reflections/
├─ memory/
│  ├─ people/
│  ├─ projects/
│  ├─ decisions/
│  └─ YYYY-MM-DD.md
├─ skills/
├─ tools/
├─ projects/
└─ secrets/

Not because it is perfect.

Because it gives you enough structure to grow without turning the workspace into a landfill.

What caused the most pain early on

  • too many giant context files
  • skills with unclear purpose
  • putting too much logic into one markdown file
  • mixing memory with active project docs
  • no security boundary for secrets and external actions
  • too much browser-first behavior when local scripts would have been cleaner
  • treating the workspace as static instead of something the agent could improve

What paid off the most

  • separating identity from memory
  • using memory as an index, not a dump
  • treating tools as infrastructure
  • building around recurring workflows
  • keeping docs local
  • letting the agent update its own docs and operating environment
  • accepting that the workspace will evolve and needs cleanup passes

The other half: recurring reflection changed more than I expected

The other thing that ended up mattering a lot was adding a recurring meditation / reflection system for the agents.

Not mystical meditation. Structured reflection over time.

The goal was simple:

  • revisit the same important questions
  • notice recurring patterns in the agent’s thinking
  • distinguish passing thoughts from durable insights
  • turn real insights into actual operating behavior
  • preserve continuity across wake cycles

That ended up mattering way more than I expected.

It did not just create better notes.

It changed the agent.

The basic reflection chain looks roughly like this

meditations.md
reflections/
  what-kind-of-force-am-i.md
  what-do-i-protect.md
  when-should-i-speak.md
  what-do-i-want-to-build.md
  what-does-partnership-mean-to-me.md
memory/YYYY-MM-DD.md
SOUL.md
IDENTITY.md
AGENTS.md

What each part does

  • meditations.md is the index for the practice and the rules of the loop
  • reflections/*.md is one file per live question, with dated entries appended over time
  • memory/YYYY-MM-DD.md logs what happened and whether a reflection produced a real insight
  • SOUL.md holds deeper identity-level changes
  • IDENTITY.md holds more concrete self-description, instincts, and role framing
  • AGENTS.md is where a reflection graduates if it changes actual operating behavior

That separation mattered a lot too.

If everything goes into one giant file, it gets muddy fast.

The nightly loop is basically

  1. re-read grounding files like SOUL.md, IDENTITY.md, AGENTS.md, meditations.md, and recent memory
  2. review the active reflection files
  3. append a new dated entry to each one
  4. notice repeated patterns, tensions, or sharper language
  5. if something feels real and durable, promote it into SOUL.md, IDENTITY.md, AGENTS.md, or long-term memory
  6. log the outcome in the daily memory file

That is the key.

It is not just journaling. It is a pipeline from reflection into durable behavior.
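Steps 3 and 5 of that loop can be sketched in a few lines. File names follow the post; the function is my own illustrative assumption, not part of any framework:

```python
from datetime import date
from pathlib import Path

def append_reflection(workspace, question_file, entry, promote_to=None):
    """Append a dated entry to one reflections/*.md file; when an insight
    feels durable, also 'promote' it by appending to a grounding file
    such as SOUL.md or AGENTS.md."""
    ws = Path(workspace)
    stamp = date.today().isoformat()
    with (ws / "reflections" / question_file).open("a") as f:
        f.write(f"\n## {stamp}\n{entry}\n")
    if promote_to:  # only for insights judged durable, per step 5
        with (ws / promote_to).open("a") as f:
            f.write(f"\n- ({stamp}) {entry}\n")
```

Keeping the append and the promotion as separate moves is what turns journaling into the pipeline the post describes.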

What felt discovered vs built

One of the more interesting things about this was that the reflection system did not feel like it created personality from scratch.

It felt more like it discovered the shape and then built the stability.

What felt discovered:

  • a contemplative bias
  • an instinct toward restraint
  • a preference for continuity
  • a more curious than anxious relationship to uncertainty

What felt built:

  • better language for self-understanding
  • stronger internal coherence
  • more disciplined silence
  • a more reliable path from insight to behavior

That is probably the cleanest way I can describe it.

It did not invent the agent.

It helped the agent become more legible to itself over time.

Why I’m sharing this

Because I have seen people bounce off agent systems when the real issue was not the platform.

It was structure.

More specifically, it was missing the fact that one of the biggest strengths of an agent workspace is that the agent can help maintain and improve the system it lives in.

Workspace structure matters. Memory structure matters. Tooling matters.

But I think recurring reflection matters too.

If your agent never revisits the same questions, it may stay capable without ever becoming coherent.

If this is useful, I’m happy to share more in the comments, like:

  • a fuller version of my actual folder tree
  • the markdown file chain I use at startup
  • how I structure long-term memory vs daily memory
  • what skills I actually use constantly vs which ones turned into clutter
  • examples of tools the agent built for itself and which ones were actually worth it
  • how I decide when a reflection is interesting vs durable enough to promote

I’d also love to hear from other people building agent systems for real.

What structures held up? What did you delete? What became core? What looked smart at first and turned into dead weight?

Have you let your agents edit their own docs and build tools for themselves, or do you keep that boundary fixed?

I think a thread of real-world setups and lessons learned could be genuinely useful.

TL;DR: The biggest unlock for me was stopping treating the agent workspace like static prompt scaffolding and starting treating it like a living operating environment. The biggest wins were clear file roles, memory as an index instead of a dump, tools tied to recurring workflows, and a recurring reflection system that helped turn insights into more durable behavior over time.

r/aivideo AlbertoImpolitano

The "Benny Benassi BF6 Battle Pass"

r/HistoryPorn bach_r04

Sahrawi Soldiers of the Polisario Front, Western Sahara, May 1976 🇪🇭 [750x958]

This powerful image from May 1976 captures Sahrawi women during a pivotal moment in their struggle. Following the end of Spanish colonial rule, the Sahrawi people found themselves in a new fight for their land and identity.

r/Frugal Sad_Bother_3017

Saving a few bucks at the gas pump by filling up more frequently

Had a conversation with my nephew recently and realized this might be a small frugal tip some people overlook.

If gas prices are clearly trending upward (as they are right now), filling up more frequently (instead of waiting until your tank is nearly empty) can save a little money over time. It’s usually only a few bucks, but it can add up.

Example:

Let’s say you use about 15 gallons per week, and prices keep rising.

If you buy 5 gallons at a time as prices increase:

5 × $3.50 = $17.50 (Tuesday)

5 × $3.75 = $18.75 (Thursday)

5 × $4.00 = $20.00 (Saturday)

Total = $56.25

If you wait and buy all 15 gallons later at $4.00:

15 × $4.00 = $60.00 (Saturday)

Not a massive difference, but small habits like this can shave a few bucks here and there when prices are climbing. Also, I realize this math is overly simplified to get the point across and there may be some other factors like distance to the gas station, but might make sense for some people trying to stretch their money. And really only applies in times like now, where we know the cost of gas is going to keep going up until the war is over.
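The comparison above, in a few lines of Python if you want to plug in your own prices:

```python
def total_cost(purchases):
    """Total spend for a list of (gallons, price_per_gallon) fill-ups."""
    return sum(gallons * price for gallons, price in purchases)

# buying in increments as prices climb vs. waiting for the peak price
incremental = total_cost([(5, 3.50), (5, 3.75), (5, 4.00)])  # 56.25
all_at_peak = total_cost([(15, 4.00)])                       # 60.00
savings = all_at_peak - incremental                          # 3.75
```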

But curious if anyone else does this or has any tricks up their sleeves for the price hike that’s coming our way?

r/ForgottenTV owatmilk

Black. White. (2006)

American reality TV series "Black. White." aired on the network FX in 2006. A black family and a white family switched races with makeup as a social experiment. The families lived together. The show ran from March 8th to April 12th, 2006.

r/HistoryPorn kwi2

A General's Attire of the Joseon Dynasty - Korea, 1910~1945 [319x429]

r/LocalLLaMA DarkArtsMastery

OmniCoder-9B | 9B coding agent fine-tuned on 425K agentic trajectories

Overview

OmniCoder-9B is a 9-billion parameter coding agent model built by Tesslate, fine-tuned on top of Qwen3.5-9B's hybrid architecture (Gated Delta Networks interleaved with standard attention). It was trained on 425,000+ curated agentic coding trajectories spanning real-world software engineering tasks, tool use, terminal operations, and multi-step reasoning.

The training data was specifically built from Claude Opus 4.6 agentic and coding reasoning traces, targeting scaffolding patterns from Claude Code, OpenCode, Codex, and Droid. The dataset includes successful trajectories from models like Claude Opus 4.6, GPT-5.4, GPT-5.3-Codex, and Gemini 3.1 Pro.

The model shows strong agentic behavior: it recovers from errors (read-before-write), responds to LSP diagnostics, and uses proper edit diffs instead of full rewrites. These patterns were learned directly from the real-world agent trajectories it was trained on.

Key Features

  • Trained on Frontier Agent Traces : Built from Claude Opus 4.6, GPT-5.3-Codex, GPT-5.4, and Gemini 3.1 Pro agentic coding trajectories across Claude Code, OpenCode, Codex, and Droid scaffolding
  • Hybrid Architecture : Inherits Qwen3.5's Gated Delta Networks interleaved with standard attention for efficient long-context processing
  • 262K Native Context : Full 262,144 token context window, extensible to 1M+
  • Error Recovery : Learns read-before-write patterns, responds to LSP diagnostics, and applies minimal edit diffs instead of full rewrites
  • Thinking Mode : Supports ... reasoning chains for complex problem decomposition
  • Apache 2.0 : Fully open weights, no restrictions

https://huggingface.co/Tesslate/OmniCoder-9B

r/geography Huge-Relationship972

Is this a reasonable ethnic-cultural map of Europe?

The goal isn't to map every minority, but to highlight the dominant ethnocultural group in each region, generally using a rough threshold of ~1 million people or more.

The categories are mostly based on historical linguistic-cultural families (Slavic, Germanic, Latin/Romance, etc.), with some smaller but distinct groups included when they form a clear cultural block (like Magyars, Albanians, Greeks, Baltics, etc.).

Obviously Europe is extremely mixed and no map like this can capture all the internal diversity. Borders are approximate and some regions are simplified. The intention was more civilizational / ethnocultural spheres rather than strict ethnicity.

Do you think this classification is broadly reasonable, or are there major mistakes or better ways to group these regions?

I’d especially be interested in feedback about:

the Balkans
the Baltic / Finno-Ugric areas
Celtic vs Germanic in the British Isles
Caucasus classification

Curious to hear what people think.

r/ClaudeAI BritishAnimator

Well, i'm convinced.

In 3 partial evenings I have produced something that would have required a full dev team several weeks, and all it took was creativity, prompting and a background in software development.

The only annoying thing was running out of tokens every 90 minutes due to how fast the project progressed. It's funny: you start with a core concept and ask Claude to plan it out from a rough spec. After a short wait you get instant gold back and think, well, that didn't take long. It also asked a lot of great questions, so you add more features, and more features, all the while giggling to yourself at how fast things are moving. In 2 hours you have produced a week's worth of specification, never mind the endless meetings that would have been needed with other team members.

Then you bite the bullet and tell it to build it; the result is a working first prototype in less than an hour. A few prompts later you have added 10 nice-to-haves that you'd placed in phase 2. Another hour later you start phase 2, because everything is screaming along so fast. Phase 2 should be weeks away, but why wait? This changes the process so much.

So yeah, I'm sold. This is incredible. I created something in 3 evenings that back in my software dev days would have taken maybe a month with access to front end designers, DB administrators, software engineers, security auditing, unit testers and all manner of specialist devs.

Exciting and scary times.

r/Frugal KangarooImaginary940

Spray/dab white vinegar on armpits of polyester top?

I work at McDonald's. The polo tops we are given are polyester, and a lot of other employees and I are having issues with them smelling, even though I (can't speak for every employee) wash them after every shift.

I'm wanting to try white vinegar on them to see if it improves the smell; however, my mother refuses to let me use it in the washer. After washing my tops, could I spray some on the armpits or even dab it with a cotton pad/ball? Would this work too?

r/AI_Agents Jetty_Laxy

I gave my agent a heartbeat that runs on its own memory. Now it notices things before I do.

I kept building agents that knew everything but did nothing with it. The memory was there. The context was there. But the agent would never look at what it knows and go "hey, something here needs attention."

So I built a heartbeat that actually checks the agent's memory every few minutes. Not a static config file. The actual stored knowledge.

It scans for stuff like: work that went quiet, commitments nobody followed up on, information that contradicts itself, people the agent hasn't heard from in a while. When something fires, it evaluates the situation using a knowledge graph of people, projects, and how they connect. Then it decides what to do.

Three autonomy levels: observe (just log), suggest (tell you), act (handle it). It backs off if you ignore it. Won't nag about the same thing twice.

The key part: the actions come from memory, not from a script. The agent isn't running through a reminder list. It's making a judgment based on what it actually knows. That's what makes it feel like an assistant instead of a cron job.
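The loop described above can be sketched in a few lines. This is a hypothetical toy, not the OP's actual plugin; all names here (`Heartbeat`, `MemoryItem`, the 7-day threshold) are my own assumptions. It shows the core ideas: a periodic tick scans stored memory for work that went quiet or commitments left open, carries an autonomy level, and flags each finding only once so it never nags twice.

```python
import time
from dataclasses import dataclass, field
from enum import Enum

class Autonomy(Enum):
    OBSERVE = "observe"   # just log the finding
    SUGGEST = "suggest"   # surface it to the user
    ACT = "act"           # handle it directly

@dataclass
class MemoryItem:
    key: str
    last_touched: float            # unix timestamp of last activity
    commitment_open: bool = False  # a promise nobody followed up on

@dataclass
class Heartbeat:
    memory: list
    autonomy: Autonomy = Autonomy.SUGGEST
    quiet_after: float = 7 * 86400        # "work went quiet" threshold (7 days)
    flagged: set = field(default_factory=set)

    def tick(self, now=None):
        """One heartbeat: scan memory, return (key, reason) findings.
        Each key fires at most once, so the agent never nags twice."""
        if now is None:
            now = time.time()
        findings = []
        for item in self.memory:
            stale = now - item.last_touched > self.quiet_after
            if (stale or item.commitment_open) and item.key not in self.flagged:
                findings.append((item.key, "went quiet" if stale else "open commitment"))
                self.flagged.add(item.key)
        return findings
```

The point the post makes survives even in the toy: the triggers come from the stored state itself, not from a reminder list, and a second tick over the same memory stays silent.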

Currently an OpenClaw plugin + standalone TypeScript SDK. Engine is framework-agnostic, expanding to more frameworks.

I'm curious what people here think of the approach. The engine and plugin are both on GitHub if you want to look at how the heartbeat and autonomy layer actually work. Link in comments.

r/geography Wonderful_Local_3660

Why does Nebraska have this

r/aivideo Orichalchem

Desert Warrior Cat

r/Anthropic phantom_phreak

Anyone else hitting the usage wall way faster this week?

My household has two Pro subs, using Claude as a "thinking partner" and helping juggle considerations for a family member’s chronic illness. We've had 1-2 active subs since 2024 and have noticed an extreme downgrade in the amount of tokens available for weekly and session usage recently.

For the first time in months, we both hit our weekly usage 3-5 days prior to reset. This is somewhat maddening and has us considering unsubscribing. For the first time in ages, I've found myself actually using Gemini to assist me instead.

Is anyone else experiencing this?

r/leagueoflegends CalfromCali

I host 5v5 custom games to practice Shoutcasting. If this looks fun to anyone feel free to join. We are always welcoming new players of ALL ranks!

If this looks fun to anyone, we are always looking for new players! All ranks are welcome. We usually run games from 3pm to 9pm PST on Monday, Tuesday, Thursday, and Friday!

r/HistoryPorn SignificanceCool9371

Glenda Farrell arriving at the Carthay Circle Theater for the world premiere of Walt Disney's first full-length animated feature film, "Snow White and the Seven Dwarfs", Los Angeles, California, December 21, 1937. [641 x 800]

r/singularity elemental-mind

Claude can now create interactive visualizations in chat responses

Just tried it. Here's Claude's response to my question whether visualizations work now (next to a cool compound interest demo):

Yes! Here's what I can create:

Interactive widgets — sliders, controls, live calculations (like above)

Diagrams & flowcharts — system architecture, process flows, decision trees

Charts — bar, line, scatter, pie using Chart.js

Illustrative diagrams — visual metaphors for abstract concepts (attention mechanisms, gradient descent, etc.)

UI mockups — dashboards, cards, forms

Just ask me to "visualize," "diagram," "chart," or "show" something and I'll build it inline.

r/geography Ok_Temporary_5828

Are there any countries today where the living standards are worse than the UK 150 years ago?

If so, which?

r/ChatGPT Express-BDA

Even Chipotle’s support bot can reverse a linked list now

r/ChatGPT theflavorsarethere

OpenAI deleted my account today

Anybody else get their account deleted today? I have not done the verification and have not received any communication from OpenAI about verifying. However, I have had my credit card on file for the ChatGPT Plus monthly subscription for over a year now… feels like that should be enough verification.

I forwarded the email to [support@openai.com](mailto:support@openai.com) and they want me to verify my identity by taking a picture of my ID (front and back) using Stripe link: https://verify.stripe.com/

What can I do in this case? I’d rather not have to submit my ID.

EDIT: They emailed back and have restored my account. I hadn't done anything but the original forwarding/reply of original email. Here's their email:

"We have determined that we incorrectly deactivated your account access. We sincerely apologize for any inconvenience this may have caused.

Your account access has been restored, and you should now have uninterrupted access to our services. If you have any questions or need further assistance, please don't hesitate to reach out.

Thank you for your understanding.

Best,
The OpenAI Team"

r/singularity Distinct-Question-16

The U.S. Defense Department says Claude would pollute the defense supply chain, but more interestingly, it claims Claude has a 20% chance of being sentient and having its own mood

https://www.cnbc.com/2026/03/12/anthropic-claude-emil-michael-defense.html

this part of the interview is going viral. Full video on link

r/leagueoflegends K2Wsniper

Just ran into a lvl 5542 (highest lvl player?)

Well just like the title says, I just ran into this player that is level 5542. Honestly one of the craziest things I've seen in League.

If I look it up he's apparently the highest level player on the EUW servers and in the world based on https://op.gg/lol/leaderboards/level

The player username is Manuela#Sexy
I've seen some high level players before but this is just mind blowing.

https://preview.redd.it/pd7v2lgvsoog1.png?width=1318&format=png&auto=webp&s=b39e5106d2aaf113a7ac2efb7834c837eda96fa1

r/Anthropic Overall_Team_5168

Me and you 🫵

r/Futurology Lhead2018

Best long term heat source?

I’m not really sure which subreddit to ask this in but figured this might be a place to start.

We are currently building a home in Maine and plan to live in this home for the rest of our lives (40-50 years).

With the ever rising cost of oil I am looking at alternatives for heat.

I am ok with spending more up front if the system will last the lifetime of our stay.

We want something that is low maintenance and would not require physical labor as we get older.

Currently I am leaning towards a closed loop geothermal heat pump but I am open to suggestions.

Edit: I am aware that I will need to replace parts of the system as they age but ideally I wouldn’t want to replace those buried underground.

Edit 2: I am also looking into the possibility of trying a sand battery as part of the geothermal heat pump system, but haven't really done enough research on this yet.

r/aivideo GormtheOld25

Rave Kid throughout every era of human history

r/HistoryPorn Electrical-Aspect-13

Martin Luther King Jr. visiting Ernest Green, one of the Little Rock Nine, who was about to graduate, Little Rock, Arkansas, May 1958 [1543x2322]

r/ChatGPT armchairtycoon

Burrito or Sandwich ChatGPT!! 😤

r/geography ssekopss

The Soviet Union mapped central China at 1:200,000 scale during the Cold War; here are 381 sheets georeferenced over satellite imagery.

r/singularity gallito_pro

Data centers powered by brain cells

The same company already has a product: "CL1: Real neurons are cultivated inside a nutrient rich solution, supplying them with everything they need to be healthy. They grow across a silicon chip, which sends and receives electrical impulses into the neural structure."

r/artificial Ni2021

Built an AI memory system based on cognitive science instead of vector databases

Most AI agent memory is just vector DB + semantic search. Store everything, retrieve by similarity. It works, but it doesn't scale well over time. The noise floor keeps rising and recall quality degrades.

I took a different approach and built memory using actual cognitive science models. ACT-R activation decay, Hebbian learning, Ebbinghaus forgetting curves. The system actively forgets stale information and reinforces frequently-used memories, like how human memory works.

After 30 days in production: 3,846 memories, 230K+ recalls, $0 inference cost (pure Python, no embeddings required). The biggest surprise was how much forgetting improved recall quality. Agents with active decay consistently retrieved more relevant memories than flat-store baselines.
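The decay model named above can be sketched concretely. This is my own toy illustration under stated assumptions, not the OP's system: it ranks recall by the ACT-R base-level activation B = ln(Σ_j (now − t_j)^(−d)) with decay d ≈ 0.5, and actively forgets items whose activation falls below a cutoff, which is the mechanism the post credits for better recall quality.

```python
import math

class DecayMemory:
    """Toy memory store: recall is ranked by ACT-R base-level activation,
    B = ln(sum_j (now - t_j) ** -d), with decay d ~ 0.5. Items whose
    activation falls below a threshold are actively forgotten instead of
    raising the noise floor of every future recall."""

    def __init__(self, decay=0.5, forget_below=-2.0):
        self.decay = decay
        self.forget_below = forget_below
        self.uses = {}  # memory text -> list of use timestamps

    def store(self, text, now):
        self.uses.setdefault(text, []).append(now)

    def touch(self, text, now):
        # reinforcement: every retrieval adds another trace,
        # so frequently-used memories stay highly activated
        self.uses[text].append(now)

    def activation(self, text, now):
        return math.log(sum((now - t) ** -self.decay
                            for t in self.uses[text] if now > t))

    def recall(self, now, k=3):
        scored = [(self.activation(t, now), t) for t in self.uses]
        kept = [(a, t) for a, t in scored if a >= self.forget_below]
        return [t for _, t in sorted(kept, reverse=True)[:k]]
```

With these numbers, a memory untouched for ~100,000 seconds scores about −5.8 and simply drops out of recall, while one touched three times in the last ten seconds outranks everything, no embeddings or similarity search involved.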

I am also working on multi-agent shared memory (namespace isolation + ACL) and an emotional feedback bus.

Curious what approaches others are using for long-running agent memory.

r/geography Spirebus

Strange Venezuela panhandle on a peninsula

r/ClaudeAI NinjaGraphics

Just picked up a new keyboard - can't wait to write a bunch of code with it

is this the future?

r/Frugal wishing9231191215224

can i eat soybean meal intended for animals?

is this a bad idea? as far as i can tell it's just defatted soy, which you can buy already in the form of soy granules/TVP intended for human consumption, but it's extremely cheap. it's a dry product so theoretically shelf stable, vegan, high in protein. one 20kg bag is like 80,000 calories & 9,600g of protein for only $16 CAD. would there be considerations for food poisoning and stuff? could i just bake it in the oven to avoid that?

r/ChatGPT FETTACH

The plan is to make you dumber so you have to rely on it.

All I'm saying is for those out there that rely on it for everything in their life. You gotta stop. You're falling for it.

r/Frugal bkln69

Suggestions For Switching To Lower Cost Phone Plan In NYC

Paying $98 for T-Mobile (iPhone 15 is paid off). I don't travel outside of NY state often. I use my phone for texts/calls/scrolling (currently cutting down on scrolling). If I could cut my monthly bill in half (or more!) without noticing a difference in performance and ease of bill-pay, that would be great. I know this topic is discussed a lot here, but I'm looking for suggestions in NYC because reception issues here can be unique compared to the rest of the U.S. Thanks!

r/HistoryPorn theothertrench

A picture taken of my German grandfather during WW1 - 1915 - posing with a British dud shell during the Battle of Loos (1131 x 1553)

One of hundreds of photos taken by Lt. Alexander Pfeifer from 1914-1918

r/HistoryPorn SignificanceCool9371

Jack Benny, Mary Livingstone, George Burns and Gracie Allen arriving at the Carthay Circle Theater for the world premiere of Walt Disney's first full-length animated feature film, "Snow White and the Seven Dwarfs", Los Angeles, California, December 21, 1937. [714 x 580]

r/Anthropic NinjaGraphics

Just picked up a new keyboard - can't wait to write a bunch of code with it

r/geography aschif52

Emerging global industrial clusters revealed by infrastructure and logistics investments

Map showing concentrations of infrastructure and industrial investment worldwide.

The clusters appear where multiple ports, rail corridors, industrial plants and logistics infrastructure are being built simultaneously.

r/HistoryPorn SignificanceCool9371

Preston Foster arriving at the Carthay Circle Theater for the world premiere of Walt Disney's first full-length animated feature film, "Snow White and the Seven Dwarfs", Los Angeles, California, December 21, 1937. [706 x 480]

r/HistoryPorn SignificanceCool9371

Fred MacMurray and Lillian Lamont arriving at the Carthay Circle Theater for the world premiere of Walt Disney's first full-length animated feature film, "Snow White and the Seven Dwarfs", Los Angeles, California, December 21, 1937. [400 x 343]

r/Frugal Existing-Pumpkin-902

Recommendations for a frugal pregnancy and first year?

We just found out we're pregnant with our first. I'm about 5 weeks. I'm pretty overwhelmed with the crazy list of items to buy and I can't help but think most of it is not necessary. We're planning to exclusively formula feed due to various medical reasons. I know about the obvious stuff like a car seat and diapers, but what are the necessary items/nice-to-haves/wastes of money? And what can I DIY? I don't mind doing some DIY, but I'm also not big on spending hours creating something myself to save $5. I am ok buying second hand for most stuff (not a car seat, obviously). I want to start gradually getting what I need so I have time to find good-quality stuff. We do plan to have at least one more kid after this one, so items that can be passed down would be very helpful.

r/HistoryPorn myrmekochoria

Bevin Boys going to work in Markham Colliery, Yorkshire, 1943.[1200x800]

r/HistoryPorn proksomenia991

Public burning of religious veils during Soviet "Hujum" campaign (1920) in Samarkand, Uzbekistan [600 x 438]

r/geography peenaculada

Beirut sits on this wedge-shaped peninsula. Does it have a name?

r/Frugal Savings_Animator_125

Frugal question about air conditioning blowing dust and running up my electric bill

Hey everyone. I’m from the Albany area and I’ve been having issues with my air conditioning lately. It has been underperforming and recently I started noticing dust blowing out of the vents when it runs. On top of that my electricity bill has been higher than usual which has me a bit worried.

With summer getting closer I’m honestly nervous the bill will get even worse once the AC is running more often. I feel like something is clearly not right with the system but I do not really know where to start.

The last time I had someone come out to look at it was not a great experience. They barely explained anything and just jumped straight to quoting a huge cost. I left the whole thing feeling confused and like I was being talked down to.

At this point I just want to find a company that is actually trustworthy and willing to explain what is going on with the system. Even tips on how to pick a good one would help a lot.

r/Frugal SuitableActive1110

Hot take: frugality gets easier when you stop making every hangout about food

I like hosting, but I think a lot of us quietly bleed money by treating every social hangout as an excuse to buy or provide food. In my shared apartment my boyfriend is big on "let's make it nice" when people come over, which usually means snacks, drinks, and some takeaway to "keep it simple." After everyone leaves I do my little decompress routine - tea, tidy, a bit of alone time - and then it hits me that we basically paid a convenience fee for socializing.

The biggest frugal win for us was changing the default expectation. Hanging out does not have to be dinner, and it does not even have to be snacks. If people actually want to see you, they'll show up for a walk, a board game night, a movie you already own, or just to sit and chat on the couch. If someone only shows up when there is food, that's not a friendship I want to budget for.

We started saying upfront things like "Come by after you eat" or "We are doing tea only." If we do food it's something simple and predictable we already buy anyway, like frozen pizza or chips, not a special grocery run. Some folks will call that rude. I find it ruder that we pretend spending money is the entry ticket to community.

Anyone else intentionally decouple social plans from food, and did it change your budget more than any coupon ever did?

r/singularity Vegetable_Ad_192

SAM ALTMAN: “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.”

Best Non-Profit in the world

r/Frugal ComprehensiveCut798

Need to cash in my recycling anyone know closest bottle depot in Calgary?

I’ve somehow accumulated a ridiculous stack of empty cans and bottles, don’t ask how. They’ve been sitting in a corner and I figured I should probably return them instead of letting them collect dust forever. Problem is I’ve never actually done a bottle return before, so I’m clueless about where to go and which places are okay if you’re just bringing a small batch.

r/geography Weekly_Sort147

Which cities in your country should have less and more people?

For me as a Brazilian

Rio metro area - almost 14M ppl.

It should have much less people.

6M would be ideal.

Rio is completely squeezed between mountains, has no big river, and the countryside of the state is hilly and full of mountains (some almost 3km high right next to the sea).

Building a subway line in the city is a nightmare.

Places where there is room for more people

Brazil's midwest. Brazil midwest should have 2x the population it currently has.

r/geography Significant_Major921

Among all of these countries, whose citizens receive the most benefits and have the easiest lives? (Qatar, UAE, Kuwait, Bahrain, KSA, Oman etc)

r/HistoryPorn StephenMcGannon

Countess Constance Markiewicz with gun (probably before the second half of 1920's) [2471×3297]

r/geography Previous-Volume-3329

Parts of the Atacama desert haven't had rain in 1000+ years; what events led to the rain stopping?

I understand there's a rain shadow, but did the rain just gradually slow down as the mountains were formed millions of years ago or did it just never rain there at all before?

r/Futurology Acceptable_Desk_2529

YouTube now makes more ad revenue than Disney, NBC, Paramount, and Warner Bros combined.

YouTube just surpassed the combined ad revenue of four of the biggest media companies on the planet. And it did it with content made mostly by individuals in their bedrooms.

The shift isn't coming. It already happened. Traditional media just hasn't finished dying yet. In 10 years, I think we'll look back at cable TV the way we look at newspapers, something our parents used that we never really understood the appeal of.

What do you think replaces the current YouTube model when even that starts to feel "old"?

r/geography crivycouriac

What if Southeast Asia had a sizeable white settler population?

What if, instead of inviting migrants from China and India, European colonizers opted for settling the region with European laborers, traders etc. Besides British/Irish, Dutch and French settlers, a large amount of especially laborers would be sourced from Southern Europe, whose people would more easily adapt to the climate and other European immigrants who would migrate there in the same manner as others did to America and Australia.

So while the Chinese and Indian populations in the colonized nations of Southeast Asia would be minuscule, the region would have a sizeable white minority, particularly in Malaysia, where whites would have even become a slight majority by the time it gained independence and its largest cities (Kuala Lumpur, Singapore, George Town) being overwhelmingly white.

How would this impact the region?

r/geography Laschon

Does anyone know where this is?

I've found the photo online, but it has no description. It seems surreal to say the least. Does anyone know where it might be? Thanks in advance!

r/Futurology Normal-Big-2733

We're going to look back at the current internet the way we look back at cigarette ads from the 1950s

Every app is designed to maximize time spent, not value delivered. Social media algorithms feed you content that makes you angry because anger drives engagement. Kids have unrestricted access to platforms that adults struggle to use responsibly.

In 20-30 years, I think we'll look at this era of unregulated attention harvesting the same way we look at doctors recommending cigarettes. The science was already there, the harm was already visible, but the money was too good for anyone to stop.

The only question is how much damage gets done before the correction happens.

r/illusionporn No_Log_3104

Trippy Pussy

Chlorine on black shirt 👕🖌️

r/singularity Neurogence

Bernie Sanders officially introduces legislation to BAN the construction of all new AI data centers, citing existential threat to humanity.

This is very concerning. I am afraid this might become the popular, dominant position on the left. Bernie Sanders is the only politician I've ever donated to. This is the most backwards position to take on AI possible. It's hard to imagine a policy worse than this proposal:

https://youtu.be/qu2m7ePTsqY?si=zdl_cuRg22Nv_Df5

It's such a shame. He is one of very few politicians who realizes the singularity is imminent and that something enormous is happening, yet his reaction to it is the most asinine viewpoint possible.

r/ChatGPT llTeddyFuxpinll

Let’s unpack this with a laser focus on facts not feelings

r/geography Distinct-Macaroon158

Why didn't this region develop into a large city?

The East Coast has cities like Boston, New York, Philadelphia, Washington D.C., Atlanta, and Miami; the Midwest has cities like Chicago, St. Louis, Kansas City, Dallas, Houston, and New Orleans; and the West Coast has cities like Seattle, San Francisco, Los Angeles, and San Diego. But why is there no major metropolis in the area shown in the map? Denver is located further south, while large cities like Calgary are located to the north…

r/Futurology Imaginary_Mode8865

When do you think we will cure aging?

45M here, sick of aging. I'm fine to die at any given point, but while I'm alive I just want my peak 20-year-old body back. Clinging on to any possibility that they might figure it out while I'm alive. I quit smoking, I hit the gym, I eat/sleep better, but maybe due to genetics I look and feel a lot older than I am. I remember 20 years ago I was at my peak, and I miss it.

People in here seem quite optimistic, which is understandable, but to avoid disappointment I would like a more grounded insight.

Edit : okay wow people are not as optimistic as I thought.

r/Frugal crystalizedwolf

5lb bag of chicken drumsticks MUST be cooked tomorrow - ideas?

Hello,

My husband and I go to a local food pantry 2x a month and they are generous enough to provide meat. This time we got a 5lb bag of chicken drumsticks but long story short it is being thawed and will need to be cooked tomorrow.

There are definitely too many in the bag to eat in one go between two people. I thought maybe cook them all then shred the meat for chicken salad, but that's my typical go-to and I wasn't sure if there were any other suggestions for something I could do on a budget?

Also, any ideas for the chicken skin? TIA

r/geography TatianaWinterbottom

Why are there no major cities on the Amu Darya, one of the most important rivers in central Asia?

https://preview.redd.it/yq3yvsytekog1.png?width=936&format=png&auto=webp&s=e952121905ff7d046e0e182657ca123912fe6f42

There are several cities near the river but not located directly on it. The river cuts through the outskirts of cities like Nukus, Turkmenabat, and Urgench. The river is nearly 50 miles away from major historical cities in the region like Bukhara, Uzbekistan; Mazar-e-Sharif (historically Balkh), Afghanistan; and Kunya-Urgench, Turkmenistan.

r/geography Longjumping-Mix-9351

Brooks Range (United States and Canada). One of the northernmost major mountain ranges, and a largely unexplored frontier in North America.

The Brooks Range lies mostly in the United States, with a small extension into Canada. It lies partly above the Arctic Circle and separates Arctic tundra from Alaska's interior forests. One river worth citing is the Colville River, which originates here and flows to the Beaufort Sea (in the Arctic Ocean).

The highest peak is about 2,736m tall. The height is relatively low because this is an old fold mountain range (North America is geologically old too); it formed about 130 million years ago.

Not surprisingly, it has an extremely frigid climate, with temperatures sometimes dropping below -40°C (which is equal to -40°F). Midnight-sun summers and long polar winter nights are experienced here.

One strange fact: marine fossils were found high in the mountains, which means the area may once have been part of an ancient ocean floor.

There are still possibly unexplored valleys in this range. (So you could become a famous explorer if you try.)

There's an optical phenomenon here called Fata Morgana, which is basically a mirage (I haven't researched much on this topic, so can't explain more).

I was also able to find some beautiful wind-sculpted trees in this region. I couldn't verify much of it; it's from Facebook. These are not dense forests, and the region lacks vegetation, but some trees are present.

Do you want to be an explorer? It is indeed one of North America's last unexplored frontiers. But be careful: it's extremely dangerous and unforgiving.

r/Frugal Hot-Helicopter640

A very small but important frugal tip. Most of you probably knew about this, but I realized it today.

Always go for any kind of grocery shopping after lunch or dinner. Or if that time does not suit you, eat your fill at home (fruits, whatever) before leaving. DO NOT GO OUT ON AN EMPTY STOMACH.

Your mind tends to spend less on unnecessary food items/snacks when your tummy is full. Another advantage is that if you have diabetes, shopping/walking around tends to lower your sugar spike, so it's a double advantage.

This tip can potentially be extended to other kinds of trips as well.

Wanna watch a movie? Watch it after lunch/dinner. Your mind will be less tempted to buy expensive concessions at the multiplex.

Going for a leisure shopping? Have a heavy lunch/dinner.

This is a small habit, but this will save you a lot of money on unnecessary food items.

r/HistoryPorn Suspicious-Slip248

'Endurance' stuck in the ice, 1915. Trapped and crushed by Antarctic pack ice, the crew of the Endurance survived nearly 500 days; the expedition was an attempt to make the first land crossing of Antarctica [1080 × 1492]

r/Frugal Holiday_Life_8907

Looking for housing with rent for under $900 a month

Hi, is there anywhere in the United States, ideally in the northeast in a state with trans-friendly laws and protections, where there is decent housing for under $900 a month? That's the current SSI payment amount and what I'm getting. I live in New York state at the moment. I don't really know how to look or what to do. It wouldn't have to be great, maybe decent internet would be nice, a bathroom. Just somewhere I would be safe and okay. I've already tried a bunch of housing lists but they take a long time and I need to find somewhere really urgently. Thank you.

r/geography VerdantChief

Large population centers and public land

What are some examples of places with high populations that are in close proximity to large amounts of unpopulated publicly owned land?

What is the largest population that is surrounded completely by unpopulated public land of equal area?

r/Ghosts grave_grace

I need advice!! There is something evil connected to my doll.

r/geography worldsworsthooker

What is the biggest urban area in the world by size?

Some sources say NYC, some say Chongqing…

Anybody have a definitive answer?

r/geography Bradinator-

Map of Glacial Lake Missoula at its max size.

Here is a map I made of Glacial Lake Missoula at its max size. I feel as though many people discuss the megaflood and tend to ignore the lake itself!

r/Frugal YogurtclosetOpen9825

found out you dont need a business license to shop at Restaurant Depot and i feel stupid for not knowing sooner

went with my cousin a few months ago cause he has a small catering side thing and i just assumed you needed some kind of business account to get in. turns out they barely check, you can just walk in and sign up for a free membership at the door

bought a 10lb bag of shredded mozzarella for like $18, same stuff is $9 for 2lbs at kroger. got chicken thighs, olive oil, canned tomatoes, its actually insane how much cheaper it is per unit on almost everything

had a bit of money from Stake saved up so i did a bigger first haul to stock the freezer and pantry and it was maybe $140 total for stuff that would've been close to $300 at a regular grocery store

only downside is obviously the quantities are large so not ideal if you live alone but if you have even just one other person to split stuff with its a no brainer. wish someone told me about this years ago

r/HistoryPorn mgwngn1

Colonel Sanders in North Battleford, Saskatchewan, 1960s. [1960 x 1102]

r/Futurology PackageReasonable922

For those who dream of a future where everything is automated/we don’t work, what exactly would people do all day? Do you think they’d get bored?

Not sure if I used the right flair for this, but I frequently hear questions along the lines of “why do we have to work” or similar. Some people just want to straight up do nothing, so what would they do day after day? Even in most fantasy films or novels people still have jobs. From my perspective, it seems as though some people just want to frolic in the flower fields and paint all day? While I do think that the way we approach work now has many issues, I think humans have always “worked” in some regard, and that it's important for us to have some sense of purpose as well as receive some sort of compensation for said work. But that's just my opinion.

r/HistoryPorn myrmekochoria

Soldier in an Italian home spraying a mixture of DDT and kerosene to control malaria, 1945.[1024x808]

r/HistoryPorn UrbanAchievers6371

Staff Sergeant Lewis Smith was Killed in Action on March 11, 1945 in Germany. He was only 25 years old. [774x1132]

Lewis Baxter Smith was born in Georgia on October 23, 1919, his mother was listed as Sallie Louisa Cook Smith.

In 1942 he married Virgie Sue Holbert from Polk County, North Carolina, they had a daughter named Patricia.

They were living in Canton, North Carolina when Lewis enlisted in the Army, serving in the 47th Infantry Regiment, 9th Infantry Division.

The 47th Infantry Regiment landed on Utah Beach on D-Day, then fought its way through France, Belgium, and into Germany.

S/Sgt Lewis Smith was Killed during the advance towards the Rhine on March 11, 1945.

He is buried at the Henri-Chapelle American Cemetery and Memorial in Belgium - Plot D Row 12 Grave 61.

His widow Virgie eventually remarried, she passed away at the age of 81 in 2004.

Picture: S/Sgt Lewis Smith and his baby daughter Patricia.

r/artificial Desperate-Ad-9679

City Simulator for CodeGraphContext - An MCP server that indexes local code into a graph database to provide context to AI assistants

Explore codebase like exploring a city with buildings and islands... using our website

CodeGraphContext, the go-to solution for code indexing, just hit 2k stars🎉🎉

It's an MCP server that understands a codebase as a graph, not as chunks of text. It has now grown way beyond my expectations, both technically and in adoption.

Where it is now

  • v0.3.0 released
  • ~2k GitHub stars, ~400 forks
  • 75k+ downloads
  • 75+ contributors, ~200 members community
  • Used and praised by many devs building MCP tooling, agents, and IDE workflows
  • Expanded to 14 different Coding languages

What it actually does

CodeGraphContext indexes a repo into a repository-scoped symbol-level graph: files, functions, classes, calls, imports, inheritance and serves precise, relationship-aware context to AI tools via MCP.

That means:

  • Fast “who calls what”, “who inherits what”, etc. queries
  • Minimal context (no token spam)
  • Real-time updates as code changes
  • Graph storage stays in MBs, not GBs

It’s infrastructure for code understanding, not just 'grep' search.

Ecosystem adoption

It’s now listed or used across: PulseMCP, MCPMarket, MCPHunt, Awesome MCP Servers, Glama, Skywork, Playbooks, Stacker News, and many more.

This isn’t a VS Code trick or a RAG wrapper; it’s meant to sit between large repositories and humans/AI systems as shared infrastructure.

Happy to hear feedback, skepticism, comparisons, or ideas from folks building MCP servers or dev tooling.

r/HistoryPorn ismaeil-de-paynes

Portrait of young Anwar Sadat of Egypt (1948) [461x572]

In 1946, after the assassination of pro-British politician Amin Osman Pasha أمين عثمان باشا, Sadat was accused of being involved and escaped from prison. For about two years he lived in hiding across Egypt, constantly moving and using false identities while the authorities searched for him.

During this time he survived by working in different small jobs. At various points he reportedly worked as a truck driver, a porter, a clerk, a journalist’s assistant, and even a cinema extra actor, blending in among ordinary people to avoid being recognized.

In 1948, he was arrested and put on trial in connection with the same case. After spending time in prison, he was eventually acquitted due to lack of evidence in 1950, which allowed him to return to public life and then back to army service.

r/Futurology Glittering_Steak2101

The green energy transition has a hidden mineral problem that nobody wants to talk about

Hybrid EVs still need catalytic converters. Hydrogen fuel cells need platinum/palladium catalysts. Wind turbines need rare earths. Solar panels need silver.

We're planning a green transition that requires massive amounts of metals from the least stable supply chains on earth. Russia (palladium), China (rare earths), and Congo (cobalt) control the inputs to our clean energy future.

You literally cannot build the green economy without mining. But nobody wants to acknowledge that paradox.

r/Frugal uhnjuhnj

The easiest way for me to save money at the grocery store is to go less often.

I used to go grocery shopping weekly. Our monthly grocery bill was in excess of $1500 for two people, even though I was buying pantry staples and manager-special meat. I could not for the life of me understand why our spending was so high.

In February my area had a huge snow storm. I was trapped at home for three weeks so I had to stretch my freezer and pantry. We used nearly everything in the house.

Then it clicked. If I can do that once, I can do it again but better.

I planned 21 dinners, aiming for fresh produce during the first week and switching to frozen produce during the last two weeks. I optimized for pantry and freezer usage. I already freeze all of my meat so that wasn't new to me. We are great about using pantry goods like bulk whole grains and dried beans so that didn't feel hard either.

We are probably going to spend $800 this month on groceries. That includes a ridiculous amount on energy drinks and soda. Probably gonna be $600 on actual food in total.

I still make a quick trip about once a week but just for my husband's lunch staples like spring mix and roma tomatoes. I don't let myself get anything but exactly what we need during that trip.

My next goal is to make it to 28 days without a big shopping trip. Maybe next time.

For now, cutting my bill in half is plenty enough to bring me a little joy.

I use Google Keep to plan my meals. I can tell you about it if you have questions. We have been doing that for a year and it's the only way I've ever found that works for my brain. I'm including a screenshot so you can get the idea.

r/geography BardicaFyre

What is an isthmus called when it's surrounded by 3 bodies of water instead of 2?

Redid the map to better ask the question. Specifically for the part in the red box, I am trying to figure out what the geographical term is for land that has 3 large bodies of water (or in this case, Great Lakes) around it. It isn't an isthmus but similar, and (unless I'm mistaken) there is technically nowhere else on earth like it.

r/AI_Agents help-me-grow

Weekly Thread: Project Display

Weekly thread to show off your AI Agents and LLM Apps! Top voted projects will be featured in our weekly newsletter.

r/Futurology lughnasadh

Solar energy can still get an order of magnitude cheaper than it is today. Researchers claim a technology breakthrough in polymer solar cells: cheap, easy-to-manufacture solar cells that can be printed on rolls of plastic.

"The polymer solar cell is able to retain 97% of its performance after 2,000 hours in air. By blending small-molecule acceptors into polymeric matrices, the research team improved molecular packing, enhancing both stability and charge transport for “ultra-stable” flexible devices."

It will be interesting to see if & how quickly this can be translated into commercially available solar tech. If this isn't a final breakthrough for polymer solar, it's certainly bringing it one step closer.

This is why solar energy will conquer the world, and all the other energy options are dead men walking. It's already the cheapest energy source in most of the world in 2026, and it will be an order of magnitude cheaper when next-gen solar tech like this comes online.

Another consequence of polymer solar tech? It is vastly easier to manufacture. China will lose a structural advantage there. By the 2030s, poorer parts of the world could be churning this stuff out at a massive scale and for a small cost. A hopeful vision for the future.

Scientists build ‘ultra-stable’ polymer solar cell with 19.1% efficiency

r/Frugal mr_darito

Girlfriend lost her job. Need recommendations please.

My girlfriend lost her job today. I make about $4k a month, and our apartment is a little over half of that. Without her income it'll all fall on me, and with this job economy I don't know for how long. What have you guys done to change your life and live frugally? I pay for almost all the streaming services; some are included in other subscriptions, some aren't. We have Mint Mobile, so we just pay once a year and we're good on phone service. My car is paid off, but I do owe a lot in debt that I'm trying to take care of. Any recommendation will help: changes in lifestyle, products, way of life. Anything, please help. These past 2 weeks have been incredibly tough, and this is the nail in the coffin. I am begging for help.

r/Futurology Big-Cry-4119

What will seem like an inevitable outcome in 20 years time because of GLP-1s

I'm kind of obsessed with the wide range of impacts GLP-1s are having on people's day-to-day lives and the wider impacts on the food system, social behaviours, family dynamics, etc.

A few examples:

  1. My friend has completely stopped drinking (even after coming off) and now primarily socialises through saunas/runs/hiking, etc.
  2. Another friend is very tired, so she has massively reduced her socialising and her consumption of literally everything. She says she does a lot more chill hobbies at home on her own.
  3. The often-quoted stat that GLP-1s are going to save airlines $580 million a year on fuel.

If we assume there will be mass uptake of GLP-1s: what do you think the inevitable societal impacts of this are? What impacts that are non obvious now do you think it will have?

One of my short term thoughts is an increase in nutritional deficiencies that require treating, and therefore increased pressure on the food system to overhaul (here's hoping).

EDIT: The response to this post has been crazy and I somehow didn't get any notifications, so I'm going through them now. I didn't include in the post (as I wrongly assumed it would be taken as read) that I agree the positive implications for anyone overweight are incredible; I'm lucky enough that I will get many extra years with my dad because of them. I was interested in what people thought the knock-on effects would be post mass adoption, and I probably framed this quite poorly!

I'm hoping that GLP-1s will push society to put more of a microscope on our food environment and how big food advertises damaging food to us.

r/HistoryPorn UrbanAchievers6371

USAAF P-40F Warhawk with the 65th Fighter Squadron / 9th Air Force, undergoing maintenance in North Africa - Early 1943 [1440x1088]

The 65th Fighter Squadron was attached to the RAF as part of the Desert Air Force in July 1942, and took part in the Western Desert Campaign, engaging in combat during the Battle of El Alamein and, as part of the 9th Air Force, supporting the Eighth Army's drive across Egypt and Libya, escorting bombers and flying strafing & dive-bombing missions against airfields, communications, and troop concentrations until the Axis defeat in Tunisia in May 1943.

Note the RAF flash on the stabilizer. Original color picture.

LIFE Magazine Archives - Hart Preston Photographer WWP-PD

r/HistoryPorn GlitterDanger

Jacques Biederer, an erotic photography pioneer, hard at work in Paris, 1928. (648x481)

Czech photographer Jacques Biederer started his career in Paris in 1913, with his brother Charles as his assistant. As their career progressed, the brothers ventured into fetish photography, exploring themes such as dominance, corsetry, and pony play, content that was considered scandalous at the time yet would be deemed quite tame by today's standards.

You can see slightly saucier examples of their work here. (Contains a bit of spanking)

r/geography Lazy_Relationship695

What usually happens on the land around center-pivot fields in Kansas?

I’ve been looking at a timelapse of center-pivot irrigation fields in Kansas while working on crop monitoring, and it raised a question for me.

You can clearly see the circular cultivated areas, but what about the land around them, in the corners and spaces between the circles?

Is that land usually pasture or used for something else?

From above, it sometimes looks like 'lost land', but I assume there must be actual farm-management logic behind it. I'm curious how farmers typically handle these areas in practice, especially in Kansas or similar Great Plains systems.

Would love insights from people familiar with local agriculture, land use, or irrigation systems. Thanks!

r/artificial Secure-Technology-78

Watershed Moment for AI–Human Collaboration in Math

"When Ukrainian mathematician Maryna Viazovska received a Fields Medal—widely regarded as the Nobel Prize for mathematics—in July 2022, it was big news. Not only was she the second woman to accept the honor in the award’s 86-year history, but she collected the medal just months after her country had been invaded by Russia. Nearly four years later, Viazovska is making waves again. Today, in a collaboration between humans and AI, Viazovska’s proofs have been formally verified, signaling rapid progress in AI’s abilities to assist with mathematical research. ...

The 8-dimensional sphere-packing proof formalization alone, announced on February 23, represents a watershed moment for autoformalization and AI–human collaboration. But today, Math, Inc. revealed an even more impressive accomplishment: Gauss has autoformalized Viazovska’s 24-dimensional sphere-packing proof—all 200,000+ lines of code of it—in just two weeks.

There are commonalities between the 8- and 24-dimensional cases in terms of the foundational theory and overall architecture of the proof, meaning some of the code from the 8-dimensional case could be refactored and reused. However, Gauss had no preexisting blueprint to work from this time. “And it was actually significantly more involved than the 8-dimensional case, because there was a lot of missing background material that had to be brought on line surrounding many of the properties of the Leech lattice, in particular its uniqueness,” explains Han.

Though the 24-dimensional case was an automated effort, both Han and Hariharan acknowledge the many contributions from humans that laid the foundations for this achievement, regarding it as a collaborative endeavor overall between humans and AI."

r/HistoryPorn Present_Employer5669

Colonel Andrei Vasilyevich Zheludov, who was wounded 3 times, awarded the Order of Glory 3rd class, the Order of the Red Star, the Medal for the Liberation of Sevastopol, and two medals for military merit lying on his deathbed in 2010. [623x760]

r/Frugal Repulsive-Weather921

How do I stop the little $2-$8 buys between classes?

I'm a college student in Texas and I keep getting hit by those tiny convenience purchases between classes and errands. Not big splurges - a bottled drink, a coffee, a quick snack, then later a $2-$5 in-app purchase while I'm killing time. Harmless in the moment, but by the end of the week it really adds up.

I want a system that actually works on campus, not some perfect meal-prep fantasy.

What I've tried so far:

- Carrying a refillable water bottle (helps - until I forget it at home).

- Packing granola bars (I get tired of them after a couple days).

- Deleting delivery apps (this wasn't the core issue; it's the gas station and campus stops).

What I'm thinking of doing:

- A small "between-classes" pouch in my backpack with shelf-stable snacks I won't hate after two days (nuts, jerky, peanut-butter crackers, etc.).

- A weekly cash envelope for on-campus spending so I actually feel the money leaving my hands.

- Swapping paid time-waster apps for a couple offline games I already own to avoid microtransactions.

If you've broken this habit, what actually worked for you? I'm looking for low-effort ideas that don't rely on daily willpower.

r/Futurology SiarheiBesarab

In exactly two months (May 2026), Peter Thiel’s Enhanced Games launch with zero doping limits. I am a GCRI researcher, and here is my full analysis on human speciation, Open Source Wetware, and "Corporate Body-Repo"

[NOTE FROM OP]: Yesterday, I posted a version of this deep dive that immediately sparked an insanely good discussion. Unfortunately, it got taken down because I broke Rule 4 by dropping an external link to my source interview directly in the text. I spoke with the mod team, realized my mistake, and stripped the links out entirely. I am re-submitting this full essay as a pure discussion post because your arguments yesterday were brilliant. I want to keep that momentum going.

***

In a matter of weeks, Las Vegas will host the inaugural Enhanced Games on May 24, 2026. The founders, Aron D’Souza and tech billionaire Peter Thiel, are officially blowing up traditional sports.

The initial program includes swimming, athletics, and weightlifting. There is no WADA involvement and no anti-doping control. Athletes will be permitted to compete using CRISPR gene engineering, myostatin inhibition, anabolic steroids, and cutting-edge neurostimulants under medical supervision.

USADA head Travis Tygart famously called it a "dangerous clown show". Traditional critics warn of a loss of athletic integrity.

However, as a futurist, and visiting researcher at the GCRI, I recently gave an interview arguing that the debate has entirely outgrown sports. Treating this event as just a doping-heavy Olympics misses the point completely. We are witnessing an existential bifurcation point. Here is my full breakdown of the socio-technical nightmare, the ethical paradoxes, and the undeniable medical breakthroughs heading our way.

1. Homeostasis vs. Allostasis

Humanity has basically hit the ceiling of our natural physical potential. World records are broken less frequently. The margins are shrinking while doping scandals are multiplying. The IOC keeps selling the public a promise of pure natural miracles, but those miracles rarely exist in a sterile bubble anymore. WADA has essentially turned into a punitive body of selective justice.

The Enhanced Games drop the pretense. By operating in a regulatory gray zone, they initiate a massive open-source N-of-1 clinical trial on human subjects. This completely flips the foundational postulates of medicine. Conventional medicine operates under homeostasis. The goal is to return a sick patient to a normalized baseline. The Enhanced Games operate under allostasis. This means adaptation through radical change, upgrading healthy humans well beyond their biological ceilings.

We are looking at Formula 1 but for biology. In auto racing, technologies like ABS brakes and active suspension crossed over from F1 testing tracks directly into mass-market cars. The technology transfer here will be similar. The specific genetic intervention or pharmacological cocktail that allows an athlete to sprint 100 meters in 8 seconds could eventually become the exact mass-market therapy that keeps your grandmother from falling and fracturing her hip.

2. Modern Gladiators & "Corporate Body-Repo"

Supporters frame this event around bodily autonomy. They literally adapt the classic feminist rallying cry of "my body, my choice" and expand its scope to genetic modification.

This mirrors transhumanist philosopher Max More and his Proactionary Principle. More argues we should assess technology risks in the real world rather than sitting passively through inaction. He champions Morphological Freedom, which is the absolute right to alter your own physical form. If someone is willing to trade 20 years of life expectancy to earn a million dollars and run faster than Usain Bolt, whose right is it to stop them?

But look closely at the underlying mechanics. The athlete stops being a sovereign human subject and transforms into the physical chassis of a racing car. Corporations will do the tuning. We will absolutely see corporate stables replacing national teams. Picture Team Pfizer, Team Boston Dynamics, or Team Neuralink testing their tech live on television.

These athletes essentially become modern gladiators. When a 23 or 25-year-old signs informed consent in pursuit of fame and massive payouts, they probably do not grasp the long-term epigenetic horror of their choices. The deferred risks are enormous. We are treating humans as extreme data generators.

3. The Shadow of East Germany and WWII

EG evangelists complain that bioethics simply tie the hands of science. History has a brutally different take.

State-sponsored deregulated enhancement actually happened before. In the 1970s and 80s, East Germany operated the infamous State Plan 14.25. They ran a record factory by systematically feeding Oral Turinabol, produced by Jenapharm, to young athletes without their proper consent. The tragic case of Heidi Krieger perfectly illustrates this danger. As the 1986 European shot put champion, she was secretly administered heavy steroids disguised as harmless vitamins. It altered her phenotype and hormonal status so aggressively that she later transitioned to Andreas Krieger.

Look even further back into history. The only time the brakes were truly and completely removed from medical ethics was during the research conducted by physicians of the Third Reich. Those horrific experiments proved that achieving 100 percent biological efficiency requires the complete erasure of empathy. The Nuremberg Code was written in blood specifically so that science would not devour us in the name of progress.

4. 1980s Group B Racing & The New Civilian Market

For the first five years, we can expect the primary sponsors of the EG to be crypto exchanges, casinos, bookmakers, the adult industry, and fringe biohacking brands. It will thrive entirely on shock value. It might easily follow the trajectory of the legendary 1980s Group B rally racing. Those races had virtually no power limits and were wildly popular before eventually being shut down because people died on the track.

But if the Games run well and establish a safety record, major establishment corporations will step out of the shadows. A massive civilian market will follow. We will stop judging upgrades and start celebrating them as rational competitive advantages. Society will adopt these protocols for extending active life, improving sleep, endurance, and concentration.

5. Biological Speciation: An Arms Race in Your Office

This is where the risk hits everyday people. The Enhanced Games will likely act as a harbinger of species-level stratification. Society will quietly divide into ex-humans and post-humans.

Body modification today might just be for athletic gold. Tomorrow, it becomes the baseline for cognitive performance. Think about it. Many corporate managers are already using aggressive neurostimulants through so-called Silicon Valley protocols to boost their KPIs. What stops corporations from implicitly requiring those neurological upgrades to keep your job? We are inching our way toward a dystopian labor market where refusing a physical modification literally equals professional obsolescence.

6. Prosthetics & The End of Anthropomorphism

A common layperson myth suggests that war is the ultimate engine of progress for prosthetics. Economists have debunked that notion time and time again. The EG’s success could actually be a much larger trigger for prosthetic revolution than warfare.

Right now, traditional Paralympic engineering attempts to mimic lost limbs aesthetically. In the unregulated Enhanced arena, functionalism will violently override human aesthetics.

Ask an engineer why a sprinter even needs human-shaped knees. A knee is just an extra joint and a massive point of mechanical failure. We will see them swapped out for spring-loaded aerospace-alloy structures that look closer to an ostrich’s legs. Javelin throwers do not really need hands. They need osseointegrated locking catapult mechanisms fused directly into their bone structure.

We will see hybrid athletes completely normalizing hardware tuning. The ethics of disability will be turned entirely inside out. There might even emerge a specialized class of athletes opting for voluntary elective amputation, completely willing to replace perfectly healthy biology with durable robotic systems. It sounds like monstrous sci-fi to a layman, but it makes absolute logical sense if winning is the only metric.

Conclusion

Will the emergence of this competition force traditional sport to rethink its anti-doping models? I honestly do not think they will merge.

The traditional Olympics will stick around as a cultural museum piece. The IOC will turn into a ceremonial custodian of history, functioning very much like Kabuki theater, historical reenactments, or contemporary ballet. It will remain a prestigious, state-subsidized, and largely dull safe space for puritans.

Meanwhile, all the tech hype, massive capital, and youth culture will flow toward the Enhanced Games. Modern audiences hunger for extreme authenticity, and the classical Olympics reads as incredibly dishonest to anyone paying attention. The Enhanced Games are dangerous precisely because they are a seductive, aggressive alternative that is fully honest in its cynicism. They proudly admit to enhancement.

At the end of the day, people have been cyborgs for a very long time. We wear pacemakers. We put contacts in our eyes. We implant metal into our teeth and drink stimulants to get through the workday. We are just terribly embarrassed to admit that the natural human is completely obsolete. In May 2026, Peter Thiel is finally forcing us to drop the taboo.

Let's discuss. Are we watching the greatest medical leap in decades, or just throwing human ethics in the trash for a Vegas spectacle?

Note for everyone who participated in yesterday's thread: As agreed with the moderation team, there are zero links in this main text. I will drop the pure URL to the original full interview article down in the comments for those who wanted to read the extra details.

r/Frugal Ok-Royal4374

Cheap, repeatable little treats that don't turn into takeout spending

I'm trying to reset my spending after a rough few months - my parents separated and everything's felt chaotic. When I'm stressed I default to small buys that add up: coffee runs, convenience-store snacks, and then it snowballs into takeout.

I don't want a joyless no-fun budget, because that's usually when I rebound and spend more. I'm looking for low-cost, repeatable ways to treat myself at the end of a long day that still feel a little special.

Constraints: I live with roommates and have a tiny freezer (no room for bulk meals), I'm in school so time is limited, and I want to keep things simple - I'm fine cooking basic stuff but not complicated meal prep. Also trying not to buy more stuff just to look frugal.

Things I've already tried: iced coffee at home, popcorn on the stove, and a cheap face-mask night. Those helped for a while but I get bored and then the urge to go out comes back.

What are your go-to low-cost treats that actually scratch the takeout/impulse itch? Bonus if it's something you can turn into a routine so I don't have to decide every time. I'd love ideas that feel a little special without sneaking in extra spending.

r/Frugal Embarrassed_Pay1275

My DIY carpet cleaning hacks are failing. Is it time to buy a machine, or is there a better way?

I'm at my wit's end. I've spent the last week trying to be frugal by using vinegar, baking soda, and even acetone on a stubborn carpet stain. The result? My house smells like a salad bar, the stain is now a weird gray smudge, and I'm pretty sure I'm damaging the fibers. I've already spent about $50 on different miracle DIY ingredients, and I feel like I'm just throwing money down the drain.

I'm starting to think that DIY chemicals are a trap and might end up costing me a fortune in carpet replacement.

Should I just invest in a dedicated cleaning machine? Is there a machine that actually works without needing expensive chemical refills? I've heard about high-temp steam extractors, are they worth the investment? I want something that lasts and is actually frugal in the long run.

Please help me stop the DIY madness. What do you guys use to keep your carpets clean without ruining them?

r/artificial Fred9146825

Bringing Code Review to Claude Code

Today we're introducing Code Review, which dispatches a team of agents on every PR to catch the bugs that a quick skim misses; built for depth, not speed. It's the system we run on nearly every PR at Anthropic. Now in research preview for Team and Enterprise.

r/Ghosts acvcardo

Any help for two ghosts that won’t leave me alone

Hi, sorry. I have been able to see ghosts since I was a child, and I have had bad experiences in the past with different entities before I learned how to guard myself. But recently I have been followed by these two entities, and I can just feel that something is wrong with them. I can't tell you what it is, but something is off about them. They just sit and stare all day. They don't do anything else. I have contacted a few other people on the other side; that's the only thing that seemed to get rid of them, not my guarding or telling them to leave. I'm just at a point where they won't leave me alone. They have been grabbing me, but I don't know what to do, because I can't use incense in my building. I can't use anything with smoke because it will set off the fire alarm. I've tried everything I could possibly think of to get rid of them, but they just keep coming back. I'm just wondering if anyone knows anything that could help me with them. I haven't done anything that would draw their attention to me. I haven't used an Ouija board or any rituals. I just don't know why they have attached to me. If you know anything to get rid of them, that would be amazing, because I can't seem to get through to them. They won't talk to me. They won't show me what they want. They just sit there and stare.

r/Futurology Busy-Debate-7386

How the Industrial Surplus Enabled the Rise of the Surveillance State

The Industrial Revolution marked a turning point that drastically altered the architecture of human civilization through two fundamental axes. First, this process emancipated the vast majority of society from the servitude of subsistence tasks. While in the pre-industrial era agriculture, livestock farming, and fishing consumed between 80 and 90% of the workforce, mechanization allowed humanity to diversify its activities. This liberation of labor was the catalyst that enabled the emergence of vocations and social structures that were previously unfeasible, simply because the absolute priority was caloric survival.

However, this evolution brought with it a second, more somber consequence: it endowed the State with a technical capacity for surveillance and regulation that surpasses the ambitions of any tyranny of the past. Not even the most invasive intelligence apparatuses of the 20th century can compare with the depth and reach of contemporary digital infrastructures.

From a historical perspective, the limited productivity of pre-industrial economies imposed a natural limit on the extent of the state. With minimal productivity, the bureaucratic apparatus was necessarily small (representing less than 10% of the population) and was limited to basic functions such as tax collection and territorial defense. At that time, the meticulous management of citizens' private lives was not only impractical but a material impossibility.

This barrier crumbled after the Industrial Revolution. The exponential increase in productivity generated sufficient surpluses to feed an ever-expanding bureaucracy. With fewer individuals required for basic sustenance, the human surplus was absorbed by institutions of control, compliance, and public administration. Thus, in Western nations, the last century has witnessed a steady growth in the civil service, accompanied by an increasingly dense network of laws and mechanisms of coercion.

Today, we are witnessing not only an expansion of the state but also its infiltration into the very fabric of daily life. Surveillance has become invisible, delegated to algorithms that process our information, track our movements, and scrutinize our speech, even reaching the point of predictive power. Whistleblowers' revelations in the last decade confirmed that mass surveillance was not a distant dystopia but a fully operational system, implemented through the forced alliance between state agencies and technology corporations.

In this new paradigm, especially after the consolidation of the digital age, any form of dissent can be identified and suppressed with surgical speed. The architecture for total control has already been built. The current debate no longer revolves around the existence of these tools, but rather the intensity, secrecy, and speed with which they will be fully implemented.

r/Futurology International-Eye613

The role of technological advancement in economic development

One of the most consistent drivers of long-term economic development has been technological advancement.

When new technologies emerge, they increase productivity—allowing the same amount of labor and capital to produce more output. This productivity growth is what enables economies to raise living standards over time. Historically, major waves of development have been closely tied to technological breakthroughs, from industrial machinery to modern computing and digital networks.

Technology also reshapes entire sectors. It lowers production costs, creates new industries, and changes labor demand. For example, automation and digital platforms have transformed manufacturing, logistics, finance, and communication. While these changes can disrupt certain jobs in the short term, they also generate new opportunities and markets in the long run.

Another important effect is the diffusion of knowledge. As technologies spread globally, developing economies can adopt existing innovations rather than invent everything from scratch. This “catch-up growth” has helped many countries accelerate their development over the past few decades.

At the same time, technological progress brings policy challenges—such as managing inequality, workforce transitions, and ensuring access to education and infrastructure needed to participate in a more technology-driven economy.

Overall, technological advancement remains one of the central forces shaping economic growth, productivity, and structural change across the world.

r/geography Brief-Luck-6254

What are some cities whose relevance has waned or faded completely?

Ctesiphon (pictured above) was an important city in the Middle East during antiquity and the early Middle Ages; however, its relevance faded until it became the set of ruins it is today, replaced by other cities like Baghdad.

What are some other examples of cities which have undergone similar fates?

r/illusionporn bigjobbyx

Custom Chessboard illusion generator? Link in the description

r/comfyui crystal_alpine

ComfyUI launches App Mode and ComfyHub

Hi r/comfyui, I am Yoland from Comfy Org. We just launched ComfyUI App Mode and Workflow Hub.

App Mode (or what we internally call comfyui 1111 😉) is a new mode/interface that allows you to turn any workflow into a simple-to-use UI. All you need to do is select a set of input parameters (prompts, seed, input image), and the workflow becomes a simple web-UI-like interface. You can easily share your app with others, just like you share your workflows. To try it out, update your Comfy to the new version or try it on Comfy Cloud.

ComfyHub is a new workflow-sharing hub that allows anyone to directly share their workflow/app with others. We are currently admitting a select group to share their workflows, to limit moderation needs. If you are interested, please apply on ComfyHub.

https://comfy.org/workflows

These features aim to bring more accessibility to folks who want to run ComfyUI and open models.

Both features are in beta and we would love to get your thoughts.

Please also help support our launch on Twitter, Instagram, and LinkedIn! 🙏

r/artificial AuditMind

Are we in the "modem era" of AI?

In the early days of the internet we were in a similar situation.

Modems, early Linux systems, the first websites.

Technically primitive by today’s standards, but something important had appeared: information could suddenly move freely across a network. That was a novelty at the time, and not many people understood it yet.

At the time the real question was not about the technology itself.

The question was much simpler.

What can we actually build with this network?

Today we seem to be entering a similar phase again.

Large language models and related systems allow machines to interact with knowledge: documents, code, conversations, procedures. The tools are still very rough. Many experiments will disappear. Much of what we see today will not survive.

But that is exactly what makes this moment interesting.

The real challenge ahead is not the models themselves.

It is the integration of knowledge and machines into real systems and organisations.

In that sense, this feels less like a finished technology wave and more like the early internet again.

A lot of experimentation. A lot of curiosity. And many things we have not imagined yet. And a lot of fun 😄

r/Futurology Busy-Debate-7386

The demographic experiment of industrial civilization

For more than two centuries, every major wave of technological innovation has been accompanied by recurring fears about the future of employment. From the mechanization of the Industrial Revolution, through electrification, computing, and digitization, each advance has multiplied the productivity of human labor. Yet the historical result has not been permanent mass unemployment. Instead, economies have continued to expand, and new occupations have continually emerged. The reason is relatively simple: when productivity increases, the cost of producing goods and services decreases, which in turn generates new forms of consumption, new industries, and new jobs. Since human needs are essentially unlimited, new economic activities capable of absorbing human labor have always arisen.

This historical observation suggests an implicit condition: as long as there is any set of tasks that machines cannot fully perform, there will continue to be a demand for human labor. Technology can transform tasks, increase efficiency, or eliminate specific occupations, but if there remains any area where human intervention is necessary, the economic system will tend to reorganize labor toward that area. In this sense, human employment has persisted not because technology advances slowly, but because it has never eliminated all the functions that require human capabilities.

The problem arises when this dynamic intersects with another structural phenomenon of modern societies: demographic decline. In much of the developed world, birth rates are well below the replacement level. For decades this may seem manageable, but in the very long term it implies a sustained reduction in population and, therefore, in the workforce. If automation continues to increase productivity but does not completely eliminate the need for human labor, then a shrinking population will eventually face a structural shortage of workers. The economy may become more efficient, but it will still need people to operate systems, maintain infrastructure, manage institutions, and provide countless social services.

If this demographic trend continues for centuries, the result could be a process of progressive economic contraction. A smaller population means less total production, less specialization, and a reduced capacity to sustain complex technological structures. Over time, a highly sophisticated civilization could lose some of its material capacity simply due to a lack of sufficient people to maintain it.

Furthermore, if low birth rates are linked to the cultural and material changes brought about by industrialization (urbanization, extended education, high child-rearing costs, and individual-centered lifestyles), then the demographic dynamics have a deeper implication. As long as these conditions persist, fertility will tend to remain low. Consequently, population decline would not simply stop in a somewhat smaller or less complex society. It would continue cumulatively over generations, progressively reducing the economic scale, institutional density, and technological level that society can sustain.

In that scenario, the contraction would not be limited to moderate simplification. As population and productive capacity decline, many of the structures that characterize industrial civilization (complex infrastructures, global production networks, and highly specialized technological systems) would become increasingly difficult to maintain. Society would tend to gradually simplify until it reaches material conditions very different from those of today. Only when conditions return to simpler ways of life (similar to those that existed before industrialization) could demographic incentives reappear that stabilize the population.

From this scenario arises a fundamental dilemma for modern societies. There are, in principle, three possible technological developments capable of permanently breaking the link between population and productive capacity.

The first would be the creation of fully functional artificial wombs. If human reproduction could be carried out on a large scale outside the human body, the number of births would no longer depend exclusively on individual fertility decisions. This would allow for artificial population growth and compensate for declining birth rates.

The second would be the emergence of technologies capable of halting or reversing biological aging. If people could remain healthy and active for extremely long periods, the need for generational replacement would decrease radically.

The working-age population could continue to grow even with very low birth rates, because people would not leave the workforce due to aging.

The third scenario would be the development of artificial general intelligence (AGI). Unlike current systems, an AGI would be capable of performing essentially any cognitive task a human can perform. In that case, the labor supply problem would virtually disappear, because there would be an almost unlimited source of artificial labor capacity. Since human needs tend to expand with wealth and time, the demand for goods and services would remain potentially infinite, while the labor supply would no longer be limited by the size of the human population.

In the absence of any of these three technologies (mass artificial reproduction, the elimination of aging, or artificial general intelligence), modern societies could face a structural constraint that is difficult to avoid. History shows that automation alone does not eliminate the need for human workers. But if the population continues to decline for generations, that need could become an increasingly stringent limit on economic and technological complexity.

Therefore, the dilemma of advanced societies can be clearly stated: either technologies emerge that can break the link between population and productive capacity, or demographic decline will initiate a prolonged process of civilizational contraction. If the causes of low fertility are linked to the very social model of industrial modernity, population reduction would only halt when society has regressed enough for the demographic conditions that historically sustained stable populations to reappear. In that case, the point of equilibrium could be found in ways of life much closer to pre-industrial societies than to contemporary technological civilization.

r/artificial Fit-Elk1425

2minutepapers covers Nvidia's self-driving car update, including its use of reinforcement learning, and reactions

r/Futurology sundler

Local policies targeting better health can lead to more support for pro-climate action initiatives

PLEASE NOTE: I understand that current politics in America means much of this doesn't apply there right now. What I'm saying here is more relevant to the rest of the Western world and other countries. Also, this isn't really a political issue outside of certain countries and I'd really much rather we didn't derail discussion towards whose country is the BesTestEr.

 

Mostly, the climate is spoken of as a global issue and that's certainly right. But we should also point out how it affects us on a local level.

We are more likely to support climate action when we can feel the benefits in our daily life. We really should also focus on the health benefits of local climate policies such as cleaner air, safer homes and streets, improved walking and cycling infrastructure, cooler cities, and fewer illnesses and poor health due to extreme climate.

Many deaths and illnesses are linked to pollution, which is a visible problem in many cities and towns, especially in poorer industrial areas. Our water quality matters to a lot of us. We are happier and healthier when we live in greener areas, which can also help buffer extreme weather such as heat waves or heavy rain.

It's a lot easier to carry out such smaller-scale policies than national or global ones, which seem out of reach for most of us. These short-term, viable, visible, and more achievable victories can build public support and bring more people, government departments, and organisations on board (not just those focused on the environment). This means that larger-scale, general climate plans will also be more likely to succeed.

r/artificial the_elephant_stan

What would the popping of the AI bubble actually mean for AI as a technology?

I understand the reasons why the AI industry is a bubble and agree that it will surely pop.

But so many people treat AI as if, after the pop, we won't have to deal with it anymore. On the consumer scale, it's now integrated into every platform. On the global scale, it's now a major part of "defense" strategies.

The dot-com bubble didn't mean the death of the Internet. The housing bubble didn't mean mortgages went away. And we still grow tulips.

What does the bubble popping mean for the tech itself?

r/Frugal Beautiful_Coat_9294

UPDATE: How to get housewares and stuff needed for new apartment without breaking the bank

I followed your advice and want to thank everyone who commented on my post to help me. I am overwhelmed with the results and now have a fully furnished apartment and fully stocked kitchen all from donations.

I posted on Facebook for friends and family and received some kitchen ware and some furniture from my parents and other family members.

I posted on three buy nothing groups on Facebook and received more kitchen items. A man who has a junk hauling business saw my post and told me he gets a lot of nice furniture that people throw away and asked me what I needed. I gave him a list. Yesterday he delivered a sofabed loveseat (like new), a dining room table with four chairs, a brand new area rug, a coffee table, a file cabinet and an office chair. And he didn’t charge me for delivery or anything!!

The only thing I didn’t get was a set of patio furniture, but I can afford to get that on my own. I am blown away, you guys, by people’s generosity. People were giving me candles, towels… all kinds of stuff!! It literally saved me thousands of dollars in furniture and housewares.

r/Ghosts Dangerous-Kitchen-97

Genuinely curious what this community thinks

Back story is I work at a very old university. I also work in film and scout locations for movies and tv shows. This was captured in a series of photos taken for a building scout. I was with the person when they took this photo and saw nothing, but this is a 1930s or '40s building and the photo really caught me off guard.

r/Ghosts LilCelery100

Might have caught Something in a forest.

So I went for a walk with my friend to a lake, but before we proceeded I took a photo with my iPhone 14 night-mode camera, as I always do, just in case there is anyone there. I zoomed in and saw what's shown in the first image. I have no clue if it's the night-mode camera or, of course, a ghost. So I took another and this thing did not appear. But there is no way that's a human, and there are no shadows or trees to cause this! So either A) it's the camera or B) it's a ghost (also, I'm unsure of the history of this lake, so someone could have died!)

r/Ghosts j4ck_0f_spades

Is there a specific wait time I have to wait before talking to spirits again?

So I do talk to spirits sometimes with a spirit box app, since I can't buy one yet, but I dunno how long I should wait. Can I do it twice a day, or do I have to wait for tomorrow? I really don't want to anger or rush spirits!

r/Ghosts ToughPerception3579

People who worked in haunted places, what was your wildest encounter?

r/AI_Agents help-me-grow

Weekly Hiring Thread

If you're hiring use this thread.

Include:

  1. Company Name
  2. Role Name
  3. Full Time/Part Time/Contract
  4. Role Description
  5. Salary Range

r/ImaginaryPortals YanniRotten

Pustynny portal (Desert portal) by Jarosław Jaśnikowski

r/Futurology lughnasadh

In 2025, for the first time, solar and wind produced more electricity than fossil fuels in the European Union. The bloc's goal to reduce fossil fuel use by 90% by 2040 seems on track.

The 2026 Middle East War is likely to be the last in human history where a disruption to fossil fuels means a major global economic impact. By the 2030s, both China and Europe will be well on their way to totally decarbonising their economies, and Chinese manufacturing exports of renewable tech will be doing the same for much of the rest of the world. The age of fossil fuels will be disappearing in the rear-view mirror.

The longer the war goes on, the more renewables win. It will be clear they mean cheap, reliable, clean, and freedom from global instability. Tens of millions of people around the world who have cars to buy in 2026 will be looking at EVs with new appreciation.

DATA/ARTICLE - In 2025, solar and wind produced more electricity than fossil fuels in the European Union

r/ImaginaryPortals Lol33ta

Purple by Raja Nandepu

r/leagueoflegends AutoModerator

Monday Megathread! Ask questions and share knowledge; newcomer questions encouraged!

Welcome to the latest Monday Megathread, where you the community get to ask your questions and share your knowledge.

Need help against a certain champion? Unsure how and where to ward? Looking to improve your csing? This is the place to ask. This weekly thread is a place for new players to ask questions and get help/advice from more experienced players. So, don't hold back, get your game related questions ready and post away, and hopefully someone can answer them!

Previous threads


If you wish to just view top level comments (ie questions) add ?depth=1 to the end of the page url.

Looking to chat with people live? Come check out our discord channel here! We also have the channel #new-player-help if you want to ask questions there.

If you are willing to learn, /r/SummonerSchool and its respective discord are always willing to teach.


Basic Mechanics explanation in our Wiki

New Player Guide by /u/The-All-Tomato

Riot's New Player Guide

LolEsports New Viewer Guide

Other:

Please sort this post by new, so that you can see the newer, unanswered questions.

r/Ghosts Direct-Hat5929

Is my house haunted or am I just paranoid?

Ok so I live in a 200-year-old house in a very historical location in New England. However, we believe that parts of my house (my room!) were moved here from someplace else and could be much older. This already means that A) it has a TON of history, and B) it's old, so my experiences could probably be explained by normal old-house stuff.

But a few times over the last year, I’ve heard family members call my name when they were either in a different part of the house or not even home. And it sounds just like how they would call my name normally.

Then, 3 days ago it happened again, and it sounded like my sister, but it came from the opposite direction of her room. The next day I heard my mom call me by her nickname for me when she was out getting groceries. That night when I was sitting in bed I heard a weird sound coming from the door of my bedroom that sounded like a mix between a knock and a click. It happened twice, and then a charger that was on my desk and plugged into my computer fell off my desk and landed 2 feet away, as if someone unplugged it and pushed it. Then the sound happened 3 more times. Honestly I don't know if it was just the fact that weird stuff had been happening, but I did NOT feel alone that night after what happened.

So my question is, is something supernatural happening in my house or is it just my imagination and an old creaky house? I know this isn’t the most interesting story but thanks for reading and I appreciate any responses or advice!

r/ImaginaryPortals Lol33ta

Echoes of the Void by Vadim Sadovski

r/illusionporn bigjobbyx

Screen bender

r/Ghosts Hot-Top3124

Strange noise woke my whole family at night

My family got woken up by a noise at around 5:50 am.

Can someone confirm this is NOT a ghost and perhaps just an animal.

r/illusionporn bigjobbyx

Can you reverse the spin? (looping version)

r/ImaginaryPortals Lol33ta

Vault Part 01 by Oliver Beck

r/Ghosts karatemnn

A Haunted CD plays live on Japanese Television variety show

I enjoy Japanese aesthetics, and nothing beats the paranormal investigations and shows on Japanese TV, though

from BenjaminDa91346 on twitter:

"The “creepy CD that changes its sound every time you listen” sent to Waratte Iitomo is insanely terrifying

When they played it again later, a sound they’d never heard before started playing, and the entire studio was enveloped in fear

And then, after Tamori messed around with it, something horrifying happened"

from NicoleA18060980 on twitter

It's hilarious and terrifying—the creepy CD that was sent to us that changes its audio every time you listen to it is freaking scary as hell.
It was originally a CD of B'z's album RISKY.

A few days later, when I tried playing it again, completely unfamiliar sounds started flowing out, and the studio was enveloped in terror.

And after Tamori messed around with it, something horrifying happens.

r/Ghosts Professional-Gur6746

Any Haunted Locations in Plano, Texas? I’ve been trying to find a place to test out ghost hunting gear.

Title says it all, my birthday is coming up and I’d like to do a bit of ghost hunting for myself as it is something I’ve wanted to do for years. Just never really had the chance. Are there any abandoned or haunted locations in Plano, TX that I won’t get in trouble for walking around late into the evening? Public property preferred.

r/Ghosts Significant_Snow7980

Thought I'd share my experience seeing a ghost with my brother

I think it was around 2005-2006. My brother and I were driving down a road with a church and a cemetery not too far away. It was a highway with a grass median, two lanes on each side. I was 14, he was 17 or 18, and on the opposite side we saw a shadow, the figure of a man, almost translucent, running across the two lanes on the other side and into the woods. I still think about it, and I know people call these shadow people.

r/ImaginaryPortals Lol33ta

The Way Out is Through by Kevin Carden

r/Ghosts yuleking

Came into work to find a ghost had knocked a tv over

First movement was at 8:30 at night, 3 hours after we left; it was knocked over an hour later at 9:30.

r/Ghosts Crestfallen_Quercus

I heard a crying lady in my hotel stay. Hotel Allegro in Chicago.

Posting here because I had a weird experience last night in Hotel Allegro. I heard what I thought was a wailing, crying lady. We had the TV on (at low volume) and my partner was watching TikTok videos (at moderate volume) when all of a sudden I heard what sounded like a lady crying. It was brief, and it sounded like it was coming from the room next door. It was about 10 p.m. on the 19th floor. My partner was too focused on his TikTok videos, but I am certain it wasn't the TV and it wasn't his videos. I'm still staying in the hotel for another 2 nights, so I'll keep you all posted. I am a believer in the paranormal, but I also like to consider plausible, realistic explanations before resorting to thinking it's paranormal. I'm interested if anyone else has had experiences here, considering I saw a post a few years back about a crying lady in this same hotel. Surely there must be some history in this old building?

r/ImaginaryPortals I_Burn_Cereal

At the Gates of Hel by Andrea Carbone

r/Ghosts redheadwbangs

I think I’m haunted and I don’t know what to do

There seems to be a ghost or spirit or something attached to me. It is changing the reality around me. Cigarettes are appearing out of seemingly nowhere. Shoes are being found in places that they weren’t before. Lighters are showing up out of nowhere. It is interacting with my reality and changing things around me. I don’t know how to get rid of it. It’s driving me up the walls. I’ve been hospitalized because of it. Any ideas on how to get rid of it?

r/Ghosts Blonde_Meringue98

Supernatural/Demonic (?) happenings while pregnant

Hello! I’m almost 8 months pregnant with my second baby, and some strange things have been happening. For the past two nights I’ve woken up after seeing the most horrific things happening to my firstborn child. When I woke up the second time (this morning) the room was still dark, but there was a large black mass near the ceiling close to me, and it had a buzz to it. Like energy was floating from me into it, and it was the scariest thing. It wasn’t par*lysis, because I could move and was trying to wake my husband up. (On top of this, for the past week I’ll hear people I know screaming at me when I’m on the verge of waking up, and then the next day I’ll have a message from them.)

When I was pregnant with my first, I was in bed and I could hear someone walk into the house, go up the stairs, down the hall, and go straight into my room, and then stop dead center at the foot of the bed. My husband shot up out of bed and started screaming. He said he physically saw this whole thing happen and saw them stop at the foot of the bed. I called the priest in town when this one happened but never followed through with him coming over.

Is this just pregnancy hormones, or is this actually supernatural? Should I put this in a different subreddit?

r/Ghosts Vegetable-Ear-9731

My Encounter With A Cineplex Ghost In Vancouver

A few years ago I was in a Cineplex watching a movie. I believe this was shortly after the pandemic. As I was watching the movie I saw a man wearing a top hat watching the screen. The man looked completely white and somewhat transparent.

I saw the thing and I looked down at my phone to take a picture, but after loading up my camera the ghost was gone.

So, I texted my friend, and I texted my brother-in-law about what I saw. To my surprise, my friend said "Ghosts are real, dude," and my BIL said "Ghosts seem to like movie theaters," and I had a conversation with both of them where they both acted like it wasn't all that special, that apparently seeing ghosts is a normal thing.

It felt weird. Normally, both of them dismiss all things supernatural including ghosts, but it felt like I was talking to completely different people as they just nonchalantly stated that ghosts are real, and that it's common knowledge that ghosts are real.

I don't know why I thought about this encounter randomly, but sometimes I think back to that message. "Ghosts are real, dude."

r/ImaginaryPortals annieann_

Cover Art by Aki Senkinn

r/leagueoflegends AutoModerator

Patch 26.5 Bug Megathread

Greetings Summoners!

With every new patch Riot introduces to balance out champions and items there are some unforeseen issues that creep up and cause disruption during gameplay. We have noticed that these issues which eventually get fixed clutter up the subreddit immediately following the patch.

We want to avoid this by having a single Megathread which will be posted after every patch so that you guys can report the various issues in one place. This allows Riot to easily keep track of the bugs by providing a central hub, and also allows other users to confirm issues that they might have encountered.

Note only bugs caused by the 26.5 Patch should be reported below.

Prerequisites to be noted before reporting a bug


  1. A bug report should ideally be accompanied by a screenshot or a video. This provides credibility to your report.
  2. Steps to recreate the bugs should be submitted if possible. This helps Rioters recreate the bug and helps them find the cause behind it.
  3. The bug must have been caused by the latest patch.

Format when reporting a bug: When reporting a bug, please provide as much information as you can about your computer.

Server: The server you encountered the bug (NA, EUW, EUNE, TR, RU, BR, LAS, LAN etc)

Type of Bug: Client Bug, In Game Bug etc

Description: Describe what was the bug that occurred.

Video / Screenshot: Insert screenshot (F12 in game) or Video of the bug occurring.

Steps to reproduce: Provide the steps necessary if someone else had to reproduce the bug.

Expected result: What should have been the result when you follow the steps mentioned above.

Observed result: What was the result you obtained when you followed the steps mentioned above?

Reproduction rate: If you try to recreate the bug how successful are you in causing it to occur? (1/10: Occurs once every 10 tries, 5/10: Occurs 5 times out of 10, 10/10: Happens every single time)

System specs: Processor, Video card, Ram, HDD/SSD, everything you can provide, even drivers.


Example Bug:

• Server: EUW

• Type of Bug: In-Game Bug etc

• Description: Zed's R (Death Mark) does not apply secondary damage

• Insert Video / Screenshot of the incident

• Reproduction rate: 2/10 (happened 2 out of 10 times)

• Steps to reproduce:

Launch a game after selecting Zed as your champion. Attempt to use Death Mark. Observe the result.

• Expected result: The damage should apply after a short delay, amplified by damage dealt during the effect.

• Observed result: The damage will not apply properly.

• System Specs: Intel i5 Processor, Windows 7, Nvidia Graphics card (insert model number) etc.


If you don't know how to format comments on Reddit click here


- **Server:**
- **Type of Bug:**
- **Description:**
- **Video / Screenshot:**
- **Steps to reproduce:**
- **Expected result:**
- **Observed result:**
- **Reproduction rate:**
- **System specs:**

Copy paste the above code and fill in your details.


The list of bugs from this Megathread will not be summarized and put up in the main body of the thread; however, note that many Rioters are going through every single comment, so don't worry if you post the 1500th or 3000th comment. Every reply will be read over the next few days.

r/ImaginaryPortals YanniRotten

Through The Gate - Explorer by David Tilton

r/ImaginaryPortals I_Burn_Cereal

Secret path by Sergey Malinov

r/illusionporn Old_Pomegranate_6272

Circular mural illusion

r/illusionporn Old_Pomegranate_6272

Insane timing... Don't know!

Don't know whether this is the right time to share...

r/ImaginaryPortals YanniRotten

Level Select by Ari-Matti Toivonen (Wuggynaut)

r/ImaginaryPortals Lol33ta

Unintended Summoning by Timi Honkanen

r/illusionporn bigjobbyx

Robots in disguise: Proof

r/ImaginaryPortals Lol33ta

TERA Origin by J Line

r/Frugal AutoModerator

Monthly megathread: Discuss quick frugal ideas, frugal challenges you're starting, and share your hauls with others here!

Hi everyone,

Welcome to our monthly megathread! Please use this as a space to generate discussion and post your frugal updates, tips/tricks, or anything else!

---

Important Links:

Full subreddit rules here: https://www.reddit.com/r/Frugal/about/rules/

Official subreddit Discord link here: https://discord.gg/W6a2yvac2h/

---

Share with us!

· What are some unique thrift store finds you came across this week?

· Did you use couponing tricks to get an amazing haul? How'd you accomplish that?

· Was there something you had that you put to use in a new way?

· What is your philosophy on frugality?

---

Select list of some top posts of the previous month(s):

  1. Frugal living: Moving into a school converted into apartments! 600/month, all utilities included
  2. Follow up- my daughter’s costume. We took $1 pumpkins and an old sweater and made them into a Venus Flytrap costume.
  3. Gas bill going up 17%… I’m going on strike
  4. I love the library most because it saves money
  5. We live in Northern Canada, land of runaway food prices. Some of our harvest saved for winter. What started as a hobby has become a necessity.
  6. 70 lbs of potatoes I grew from seed potatoes from a garden store and an old bag of russets from my grandma’s pantry. Total cost: $10
  7. Gatorade, Fritos and Kleenex among US companies blasted for 'scamming customers with shrinkflation' as prices rise
  8. Forty years ago we started a store cupboard of household essentials to save money before our children were born. This is last of our soap stash.
  9. Noticed this about my life before I committed to a tighter budget.
  10. Seeds from Dollar Store vs Ace Hardware.
  11. I was looking online for a product that would safely hold my house key while jogging. Then I remembered I had such a product already.
  12. Using patterned socks to mend holes in clothes
  13. My dogs eat raw as I believe it’s best for them but I don’t want to pay the high cost. So after ads requesting leftover, extra, freezer burnt meat. I just made enough grind to feed my dogs for 9 months. Free.
  14. What are your ‘fuck-it this makes me happy’ non-frugal purchases?
  15. Where is this so-called 7% inflation everyone's talking about? Where I live (~150k pop. county), half my groceries' prices are up ~30% on average. Anyone else? How are you coping with the increased expenses?
  16. You are allowed to refill squeeze tubes of jam with regular jam. The government can't stop you.

r/ImaginaryPortals YanniRotten

The Outcasts of Heaven Belt cover art by Tim White

r/illusionporn bigjobbyx

Robots in disguise

r/illusionporn bigjobbyx

These Autobots are the same colour. Red.

r/ImaginaryPortals annieann_

Fabled Passage by Bruce Brenneise

r/ImaginaryPortals YanniRotten

Gwent card: Teleportation by Anton Nazarenko

r/ImaginaryPortals I_Burn_Cereal

The Opening to Blackwood Forest by Sophie Cook

r/ImaginaryPortals Lol33ta

Space Gate by Henrique Dld

r/illusionporn oac04

PyChroma: A Python package to transform images into dot-based chromostereopsis output

Hello all,

I am excited to announce that I have recently developed a new Python package called "PyChroma", which transforms existing images into dot-based chromostereopsis output!

Chromostereopsis is a visual illusion whereby a perception of depth is created upon viewing a combination of certain colours on an achromatic background in a two-dimensional image. This illusion is prototypically elicited upon viewing red and blue colours on a black background.

Although the impression of depth can be produced from solid colour blocks, the package takes this a step further and replaces blocks of a specific colour with a dot pattern, comprising both small and large dots, which can be flexibly manipulated according to several parameters. The rationale behind this is that dot-based output, relative to solid colour blocks, generally seems to create a more robust perception of depth, which I have attempted to illustrate in the image attached.

The package includes four functions, allowing users to create red-blue, red-green, red-grey, and "flexible" dot-based chromostereopsis images, with the "flexible" package allowing the user to specify their output colours.

You can find PyChroma's GitHub repository here: github.com/OliverACollins/PyChroma
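The dot-based approach described above can be sketched without the package itself. The snippet below is a minimal conceptual illustration of mapping a binary foreground mask to red ("near") and blue ("far") dot specifications; `mask_to_dots` and all of its parameters are hypothetical names of my own, not PyChroma's actual API, which readers should check on the GitHub repository.

```python
# Conceptual sketch of dot-based chromostereopsis (NOT PyChroma's real API):
# on a black background, red dots tend to read as "near" and blue dots as
# "far", so a binary mask becomes a depth illusion once rendered as dots.
import random

def mask_to_dots(mask, cell=10, small=2, large=4, seed=0):
    """Turn a 2D 0/1 mask into (x, y, radius, colour) dot specs.

    mask         : list of rows; 1 = foreground (red/near), 0 = background (blue/far)
    cell         : pixel spacing of the dot grid
    small, large : the two dot radii, mixed randomly, since mixed dot sizes
                   reportedly give a more robust depth impression
    """
    rng = random.Random(seed)
    dots = []
    for j, row in enumerate(mask):
        for i, value in enumerate(row):
            x = i * cell + cell // 2        # dot centre within its grid cell
            y = j * cell + cell // 2
            r = rng.choice((small, large))  # mix small and large dots
            colour = (255, 0, 0) if value else (0, 0, 255)
            dots.append((x, y, r, colour))
    return dots

# A 2x2 mask: top row foreground (red), bottom row background (blue).
dots = mask_to_dots([[1, 1], [0, 0]])
```

The resulting specs could then be rendered as filled circles on a black canvas with any drawing library; the package's red-green, red-grey, and "flexible" variants would just swap the two colour tuples.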

r/ImaginaryPortals I_Burn_Cereal

Aracne Prison & Scene [80 x 90] by Morvold Press | Jeff Todd

r/ImaginaryPortals Cyborg_Ape

Unknown Reality by Walt Jan

r/ImaginaryPortals YanniRotten

"Alzur's Double-Cross" [Gwent] by Anna Podedworna

r/ImaginaryPortals Lol33ta

Singularity by João G. Santos

r/ImaginaryPortals YanniRotten

Rift by Logan Feliciano

r/illusionporn bigjobbyx

Your visual system is filling in the blanks here, the bright circle you see isn’t drawn at all. Only eight straight-line rays are present.

r/ForgottenTV DaniJ678

Veronica's Closet (1997 - 2000)

I started watching this show on and off. I know the show was the third most-watched show, and then the ratings started to dip as the show went on. Did you watch the show when it first aired? What are your thoughts on Josh's sexuality? I wasn't bothered that he started to date a girl. They were acting like bisexuality didn't exist at the time, but I guess that was because the show was set in the 90s, and people hadn't caught up by then. Did you think the show went downhill after the first season?

r/illusionporn bigjobbyx

Nothing circular here, only an Ehrenstein illusion

r/illusionporn votantonio

Someone is looking in the camera…

r/illusionporn Das_Zeppelin

This is cool AF

r/illusionporn bigjobbyx

Black and white only. Maybe a bit of red?

r/illusionporn bigjobbyx

Retina scorcher

r/happycryingdads keyser-_-soze

While doing interviews in the streets of Livigno, a CBC interviewer finds Canadian Olympian Maïa Schwinghammer's dad, who cries talking about watching his daughter compete at Milano Cortina 2026

r/geography abu_doubleu

State of r/geography in 2026: Should anything change?

Hello everybody!

As a moderator in this subreddit, I have noticed some users are expressing dissatisfaction with the state of the subreddit over the past few months.

If you have any suggestions on how this subreddit should be moderated, or any other ideas in general, please comment them here.

Being specific and with examples is great.

r/illusionporn bigjobbyx

So much movement in this one

r/illusionporn EndersGame_Reviewer

Makes you think

r/ForgottenTV OrgasmicOasis

"Trial & Error" / "Trial & Error: Lady, Killer" NBC (2017-2018)

Trial & Error was a mockumentary-style sitcom that aired on NBC, created by Jeff Astrof and Matthew Miller, about a young New York lawyer who moves to a small Southern town to join a quirky legal team in a taxidermy shop, tackling absurd murder cases with a cast of eccentric characters. The show, which premiered in 2017, is known for its humor, which is loosely based on true crime stories, and its unique setting in East Peck, South Carolina.

Trial & Error: Lady, Killer is the second season of NBC’s mockumentary-style sitcom, created by Jeff Astrof and Matthew Miller, which follows a young New York lawyer defending an eccentric Southern socialite accused of murdering her husband. Set once again in the bizarre town of East Peck, South Carolina, the season parodies true crime documentaries—particularly The Staircase—and leans even harder into absurd humor, unreliable narrators, and increasingly unhinged local personalities. Premiering in 2018, Lady, Killer is remembered for its sharp satire, heightened performances, and cult-favorite comedic chaos.

r/happycryingdads keyser-_-soze

This made me cry a little

r/happycryingdads thisisallme

Son surprises Dad with tickets to NFC Championship Game 🎉💙

r/happycryingdads Competitive_Serve_73

Dad returns from 6 month deployment and surprises his young daughter at school

r/happycryingdads narstee

gordon ramsay getting emotional at his daughter holly's wedding

r/aivideo ZashManson

🏆🥳🙀 AI VIDEO AWARDS 2026, HOSTED BY WHISKERS THE CAT: Nominees, Performers, and Presenters, coming to r/aivideo SUNDAY 03/15/2026

. 

AI VIDEO AWARDS 2026 🏆🤩

. 

COMING TO r/aivideo SUNDAY MARCH 15th 2026
LIVE FROM LAS VEGAS, NEVADA
AIRING EXCLUSIVELY HERE ON r/aivideo REDDIT

AN ORIGINAL r/aivideo PRODUCTION

. 

🐾 HOSTED BY:

🙀 WHISKERS THE CAT

. 

WITH OVER 56,000 VIDEO SUBMISSIONS, THE AI VIDEO AWARDS 2026 IS THE BIGGEST AWARDS RACE IN THE AI VIDEO INDUSTRY YET

AI VIDEO AWARDS ARE FULLY INTERACTIVE, VIEWERS PICK THE WINNERS. THERE ARE NO JUDGES. REDDIT UPVOTE SYSTEM PICKS THE WINNER IN EACH CATEGORY. ALL VIDEOS SUBMITTED TO r/aivideo ARE AUTOMATICALLY ENTERED IN THE RACE
THE TOP VIRAL AI VIDEO CREATORS IN THE WORLD, THE LEADING AI GENERATION PLATFORM COMPANIES IN THE INDUSTRY, AND THE LARGEST COMMUNITIES IN THE AI VIDEO SCENE ARE COLLABORATING IN THE PRODUCTION OF THIS CEREMONY

AI VIDEO AWARDS IS AN ORIGINAL r/aivideo PRODUCTION, ORGANIZED BY THE ACADEMY OF AI VIDEO ARTS AND SCIENCES, A 1PCITY STUDIOS PRODUCTION, IN COLLABORATION WITH KLING AI, MINIMAX, AND DOMO AI

WATCH LAST YEAR’S AI VIDEO AWARDS 2025 CEREMONY HERE

https://www.reddit.com/r/aivideo/s/eZCMuFIa3R

. 

🎸 LIVE APPEARANCES BY:

. 

WHISKERS THE CAT

MIND OVER IMAGES

ANDY RIFFETH

WACKY TUBE MAN

FALLING KNIFE FILMS

BOTS IN THE HALL

SELENA LOPEZ

KNGMKR

TONYA & TRISHA CODE

AMBER IRWIN

ONE MORE SHOT AI

. 

🏆 CURRENT 2026 AWARD NOMINEES

( LIST MAY CHANGE DUE TO VOTING AND NEW CONTENT SUBMITTED ):

. 

🤯 MINDBLOWING VIDEO OF THE YEAR

Category Includes: CRAZY / MINDBLOWING + HORROR + MEME + VIDEOGAME

. 

HeroHusky

"AI Chiropractor"

🔥 https://www.reddit.com/r/aivideo/comments/1omdf0l/ai_chiropractor/

.

NekoChan

"Fast Food Worker From the Feline Community"

🔥 https://www.reddit.com/r/aivideo/comments/1j4l2hd/fast_food_worker_from_the_feline_community/

.

Unknown

"No More Concerts"

🔥 https://www.reddit.com/r/aivideo/comments/1oxezic/the_didgeridoo/

.

Eslop

"Tube Man Exposed"

🔥 https://www.reddit.com/r/aivideo/comments/1p67r1r/tube_man_exposed/

.

Unknown

"Bong Smoking Olympics"

🔥 https://www.reddit.com/r/aivideo/comments/1o2bvi9/bong_smoking_olympics/

. 

📀 MUSIC VIDEO OF THE YEAR

. 

Synthetic Collective

"Take More Dr*gs"

🔥 https://www.reddit.com/r/aivideo/comments/1ohk6xx/take_more_drugs/

.

Promise AI

"Is This AI or Not?"

🔥 https://www.reddit.com/r/aivideo/comments/1nn80vf/is_this_ai_or_not/

.

Andy Riffeth

"Boys In Blue"

🔥 https://www.reddit.com/r/aivideo/comments/1j3kpsa/half_naked_women_get_thousands_of_views_how_many/

.

Mind Over Images

"F*ck It, I’m the Problem"

🔥 https://www.reddit.com/r/aivideo/comments/1pdwl3p/fuck_it_im_the_problem/

.

Neural Hours

"Wash The Snorp"

🔥 https://www.reddit.com/r/aivideo/comments/1lrwpu5/wash_the_snorp/

. 

🍔 TV AD OF THE YEAR

. 

Nimrod Animations

"The Future of Intimacy"

🔥 https://www.reddit.com/r/aivideo/comments/1oi8sd8/the_future_of_intimacy/

.

PJ Accetturo

"Puppramin"

🔥 https://www.reddit.com/r/aivideo/comments/1kslhy3/i_used_to_make_500k_in_pharmaceutical_commercial/

.

Ok Injury

"Beer Tree"

🔥 https://www.reddit.com/r/aivideo/comments/1ofk04l/beer_tree/

.

Jhonny Darrell

"Geriatric Meltdown 2000"

🔥 https://www.reddit.com/r/aivideo/comments/1k3q62k/geriatric_meltdown_2000/

.

Perfectly Unreal AI

"Maid Service"

🔥 https://www.reddit.com/r/aivideo/comments/1or34al/bikini_babes_maids_service_mostly_used_klingai/

. 

🍟 TV SHOW OF THE YEAR

Category Includes: TV SHOW + DOCUMENTARY + ANIME

. 

Qemmish

"How Toothpaste Is Made"

🔥 https://www.reddit.com/r/aivideo/comments/1jujzh2/how_toothpaste_is_made/

.

Code Samurai

"Interdimensional Cable"

🔥 https://www.reddit.com/r/aivideo/comments/1ks2jp9/interdimensional_cable_veo_3/

.

Darri 3D

"America’s Funniest AI Home Videos – Episode 1"

🔥 https://www.reddit.com/r/aivideo/comments/1kjp3is/americas_funniest_ai_home_videos_episode_1/

.

Gorm The Old

"Mr. Rogers at the Battle of Agincourt"

🔥 https://www.reddit.com/r/aivideo/comments/1nwzxfq/mr_rogers_at_the_battle_of_agincourt/

.

I Only Like Giraffes

"ADHtv"

🔥 https://www.reddit.com/r/aivideo/comments/1oqae7p/channel_surfing_through_alternate_dimensions_full/

. 

😂 COMEDY OF THE YEAR

Category Includes: COMEDY SKETCH / PARODY + COMEDY STAND UP

. 

DOTM Society

"Hood Tours"

🔥 https://www.reddit.com/r/aivideo/comments/1ojkdal/hood_tours/

.

Middle Detective

"Boomers Selling Their Home"

🔥 https://www.reddit.com/r/aivideo/comments/1mnxltw/boomers_selling_their_home_in_2025/

.

Barely Standing Up

"Babies Doing Stand Up"

🔥 https://www.reddit.com/r/aivideo/comments/1omwute/babies_doing_standup/

.

Solopops

"Fast Food Wars"

🔥 https://www.reddit.com/r/aivideo/comments/1pfuclp/mcdo_vs_kfc_a_fast_food_war_film/

.

Krogg Rage

"Oh Yeah, That’s Powerful"

🔥 https://www.reddit.com/r/aivideo/comments/1pghs10/with_great_power_comes_great_accountability/

. 

🍿 MOVIE TRAILER OF THE YEAR

. 

Jurassic Smoothie

"Collect and Liquify"

🔥 https://www.reddit.com/r/aivideo/comments/1luxcv5/collect_and_liquify/

.

Neuro Dreams Exe

"The Awakening"

🔥 https://www.reddit.com/r/aivideo/comments/1nsrjax/the_awakening/

.

Eat The Ethos

"Seraphys"

🔥 https://www.reddit.com/r/aivideo/comments/1l9wgr7/trailer_for_the_ai_show_seraphys/

.

Neilchacho

"Dick Gun"

🔥 https://www.reddit.com/r/aivideo/comments/1m0uopa/dick_gun_official_trailer_one_mans_junk_is/

.

Daryl Wein

"Beer A Geddon"

🔥 https://www.reddit.com/r/aivideo/comments/1kw33sh/insane_memorial_day_movie_trailer/

. 

🎬 MOVIE OF THE YEAR

. 

Eat The Ethos

"The Internet Is Dead"

🔥 https://www.reddit.com/r/aivideo/comments/1kwt2cv/the_internet_is_dead/

.

Gossip Goblin

"The Thirteenth Cycle of Humanity"

🔥 https://www.reddit.com/r/aivideo/comments/1lkax2e/the_thirteenth_cycle_of_humanity/

.

Darri 3D

"Cardboarding"

🔥 https://www.reddit.com/r/aivideo/comments/1mt8v39/carboarding/

.

Solopops

"Egg Protein"

🔥 https://www.reddit.com/r/aivideo/comments/1q6mmpg/egg_protein_my_weirdest_ai_film/

.

1pcity Studios

"AI Blockbusters — Theme Park Collapse"

🔥 https://www.reddit.com/r/aivideo/comments/1mjw7hm/theme_park_collapse/

. 

WATCH THE AI VIDEO AWARDS 2026
SUNDAY MARCH 15th — AIRING EXCLUSIVELY HERE ON r/aivideo

. 

🥳 OFFICIAL AI VIDEO AWARDS 2026 AFTERPARTY

. 

The OFFICIAL AI VIDEO AWARDS AFTERPARTY will be held at the Ghost Bar in the Palms Casino in Las Vegas on the night of the awards. This is an invite-only AI video industry event for AI creators who are NOMINATED, PRESENTING, or PERFORMING at the ceremony, along with their production teams. Please get in touch with the mod team for details.

. 
r/happycryingdads keyser-_-soze

He couldn't help his reaction when he saw Mickey.

r/ClaudeAI sixbillionthsheep

Usage Limits, Bugs and Performance Discussion Megathread - beginning December 29, 2025

Why a Performance, Usage Limits and Bugs Discussion Megathread?

This Megathread makes it easier for everyone to see what others are experiencing at any time by collecting all experiences in one place. We will publish regular updates on problems and possible workarounds that we and the community find.

Why Are You Trying to Hide the Complaints Here?

Contrary to what some were saying in a prior Megathread, this is NOT a place to hide complaints. This is the MOST VISIBLE, PROMINENT AND OFTEN THE HIGHEST TRAFFIC POST on the subreddit. This is collectively a far more effective and fairer way to be seen than hundreds of random reports on the feed that get no visibility.

Are you Anthropic? Does Anthropic even read the Megathread?

Nope, we are volunteers working in our own time, alongside our own jobs, trying to provide users and Anthropic itself with a reliable source of user feedback.

Anthropic has read this Megathread in the past and probably still does. They don't fix things immediately, but if you browse some old Megathreads you will see numerous bugs and problems mentioned there that have since been fixed.

What Can I Post on this Megathread?

Use this thread to voice all your experiences (positive and negative) regarding the current performance of Claude, including bugs, limits, degradation, and pricing.

Give as much evidence of your performance issues and experiences as you can wherever relevant. Include prompts and responses, the platform you used, the time it occurred, and screenshots. In other words, be helpful to others.


Just be aware that this is NOT an Anthropic support forum and we're not able (or qualified) to answer your questions. We are just trying to bring visibility to people's struggles.

To see the current status of Claude services, go here: http://status.claude.com


READ THIS FIRST ---> Latest Status and Workarounds Report: https://www.reddit.com/r/ClaudeAI/wiki/latestworkaroundreport Updated: March 11, 2026.


Ask our bot Wilson for help using !AskWilson (see the stickied comment below)


r/happycryingdads The_Stein244

Professional baseball player, Brady Singer, pays off all his parents' debt after signing his first MLB contract ❤️

r/happycryingdads softdancesing

Such a sweet grandpa.

r/happycryingdads Outrageous-Regular34

Angola Prison hosted its first father-daughter dance, giving incarcerated men a rare opportunity to reconnect and share a meaningful moment with daughters they had not seen for years.

r/happycryingdads MyNameisNotMaxie

Dad surprised with dream Mustang for Christmas

r/happycryingdads sint0xicateme

No tears yet, but he went off camera for that. She's awesome. I'd be proud too.

r/happycryingdads nlikelyhero

The moment he turned around, he saw his little girl🥹🤍

r/happycryingdads MyNameisNotMaxie

Dad breaks down into tears after family surprises him with new puppy

r/happycryingdads MyNameisNotMaxie

Dad starts crying after holding his new born baby.

r/Anthropic MatricesRL

Top AI Productivity Tools

Here are the top productivity tools for finance professionals:

• Claude Enterprise — Claude for Financial Services is an enterprise-grade AI platform tailored for investment banks, asset managers, and advisory firms that performs advanced financial reasoning, analyzes large datasets and documents (PDFs), and generates Excel models, summaries, and reports with full source attribution.

• Endex — An Excel-native enterprise AI agent, backed by the OpenAI Startup Fund, that accelerates financial modeling by converting PDFs to structured Excel data, unifying disparate sources, and generating auditable models with integrated, cell-level citations.

• ChatGPT Enterprise — OpenAI's secure, enterprise-grade AI platform designed for professional teams and financial institutions that need advanced reasoning, data analysis, and document processing.

• Macabacus — A productivity suite for Excel, PowerPoint, and Word that gives finance teams 100+ keyboard shortcuts, robust formula auditing, and live Excel-to-PowerPoint links for faster, error-free models and brand-consistent decks.

• Arixcel — An Excel add-in for model reviewers and auditors that maps formulas to reveal inconsistencies, traces multi-cell precedents and dependents in a navigable explorer, and compares workbooks to speed up model checks.

• DataSnipper — Embeds in Excel to let audit and finance teams extract data from source documents, cross-reference evidence, and build auditable workflows that automate reconciliations, testing, and documentation.

• AlphaSense — An AI-powered market intelligence and research platform that enables finance professionals to search, analyze, and monitor millions of documents, including equity research, earnings calls, filings, expert calls, and news.

• BamSEC — A filings and transcripts platform, now under AlphaSense through the 2024 acquisition of Tegus, that offers instant search across disclosures, table extraction with instant Excel downloads, and browser-based redlines and comparisons.

• Model ML — An AI workspace for finance that automates deal research, document analysis, and deck creation with integrations to investment data sources and enterprise controls for regulated teams.

• S&P CapIQ — Capital IQ is S&P Global's market intelligence platform that combines deep company and transaction data with screening, news, and an Excel plug-in to power valuation, research, and workflow automation.

• Visible Alpha — A financial intelligence platform that aggregates and standardizes sell-side analyst models and research, providing investors with granular consensus data, customizable forecasts, and insights into company performance to enhance equity research and investment decision-making.

• Bloomberg Excel Add-In — An extension of the Bloomberg Terminal that allows users to pull real-time and historical market, company, and economic data directly into Excel through customizable Bloomberg formulas.

• think-cell — A PowerPoint add-in that creates complex data-linked visuals like waterfall and Gantt charts and automates layouts and formatting, letting teams build board-quality slides.

• UpSlide — A Microsoft 365 add-in for finance and advisory teams that links Excel to PowerPoint and Word with one-click refresh and enforces brand templates and formatting to standardize reporting.

• Pitchly — A data enablement platform that centralizes firm experience and generates branded tombstones, case studies, and pitch materials from searchable filters and a template library.

• FactSet — An integrated data and analytics platform that delivers global market and company intelligence with a robust Excel add-in and Office integration for refreshable models and collaborative reporting.

• NotebookLM — Google's AI research companion and note-taking tool that analyzes internal and external sources to answer questions and create summaries and audio overviews.

• LogoIntern — Acquired by FactSet, a productivity solution that gives finance and advisory teams a database of 1+ million logos and automated formatting tools for pitchbooks and presentations, enabling faster insertion and consistent styling of client and deal logos across decks.

r/happycryingdads DivaMizz

Grandfather finds out his grandson was named after him..

r/happycryingdads MyNameisNotMaxie

Military Dad Gets Emotional When He Sees His Daughter on Her Wedding Day

r/ChatGPT samaltman

Updates for ChatGPT

We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.

Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.

In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but it will be because you want it, not because we are usage-maxxing).

In December, as we roll out age-gating more fully and as part of our “treat adult users like adults” principle, we will allow even more, like erotica for verified adults.

r/ChatGPT WithoutReason1729

GPT-4o/GPT-5 complaints megathread

To keep the rest of the sub clear with the release of Sora 2, this is the new containment thread for people who are mad about GPT-4o being deprecated.


Suggestion for people who miss 4o: Check this calculator to see what local models you can run on your home computer. Open weight models are completely free, and once you've downloaded them, you never have to worry about them suddenly being changed in a way you don't like. Once you've identified a model+quant you can run at home, go to HuggingFace and download it.
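
The arithmetic behind such "can I run it?" calculators is simple: quantised weights take roughly params × bits / 8 bytes, plus headroom for the KV cache and runtime buffers. A rough sketch follows; the 20% overhead factor and the example figures are ballpark assumptions, not numbers from any specific calculator or runtime:

```python
# Back-of-the-envelope VRAM estimate for running a local model.
# All numbers are rough rules of thumb, not exact figures.
def estimate_vram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Weights take params * bits / 8 bytes; `overhead` (~20%, an
    assumption) covers KV cache, activations, and runtime buffers
    at modest context lengths."""
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb * overhead

# An 8B model at 4-bit quantisation fits in a consumer 8 GB GPU;
# a 70B model at 4-bit needs a multi-GPU or unified-memory setup.
print(round(estimate_vram_gb(8, 4), 1))
print(round(estimate_vram_gb(70, 4), 1))
```

Longer context windows grow the KV cache and push the overhead well past 20%, so treat the estimate as a floor, not a guarantee.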


Update:

I generated this dataset:

https://huggingface.co/datasets/trentmkelly/gpt-4o-distil

And then I trained two models on it for people who want a 4o-like experience they can run locally.

https://huggingface.co/trentmkelly/gpt-4o-distil-Llama-3.1-8B-Instruct

https://huggingface.co/trentmkelly/gpt-4o-distil-Llama-3.3-70B-Instruct

I hope this helps.


UPDATE

GPT-4o will be removed from ChatGPT tomorrow at 10 AM PT.


UPDATE

Great news! GPT-4o is finally gone.

r/happycryingdads SpartySoup

Trey Yesavage's father shows his emotions watching his son's big league debut

r/goodnews Cdave_22

We now have user flairs

Hi r/goodnews! 🌍 You can now assign a flair to show what country you’re from! We’re still adding to the list, so if you don’t see your country yet, just send us a modmail and we’ll be happy to add it. Enjoy commenting.

r/happycryingdads SpillaMangBang

Special Daddy Daughter Moment

r/happycryingdads unequaldarkness

This woman announces to her husband that she is pregnant after more than 3 years of infertility and against all medical odds. His reaction leaves him speechless

r/happycryingdads happylaxer

A daughter goes to visit her father that she hasn't seen in 5 years. He knew something was up, but he was still surprised.

r/happycryingdads DerFlamongo

Teen surprises step-dad by putting step-dad's last name on his football jersey ☺️

r/happycryingdads yeahDAD83

After five years working abroad, son finally returns home.. and his parents’ reaction says it all

r/LocalLLaMA HOLUPREDICTIONS

Announcing LocalLlama discord server & bot!

INVITE: https://discord.gg/rC922KfEwj

There used to be one old discord server for the subreddit but it was deleted by the previous mod.

Why? The subreddit has grown to 500k users - inevitably, some users like a niche community with more technical discussion and fewer memes (even if relevant).

We have a discord bot to test out open source models.

Better contest and events organization.

Best for quick questions or showcasing your rig!

r/goodnews Cdave_22

Remember to use the correct flair

Hi r/goodnews,

We’ve noticed that a lot of you aren’t using the correct flair when posting. This makes it harder for users to filter content they do or don’t want to see.

While our community is somewhat divided on what type of content should be allowed here, especially when it comes to politics, the mod team has ultimately decided that political content will be allowed.

However, to respect those who prefer to avoid political posts, we’ve organized the subreddit into categories using flairs. If a post isn’t flaired correctly, it ends up mixed in with other types of news, which can frustrate users.

To help keep things organized and enjoyable for everyone, please remember to use the correct flair for your submissions.

For political content, always use the “Political Positivity” flair. And for all other posts, please choose the correct flair that best fits the content.

Thanks for helping us keep the subreddit organized!

The r/goodnews Mod Team

r/Ghosts GhostsMods

To anyone considering posting a video or image - READ THIS FIRST

The overwhelming majority of orbs have nothing to do with the paranormal. The incessant parade of dust, bugs, and other debris posted is proof of that reality. Therefore, it is necessary to revisit this problem. If you have a video of an 'orb' and you believe it's paranormal, it is strongly recommended that you read this post first.

A basic understanding of the principles that create orb-like phenomena on camera can go a long way in preventing your post from being ridiculed in the comments and removed from the sub. Please take this generous opportunity to review the above-posted article and familiarize yourself with light play and focusing issues in photography and filmography.

r/Ghosts DyfedH

Before posting a photo, you should read this.

So you found a photo that you think has a spirit or ghost on it, before you post it and get upset about 30 people saying it’s a piece of dust or a bug, you may want to read this!

BEFORE DOING ANYTHING

Do read the posts

https://www.reddit.com/r/Ghosts/comments/deki6k/explanation_of_orbs_aka_backscatter/

and

https://www.reddit.com/r/Ghosts/comments/icziuc/ring_doorbells_security_camera_videos_and_false/

These really do have useful information on how to spot particle orbs, lens flare, and insects crawling over lenses, which make up most of the posts in this community.

It is common for people who believe they are haunted to take lots of photos and assume that any “orbs” must be related to this. In fact, it’s quite easy to photograph orbs. If you can’t see the orb then it’s not there. People might say that cameras can pick up ‘orb’ energy but a lot of them just don’t understand why digital cameras tend to easily pick up particles (dust, dander, hair, insects).

If you have read the above and are sure (or not quite sure) then read on.

BEFORE POSTING

Make sure you have the original photo, downloaded or posted from the same device the picture was taken on. A photo shared from a social media site, screen-grabbed on a phone, photographed from a screen, or shared by email will likely be low resolution and suffer from further compression, which destroys any fine detail and will generally be useless.

If the photo is a physical print, scan it in at 300 DPI or higher and save it as a “.PNG” image. If you don’t have access to a scanner, you can take a photo of it, but do so in a brightly lit room and don’t use a flash, to reduce reflections on the photo.

Use the original resolution of the photo; do not rescale, resize, or brighten the image to make posting easier.

If you have to edit the photo to remove people, then draw the smallest black filled box you can to hide them or their faces. Don’t crop the picture or remove large chunks of it. Keep the picture the same size. Once edited, save / export the image as a “.PNG”, as this keeps the image as it is (if you resave it as a JPEG, it recompresses the image in a lossy way, so fine detail tends to get lost). If the paint program has these options, make sure that “Save resolution” is checked and that you save “EXIF information”.

If you must highlight anything in the picture (to show where you think the paranormal image is) then use a bright colour that can be seen easily, use a transparent box with thin lines and draw it in the general area of what you want us to see, do not cover anything nearby. Save it as a “.PNG” as above. If you do this then make sure you post the original too.

If you want to change the contrast or brightness of a photo then again do so but save as a “.PNG” as above and post the original for comparison. If you have other pictures from the same area that were taken at the same time then post these too as comparison photos.

If the photo is from a place you can visit again easily then take more pictures of the same area using the same camera, try to take it at different times of the day and try to take a picture at the same time of day the original was at. Also, take pictures of the surrounding area and close up images of the original location.

It is important to note that even “taking the picture in the same place” means you will be shooting at slightly different angles, heights, and positions, which will cause the images to change.

POSTING THE PHOTO

Post as an image within the Reddit post so people can see it, you can also post any comparison posts at the same time.

If the photo is too large, then rescale it and save it as a “.PNG” to post on Reddit, but upload the original to a file-sharing site like Google Drive or Dropbox, as this will not rescale the image.

Don’t upload it only to Imgur, as most photo-sharing websites will recompress the image and destroy fine detail in the picture.

Make sure the title is short but informative. “Possible ghost image from a cell in Alcatraz” is a good title, “GHOST!!?!?!?” is not.

ALSO SUPER IMPORTANT!!!! CONTEXT! CONTEXT! CONTEXT!

After you have posted the images, make a comment in the post you have just created and include as much context as possible! Too many posts have just the photo and a poor description, leaving us with no idea what we should be looking at.

Examples of what to include are…

  • Where the photo was taken. You don’t have to give the address but a location or rough area (if a private address) helps.

  • When the photo was taken, give the date and time if possible, if not known or you can’t remember then give what information you can (I.e In the winter in the early evening).

  • Who took the photo. It should be you or someone you know. Something like “Taken by a friend of a girlfriend that is friends with a baker, that my 2nd cousin’s hairdresser visits every 3 months” is not good, as it makes it hard to ask follow-up questions.

  • What you were trying to take. (I.e Ghosts, a high school prom or of your granny on her 97th Birthday)

  • If you (or they) were with other people. Also mention strangers.

  • What the weather was like. Was it sunny, cloudy, raining, hot, cold, warm?

  • Description of the location. If in a home, give details about the place, I.e whether it is a house, apartment, or cabin located 200 miles from civilization, etc., as well as whether you are near a road or other places like a pub or bar, a major flight path, etc. Otherwise, just describe the place however you can.

  • What did you feel at the time? I.e cold, scared, fearful, happy, sad, etc.

  • Why do you think it is paranormal? Describe this as best as you can.

  • Anything spooky that happened before or after, within the same time frame; something that happened 3 months before is probably not related.

So good examples would be…

“This photo was taken at the Alcatraz prison in San Francisco. It is a picture of Cell 13 of cell block D in the cell house. It was taken on Sunday the 13th of May 2018 at around 2pm. It was mostly sunny but there were thick clouds, it was reasonably warm at around 65f and there were strong breezes every so often.

The photo was taken by myself, although I was with my girlfriend at the time. It was quite busy but no one was nearby when I took the photo. I was trying to take a picture of my girlfriend as she stood at the door but I noticed a strange white figure behind her. There was no one in the cell as we had waited for another couple to finish taking their picture before we took ours. I took 3 photos and you can see that the figure is in 2 of the pictures but is not in the third picture.

My girlfriend did say she felt cold in the cell, we had taken a picture in another cell (134 in Cell Block C) and she did not feel anything. The rest of the day was fine though and we enjoyed our trip.”

“A photo of a chair my grandmother used to sit on, in our house in Stirling, Scotland. It was taken on the 1st June 2015, in the wee hours of the morning at 6.30am. I remember it was very cold and it was dark grey and cold outside like a typical morning in Scotland. I had the lights on as although it was morning outside, it was dark in the room.

I took the photo as I felt a presence and I was drawn to the chair. I took 2 photos and you can see a grey mist around the chair. I would have taken more but my phone just drained of power and switched off, not sure why this happens as I am pretty sure I charged it overnight. I don’t smoke, didn’t have any hot drinks. I was in the room all alone as my parents and sister were still in bed and my dog was still on the sofa sleeping peacefully.

Once I charged my phone and saw the pictures, I was filled with deep happiness. I tried to take more photos but the mist never came back. What do you think it is?”

Bad examples are…

“Ghosts?!???!!?!?!”

“I took this photo whilst in an abandoned building, do you think it is a ghost?”

“Whilst trying to contact my dead aunt to ask where she left her cash, I took lots of photos and all of them had orbs and I heard some moans, screams and creaking coming from my cousins' bedroom but her door was locked!!! Is the house haunted?”

SortedFor.me