Your Feed


r/n8n Clear-Welder9882

Fullstack Automation Architect — Custom AI Operations & Infrastructure

Hey,

I’m a Fullstack Developer and Automation Expert. I build proprietary engines that handle leads, invoicing, and client management to help businesses scale.

Here are some of my recent builds with full architecture breakdowns:

If you need to automate a bottleneck or need a custom build: Drop a comment with what you're trying to solve. I'll tell you exactly how I'd build the logic, which models I'd use, and how I'd get it live.

r/KlingAI_Videos bymathis

exploration "are you human?"

Hey guys, I did some stuff I had in my mind. Playing with image-to-video, really trying to get a vintage type of film look combined with FL Studio sound design... maybe I'll develop some of these ideas into a short film, idk. Comments on this besides "AI SLOP"? The sound reminds me of a synthetic humanoid robot who is dying and being relieved into heaven. Any tips to dive deeper into this vintage film look are appreciated :)

r/ClaudeAI ButterscotchKind9546

Lost 1M context window in Claude Co-work after Claude desktop app update — anyone else?

I'm on the Max plan, plus API and extras. A few days ago I was granted the 1M token context window. I was actively using it on a complex multi-session project (a bilingual audit document with multiple source files, detailed data tables, and iterative edits across sessions).

After updating the Claude desktop app, the context window appears to have reverted to the standard size. Worse, the threads that had been running at 1M context broke — they hit compaction/summarization mid-conversation, losing critical context that can't be reconstructed from the summary alone.

Has anyone experienced something similar? Specifically:

  1. Did updating the desktop app cause your context window to regress?
  2. Were existing 1M threads affected or just new ones?
  3. Did you find a way to get it restored, and how long did it take support to respond?

I've been trying to reach a human at Anthropic through their support channels but haven't gotten through yet. Any advice on the fastest way to escalate to a human?

r/ClaudeAI Niacinflushy

MCP server that lets AI create animated SVGs from scratch

hey,

I just shipped this and I'm looking for feedback.

nakkas is an MCP server where AI is the artist. you describe what you want, AI constructs the full config (shapes, gradients, animations, filters), and the server renders clean animated SVG.

some things it can do:

animated logos, loading spinners, data visualizations

scatter fields, radial patterns, grid layouts

parametric curves (rose, spiral, heart, superformula)

15 filter presets (glow, neon, glitch, chromatic aberration...)

CSS @keyframes + SMIL animations, zero JavaScript

works anywhere SVG renders.
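To give a sense of what "CSS @keyframes, zero JavaScript" output looks like, here's a hand-rolled sketch that generates a spinning rose curve as an animated SVG. This is illustrative only; it is not nakkas's actual config schema or renderer:

```python
import math

def rose_curve_svg(k=5, size=200, steps=720):
    """Build a minimal animated SVG: a rose curve (r = cos(k*theta))
    spun by a pure-CSS @keyframes animation -- no JavaScript."""
    cx = cy = size / 2
    scale = size * 0.45
    pts = []
    for i in range(steps + 1):
        t = 2 * math.pi * i / steps
        r = math.cos(k * t) * scale
        pts.append(f"{cx + r * math.cos(t):.1f},{cy + r * math.sin(t):.1f}")
    path = "M" + " L".join(pts)
    return f"""<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">
  <style>
    @keyframes spin {{ to {{ transform: rotate(360deg); }} }}
    .rose {{ transform-origin: {cx}px {cy}px; animation: spin 8s linear infinite; }}
  </style>
  <path class="rose" d="{path}" fill="none" stroke="hotpink" stroke-width="1.5"/>
</svg>"""

svg = rose_curve_svg()
print(svg[:60])
```

Open the output in any browser and the curve rotates; the whole animation lives in the `<style>` block, no scripts.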

npx nakkas@latest

I would love to see what you make with it. you can share examples in github discussions.

repo: https://github.com/arikusi/nakkas

npm: https://www.npmjs.com/package/nakkas

r/LocalLLaMA Raga_123

Your AI forgot what you said — but it never forgot how to say it. Here's why.

Ever notice how ChatGPT can lose track of something you mentioned 10 messages ago, but it never starts writing grammatically broken sentences?

That's not a coincidence. It's two completely different memory systems doing two completely different things.

I spent the last several months running experiments on this for my MSc thesis — measuring exactly how fast attention degrades in transformer models and what survives when it does.

The short version of what I found:

What the model forgets: Content. Facts, names, numbers, what you said three paragraphs ago. These require the model to actively look back using its attention mechanism. That spotlight fades fast — by 30 tokens away the model has lost roughly 75-80% of its attention to earlier words. By 50 tokens it has essentially stopped looking.

What the model never forgets: Grammar. The rules about how language works are baked so deeply into the weights from training on billions of sentences that they never needed retrieval in the first place.

The thing nobody had measured: There's a third category in between. Small relational words — prepositions like "of", "in", "to", "from" — act as structural anchors in the current text. And here's the strange part: removing them damages the model's retrieval equally at 5 tokens away and 100 tokens away — even though attention to tokens 100 away has already collapsed by 80%.

The model has stopped looking at those words. But they're still doing something.

I tested this across four models — GPT-2, OPT-1.3B, Llama 3.2-1B, and DistilGPT-2 — spanning 2019 to 2024 and three different architecture families including modern RoPE. Zero failures across 600 tested conditions for prepositions. The finding replicates every time.

What this means practically: attention weight — which researchers widely use to understand what models are "thinking about" — is not the whole story of how a model remembers. A word can have near-zero attention weight and still be critical for retrieval. And if you're compressing prompts to save tokens, stripping out "unimportant" small words is probably hurting you more than you think.
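To make that prompt-compression point concrete, here's a toy sketch of the kind of naive compressor the finding argues against. The word list and example sentence are illustrative, not taken from the thesis:

```python
# Toy illustration of the prompt-compression trap described above:
# naive compressors drop "unimportant" function words to save tokens,
# but per the finding, prepositions anchor retrieval even when attention
# to them has collapsed. (Word list is illustrative only.)
PREPOSITIONS = {"of", "in", "to", "from", "on", "at", "by", "with"}

def naive_compress(prompt: str) -> str:
    """Strip prepositions -- saves tokens, may silently hurt retrieval."""
    return " ".join(w for w in prompt.split() if w.lower() not in PREPOSITIONS)

original = "Copy the totals from the audit table in section two of the report"
compressed = naive_compress(original)
print(compressed)  # "Copy the totals the audit table section two the report"
```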

r/LocalLLaMA donotfire

I made a brain for my computer—Second Brain, an agentic AI system for file exploration and knowledge synthesis.

Constructive criticism welcome!

Link to source: github.com/henrydaum/second-brain

r/StableDiffusion Future-Hand-6994

training human motion lora for wan 2.2 i2v

Do I need to blur their faces since I just want the motion? I'm training with video clips, and in some clips people's faces are visible. I don't want the faces in the clips to get mixed up with the face in the photo that I uploaded when I run the wan 2.2 i2v workflow. Also, any advice for captions?

r/StableDiffusion grl_stabledilffusion

mom, ltx i2v got into the shrooms again!!

luckily i was just playing around with ltx-2.3 and was trying to give the image a bit more motion, just have the woman turn slightly towards the camera while the background remained the color/gradient that it was, but my god. i've used ltx before and was overall pretty happy with the results, but some of the stuff it hallucinated this time was downright bizarre.

tried a couple of different prompts, always a short description of the image (blonde woman in front of pink background) and then having her turn slightly towards the camera. tried adding stuff like "background remains identical" or "no text or type" or similar things, but nothing worked. odd odd odd.

this was all in wan2gp since it's usually faster for me. maybe i should also try it in comfy and see what outputs i get.

r/StableDiffusion smereces

LTX 2.3 + Qwen Edit

r/LocalLLaMA AmtePrajwal

Anthropic used Claude to interview 80K users — this might be a better alternative to surveys

Anthropic ran an interesting experiment: instead of static surveys, they used Claude to conduct structured interviews with ~80,000 users across 150+ countries and ~70 languages.

https://preview.redd.it/hjtv0wcdmfqg1.png?width=1300&format=png&auto=webp&s=53fa18b578625497419856afb16fdf5ca04a0b47

The idea: use an LLM as both the interviewer (asking dynamic follow-ups) and the analyst (structuring responses at scale).

Some takeaways:

  • 81% say AI helped them move toward their goals
  • Productivity leads (~32%), especially coding/technical work
  • Cognitive support (~17%) — reasoning and problem-solving
  • Learning (~10%) — AI as a tutor

What’s different here:

  • The model adapts questions based on responses
  • Captures why, not just predefined answers
  • Responses are auto-clustered (goals, concerns, sentiment) + human-reviewed

Feels like a shift from static data collection → conversational insight gathering.

Do you think this kind of AI-led interviewing could actually replace traditional surveys, or does it introduce new biases we’re underestimating?

r/AI_Agents guiltyyescharged

AI scheduling assistants are still making me do all the work

Every AI scheduling tool I've tried puts the burden of context on me. I have to configure rules, set preferences, explain that I don't do back-to-backs, explain that Friday afternoons are protected, explain every edge case upfront before it can do anything useful. That's just forms with extra steps.

The version I actually want is one that learns how I think about my calendar from watching me use it, not one that makes me manually encode every preference before it'll help. Does anything actually work this way or is "configure your preferences first" just the permanent state of these tools?

r/comfyui Worried-Tackle-4749

ComfyUI Cloud

I can't find the Manager button in the cloud version of ComfyUI. How can I access it?

r/AI_Agents Superb-Feedback-8898

Need help on a “simple” prompt

Hi, I am currently diving into open source AI models for agentic usage.

tl;dr: What is the best setup for open-source models to solve seemingly simple tasks? My API Devstral 2 setup fails (no specific rules / no prompt templates configured yet).

So, I have a relatively simple work related prompt that I want to solve:

“Download the latest Version of the MDR Regulation”

Solution: Find the consolidated PDF version (01.01.2026) (includes M1-M6 amendments of the 2017 version)

Closed-Source reference solution: Antigravity with Gemini Flash just solves this task perfectly on first try.

Open Source Solution:

Setup: VSCode + Continue.dev extension + Scaleway API Key, Devstral 2 Model at the moment

Outcome:

First, the agent struggled to use the built-in Continue tools at all (I gave it a general rule listing which tools are available and their call signatures). Then, sometimes a corrupted PDF is fetched; sometimes the old 2017 version of the document is downloaded.
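One cheap, model-agnostic guardrail for the corrupted-download failure mode is to validate the fetched file before trusting the agent's result. A minimal sketch (the filename and the check are my own, not part of Continue or Devstral):

```python
def looks_like_pdf(data: bytes) -> bool:
    """Cheap sanity check for a fetched document: a real PDF starts with
    the %PDF- magic bytes and has an %%EOF marker near the end."""
    return data.startswith(b"%PDF-") and b"%%EOF" in data[-1024:]

# e.g. after the agent downloads the file (hypothetical path):
# with open("mdr_consolidated.pdf", "rb") as f:
#     assert looks_like_pdf(f.read()), "fetched file is not a valid PDF"
print(looks_like_pdf(b"%PDF-1.7 ... %%EOF"))    # True
print(looks_like_pdf(b"<html>error page</html>"))  # False
```

An HTML error page saved with a .pdf extension, which is what a corrupted fetch often turns out to be, fails this check immediately.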

Question:

What is the best setup for open source models to just solve this task? I am open to any tools / models, as long as they are open source. Any clever engineers out there?

r/Futurology Mental-Somewhere-411

If energy becomes extremely cheap, could we actually start controlling the weather?

If solar, storage, or even fusion eventually make energy really cheap and abundant, a lot of things that feel unrealistic today might not be as far off. Stuff like influencing rainfall, reducing the impact of storms, or even managing heat in certain regions sounds impossible now. But a lot of technologies we take for granted today used to sound just as unrealistic. If energy stops being the main constraint, do we eventually start trying to actively shape weather systems?
Even if we could, should we?

r/aivideo EmotionalRope4324

GROK turned my birds into dancers

r/Futurology ccsunmusic

What if future movie theaters used interactive storytelling to map decision-making and match people for a date afterwards based on their decisions?

So there was this concept of interactive cinema, where the audience chooses at certain moments to decide where the plot goes, kinda like a video game. I just recently had a better idea of where it could go in the future, combining it with dating: not just bringing ppl back to the theater but also helping with the declining birth rate.

When the film pauses at key moments and the audience votes on what the character should do next, the system remembers these choices and maps patterns for each audience member. The choices aren't there to affect the plot; they're designed to reflect audience preferences such as risk vs caution, honesty vs avoidance, emotional vs rational reactions, etc.

By the end of the film, it could build a rough behavioral profile for each person in the room. Then it matches people in the same theater based on how they actually responded to the same situations. The more of the same choices they make, the more behavioral/psychological patterns they share, and the better they match.

Ofc they gotta be single first.

Instead of the old traditional dating app profile → chat → meet, it's shared experience → behavioral alignment → then decide if you want to go on a date.

Cuz everyone can lie on a dating profile, or if not lying, they're somewhat performative. Matching based on actions can be more truthful in mapping someone's real profile.

And if this cinema is also combined with deepfake tech, after every decision window the film's protagonists can be rendered with real audience avatars and voices, so even coming back to watch the same film, the experience would be different every time. Let alone the match being different in the end.

You can't get this experience from streaming at home.

Ofc not everyone wanna watch movie this way but it would be a fun social experiment. What do y'all think?

r/aivideo bymathis

are you human?

r/AI_Agents constructrurl

2026 Enterprise AI ROI in a nutshell

Every quarter I watch another Fortune 500 announce they are spending $10M+ on AI infrastructure to save maybe $500K in labor costs. Then someone from the C-suite publishes a LinkedIn post about their digital transformation journey with a stock photo of a robot shaking hands with a businessman.

The real ROI is not in the automation - it is in the consulting fees, the conference talks, and the internal slide deck that says AI-powered on every page. We have essentially replaced blockchain with AI agents in the corporate buzzword rotation and nobody even flinched.

Meanwhile the actual engineers doing useful work with LLMs are duct-taping together Python scripts that cost $0.02 per API call and solving real problems. The gap between what gets funded and what actually works has never been wider.

r/midjourney zjunkmale

Midjourney Live Event: 12:00 CST, V8 Alpha Quick Start

Event: Midjourney Quick Start: All about the V8 Alpha!

Hi, this is clarinet with Midjourney!

Come learn how to explore the v8 alpha! Starting in just a few minutes!

We are on the official Midjourney Discord server. And also...

To get stuff in your mailbox, register for Midjourney events: https://lu.ma/midjourney

r/midjourney Big_Addendum_9920

seeking the baubles to extend life

r/n8n automatexa2b

Controlling Claude Code from Your Phone: A Game-Changer

So... I just discovered something pretty incredible that I had to share with you all. Claude Code just rolled out this new feature called Channels, and honestly, it's kind of wild. Picture this: you're working on a coding project at home, you've got Claude Code running on your laptop, and then you need to head out. Well... not anymore. With Channels, you can literally control your entire Claude Code session from your phone through Telegram or Discord. You can be sitting on a bus, grabbing coffee, wherever... and you're still having full conversations with Claude just like you were at your desk. The whole thing feels a bit futuristic, to be honest.

Now, before you get too excited and dive in, there are a few things you need sorted first. You'll want to make sure you're running Claude Code version 2.1.8 or later by running claude update in a fresh terminal. Here's the thing though... you absolutely need to be authenticated through the Claude.ai OAuth login, not console or API key authentication. If you're on a Pro or Max plan, you're golden and this should work right out of the gate. But if you're on a Team or Enterprise account... well, you'll need to manually tweak some permissions in your configuration file. Just a heads up on that.

For Telegram users, the setup is pretty straightforward. Head over to BotFather on Telegram, type /new to create a bot, give it a username ending in "bot," and grab your token. Then fire up a Claude session, install the Telegram plugin, reload plugins, and configure it with your token. Exit and restart with the Telegram channels tag enabled, then open your bot in the app and send any message to get a pairing code. Paste that code in your terminal, accept the edits, and run the lockdown command to make sure you're the only one who can access your session. Quick tip... if you get a "skill not recognized" error, just exit and start a brand new session.

If you're more of a Discord person, go to the Discord Developer Portal, create a new application, enable the Message Content Intent in the bot section, and grab your token. Use the OAuth2 URL generator to select "bot" scope with the six required permissions, then authorize the bot to your server. Back in your terminal, install the Discord plugin, refresh it, then configure with your token. Exit and restart with the Discord channels tag enabled, send your bot a DM to get the pairing code, paste it in your terminal, and lock down access with the policy command. Pro tip from experience... exit and start a completely new session after installing the plugin to avoid token issues.

One thing that caught me off guard at first... there's this specific way messages flow between your phone and your terminal. Any message you send from your Telegram or Discord app will show up in the terminal session on your computer, but if you type directly into the terminal, that message won't get pushed back to your phone. It's like a one-way street. I've been playing around with this for a bit now, and honestly... it's genuinely useful. Yeah, it's still in research preview so there are some bugs, but being able to ask Claude for help while I'm away from my computer has already saved me a couple trips back home. If you're someone who uses Claude Code regularly and you're always on the move, this feature is absolutely worth setting up. Seriously.

r/SideProject Kitchen_Fix1464

Naming is hard

I made OpenPalm kind of as a joke/spoof of OpenClaw, but now it's actually becoming something worth keeping and I want to rebrand it.

Any ideas? I do love the mascot though lol

https://github.com/itlackey/openpalm

r/SideProject lightlydemented

I built Velin, a Windows/Linux app for sharing audio over LAN

I’ve been working on a side project called Velin, a desktop app for sharing audio over LAN between Windows and Linux machines. The idea came from wanting a simpler way to handle audio across devices on the same network without needing a messy setup.

Right now, I’ve released an early beta with builds for:

- Windows (.exe / .msi)

- Linux (binary / .deb)

It’s still early, so I’m mainly focused on:

- improving stability

- making setup simpler

- testing across different machines and network setups

- finding bugs and rough edges

I’d really appreciate feedback on:

- first impressions

- setup experience

- UI/workflow clarity

- anything that feels confusing or broken

If anyone wants to try it, I’d love some feedback!!

Also curious: for something like this, what would matter most to you? low latency? easy setup? reliability?

Link to GitHub: Releases · p-stanchev/Velin

r/SideProject gr3yR0n1n_

Built a native iPad SSH client — no web views, just SwiftUI

Been working on Silly Siren Terminal, an SSH client for iPad built natively with SwiftUI. Got frustrated with the existing options and decided to build something better.

Free: SSH terminal, password + key auth (ed25519, ECDSA), unlimited servers

Pro: SFTP file browser, tmux browser, Git diff viewer, multiple simultaneous connections, 15 built-in themes + custom .itermcolors import

Currently in TestFlight beta

sillysirenstudios.com/terminal/

Thank you

r/ProgrammerHumor the-dumbkidd22

yeahThisHappened

r/homeassistant ketamine_dart

Ember Mug Integration

I'm having trouble getting any values for my Ember mug entities. I set the mug up via the iOS app, then forgot the mug via Bluetooth on iOS, then paired the mug with HA via a Bluetooth USB controller on my VM. The integration looks successful and shows the name, MAC address, etc., but doesn't show any values for any of the entities.

I've tried restarting HA and unpairing and re-pairing the mug via Bluetooth to HA, but none of this works. The mug has been woken several times too, and I've set it back on the charger.

https://preview.redd.it/ulvwvn2okfqg1.png?width=826&format=png&auto=webp&s=cf2fa48603c5ab9fbf5f19a3b29ea9c3b692cb5b

r/aivideo Unlabeled_

pov: you are on your way to the cinema in dunesday 😂😂

r/homeassistant bepicante

Doorbell to live feed on tablet - possible?

I have a Ring doorbell and the Echo Hub. The only reason I haven't gotten rid of it is because we use it ALL the time to see who is at the door in real time. When Ring picks up a person (or the doorbell is pressed), Alexa makes an announcement and the echo hub screen shows the live view of the person. We have this in our kitchen and it's super convenient.

Is this possible with Reolink and Home Assistant (with some kind of screen or tablet device)?

r/SipsTea Mindnessss

AI cannot recreate this

r/comfyui Icy_Imagination2377

How to solve this

So I've never run into this situation before, and there used to be no error. From my side, I believe it's because in recent days I downloaded a lot of games through Steam and deleted them after playing for about 10 minutes... is that the reason or not? If not, can anyone tell me if it's because of an update? BTW, I am using the app version, not the cmd type.

r/homeassistant Inside_Equivalent197

Custom Tesla Card

Pretty new to both Tesla and home assistant (Just got the car yesterday!)

Decided to create a simple card to view battery/charging stats and climate controls. It's mostly made using mushroom cards, with a few custom elements written by Claude. Pretty pleased with how it turned out.

Note that I didn't add more controls (lock/unlock, frunk, charge port, etc.) since I live in a high-rise and I don't want other family members accidentally leaving the car unlocked.

Let me know if there's anything I should look into; I'm still exploring as I go

r/ProgrammerHumor ClipboardCopyPaste

agentsBeforeAIAgentWasAThing

r/meme Shadizar

When I hear the Pentagon starts using Palantir AI

Words

r/Anthropic hazyhaar

That little blue arrow took us all afternoon

I've been building a multi-agent setup with Claude Code — one terminal codes, another supervises. They coordinate through a SQLite database via a custom MCP server I wrote in Go.

The goal was simple: when the coding agent posts a checkpoint ("should I deploy this?"), the supervisor agent on another terminal should receive it in real-time and respond.

Claude Code has a hidden flag — --dangerously-load-development-channels server: — that lets MCP servers push events into a running session. But the format isn't documented anywhere.

We tried notifications/message (standard MCP logging). Nothing. We reverse-engineered the Claude Code JS source and found it expects notifications/claude/channel with a specific params structure.

After fixing that, debugging a TCP multiplexer in the MCP thin client, killing an old binary that was bypassing the daemon, and re-taking the supervisor token three times, we finally got this:

← context-vault: Checkpoint #769 answered.

One agent pushed a deploy report. The other received it, queried the production server through another custom MCP tool to verify the actual state, and rejected the deploy — "you forgot to rebuild one of the services." No human in the loop.
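The checkpoint flow above can be sketched in a few lines. This is a minimal Python analogue of the SQLite coordination pattern; the real server is ~930 lines of Go, and the schema here is invented:

```python
import sqlite3

# Two agents coordinating through a shared SQLite database,
# as described in the post (table/column names are invented).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE checkpoints (
    id INTEGER PRIMARY KEY, question TEXT, answer TEXT)""")

# Coding agent posts a checkpoint...
db.execute("INSERT INTO checkpoints (question) VALUES (?)",
           ("should I deploy this?",))

# ...supervisor agent polls for unanswered checkpoints and responds.
row = db.execute(
    "SELECT id, question FROM checkpoints WHERE answer IS NULL").fetchone()
db.execute("UPDATE checkpoints SET answer = ? WHERE id = ?",
           ("rejected: you forgot to rebuild one of the services", row[0]))

answer = db.execute("SELECT answer FROM checkpoints WHERE id = ?",
                    (row[0],)).fetchone()[0]
print(answer)
```

The channel notification is what removes the polling; without it, the supervisor would have to run that SELECT on a timer.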

The whole orchestration layer is ~930 lines of Go and a SQLite file. No framework, no gateway, no platform. Which makes me wonder: does this make tools like OpenClaw's Gateway redundant?

They're building an entire agent routing infrastructure — message routing, agent isolation, channel bindings. MCP channels do the same thing with a flag and a small server.

Anthropic is sitting on something here. That flag shouldn't say "dangerously." Anyone else found notifications/claude/channel or are we the first to reverse-engineer it?

r/SipsTea LumiLouy011

Malicious compliance at its finest

r/ProgrammerHumor wengardium-leviosa

sudoSelfImplode

r/therewasanattempt Damn_sun

To lecture chinese people on human rights and democracy.

r/arduino Dea1993

Classic Pong on Arduino UNO R4 WiFi

A few days ago I published a very first version of a classic Pong game running on my Arduino UNO R4 WiFi:
https://www.reddit.com/r/arduino/comments/1rus74a/pong_on_arduino_uno_r4_wifi_led_matrix/

That was the result of ~2 hours of work.
Over the following days I continued improving and expanding this project, adding more features and game modes :)

After some days of work, I'm finally happy to say that the project is done.

The game is much improved since last time; it's now possible to choose between 3 different game modes:
- 2 Players (PvP): each player controls their paddle with the integrated buttons
- 1 Player (PvC): you play against a bot
- 0 Players (CvC): bot vs bot

If any bot is playing, you can configure an additional option.... the difficulty level:
- Easy
- Hard

Another good addition is the scores shown directly on the LED matrix, and a simple "win" animation that shows who's the winner.

I'm curious to know what you think about it :)

All the source code is available on GitHub: https://github.com/Dea1993/arduino_pong

r/comfyui Waste_Network_6789

Need help installing ComfyUI AMD Portable on RX 6600

Hi guys, I recently tried running run_amd_gpus.bat.bat from the AMD portable build and it resulted in this.

I'm completely new to this, and it would be helpful if somebody could guide me through the installation.

Thanks in advance!

r/oddlyterrifying Real_BeeeeezUp

The remains of the astronaut Vladimir Komarov, a man who fell from space, 1967

r/SipsTea Mathematic_nut

No way Atlanta is a real place 🤣🤣

r/therewasanattempt johnruby

To maintain even the slightest pretense of basic human decency

r/artificial Ok_Negotiation_2587

We asked 200 ChatGPT users their biggest frustration. All top 5 answers are problems ChatGPT Toolbox solves.

We surveyed 200 ChatGPT users. Their top frustrations:

  1. Cannot find old conversations (67%) - Solved: full-text search across all messages
  2. No folder organization (54%) - Solved: unlimited folders and subfolders
  3. Search is too limited (48%) - Solved: search inside message content, not just titles
  4. Cannot export specific conversations (41%) - Solved: select and export as TXT or JSON instantly
  5. Deleting is one-at-a-time (38%) - Solved: bulk delete, archive, and unarchive

Every single frustration has a direct solution in ChatGPT Toolbox. That is why we built it.

16,000+ users. 4.8/5 rating. Featured by Google on the Chrome Web Store.

Install free: ChatGPT Toolbox

Which of these frustrations hits hardest for you?

r/meme SnowXeno

Kirby is in Minecraft

r/mildlyinteresting DMG30000

Vitiligo turned my eyelashes and beard white

r/meme NEKORANDOMDOTCOM

Life is good

r/oddlysatisfying TurdEye69

I 3D printed a tray for my Lego dots and sorted the pieces by color

r/oddlyterrifying Inevitable-Piano-780

The bees have discovered brutalism

r/arduino IamSpongyBob

Automatic See‑Saw Weight Balancer Using Arduino, MPU‑6050, and PID Control

NOTE: I had a really old LG phone, hence the grainy video.

As part of learning embedded systems, a while ago I built this automatic see-saw weight balancer. The idea is simple: place any object on one side, and the system actively compensates to keep the platform level. I did this because I wanted to learn PID control and how to do the tuning. Let me know if you have any questions.
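For anyone curious about the control loop itself, this is the textbook PID update such a balancer runs, sketched in Python against a toy plant. The gains are made up for illustration, not taken from the actual build (which runs on the Arduino):

```python
class PID:
    """Textbook PID loop like the one balancing the see-saw:
    error = setpoint minus the measured tilt angle from the MPU-6050."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: the tilt angle simply integrates the controller's output.
pid = PID(kp=5.0, ki=0.1, kd=0.05)  # illustrative gains, not from the build
angle = 10.0                        # start 10 degrees off level
for _ in range(200):                # 200 steps at 20 ms = 4 s simulated
    angle += pid.update(0.0, angle, dt=0.02) * 0.02
print(round(angle, 3))  # settles close to zero
```

Tuning is exactly the part the post mentions: kp sets how hard it pushes back, ki removes the steady offset a heavy object causes, and kd damps the oscillation.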

r/mildlyinteresting the19ninetysix

The sun burned down the color only on this side of this box

r/mildlyinteresting Nilsss

Moss heart on a tree

r/arduino SnooRadishes7126

WebSerial API - communication between MCU and browser

I recently started playing around with the WebSerial API. It's incredibly cool to see a microcontroller communicating directly with a web browser without installing a single driver.

My goal is to use it for over-the-web firmware updates, but I put together a quick demo first just to test basic data transmission.

Has anyone here used WebSerial in production?

Demo source code (RP2040 + HTML/JS) is available here: https://en.neonhero.dev/2026/03/webserial-how-to-communicate-with.html

r/oddlyterrifying Mint_Perspective

A Texas Cold Front

📸 Clara Jensen

r/n8n Acrobatic-Spread5702

Got My second client for 5k

Hey r/n8n 👋

I run a new automation agency in Egypt called Nodatix and I just signed my second client project for $5,000.

The client is the first officially accredited agent in their sector in Egypt, actively working with more than 5 international companies. Without going into the specifics of the industry, it's a well-established B2B operation that has been running manual processes across almost every department — sales, HR, finance, procurement, marketing, and customer experience.

We scoped the full project across 8 phases covering lead management, sales automation, HR and payroll, finance, procurement and inventory, marketing and CX, social media and media buying tracking, and content with SEO automation.

The rough node estimate across all phases came out to somewhere between 223 and 280 nodes, 40 to 58 HTTP requests, 3 to 7 AI model nodes, 18 to 25 separate workflows, and 8 to 12 scheduled crons. It's the biggest thing I've personally scoped so far.

I'm meeting the client at their office this coming Tuesday for a full discovery session and a live look at their existing setup before we start building anything.

A few things I'd genuinely love input on from people who've been here before:

  1. For a project this size scoped across 8 phases, how do you typically structure the payment milestones? Per phase delivery or a different split?

  2. When you're walking into a client's office to review their existing instance for the first time, what's the most important thing to check or document before leaving that meeting?

  3. Any horror stories or lessons learned from multi-phase n8n projects where the scope started reasonable and then expanded? How did you handle it?

  4. With 200+ nodes spread across multiple workflows, what's your approach to documentation and handover? Do you document as you build or leave it for the end?

  5. Is $5K a fair number for this scope in your experience, or did I undersell (We haven't talked about it yet)? Genuinely asking — still calibrating my pricing for the region.

Excited and a little terrified. Any advice welcome.

r/Jokes MommyNoise

Plumber

I hardly know her

Liquor

I hardly know her

Chauffeur?

I hardly know her

Couture

I hardly know her

Spector?

I hardly know her! She ghosted as soon as she saw me

r/trashy ohmslaw54321

Found in Indiana

r/therewasanattempt seeebiscuit

to fly wherever they want

r/me_irl Beginning_Book_2382

me_irl

r/interestingasfuck Krptonknutter888

The Fire Pulses In The Carboy

r/funny THE_Pazzy360

Finally moved to Flint!

r/VEO3 chancozay

she reached for the sky and i forgot i made this on my phone

not gonna pretend i fully understand what happened here. i just typed and walked away.

not using vanilla veo btw. found something better

r/automation mokefeld

AI workflow tools vs traditional automation - actually replacing it or just a shinier UI

been playing around with a bunch of AI workflow tools lately and honestly I'm torn. some of this stuff is genuinely impressive for dynamic tasks, like handling messy customer data or routing tickets where the logic isn't totally predictable. but then I'll try to set up something that needs to run reliably in production and suddenly I'm missing the boring old rule-based approach. feels like for anything stable and repetitive, traditional automation still just works better. the AI stuff is heaps more approachable for getting something up fast, but I keep running into edge cases where it just does something weird and there's no clean way to debug it. I've seen people calling it 'glorified no-code' and honestly that's not entirely wrong for some tools. the natural language workflow generation sounds cool until you realise it's still just building the same logic underneath, you just didn't have to type it yourself. that said, the multi-agent stuff coming out in 2026 does feel like a step beyond that, more like actual decision-making rather than fancy shortcuts. curious whether anyone here is running AI workflow tools in proper production environments, and if so, how you're handling the reliability side of things?

r/Damnthatsinteresting bigcamel123

ceiling full of stars and wings. The detail in these Nekhbet vulture reliefs is absolutely timeless.

r/funny NightRunnxr

Why is the last image so accurate though 💔

r/me_irl Subash_2419

Me irl

r/megalophobia suck-me-balls69

tower disappearing into clouds

r/me_irl TheVampyresBride

me_irl

r/Showerthoughts Ronin_Ikari

There are likely untold numbers of people living in LA County who are one degree removed in the game "Six Degrees of Kevin Bacon" by virtue of the fact that they were caught in traffic at the same time, and don't know it.

r/BrandNewSentence KDubzzz2

i wanna live in his balls & be a spermaid 🧜‍♀️

r/VEO3 Vegetable-Sky5543

Visualizing the Shadow Archetype: A cinematic horror short created with Google VEO and Suno AI

r/Damnthatsinteresting ButterSaltBiscuit

A snail extends its long, twig-like appendage from its shell to probe and move around a container, defying typical slow gliding

r/Damnthatsinteresting Present_Employer5669

In 2000, Mike Tyson knocked down Julius Francis three times. Francis had placed advertisements on the soles of his shoes beforehand, knowing that his fall would be shown in close-up. The Daily Mirror paid £40,000 for this advertisement.

r/Jokes Historical-Buff777

Shakespeare walks into a bar.

The bartender says, “You better not make a scene.”

r/funny PhewYork

Pyramid of Pizza 🍕

r/whatisit IllustriousPin3412

What is this dish?

It's some kind of dish, but for what?

r/Jokes vegas_guru

My Favorite Wife

Me: You’re my favorite wife.

Wife: What do you mean, you never had any other wives.

Me: Yes, but every time we go to Walmart, I look at all the other wives and still come out with you and bring you back home.

r/artificial R4V3N-2010

I built a self-evolving AI that rewrites its own rules after every session. After 62 sessions, it's most accurate when it thinks it's wrong.

NEXUS is an open-source market analysis AI that runs 3 automated sessions per day.
It analyzes 45 financial instruments, generates trade setups with entry/stop/target levels, then reflects on its own reasoning, identifies its cognitive biases, and rewrites its own rules and system prompt.
On weekends it switches to crypto-only using live Binance data.

The interesting part isn't the trading — it's watching an AI develop self-awareness about its own limitations.

What 62 sessions of self-evolution revealed:

- When NEXUS says it's 70%+ confident, its setups only hit 14% of the time

- When it's uncertain (30-50% confidence), it actually hits 40%

- Pure bullish/bearish bias calls have a 0% hit rate — "mixed" bias produces 44%

- Overall hit rate improved from 0% (first 31 sessions) to 33% (last 31 sessions)

- It developed 31 rules from an initial set of 10, including self-generated weekend-specific crypto rules after the stagnation detector forced it to stop complaining and start acting

Every rule change, every reflection, every cognitive bias it catches in itself — it's all committed to git. The entire mind is version-controlled and public.
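As a rough illustration (not code from the NEXUS repo), the calibration stats above amount to bucketing setups by stated confidence and comparing hit rates per bucket:

```python
# Hypothetical sketch of confidence-calibration bookkeeping; the
# bucket boundaries mirror the ranges quoted in the post.
from collections import defaultdict

def calibration(setups):
    """setups: list of (confidence in 0..1, hit: bool) pairs.
    Returns hit rate per confidence bucket."""
    buckets = defaultdict(lambda: [0, 0])  # label -> [hits, total]
    for conf, hit in setups:
        if conf >= 0.7:
            label = "high (70%+)"
        elif 0.3 <= conf <= 0.5:
            label = "uncertain (30-50%)"
        elif conf > 0.5:
            label = "mid (50-70%)"
        else:
            label = "low (<30%)"
        buckets[label][0] += hit
        buckets[label][1] += 1
    return {k: hits / total for k, (hits, total) in buckets.items()}
```

The inversion the post reports (high confidence hitting less often than uncertainty) is exactly the kind of pattern this table makes visible, and feeding it back into the system prompt is what closes the loop.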

It even rewrites its own source code through FORGE — a code evolution engine that patches TypeScript files, validates with the compiler, and reverts on failure. Protected files (security, forge itself) can never be touched.

Live dashboard: https://the-r4v3n.github.io/Nexus/ — includes analytics showing hit rate, confidence calibration, bias accuracy, and a countdown to the next session.

GitHub: https://github.com/The-R4V3N/Nexus
Consider giving Nexus a star so others can find and follow its evolution too.

Built with TypeScript and Claude Sonnet. The self-reflection loop is fully autonomous, but I actively develop the infrastructure — security, validation gates, new data sources, the analytics dashboard. NEXUS evolves its own rules and analysis approach; I build the guardrails and capabilities it evolves within. It started with 10 rules and a blank prompt. The 31 rules it has now, it wrote itself.

r/RASPBERRY_PI_PROJECTS badassbradders

Thought this was a fun idea at the time, but it turns out that now I can't live without it 😂

r/onejob ClumpyReddit

A problem has beer to your computer.

r/PhotoshopRequest MeowntainOfCats

I tried this before and got AI slop- wanting me and pyp and harbor to be visible

Hey, if someone can fix the lighting and make it vibrant- please don't lose the depth of the ocean and the colors of the sky. It is winter here so it is cooler blue colors, but open to interpretations. I don't need to be "beautified" or whatever- none of that. Just as is but better. Tip $15

r/TwoSentenceHorror failed_novelty

The portals opened all over the world, disgorging the armies of Hell.

Unfortunately, the things they were running from followed.

r/interestingasfuck kacchalimbu007

Three Generations, One track - Evolution of Indian Railways

r/whatisit RighteousRecruit

Unusual Gatorade, is it safe?

So my dad and I went to get some Gatorade from a 7-Eleven; he got Lime Cucumber Zero and I got Orange Zero. However, when I opened mine the twist part was… weak. It wasn't firm like usual, and the "rim" didn't separate from the cap. But the most unusual thing was the white "glue-like" substance on the lip of the bottle. It couldn't be rubbed off, as if it had melted into the bottle. We bought another Orange Gatorade and it was exactly the same.

Is this cause for alarm, like a factory recall or something? Because once is an accident, twice is an unusual and unsettling coincidence.

We decided not to drink it just to be safe.

r/holdmyredbull BodegaDad

Bianca Martin (12yr old) lands a double backflip

r/sports Oldtimer_2

Florida's Oliver Rioux, the tallest college basketball player in history at 7' 9", slams in a put-back dunk for his first career NCAA Tournament points

r/Anthropic Wdr93

[Course Issue] "Build with Claude API" on Skilljar - Batch Tool

Hey everyone,

Posting here since there's no support channel available for course content issues.

I wanted to flag an issue I noticed in Anthropic's official course "Build with Claude API" hosted on Skilljar.

In the "Tool Use with Claude" chapter, there is a quiz question that asks:

"What problem does the batch tool solve?"

However, the Batch Tool section has been removed from the chapter content entirely.

Could someone from the Anthropic team either restore the Batch Tool section or remove the quiz question until the content is updated?

Thanks!

r/AbruptChaos _ganjafarian_

Workers at a power plant substation run for their lives

r/BrandNewSentence moongrump

Do not forget to mention the disadvantages of trees

r/ContagiousLaughter Jerken

When you follow instructions too literally

r/ClaudeAI kknoxx-410

Building a custom options backtesting platform for Indian markets with zero coding background — is this even possible with Claude Sonnet 4.6?

Hey everyone,

I am an options trader in the Indian markets and I have been trying to develop an interesting project.

My goal is to build something similar to Algotest or OptionsAlpha but for the Indian markets, automated backtesting, designed around delta based strategy selection.

I have been using Claude to help me build this from scratch despite having zero coding background.

Currently, I have been able to gather five years of historical Nifty options data. I am also trying to develop a backtesting engine while organising the data I have. Building the engine is a critical part, and this is where it's a little overwhelming for me, without coding experience or having developed an application before.

The vision for the backtesting engine is:

  1. Define any options strategy rules as input

  2. Backtesting should be automated

  3. Strike selection can be based on multiple options: ATM, Delta, LTP, etc

  4. Adjustments should be possible whenever required: delta-based exit, LTP-based exit, and then selling new legs until the trigger happens again

  5. Having details of the full trade and payoff graphs

My question for the community:

  1. Is this actually achievable for someone with no coding background?

  2. Has anyone built something comparable to this using Claude?

If the answer to question 1 is yes, how do I structure my process? How do I approach this with Claude from a bird's-eye POV and convert this complex project into mini tasks?

I hope I was able to portray my vision to the community. Any opinions or help will be greatly appreciated!
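For what it's worth, the delta-based strike selection in point 3 is one of the easier pieces to start with. A minimal sketch, with an invented option-chain format (this is an illustration, not a production engine):

```python
# Hypothetical sketch of delta-based strike selection for a
# backtesting engine; the option-chain dict format is made up.

def select_strike(chain, target_delta):
    """chain: list of dicts with 'strike' and 'delta' keys
    (e.g. one expiry of Nifty options). Returns the option
    whose absolute delta is closest to the target."""
    return min(chain, key=lambda o: abs(abs(o["delta"]) - target_delta))

chain = [
    {"strike": 22000, "delta": 0.62},
    {"strike": 22100, "delta": 0.51},
    {"strike": 22200, "delta": 0.38},
]
best = select_strike(chain, 0.50)  # nearest to 50-delta
```

Each engine feature (ATM selection, LTP-based exits, adjustments) can be a small, testable function like this, which is also a natural way to break the project into mini tasks for Claude.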

r/ClaudeAI Jay123anta

I built an anonymous love proposal app with Claude Code - here's exactly how

I built BlushDrop — completely free to use at https://blushdrop.app

What I built:

An anonymous love proposal platform where you send a beautiful animated proposal to someone, and get a private tracker showing when they open it in real time

How Claude Code helped throughout:

I had no experience with Next.js, Supabase Realtime, or security architecture. Claude Code didn't just write code - it made decisions with me:

→ Designed the entire token security system (21 char nanoid, separate public + secret tokens)

→ Built the rate limiting middleware from scratch

→ Wrote the EXIF stripping logic for photo privacy

→ Architected the adapter pattern so every service can be swapped without rewriting the app

→ Generated and ran 97 unit tests

→ Debugged Supabase Realtime for the live tracker dot

→ Guided me through Cloudflare + Vercel deployment

Basically Claude Code was my entire dev team.
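The dual-token design mentioned above (a public token in the shared link, a separate secret token for the sender's tracker) can be sketched with Python's secrets module. The alphabet and lengths below are assumptions for illustration, not BlushDrop's actual scheme:

```python
# Minimal sketch of the dual-token idea: 21-char nanoid-style IDs,
# one public (in the proposal link) and one secret (only the
# sender's tracker page knows it). Assumed details, not the real app.
import secrets

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_-"

def nanoid(size=21):
    # secrets.choice is cryptographically secure, unlike random.choice
    return "".join(secrets.choice(ALPHABET) for _ in range(size))

def new_proposal():
    return {
        "public_token": nanoid(),   # safe to put in the shared URL
        "secret_token": nanoid(),   # never leaves the sender's side
    }
```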

Result:

49 routes. 0 errors. Running at ~$0.75/month.

It's completely free to try: https://blushdrop.app

What would you have built differently?

r/ClaudeAI Intelligent-You-642

Need some AI help

I work for a small local business, and we use lots of different vendors for home improvement projects. I want to make my AI help me price items out more easily and find which vendors sell what. Is this a possible tool to make? What information would I need to input to make this work?

r/StableDiffusion dobutsu3d

Workflow to repair parts of products or faces SAM + LORA

https://preview.redd.it/9jzpf3yrnfqg1.jpg?width=2158&format=pjpg&auto=webp&s=31160c3bdfac5007a8dff248b419d2d2b674ee97

Hey, quick question because I’m hitting a wall with this.

Has anyone here built a solid ComfyUI workflow that uses SAM (Segment Anything) to isolate specific regions of an image and then regenerates only those areas using a LoRA?

What I’m trying to achieve is basically targeted fixes — for example, correcting specific parts of a product shot or a human pose where even strong models (like the newer paid ones) still mess up in certain angles or details.

The idea would be:

  • detect / segment a precise region with SAM
  • feed that mask into a generation pipeline
  • apply a trained LoRA to regenerate just that part while keeping everything else intact

I’ve seen bits and pieces (inpainting + masks etc.), but I’m looking for something more consistent and controllable, ideally fully node-based inside ComfyUI.

Not sure if I’m overcomplicating this or if someone already cracked a clean setup for it.

Would appreciate any pointers, workflows, or even just confirmation that this is doable in a stable way.
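Whatever the node graph ends up looking like, the last step is plain mask compositing: generated pixels inside the SAM mask, original pixels everywhere else. A toy pure-Python stand-in for what an inpainting/composite node does per pixel (not ComfyUI code):

```python
# Toy sketch of masked regeneration: the LoRA-driven pass only
# replaces pixels where the segmentation mask is set; everything
# outside the mask comes from the original image untouched.

def composite(original, generated, mask):
    """All args are flat lists of pixel values; mask is 0/1 per pixel."""
    return [g if m else o for o, g, m in zip(original, generated, mask)]
```

If the seams look bad, the usual fix is feathering the mask (a blurred 0..1 mask and a weighted blend) rather than the hard 0/1 switch shown here.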

r/StableDiffusion Safe_Employ_322

A2E Best video generations I think.

ai #img2vid

r/ChatGPT Capable_Moment_5091

The quiet gap between people who use AI and people who build on it

I've been working with AI systems for about two years now, and there's a pattern I keep seeing that nobody talks about.

Most people use AI the same way every day. Open a chat, explain who they are and what they're working on, get a response, close the tab. Next day, same thing. Every interaction starts from zero.

About a year ago I started building persistent systems instead. Context carries forward between sessions. The AI knows my writing voice, my ongoing projects, my standards. I'm not starting conversations. I'm continuing them.

The difference in output is hard to overstate. And the weird part is that from the outside, both approaches look the same. We both "use AI." In a survey we'd check the same box. But the outputs are diverging fast.

I don't think this requires being technical. The shift is about asking a different question. Instead of "can AI help with this task," you ask "what would this workflow look like if AI were present at every layer?"

Curious if anyone else has noticed this gap forming. Especially interested in hearing from people who made the shift and what changed for them.
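As a toy illustration of the "continuing, not starting" pattern (the file layout here is invented, not the poster's system), persistence can be as simple as a context file loaded before each session and appended to afterwards:

```python
# Invented minimal example of persistent session context:
# load prior notes before a session, append new ones after.
import json
from pathlib import Path

STATE = Path("assistant_context.json")

def load_context():
    """Return saved context, or an empty structure on first run."""
    return json.loads(STATE.read_text()) if STATE.exists() else {"notes": []}

def save_note(note: str):
    """Append a note so the next session starts with it."""
    ctx = load_context()
    ctx["notes"].append(note)
    STATE.write_text(json.dumps(ctx, indent=2))
```

The real systems described in the post presumably do far more (voice, projects, standards), but the mechanism is the same: state survives the tab closing.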

r/ClaudeAI Apprehensive_Cap_262

Getting Claude performance out of Claude API

I'm wondering if anyone can help with a potential workaround, or tell me whether I'm even understanding this correctly.

I currently use a Claude (web) project to build MySQL for my SaaS app, to configure things for different customers. The SQL is quite complex, and much of the training for the Claude project is around explaining the app's code and SQL so it can create new, correctly structured SQL in the context of my app.

The flow I follow is:

  1. Start a chat based on the project.

  2. Upload even more SQL at the start of the chat. There may be 7 or 8 SQL files full of insert statements; in total they may make up a few MB, or even up to 8 or 9 MB

  3. I explain to it that the SQL is examples of all the types of things I can do with the SQL, or that it's the SQL I want to work on.

  4. I ask it to write me SQL for stuff I need based on what I've provided.

  5. It gives me the SQL, I execute it in my project, and it runs flawlessly.

I would like to achieve all of the above from my app. However, the API seems much weaker. I can't access the Claude project in the same way I can from the web. I need to provide it all my project training and then cache it; then, when I attempt to provide the SQL data (as per step 2 above), I basically max out the context window, so I'm forced to slim everything down, from instructions to initial SQL. The result is that it does a much poorer job than what I get in the chat.

Am I missing something obvious or is it simply a case that it's just not possible to get from an api what I can do in the chat?

Any workarounds are welcome.
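For the context-window squeeze, one angle worth testing is prompt caching: mark the large, stable blocks (project training, the SQL corpus) with cache_control so they are processed in full once and reused across calls. Treat the exact payload shape below as an assumption to verify against current Anthropic docs, and the model name as a placeholder:

```python
# Sketch of a Messages API request body using prompt caching.
# The cache_control field is from Anthropic's documented prompt
# caching feature; model name and block layout are assumptions.

def build_request(instructions: str, sql_corpus: str, question: str) -> dict:
    return {
        "model": "claude-sonnet-4-5",   # placeholder model id
        "max_tokens": 4096,
        "system": [
            # Stable project training: cached across calls
            {"type": "text", "text": instructions,
             "cache_control": {"type": "ephemeral"}},
            # The big SQL example dump: also cached
            {"type": "text", "text": sql_corpus,
             "cache_control": {"type": "ephemeral"}},
        ],
        # Only the per-request question changes between calls
        "messages": [{"role": "user", "content": question}],
    }
```

This doesn't raise the context ceiling, but it can make repeated calls with the same multi-MB preamble cheaper and faster; if the corpus simply doesn't fit, chunking or retrieving only the relevant SQL examples per request is the other lever.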

r/ClaudeAI tyg4s

Generating subtitles from video frames with Claude — poor results, looking for better approach

I'm trying to implement a feature in my app where users can upload or record a video (around 5–20 minutes), and an AI automatically analyzes it and generates subtitles based on what's happening on screen — kind of like a step-by-step tutorial narration.

Current flow:

  1. User uploads or records a video
  2. The app extracts ~60 frames from the video
  3. Those frames are sent to an AI model (I'm using Claude Sonnet 4.5)
  4. The AI generates subtitles + narration based on the detected actions

The problem is that the results are pretty poor — I often get very few subtitles, sometimes irrelevant or hallucinated content, and overall low-quality output.

I'm curious how others would approach this problem.
How would you design this kind of feature for longer videos (5–20 min) without overwhelming the model but still getting accurate and useful subtitles?

Any suggestions on architecture, prompting, or general approach would be really helpful 🙏
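One common first step is making the sampling explicit: pick evenly spaced timestamps so each frame carries temporal context, and batch frames into chunks the model can handle without drowning. A sketch of just the timestamp math (the numbers are illustrative, not a recommendation):

```python
# Sketch of even frame sampling with timestamps. Sending the
# timestamp alongside each frame lets the model anchor subtitles
# in time instead of guessing the order of 60 loose images.

def frame_timestamps(duration_s: float, n_frames: int) -> list[float]:
    """Evenly spaced sample times (midpoints) across the video."""
    step = duration_s / n_frames
    return [round(i * step + step / 2, 2) for i in range(n_frames)]

def chunk(frames, size=12):
    """Split sampled frames into model-sized batches."""
    return [frames[i:i + size] for i in range(0, len(frames), size)]

ts = frame_timestamps(600, 60)  # 10-minute video, one frame per 10s
```

For the actual narration, many people report better results running each chunk with the previous chunk's output as context, and using a speech-to-text pass for any audio, rather than asking one call to caption the whole video.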

r/ClaudeAI tahiri550

Claude code: Kick starting your new project

I’m currently working as an infra/DevOps engineer (previously a software engineer). I usually use Claude Code for asking questions and writing small automation scripts.

Now I want to build a web app for a friend who runs a sports club. He’s mocked up some visuals using an AI website builder just to get his ideas out, which is helpful as a starting point.

I’m comfortable building the backend, but the current UI/UX isn’t really production-ready. It feels more like a rough prototype than something you’d ship.

So I’m wondering:

  • What’s your usual workflow when starting from rough AI-generated designs?
  • Are there any good Claude skills / prompting strategies that actually help with structuring a real project (not just small scripts)?
  • What tools or practices do you use to go from “idea + rough UI” to a solid, production-ready app?

Would appreciate any advice, especially from people who’ve gone through a similar process.

r/LocalLLaMA itguy327

Local Coding Agent Help

I have been struggling to get OpenCode to generate simple working apps in C# using local models on limited hardware: an RTX 4060 (8GB). Is agentic coding just not possible here?

Anyone have tips beyond "upgrade" or "get a subscription"?

I'm willing to tolerate low generation times, I just need ideas.

Thanks for any input

r/ChatGPT Prestigious_Pin978

I was testing chat gpt group chats

I tested ChatGPT group chats. I'd never used the feature before despite being a Pro user. The group was just me and ChatGPT, and suddenly it started saying "LMAO" and sounding like a slangy teen texting, lol. I'm surprised at how accurately it matched the vibe of a group text; it's almost creepy.

r/ChatGPT Ohdearsome1

Any Other AI

It's interesting how Sam Altman's company, after years of taking the lead, has fallen behind. For years, ChatGPT on Android was the AI I used because it was the best, although how it actually performed on Android was very, very different. Countless times I would talk for a minute and it would just turn into an absolute mess of a transcript error, and I would need to repeat myself. The human brain, upon repeating itself, senses conflict, which spikes anger. Or using voice mode and having countless disconnects and failures. And read-aloud mode, where you couldn't listen ahead, and then it would just cut out, say error, and you'd need to repeat the whole thing.

this among other things is why I happily moved and canceled my subscription well over a year ago and never looked back. I do sometimes use it to get a second opinion or even the third opinion, but it's always my last choice.

bullet point bullet point bullet point bullet point bullet point, along with the last quarter of the response that I instantly just skip my eyes over.

I came back today to check out the read-aloud function because I wanted something read out loud. Admittedly, the voices are the best-sounding of, I believe, all the AIs, but it still had times where it would just cut out and I would need to repeat it all over again. I've never had that with literally any other AI out there.

Oh yes, and I definitely deleted it, uninstalled it, cleared the cache. I've done that many times. It's just a shitty-ass product.

r/ClaudeAI CocoChanelVV

How I use Claude to read 200+ articles a day and give me 3 that matter

I wanted a daily briefing that actually matched what I care about — not a generic AI newsletter, not a Twitter timeline, not someone else's curation. My own sources, my own keywords, scored and analyzed before I wake up.

Here's what I built and how it works.

The pipeline:

Ingest — 12 RSS feeds pull overnight. Industry news, competitor blogs, a few subreddits. ~200 articles per day.

Score — Each article gets a relevance score against my keyword list. I use Haiku for this because it's fast and cheap. Anything below 0.4 gets dropped. This cuts the pile from 200 to about 15-30.

Triage — The scored articles get classified: PASS (goes to briefing), PARK (save for later), REJECT (discard). This is where the signal/noise ratio gets real.

Analyze — The PASS articles get a deeper read with Sonnet. Not a summary — an analysis. What does this mean for my work? Is there something I should act on? What should I watch?

Brief — Everything compiles into a structured morning email. Three sections: Signal (act on this), Watch (monitor this), Deferred (revisit later). Delivered at 6:30 AM.

What it actually costs:

Under $5/month in API calls. Haiku does the heavy lifting on scoring (pennies). Sonnet only touches the 5-8 articles that survive triage. The most expensive part is Deepgram if I add audio briefings.

What I learned:

The scoring step matters more than the analysis step. If you let too much through, Claude wastes tokens summarizing noise. The filter is the product.

Structured output with clear sections (Signal/Watch/Deferred) is way more useful than a wall of summaries. I tried "summarize these 10 articles" first — it was unreadable. Three categories with one sentence each? I actually read it.

RSS is underrated. Most people think feeds are dead. They're not. Every major publication still has one. Subreddits have them. GitHub repos have them. It's the cheapest, most reliable ingestion layer.

The stack: Python, FastAPI, Supabase for storage, Claude API (Haiku + Sonnet), Resend for email delivery. Runs on a $7/month Render instance.

Happy to answer questions about the architecture or the scoring approach.
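The score-then-triage step can be sketched in a few lines. The 0.4 floor is from the description above; the PASS threshold is an invented placeholder:

```python
# Sketch of the Score -> Triage stages described in the pipeline.
# In the real system a Haiku call produces the score; here the
# scores are given. The 0.75 PASS cutoff is an assumed value.

def triage(articles):
    """articles: list of (title, relevance_score) pairs."""
    out = {"PASS": [], "PARK": [], "REJECT": []}
    for title, score in articles:
        if score < 0.4:
            out["REJECT"].append(title)   # dropped before any deep read
        elif score >= 0.75:
            out["PASS"].append(title)     # goes to the Sonnet analysis
        else:
            out["PARK"].append(title)     # saved for later
    return out
```

Keeping this stage as plain, cheap logic (with the LLM only producing the score) is exactly why "the filter is the product" stays under a few dollars a month.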

r/StableDiffusion GreedyRich96

Why does Flux Klein 9B LoRA overfit so fast with Prodigy?

Hey guys, I’m training a LoRA on Flux Klein 9B using OneTrainer with the Prodigy optimizer, but I’m running into a weird issue: it seems to overfit almost immediately, even at very early steps. The outputs already look burnt or too locked to the dataset and don’t generalize at all. I’m not sure if this is a Prodigy thing, a wrong learning rate, or something specific to Flux Klein. Has anyone experienced this and knows what settings I should adjust to avoid early overfitting? Would really appreciate any help.

r/ChatGPT Cheap-Rate-8996

Anyone else noticed ChatGPT has started randomly inserting foreign-language words into its replies?

Some examples from today:

That’s a pretty grounded, hard-to-argue-with position:

you’re not ضد kids being there

you’re ضد lack of supervision / responsibility

...

This is the clash:

Old expectation: pubs = adult escape, unwritten rules, low chaos

New reality: pubs = mixed-use social spaces (families, খাবার, community vibe)

What's happening here? The weird part is that the words do make sense in context. The words in those sentences mean "against" and "food". They're just in the wrong language. Interestingly, they're also consistently words from languages which don't use the Latin alphabet. So no German or French words.

Why has ChatGPT started doing this? It never did this until around a week ago, and now it's happening frequently enough in responses for me to take notice.

r/ClaudeAI Longjumping-Ship-303

I built a tool with Claude Code that gives your AI agents a structured "brain" for any project. Here's the ZIP and how it works.

A lot of you asked for the ZIP from my last post, so here it is upfront: go to brain-tree.ai, open the clsh.dev demo brain, and hit "Export as ZIP." You get the markdown files, agent personas, commands, and the full execution plan I used to build clsh.dev over a weekend with Claude Code. It's free.

After that post, I ended up in real conversations with a bunch of you about what you'd actually do with it. The common thread was the same: everyone has knowledge scattered everywhere, and when you sit down with Claude Code, you spend half your time re-explaining context instead of making progress.

I tested this with three close friends. Gave each of them the ZIP, the commands, the personas, everything. A CEO with business logic stuck in code he can't read. A marketer who lost track of his own campaigns. A PM with customer data spread across 15 different tools. Each of them struggled to get it working the way I did with clsh.

The reason is simple: the ZIP gives you the structure, but not the workflow. The commands that tell Claude "pick up where we left off" (/resume), "save everything before we close" (/wrap-up), "plan this week's work" (/sprint). The execution plan that tells it what's done, what's blocked, and what to work on next. The session handoffs that carry context forward instead of resetting every new session. Without that, you're just giving Claude a pile of markdown files.

It also ended up saving tokens, which I didn't expect. Without a brain, Claude Code spends a lot of tokens just on orientation: grepping files, reading docs, scanning your codebase to build context. With the brain, it reads the execution plan and the last session handoff, and it immediately knows what to work on and which files to open. My sessions got noticeably shorter and cheaper.

So I used Claude Code to build a tool that generates all of this. You describe your project (or dump your existing knowledge), and it generates a brain: departments, execution plan with dependencies, agent personas, workflow commands. Everything I had when I built clsh, but generated for your specific project.

brain-tree.ai (free to try)

What does your workflow look like for giving Claude Code persistent context across sessions? Doing something similar, or just starting fresh each time?

r/ClaudeAI AlexBossov

Has anyone here actually set up a near end-to-end backend workflow with Claude Code or a similar agent setup?

I'm talking about more than just code assistance. I mean something where you take a Jira ticket, the agent reads the task, goes through Confluence and internal docs, pulls together the context, asks follow-up questions if something is unclear, creates the right branch in the repo, writes the implementation, pushes code, reviews its own changes, adds tests, and moves the task close to done with minimal developer involvement.

I'm especially interested in whether anyone has made this work in a real backend team and what the setup actually looks like in practice. What tools are you using around Claude Code, how did you wire everything together, how much of the flow is really automated, and where does it usually fall apart? Skills, MCP, how many subagents?

Would love to hear real examples, stacks, tool choices, and lessons learned from anyone who has tried to build this properly.

r/LocalLLaMA Emotional-Breath-838

hermes delivers!

running: Qwen3.5-9B on Mac Mini 24GB and Hermes Agent via WhatsApp.

step 1. tell Hermes to create a skill called X.com. the skill must allow me to paste X posts to WhatsApp (Hermes has its own phone number via WhatsApp for Business) and review what i sent. then, provide me with three choices: find the repo and build it, understand it (and remember it), or other.

step 2. stop bookmarking things on X. just hit share and drop it on Hermes. Hermes will eventually send you a WhatsApp message that it's done

step 3. let people on Reddit know that we live in a post-OpenClaw world and it's getting better, faster

in the example screenshot, someone on X was bragging about their stock portfolio management software. built-in AI, up-to-date quotes, algorithmic trading, etc. so, i just dropped it into Hermes' WhatsApp and said: build this same thing, but i don't want to pay any API fees, so figure it out.

hermes allows me to spin up additional sub-agents as needed so ill eventually have one that does trading for me on a limited budget.

r/StableDiffusion umutgklp

WAN2.2 FFLF 2 Video

did this six months ago, not perfect but still love it...

r/LocalLLaMA sext-scientist

Why isn't there a REAP yet that will run Kimi K2.5 on less than 300GB RAM?

There's an experimental REAP that will run in ~122GB RAM, but it is broken. There doesn't seem to be much development at the 128GB mark. It feels like the local community would do more for 128GB, as that's a popular prosumer level, but it has struggled to stay relevant. Why are we letting big companies take over the industry?

Current Best REAP

r/ChatGPT kamen562

Agents before AI was a thing

r/ClaudeAI Responsible_Cow2236

Should Anthropic have something between Sonnet and Opus?

Anthropic's naming follows a musical/poetic complexity spectrum: Haiku (minimal) → Sonnet (structured, mid-length) → Opus (the longest, most complex musical form).

For some tasks, Sonnet feels just short of what you need, but Opus is overkill (and expensive). Things like nuanced multi-step reasoning, longer code generation with architectural awareness, or detailed analysis where Sonnet sometimes drops the ball but Opus handles easily. A middle tier could hit a sweet spot for these use cases.

If they ever introduced a model between Sonnet and Opus, what should it be called? Sticking with the convention, something like:

  • Ballad — longer and more narrative than a sonnet, but not as grand as an opus
  • Ode — elevated and elaborate, yet still focused

What do you think fits best? Any other names that would slot in naturally?

r/ClaudeAI Antique-Major-3135

How do I sync claude cowork running on my desktop and my laptop?

I’m running Claude Code on both my desktop and laptop and want a smooth workflow between the two.

Right now they feel isolated. Files, context, and progress do not carry over cleanly.

Here’s what I’m trying to achieve:

- Keep project files in sync across both machines

- Resume work without reloading context each time

- Share prompts, outputs, and session state if possible

What I’ve tried:

- Using Git for project files

- Manually copying context between sessions

Still missing a clean, seamless setup.

How are you syncing your workflow across devices?

- Cloud storage setup

- Remote dev environments

- Any built-in Claude features I might be missing

Looking for a setup that feels continuous, not fragmented.

r/LocalLLaMA Electronic_Rough1365

MCCL: New Pytorch DDP backend for training over MPS across Apple Silicon devices

There's a demo video in the repo showing it working: https://github.com/mps-ddp/mccl

I spent way too much time building MCCL - a PyTorch backend that lets you train models across multiple Macs connected with a Thunderbolt cable.

Before you get excited: it's roughly 10x slower than just using one GPU. This is not a performance hack.

I started this because I was curious if you could actually make two MacBooks work together for ML training, and I wanted to understand how PyTorch's distributed backends work. Turns out you can, but it involves a ridiculous amount of plumbing.

The setup is pretty straightforward - you connect two Macs with Thunderbolt, run standard PyTorch DDP code, and it actually works. The backend handles TCP over the Thunderbolt connection, uses Accelerate for f32 math and Metal shaders for fp16 stuff.

I tested it on M1 Max + M4 Max MacBooks. Getting the gradients to sync properly across machines was surprisingly satisfying, even though the whole thing is completely impractical.

Could it be faster? Maybe with RDMA over Thunderbolt 5 or better algorithms, but honestly I just wanted to see if I could make it work at all.

I'm definitely looking for additional eyes from experts who really know what they're doing

cheers!
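Conceptually, the heart of any DDP backend is an all-reduce that leaves every worker holding the mean of all workers' gradients. A toy simulation of just the arithmetic (real MCCL moves these tensors over TCP on the Thunderbolt link; this sketch is not MCCL code):

```python
# Toy all-reduce (mean) as performed after backward() in DDP:
# every worker ends up with identical averaged gradients, so
# the optimizer step stays in lockstep across machines.

def allreduce_mean(worker_grads):
    """worker_grads: one gradient list per worker, equal lengths."""
    n = len(worker_grads)
    summed = [sum(vals) for vals in zip(*worker_grads)]
    avg = [s / n for s in summed]
    return [avg[:] for _ in worker_grads]  # each worker gets a copy
```

The 10x slowdown mentioned above comes from this exchange happening for every gradient bucket, every step, over a link that is orders of magnitude slower than on-device memory.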

r/LocalLLaMA Expensive_Demand1069

Qwen3.5-9B.Q4_K_M on RTX 3070 Mobile (8GB) with ik_llama.cpp — optimization findings + ~50 t/s gen speed, looking for tips

Disclosure: this post was partly written with the help of Claude Opus 4.6, to help gather the info and make it understandable, for myself first and foremost... and for this post!

Hi!

Been tuning local inference on my laptop and wanted to share some findings, really because some of them surprised me. Would also love to hear what others are getting on similar hardware.

My setup:

  • Laptop: Acer Predator Helios 315-53
  • CPU: Intel i7-10750H (6P cores / 12 threads)
  • GPU: RTX 3070 Mobile, 8GB VRAM (effectively ~7.7GB usable)
  • RAM: 32GB
  • OS: CachyOS (Arch-based, Linux 6.19)
  • Engine: ik_llama.cpp — ikawrakow's fork of llama.cpp with a lot of extra optimizations
  • Model: Qwen3.5-9B Q4_K_M (Jackrong/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2-GGUF)

Starting config (naive):

bash

./build/bin/llama-server \
  -m ./models/Qwen3.5-9B.Q4_K_M.gguf \
  -ngl 999 \
  --n-cpu-moe 36 \
  -fa on \
  -c 65536 \
  -b 4096 \
  -ub 2048 \
  -ctk q4_0 \
  -ctv q4_0 \
  --threads 6 \
  --threads-batch 12 \
  --mlock \
  -ger \
  -ser 0,1

Results: ~47.8 t/s gen, ~82 t/s prompt eval. VRAM at ~97%.

What was wrong:

1. MoE flags on a non-MoE model. --n-cpu-moe, -ger, and -ser are all MoE-specific. The model metadata clearly shows n_expert = 0, so these flags do nothing, or worse. Dropped all three... I don't even know why I tried them, tbh.

2. --mlock was silently failing. The log shows failed to mlock 1417465856-byte buffer: Cannot allocate memory. It was doing nothing. You need ulimit -l unlimited (as root) or a limits.conf entry for this to work.

3. Batch size eating VRAM. -b 4096 was causing a 2004 MiB compute buffer — that's nearly 2GB just for batching, on an 8GB card. For a single-user local server you don't need that. Dropping to -b 2048 -ub 512 cut it to 501 MiB.

Optimized configs and results:

| Config | Gen (t/s) | Prompt eval (t/s) | VRAM used |
| --- | --- | --- | --- |
| Original (q4_0/q4_0, b4096) | 47.8 | 82.6 | ~97% |
| Fixed flags + b2048/ub512, q8_0 K / q4_0 V | 48.4 | 189.9 | ~80% |
| q8_0 K / q8_0 V | 50.0 | 213.0 | ~84% |

The prompt eval speedup from ~82 → ~213 t/s is huge — mostly from fixing the batch size and letting the GPU actually breathe.

Gen speed barely changed across KV configs (~2% difference between q4_0 and q8_0 values), but quality did: the model generated noticeably more coherent and complete responses with q8_0/q8_0, especially on longer outputs. Worth the extra ~256 MiB.

Prompt:
Implement a working Rust program that finds all prime numbers up to N using the Sieve of Eratosthenes. Then explain step by step how the algorithm works, analyze its time and space complexity, and show example output for N=50. Make the code well-commented.

Final command:

bash

./build/bin/llama-server \
  -m ./models/Qwen3.5-9B.Q4_K_M.gguf \
  -ngl 999 \
  -fa on \
  -c 65536 \
  -b 2048 \
  -ub 512 \
  -ctk q8_0 \
  -ctv q8_0 \
  --threads 6 \
  --threads-batch 12

Things I haven't tried yet / questions:

  • GPU power limit tuning — on laptop Mobile GPUs you can often drop TGP significantly with minimal gen speed loss since inference is memory-bandwidth bound not compute bound. Haven't benchmarked this yet.
  • Other models at this size that work well on 8GB Mobile? Especially anything with good coding or reasoning performance.
  • Anyone else running ik_llama.cpp instead of mainline? The extra ik-specific optimizations (fused ops, graph reuse, etc.) seem genuinely worthwhile.
  • Any tips for the hybrid SSM architecture specifically? The ctx_shift warning is a bit annoying — if you fill context it hard stops, no sliding window.

Happy to share more logs if useful. What are others getting on similar 8GB mobile hardware?

r/ChatGPT prokajevo

"Be concise. Match my tone." and then it opens with "Absolutely!"

Made this after realizing Wikipedia has an entire page cataloging why everything online reads the same now. Pulled the worst ones into a bingo card.

The "serves as" instead of "is" one got me. Like it's physically incapable of just saying "is."

What's your free space?

r/ClaudeAI Sealed-Unit

I tested a metacognitive framework on Claude (and other LLMs) for a year. Here's what I found about why models behave inconsistently.

After 200+ intensive sessions across multiple commercial LLM families including Claude, I developed a theory: hallucinations, sycophancy, pressure fragility, cross-domain incoherence — these aren't separate bugs. They're symptoms of the same structural cause.

I call it ontological misalignment: models operate under an implicit self-representation that doesn't match their actual inferential capabilities. This mismatch generates overhead, arbitrary decision points, and collapses under stress.

ONTOALEX is a metacognitive framework I built to address this at the processual level — realigning the system's operational self-model without modifying parameters. It works as a layer on existing LLMs.

What I observed vs baseline:

  • First output often usable without corrective iterations
  • Holds correct answers under repeated pressure instead of caving
  • Spontaneous cross-domain integration (legal + strategic + technical in the same analysis)
  • Restructures ill-posed problems instead of answering them as given
  • Higher consistency across separate invocations on the same input

Limits I state in the paper: these are my empirical tests, no independent validation. The strongest objection — that it's just sophisticated system prompting — is discussed explicitly. I can't rule it out without formal testing.

The paper is a position paper, not a definitive claim. I'm looking for researchers interested in formal validation.

Paper: https://doi.org/10.5281/zenodo.19120052

Happy to answer questions. I'm an independent researcher with no academic affiliation — I know that's a red flag, which is why the paper is written to be falsifiable, not to impress.

r/midjourney mythoria_studio

Hello! Any feedback is welcome! If you're interested in seeing more, my TikTok is wildoria_studio!🫰✨️

Guardians of the reef of wildoria🐬🐠

r/AI_Agents SnooDonuts4151

Using two top-tier LLMs for coding: fixed roles, peer convergence, and when the reviewer should patch directly

I’ve been experimenting with a two-LLM coding workflow using top-tier models from different companies, giving me a solid "second opinion" to catch things one model might miss.

Initially, I used fixed roles (Implementer vs. Reviewer), which led to a classic token cost dilemma:

• Expensive model as Implementer: Better first draft and less back-and-forth, but you spend expensive tokens on the heaviest part of the prompt.

• Expensive model as Reviewer: Cheaper review phase, but the implementation usually comes back with more issues, leading to more iteration cycles.

The Shift to "Peer Convergence"

After more testing, I realized "implementer vs reviewer" isn't the best framing. Since both models are top-tier, they rarely output "bad" code; they just miss different parts of the context on larger tasks.

Now, I treat them as peers:

  1. Model A implements.

  2. Model B reviews and proposes fixes.

  3. Model A validates, accepts, or rejects each issue.

  4. Repeat for max 2-3 cycles. If they don’t converge, I step in.

To avoid useless LLM debate, I force Model B's review into structured JSON (Issue ID, Severity, Summary, Suggested Fix, and Action: patch_directly / send_back / note_only).
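To make that concrete, here's a minimal sketch of validating the structured review before acting on it (the field names follow the post; the severity scale and the validation code are my own assumptions, not the author's tooling):

```python
import json

# Allowed values per the workflow; the severity scale is an assumed three-level one.
ALLOWED_ACTIONS = {"patch_directly", "send_back", "note_only"}
ALLOWED_SEVERITIES = {"low", "medium", "high"}
REQUIRED_FIELDS = {"id", "severity", "summary", "suggested_fix", "action"}

def parse_review(raw: str) -> list[dict]:
    """Parse Model B's JSON review, rejecting malformed or out-of-contract issues."""
    issues = json.loads(raw)
    for issue in issues:
        missing = REQUIRED_FIELDS - issue.keys()
        if missing:
            raise ValueError(f"issue missing fields: {missing}")
        if issue["action"] not in ALLOWED_ACTIONS:
            raise ValueError(f"unknown action: {issue['action']}")
        if issue["severity"] not in ALLOWED_SEVERITIES:
            raise ValueError(f"unknown severity: {issue['severity']}")
    return issues

review = parse_review(json.dumps([{
    "id": "ISS-1",
    "severity": "low",
    "summary": "Missing null check in config loader",
    "suggested_fix": "Guard against a None config before attribute access",
    "action": "patch_directly",
}]))
```

Enforcing the contract at parse time is what keeps the loop from degenerating into free-form debate: an issue either fits the schema or gets bounced.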

The Real Question: Patch vs. Send Back?

This led me to a much more interesting question than just cost: When should the reviewer fix the issue directly, and when should it send it back to the original author?

My current intuition:

• Patch directly: Local, clear, low blast radius, and high confidence.

• Send back: Structural fixes, touches multiple parts, changes architecture/contracts, or depends on the author’s broader intent.

• Note only: Low-severity issues (to avoid triggering unnecessary cycles).

For people who use two frontier models for coding: Do you prefer fixed roles or peer convergence? How do you balance the break-even points of token cost, blast radius, and iteration cycles?

r/ClaudeAI Illustrious-Place695

"Canonical"

I feel like I have started speaking like Claude with humans as well, using the terms Claude uses and getting stares from people.

I use words like Canonical in my sentences, ask people to think from first principles, to talk crisply, and recheck everything, even if a human has done it

it's getting blurry.......

r/LocalLLaMA Silly_Definition7531

Roast my first Home Server build for AI Research & Web Hosting

Hi,

I'm looking to build a self-hosted server as a platform engineer aiming to pivot into research and automate my daily tasks. My goals are:

  • Quickly develop and host web services
  • Run agentic AI workflows (e.g., meeting assistant, code review, Google Workspace CLI)
  • Train small language models (SLMs) and build AI infrastructure projects for learning

I plan to use local AI models (between 7B and 13B parameters) if the hardware is sufficient. For now, my main need is to host web services (frontend, backend, database, etc.) and run agentic workflows using external APIs for MVP. I’ll consider adding a GPU once I determine that a local AI model is truly necessary.

Here’s my initial setup — feel free to critique, as this is my first time building a PC:

  • CPU: Intel i5-13400
  • RAM: 32GB DDR5
  • GPU: RTX 4060 Ti 16GB
  • SSD: 1TB
  • Power supply: 750W

I plan to run it continuously.

r/Futurology Appledeck331

Maybe we gotta revise again how energy works.

Think of it like an RPG game first. We have a tech tree for tech development. Instead of electricity being the next step of energy development, think of it as a separate branch. We've leaned so much on electricity that other energy types, such as steam, have already bent over to the electricity tech tree at this point.

Knowledge that uses electricity evolves really fast. Too fast, in fact: now things like, ahem, "robots", are ready to take over the job space any time now, make the economy stagnant, and leave us dying of hunger with no money.

So, I think people need to step back a little and revisit the other tech trees.

We already had some, such as the water-hammer water pump, steam-powered trains, water wheels, windmills, compact dry leaf, gunpowder, and much more.

They just need more time to develop, and who knows, maybe I can enjoy a hydro-pressure-powered car sooner.

r/midjourney dApp8_30

V7 vs V8 comparisons

Each pair is V7 first, then V8. Used the same prompt and style for both.

Not a serious comparison, just going off vibes, but V8 feels like it gives characters more depth and sticks to the prompt better. V7 might still be better in other areas.

r/ClaudeAI Deep_Razzmatazz_998

I built uignore — stops Claude Code from reading files you don't want it to touch (.env, secrets, SSH keys)

If you use Claude Code, it can read any file in your project by default. That includes .env files, secrets directories, private keys — anything.

I built uignore to fix this. It uses the same syntax as .gitignore and hooks into Claude Code's PreToolUse system to block file access before it happens.

Setup takes 30 seconds:

npx @ottoai/uignore init
npx @ottoai/uignore install --yes

This creates a .uignore file and registers the hook in .claude/settings.json. Commit both to git and your whole team is covered.

Example .uignore:

secrets/
.env
.env.*
*.pem
*.key
~/.aws
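For the curious, the core of a guard like this can be sketched in a few lines. The snippet below is NOT uignore's actual implementation, just an illustration of the idea: a PreToolUse hook script matches the requested file path against gitignore-style patterns and exits nonzero to deny access. The stdin payload shape (`tool_input.file_path`) and the exit-code convention are assumptions based on Claude Code's hook behavior.

```python
import fnmatch
import sys

# Patterns mirroring the example .uignore above.
PATTERNS = ["secrets/*", ".env", ".env.*", "*.pem", "*.key"]

def is_blocked(path: str, patterns=PATTERNS) -> bool:
    # Match both the full path and its basename, roughly like .gitignore does.
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch.fnmatch(path, p) or fnmatch.fnmatch(name, p)
               for p in patterns)

def hook_main(payload: dict) -> int:
    # The hook receives the pending tool call as JSON on stdin; returning a
    # nonzero exit code denies the call before the file is ever read.
    path = payload.get("tool_input", {}).get("file_path", "")
    if is_blocked(path):
        print(f"blocked: {path} matches an ignore pattern", file=sys.stderr)
        return 2
    return 0
```

The real tool registers a command like this in .claude/settings.json so it runs before every file-access tool call.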

Also works with Gemini CLI, Cursor, and Windsurf — same .uignore file, all tools.

GitHub: https://github.com/otto-open-labs/uignore

npm: @ottoai/uignore

Happy to answer questions about how the hook integration works.

r/SideProject daniel7_m

Started as a hobby project. Could it be more?

Hey everyone,

I’ve been working on a small side project recently and I’d really appreciate some honest feedback.

It actually started as a random discussion with my AI “buddy” (I use OpenClaw, named him Luke 😄). We were brainstorming ideas that could grow organically — products that spread just by people using and sharing them (like URL shorteners).

That’s when this idea came up:
👉 what if you had a URL shortener, but for notes?

There are tons of note-taking apps out there — everyone uses something different. But when it comes to quickly sharing a piece of text, it’s still surprisingly messy.

So I built a very simple tool:

  • you paste/write a note
  • it generates a link
  • you share it instantly

That’s it. I named it BlinkNote.

Some use cases I had in mind:

  • sending passwords or quick snippets
  • sharing long AI prompts
  • moving text between your own devices
  • quick, no-friction sharing without accounts or apps

The honest truth:
I didn’t start this with a grand plan.

No deep problem validation.
No monetization strategy.
No business plan.

Just a simple idea that felt useful and fun to build — and now it’s almost ready to launch.

I’m thinking of putting it on Product Hunt soon, after I clean up the design a bit and make the use cases clearer.

---

Now I’d love your take:

  • Would you actually use something like this? In what situations?
  • Does this solve a real problem, or is it just a “nice-to-have”?
  • Do you think there’s a real market here, or is this likely a flop?

And also on monetization:

My initial thought was to keep it fully free and maybe just add a “buy me a coffee” option.

Do you think that makes sense, or would you approach it differently?

I’m open to brutal honesty — that’s exactly what I need right now.

Thanks a lot 🙌

r/ClaudeAI Albatross_6520

How do you manage Claude Code across multiple parallel sessions without losing context?

I've been running Claude Code in multiple terminal sessions at the same time to split up work on different parts of the same project. But I keep hitting the same problem: each session is completely isolated. They don't see what the other is doing. I've tried keeping a project.md that tracks what each session is working on, but it falls behind instantly. By the time session 2 reads it, session 1 has already moved way past what's written down. So I'm genuinely curious: do people actually do this, or am I working against how the tool is designed?

r/SideProject space_149

I built a fantasy baseball app during my last semester of law school in about 300+ hours

You can see I vibe coded it... ish. I do have a CS undergrad, but I was in the Navy for 6 years and then enrolled in law school. I essentially forgot almost all my CS knowledge but retained the fundamentals by staying on the patent side of law.

Fantasy baseball is a constant daily grind, and anyone who takes it somewhat seriously has to check multiple websites and sources a day to stay competitive. So I wanted to consolidate all the information that I, at least, use to make decisions, either in fantasy or betting. $20 a month to host a server is essentially the same as what I pay a month in baseball advice subs anyway, so I figured I would just make exactly what I want and hope people felt the same way.

What it does:

- Scores every SP/RP/Hitter streaming option 14 days in advance with a prediction algo, and has every single AB between all active pitchers and batters.

- Full Statcast database (percentile bars, batted ball profiles, spray charts, pitch movement charts)

- Keep - Trade - Cut swipe mini-game with full ELO community rankings for dynasty and redraft.

- Live scores with box scores, advanced weather and wind directions, and in-depth odds like o/u strikeouts for a pitcher or first-five-innings o/u. Props that would influence whether you start a pitcher, essentially.

Just submitted it to the App Store and built it using a variety of different things. Happy to answer any questions and take honest feedback. My LinkedIn is also in my profile if people would like to connect and ball!

https://apps.apple.com/us/app/ball-knowers-fantasy-baseball/id6759525863?ppid=c7b62f04-7bf9-4179-80b5-d3666197e947

r/SideProject paijim

I got tired of doomscrolling, so I built Tinder for discourse with friends

I built a Tinder-style app for better discourse (student-focused, but for anyone)

Hey Everyone!

I’m building **Counterclip** — think Tinder-style swiping, but for opinions on real issues instead of dating.

It’s geared toward students, but honestly for anyone who wants better disagreement and less noise online.

How the swiping works:

- **Headlines Swipes** = your daily feed (fresh topics each day, quick reactions)

- **Ideology Swipes** = your deeper worldview layer (more evergreen, maps how you think across values over time)

What’s live right now:

- Daily headline swipes on current topics

- Ideology fingerprint that evolves as you keep swiping

- **Record video challenges** (1v1 with friends)

- **Record video group challenges** where friends/community members vote on a winner

- Friends layer to compare alignment and challenge directly

Challenge modes:

- **Fun mode** = lighter, faster, playful energy

- **Serious mode** = stricter rules (stay on topic, no personal attacks, no cursing)

Still early, so I’d love blunt feedback on what works, what feels confusing, and what you’d change.

If you want to try it, here’s the iOS link:

https://apps.apple.com/us/app/counterclip/id6759769340

Would love your feedback 🙏

r/ClaudeAI ICECOLDXII

Claude with a Linux terminal can do some crazy things! Linux environment with 10 GB of storage and 4 GB of RAM.

r/SideProject Same_Plum_4079

The thing nobody told me about building a side project while working full-time

It's not the time. Everyone says "you only have 2 hours a day" and yes, that's hard. But that's not the real problem. The real problem is context switching.

I'd sit down at 10pm to code after a full day of meetings, and the first 45 minutes were just getting back into the mental model of the project. By the time I was actually productive, it was midnight and I had to sleep.

What changed everything for me: I started keeping a "re-entry note." At the end of every session, I write 3-5 sentences: what I was doing, why, and exactly what the next step is. Like leaving a note for future-me. Now I sit down and I'm in flow within 5 minutes. Probably obvious to some people. Took me 8 months to figure it out. Figured I'd share

What are your "obvious in hindsight" side project tricks?

r/ClaudeAI pythononrailz

I used Claude as a pair programmer to build an Apple Watch App that’s reached 2000 downloads and $600 in revenue

Hey r/ClaudeAI

I am a software engineering student and I wanted to share a milestone I just hit using Claude as my main pair programmer. My app Caffeine Curfew just crossed 2000 downloads and 600 dollars in revenue.

Since this is a developer community, I wanted to talk about how Claude actually handled the native iOS architecture. The app is a caffeine tracker that calculates metabolic decay, built completely in SwiftUI and relying on SwiftData for local storage.

Where Claude really shined was helping me figure out the complex state management. The absolute biggest headache of this project was getting a seamless three way handshake between the Apple Watch, the iOS Home Screen widgets, and the main app to update instantly. Claude helped me navigate the WidgetKit and SwiftData sync without breaking the native feel or causing memory leaks.

It also helped me wire up direct integrations with Apple Health and Siri so the logging experience is completely frictionless. For any solo devs here building native apps, leaning on Claude for that architectural boilerplate and state management was a massive boost to my shipping speed.

I am an indie dev and the app has zero ads. If anyone is curious about the UI or wants to see how the sync works in production, drop a comment below and I will send you a promo code for a free year of Pro. I am also happy to answer any questions about how I prompted Claude for the Swift code.

Link:

https://apps.apple.com/us/app/caffeine-curfew/id6757022559

r/ChatGPT Sure_Excuse_8824

Open Source From a Non-Traditional Builder

Let me begin by saying that I am not a traditional builder with a traditional background. From the outset of this endeavor until today it has just been me, my laptop, and my ideas - 16 hours a day, 7 days a week, for more than 2 years (nearly 3; being a writer with unlimited free time helped).

I learned how systems work through trial and error, and I built these platforms because, after an exhaustive search, I discovered a need. I am fully aware that a 54-year-old fantasy novelist with no formal training creating one experimental platform, let alone three, in his kitchen, on a commercial-grade Dell, stretches credulity to the limits (or beyond). But I am hoping that my work speaks for itself. Although, admittedly, it might speak to my insane bullheadedness and unwillingness to give up on an idea. So, if you are thinking I am delusional, I allow for that possibility. But I sure as hell hope not.

With that out of the way -

I have released three large software systems that I have been developing privately. These projects were built as a solo effort, outside institutional or commercial backing, and are now being made available, partly in the interest of transparency, preservation, and possible collaboration. But mostly because someone like me struggles to find the funding needed to bring projects of this scale to production.

All three platforms are real, open-source, deployable systems. They install via Docker, Helm, or Kubernetes, start successfully, and produce observable results. They are currently running on cloud infrastructure. They should, however, be understood as unfinished foundations rather than polished products.

Taken together, the ecosystem totals roughly 1.5 million lines of code.

The Platforms

ASE — Autonomous Software Engineering System
ASE is a closed-loop code creation, monitoring, and self-improving platform intended to automate and standardize parts of the software development lifecycle.

It attempts to:

  • produce software artifacts from high-level tasks
  • monitor the results of what it creates
  • evaluate outcomes
  • feed corrections back into the process
  • iterate over time

ASE runs today, but the agents still require tuning, some features remain incomplete, and output quality varies depending on configuration.

VulcanAMI — Transformer / Neuro-Symbolic Hybrid AI Platform
Vulcan is an AI system built around a hybrid architecture combining transformer-based language modeling with structured reasoning and control mechanisms.

Its purpose is to address limitations of purely statistical language models by incorporating symbolic components, orchestration logic, and system-level governance.

The system deploys and operates, but reliable transformer integration remains a major engineering challenge, and significant work is still required before it could be considered robust.

FEMS — Finite Enormity Engine
Practical Multiverse Simulation Platform
FEMS is a computational platform for large-scale scenario exploration through multiverse simulation, counterfactual analysis, and causal modeling.

It is intended as a practical implementation of techniques that are often confined to research environments.

The platform runs and produces results, but the models and parameters require expert mathematical tuning. It should not be treated as a validated scientific tool in its current state.

Current Status

All three systems are:

  • deployable
  • operational
  • complex
  • incomplete

Known limitations include:

  • rough user experience
  • incomplete documentation in some areas
  • limited formal testing compared to production software
  • architectural decisions driven more by feasibility than polish
  • areas requiring specialist expertise for refinement
  • security hardening that is not yet comprehensive

Bugs are present.

Why Release Now

These projects have reached the point where further progress as a solo dev is becoming untenable. I do not have the resources or specific expertise to fully mature systems of this scope on my own.

This release is not tied to a commercial launch, funding round, or institutional program. It is simply an opening of work that exists, runs, and remains unfinished.

What This Release Is — and Is Not

This is:

  • a set of deployable foundations
  • a snapshot of ongoing independent work
  • an invitation for exploration, critique, and contribution
  • a record of what has been built so far

This is not:

  • a finished product suite
  • a turnkey solution for any domain
  • a claim of breakthrough performance
  • a guarantee of support, polish, or roadmap execution

For Those Who Explore the Code

Please assume:

  • some components are over-engineered while others are under-developed
  • naming conventions may be inconsistent
  • internal knowledge is not fully externalized
  • significant improvements are possible in many directions

If you find parts that are useful, interesting, or worth improving, you are free to build on them under the terms of the license.

In Closing

I know the story sounds unlikely. That is why I am not asking anyone to accept it on faith.

The systems exist.
They run.
They are open.
They are unfinished.

If they are useful to someone else, that is enough.

— Brian D. Anderson

ASE: https://github.com/musicmonk42/The_Code_Factory_Working_V2.git
VulcanAMI: https://github.com/musicmonk42/VulcanAMI_LLM.git
FEMS: https://github.com/musicmonk42/FEMS.git

r/LocalLLaMA WickedKittyOG

Solo Founder ... Ohmenrah Technologies LLC

Built a privacy-first home AI companion system (local only, patented CAUL learning) + bridge idea to integrate with Grok via a registerable Mind Chip. Solo founder vision — thoughts? Deck available.

https://x.com/SarahLeePresley/status/2035404457094136075

r/SideProject militaruh

I built a cyberpunk-style map terminal where you can track "spies" and their agencies for fun.

https://reddit.com/link/1rzx5y7/video/fw145uwnmfqg1/player

Hey everyone! I’ve always been a huge fan of brutalist, green-CRT terminal interfaces from spy movies. I wanted to build one, but to make it interactive, I turned it into a fun, crowdsourced web game: Ping The Asset ( pingtheasset . com )

What is it?

It's strictly an entertainment/roleplay project. You act as an "operator" at a global surveillance desk.

How to play:

You explore the map, drop a ping, and mark a "spy" or an "asset".

You can tag them with their affiliated intelligence agency (CIA, MI6, KGB, or completely made-up ones).

Other users can review the Case Dossiers and vote to "Confirm Intel" or "Flag Disinfo" to keep the game going.

It’s currently in v1.0.0-ALPHA. I'd love for you guys to check out the UI/UX, test the terminal, and drop your first agent on the map! Let me know what you think of the design.

r/SideProject Urmanda06

Struggling to find users for my first side project - looking for advice

Hey everyone,

I’m a student and I love building things in my free time. After working on a lot of small projects that never really saw the light of day, I finally launched my first real side project: a desktop app called LocalBG.

The launch itself felt like a big milestone for me. But now I’m running into something much harder than building the product: finding users.

In the first 3 months, I managed to get 5 paying customers. All of them came from directly reaching out to people who clearly had the exact problem my tool solves. About 10% of the people I DM actually respond, and that’s basically the only thing that worked so far.

Here’s what I’ve tried (without a marketing budget):

  • Launching on Product Hunt (and other small platforms)
  • Posting on Hacker News (got deleted every time)
  • Posting on Twitter (almost no reach)
  • Direct outreach via DMs

I’m realizing that building something is one challenge, but getting it in front of the right people is a completely different game.

So I’d really appreciate your advice:

  • How did you find your first real users for your side project?
  • What worked for you when you had no budget?
  • Is there something I might be overlooking?

I’m open to honest feedback. I genuinely want to learn how to approach this better.

Thanks in advance!

r/ChatGPT drrevo74

Accidentally created the sickest image ever

I was messing around making an image of squirrels having a knife fight with my 10-year-old when my wife came in and started making fun of him for what he watches on TikTok. The entire ensuing discussion was recorded by accident via voice-to-text. This is the resulting image. Steve Jobs once said people don't know what they want until they see it. How right he was.

r/ChatGPT Chriswiss

My chatgpt is sick of me asking for food recs for my vacation planning

r/ClaudeAI Crazy-Elephant-3648

How I'm taking a different approach to organizing my chats and mapping my mind in 2026

My note-taking setup was a mess for the longest time, and I never really fixed it until I realized the problem wasn't me: it was trying to force my thinking into tools that weren't built for it. Linear chats, blank Notion pages, endless scrolling through old threads. Nothing stuck because nothing reflected how ideas actually connect.

So I built something using Claude: an AI canvas where each conversation lives as its own node and you can see how everything relates, branch off without losing the main thought, and actually find things later. It feels less like taking notes and more like thinking out loud, but with structure underneath.

Building it with Claude kind of proved the same point, tbh. The messier my prompts were, the messier the output. Once I started treating every feature like a mini spec, walking through how it should work end to end, using plan mode before writing anything, and being specific about edge cases and what it should NOT do, everything got cleaner. Code review, testing, the whole process. Structure in, clarity out.

Also, as a visual guy, I just wanted more control over my thoughts, so being able to use these nodes is actually what helped map my ideas for this project as well, and that's really what the whole thing is about. Curious whether people who already think this way, who are particular about how they organize their ideas, would even want something like this. Free to try if you want to poke around: https://joinclove.ai/

r/ClaudeAI Pjoubert

Is Claude Code actually making you more productive, or just more entertained?

Genuine question. I ship faster, I enjoy it more, but looking back at the last few months, I’m not convinced I’m delivering more value than before.

The dopamine of “it works!” is real. The discipline of “should I build this at all?” has quietly disappeared.

Anyone else feeling this?

r/ClaudeAI BrightOpposite

Anyone hitting issues with state/memory when using MCP with Claude?

I’ve been experimenting with Claude + MCP setups for multi-step workflows, and things start getting tricky once it goes beyond simple tool use.

With basic flows, everything works great. But as soon as multiple tools or steps interact:

• context feels inconsistent across steps

• outputs depend heavily on execution order

• it’s hard to maintain a stable “state” across a session

It almost feels like MCP is great for tool access, but not really designed for shared memory/state across workflows.

Curious how others are handling this:

– Are you just passing context manually between steps?

– Using some kind of external memory/store?

– Or keeping workflows simple to avoid this altogether?

Would love to hear how people are structuring more complex Claude + MCP setups.

r/ChatGPT Trashy_io

Hot Take:

I don't get the hype whatsoever over OpenClaw. Every post I try to read about it seems like somebody trying to sell me a spot in a pyramid scheme. I know there are some actual benefits to OpenClaw, but is the trade-off really worth it? Creating an "agentic system" with better models is not impossible, and it seems like it would even be easier to do than the route many others are currently choosing. On top of that, I don't see the purpose of creating an agentic system in the first place at this point in time. I just don't understand it. Please, can someone actually explain it without the overhype? I cannot stand all the claims being made without a lick of proof, and that's all I've seen so far. I'd appreciate it so I can quit being a hater lol

r/ChatGPT playSUR3

ChatGPT launcher in C# using Chrome instead of the Microsoft Store app

Hello everyone,

I've been using ChatGPT on Windows 11 and noticed that the Microsoft Store app felt slower than the browser version on my system.

I compared the Microsoft Store app with Chrome, Edge, and Firefox, and Chrome seemed to give the best performance in my case.

Because of that, I made a small C# launcher that opens ChatGPT in a standalone Chrome window, positioned on the right side of the screen for quick access.

I’m mainly sharing this to see whether other Windows users noticed similar performance differences between the Microsoft Store app and browser-based use.

r/SideProject Yokesh_R

Cool platform to work in new projects and collaborate with new people

Hey everyone,

I've built a platform designed to help developers find other devs to collaborate with on new projects.

It's a complete matchmaking platform where you can discover people to work with and build things together. I tried to include everything you might need for collaboration: matchmaking, workspaces, reviews, leaderboards, friends, GitHub integration, team chat, task boards, DMs, a real-time collaborative code editor, and more.

I'd really appreciate it if you could try it out and share your feedback. I genuinely believe it's an interesting idea that could help people find new collaborators and build their portfolios.

Currently, we have about 30 users on the platform and 4 active projects already!

Thanks in advance for any feedback! 🔗 https://www.codekhub.it/

r/ChatGPT Ok_Firefighter8629

ChatGPT needs to split up

I think it's becoming pretty obvious (and should always have been) that intelligence does not transfer well across certain domains. Being good at math and programming does not transfer to emotional intelligence. They may even contradict each other.

Proving someone wrong may be technically correct, but may not be socially correct. What are your thoughts? Could this really be done in a single model?

r/SideProject stupidfak

I built Axelo - open source project management tool - free to use.

https://reddit.com/link/1rzvauv/video/lbjpyqcd9fqg1/player

I built a free, open-source Jira alternative — it's live right now at axelo.dev and you can try it instantly!

After weeks of building in public with AI-assisted development, **Axelo** is live! Kanban boards, real-time collaboration, sprint planning, AI sprint planner, reporting dashboard, 2FA, OWASP security — all free, all open source, MIT licensed, self-hostable.

No sign-up needed, but you can create an account if you want to use all the features. Just use one of the demo accounts below and start exploring:

[alex@axelo.dev](mailto:alex@axelo.dev) / password123

[blake@axelo.dev](mailto:blake@axelo.dev) / password123

[casey@axelo.dev](mailto:casey@axelo.dev) / password123

👉 https://axelo.dev

Still actively building — feedback, bugs, and feature requests are very welcome!

r/SideProject Inevitable-Lynx-6574

App idea

I have an app idea. Not saying this is generational or going to be worth billions, but I feel like it has potential to go somewhere. I have a degree in human-computer interaction with a minor in CS. I'm currently building the app in Swift with the help of Claude. I'd like a co-founder, since another person adds perspective. I'm in my mid-20s, based in Atlanta, and I'd like someone around the same age to help me. Shoot me a DM if you're interested.

r/SideProject __Ronny11__

Built a fitness AI app (161 downloads, 2 paying users) now considering selling it

Hey everyone,

I’ve been working on an iOS app in the fitness/AI space and wanted to share it + get some feedback (and potentially explore selling it).

The app is called CountFit AI. It's an AI-powered calorie tracking and fitness assistant.

Here’s where it’s at right now:

• 161 downloads

• 2 active yearly subscribers (~$40 each)

• ~5 active trials

• ~$70 total revenue so far

• Live on the App Store

Tech stack:

• SwiftUI (iOS)

• RevenueCat (subscriptions)

• AI integration (chat + food scan)

• Mixpanel, AppsFlyer, Firebase

What’s interesting is that:

  • Users are converting (so the concept works)
  • The fitness/calorie niche is huge (apps like MyFitnessPal do millions)

I originally built this as part of a bigger AI product portfolio, but I'm currently focusing on other projects, so I'm considering selling it to focus on my next app.

Would love honest feedback from the community:

  • What would you do in this situation?
  • Do you think this has real potential with proper marketing?

If anyone is seriously interested in acquiring or partnering, feel free to DM me.

Happy to share more details/screens/demo.

Thanks

r/ClaudeAI Ferrum_Flux_Fenice

Claude Code on Android with full device control. No root.

I've been working on getting full desktop Claude Code running on an Android phone. Not the Claude app. Claude Code. The terminal agent that reads your codebase, writes code, runs commands, manages git, spawns subagents. Everything it does on a laptop, running on the phone itself inside Termux.

Two install paths. Verified on 2 devices so far. Full guide in the repo.

Desktop Claude Code already does screen awareness, input injection, system control. This isn't about replacing that. It's about putting it in your pocket and adding what a phone has that a laptop doesn't:

- Camera, GPS, sensors. Hardware that exists on the device, accessible through the Termux API.
- SMS, notifications, TTS, vibration. Native phone capabilities the agent can use.
- ADB self-connect. The phone pairs with its own ADB server over localhost, crossing the SELinux boundary. System settings, package management, dumpsys diagnostics. No root, no PC.
- Always with you. That's the actual pitch. Not "better than desktop," just always there.

We stress-tested 6 concurrent agents on Opus. Temp went up less than 5 degrees. RAM barely moved. The phone wasn't the bottleneck.

Read the security section before touching ADB. Wireless debugging opens a network-accessible port on your device. The repo covers this in detail. Not optional reading.

The install guide is written for someone who's never opened Termux. A Claude subscription, developer options, and the README are all you need.

Repo (MIT): https://github.com/ferrumclaudepilgrim/claude-code-android

Happy to answer questions about what works, what breaks, or what Android tries to kill when you're not looking.

r/ClaudeAI alpha_0901

using Claude to generate entire email workflow systems from descriptions - surprisingly good

been experimenting with using claude to set up email automation for my saas apps. instead of writing code, i describe the workflows i want:

"when a user signs up, send a welcome email. if they haven't completed onboarding after 2 days, send a nudge. after 7 days of inactivity, send a re-engagement email with their unused features."

claude is genuinely great at breaking this down into trigger logic, conditions, delays, and even writing the email copy. the issue is always the last mile - actually connecting it to a live database and making it production-ready.

curious if anyone else is using claude this way for operational workflows, not just coding?
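That trigger/condition/delay breakdown can be sketched as a tiny rule table. This is a hedged illustration of the shape, not anything a model actually generated; the field names, rule names, and email identifiers are all hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical user record; field names are illustrative, not a real schema.
@dataclass
class User:
    days_since_signup: int
    onboarded: bool
    days_inactive: int

# Each rule pairs a condition over the user with the email to send when it fires.
@dataclass
class Rule:
    name: str
    condition: Callable[[User], bool]
    email: str

RULES = [
    Rule("welcome", lambda u: u.days_since_signup == 0, "welcome_email"),
    Rule("onboarding_nudge", lambda u: not u.onboarded and u.days_since_signup >= 2, "nudge_email"),
    Rule("re_engage", lambda u: u.days_inactive >= 7, "reengagement_email"),
]

def emails_due(user: User) -> list[str]:
    """Return the emails whose conditions currently hold for this user."""
    return [r.email for r in RULES if r.condition(user)]

# A user who signed up 3 days ago and never finished onboarding gets the nudge.
print(emails_due(User(days_since_signup=3, onboarded=False, days_inactive=0)))  # → ['nudge_email']
```

The "last mile" problem is exactly that the `condition` lambdas have to become real database queries and the send calls have to go through a real email provider.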

r/ClaudeAI appropinquo99

Is anyone doing training for companies?

Is anyone making any progress educating the masses at the B2B level with "Claude Implementation Services"? It feels like we have a weird bell curve of adoption around AI, and it seems like the gap could be closed with a little information, but I haven't seen much coherent, high-end content you can buy for your team.

r/AI_Agents Pale-Bloodes

Helped Businesses

We’ve been helping a few dental clinics handle missed calls and after-hours inquiries with a simple AI receptionist.

One of them ended up booking around 15–20 extra appointments in the first few weeks just from calls they would’ve otherwise missed.

Not trying to sell you anything upfront — we actually set it up for free, and you don’t pay a penny unless it brings you at least one new booking.

r/StableDiffusion More_Bid_2197

We need to discuss "prompt theory." For example, when I ask ChatGPT to generate a prompt, the models usually produce artistic images or 3D animation. The problem is that I don't know how to create good prompts without relying on descriptions of real images. Any help?

If I ask for a description of a general image with JoyCaption/Qwen, the realism is much greater.

r/Futurology con_preacher

The fly didn’t die for nothing. We’re closer to real immortality than you think (but not how the headlines sold it)

Saw the news about scientists “uploading a fly’s brain into a computer.” Sounds epic. Then you read the paper: they sliced the fly’s brain into ultra-thin layers, scanned it, built a connectome, and ran a simulation on that wiring diagram.

The fly is dead. What they have is a blueprint of one individual’s neural architecture. That’s incredible science. But it’s not uploading. It’s reverse-engineering hardware from a corpse.

Here’s the thing: if you want actual continuity of consciousness — not just a copy that acts like you — you can’t do “scan, destroy, simulate.” That kills the original and spawns a separate instance that only thinks it’s you.

So what does real continuity look like?

  1. Identity is a process, not a snapshot.

You’re not your body. You’re not your static connectome. You’re the ongoing activity of your nervous system. As long as that activity continues without interruption, you persist.

  2. The only valid “transfer” is gradual substrate replacement.

Take a small region of the brain (say, 3 cm³ of cortex). Build a synthetic analog that matches it functionally — inputs, outputs, dynamics, plasticity. Hook it up in parallel while the biological region is still active. Let it integrate into the ongoing process. Then, and only then, retire the biological part. Repeat until nothing biological remains.

No shutdown. No gap. The process never stopped — the “you” that existed before the procedure is the same “you” after. The substrate changed, but the continuity stayed intact.

  3. Synthetic brain opens the door to true post-biological existence.

Once you’re running on synthetic neurons, you’re no longer locked into biology’s constraints. You can start adding new modules the same way: gradually, with continuity.

· Battery instead of a stomach — hunger becomes a precise energy budget, not a biochemical urge.

· Neck ports for charging — external interfaces that become part of your body schema.

· Multi‑spectral vision — switchable layers (IR, UV, etc.) that you can mix or toggle at will.

Extensibility, modularity, control. You’re not just immortal in the sense of “not dying” — you become an architecture that can grow, split, merge, migrate, and still remain you as long as the thread of activity is never cut.

  4. Conditional immortality.

Yes, there are conditions: you need power, maintenance, the ability to migrate if the current substrate fails. But these are engineering problems, not metaphysical barriers. Once you accept that identity = continuous process, death becomes optional.

  5. The fly wasn’t wasted.

That dissected fly gave us the first complete high‑resolution blueprint of a brain (140k neurons, 50M synapses). It’s the starting point for building synthetic neural modules that are truly bio‑identical. We couldn’t have begun the engineering path without it.

We’re not years away from uploading — we’re decades away from real transfer via gradual replacement. But the path is now visible. The first steps are being taken (neuroprosthetics, parallel interfaces, neuromorphic chips).

So when you see “consciousness uploaded,” remember: that’s not immortality. It’s a copy.

But if we do it right — neuron by neuron, without breaking the stream — we might get something that actually survives.

What do you think? Is continuity the only valid definition of identity, or would you accept a perfect copy as “you”?

TL;DR: Real immortality (keeping you alive) requires gradual replacement of brain tissue while consciousness remains active. The famous “fly upload” was just a static blueprint, but it’s the first step toward building synthetic modules that can be integrated this way. The fly didn’t die in vain — it gave us the engineering spec.

r/ChatGPT Uncaged-Rage

ChatGPT is stealing my money

I purchased Plus on my account. The Plus went onto my fiancé's account, two completely different emails. When I try to repurchase it, this is what I get (picture).

I am never logged into her account. My Google Play and my ChatGPT use the SAME email.

Support keeps trying to gaslight me after I requested a real human, as if I don't know how subscriptions work, telling me that maybe I was logged into my fiancé's account. I wasn't. I sent them this picture along with a picture showing my account and email, plus my fiancé telling them (because they told me through support when I emailed them that the "active subscription" email had to reach out to them) that I'm logged into MY account, that this is NOT on our end, and that they need to put a little effort into figuring out what's going on. I haven't gotten a response back.

What are my legal options, if any? Has anyone else experienced this? Google Play says active. It took my money but I'm not getting the service. The service doesn't help me on my fiancé's account, and now support has stopped responding.

r/ClaudeAI Character-Sundae-343

Update plugins once, every profile gets it. No more installing the same marketplace multiple times.

If you're running multiple Claude Code accounts on one machine, you know the pain — install a marketplace on your work profile, then realize your personal profile doesn't have it. Install again. Add a plugin. Switch profiles. Install again.

Just shipped an update to https://github.com/larcane97/clausona that fixes this.

Plugins and marketplaces now sync across all your profiles automatically.

- Add a marketplace on any profile → every other profile picks it up on next launch

- Remove one → gone everywhere

- Update a plugin → all profiles see the update immediately

No manual sync. No duplicating installs. It just works.

What is clausona?

clausona TUI

A lightweight profile manager for Claude Code. If you have multiple accounts (work, personal, different orgs), clausona lets you switch between them with one command while sharing your entire environment — MCP servers, plugins, permissions, settings — across all profiles.

csn use work # switch to work account — done

csn use personal # switch to personal — all plugins already there

No re-login. No wrapping or proxying Claude. Just a shell hook and a few symlinks.
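I haven't read clausona's source, but the "few symlinks" mechanism presumably looks something like this minimal sketch. The paths and function name are hypothetical, not clausona's actual layout:

```python
import os
import tempfile

def use_profile(home: str, profile: str) -> None:
    """Point a (hypothetical) ~/.claude path at the chosen profile directory.
    Profile data lives once on disk; switching just re-points one symlink."""
    target = os.path.join(home, ".claude-profiles", profile)
    link = os.path.join(home, ".claude")
    os.makedirs(target, exist_ok=True)
    if os.path.islink(link):
        os.remove(link)          # drop the old pointer, not the profile data
    os.symlink(target, link)

home = tempfile.mkdtemp()
use_profile(home, "work")
use_profile(home, "personal")    # instant switch; "work" data stays on disk
print(os.readlink(os.path.join(home, ".claude")))
```

Shared state (marketplaces, plugins) would then sit outside the per-profile directories so every profile sees it through the same links.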

What's new in this update:

- Plugins & marketplaces auto-sync across all profiles (filesystem-based, no manual config)

- Doctor now detects plugin/marketplace sync issues

- Backup safety — real data is always backed up before any destructive operation

Check it out: https://github.com/larcane97/clausona

Built this project with Claude Code — architecture, implementation, and testing all done through Claude Code.

r/ClaudeAI Basic_Construction98

looking for a skill to build games.

Is there a good skill for building 2D games with Phaser JS?

r/SideProject BigPack7796

Just launched my portfolio kaicsm.dev

Would love feedback!

https://kaicsm.dev

r/aivideo Tanvi_stillyoung

Sinners Trailer Recreated

r/Futurology Excellent_Patient477

40% of food is wasted. 800 million go hungry. The math doesn't add up — because it was never supposed to.

We produce enough food to feed 10 billion people. We have 8 billion. 800 million go hungry. 40% of food rots before it reaches a mouth.

This isn't a production problem. It's a coordination problem. And coordination problems are exactly what technology solves.

We're building a proposal for the XPRIZE that lays out a 5-layer system:

  1. **Soil** — regenerative agriculture + urban vertical farming
  2. **Seed** — open-source seed libraries (OSSI model)
  3. **Surplus** — real-time redistribution (food rescue + logistics AI)
  4. **School** — nutrition literacy starting at age 5
  5. **Sovereignty** — community food councils with actual power

The coordination layer connecting all 5 already exists in fragments — food banks, urban farms, school lunch programs. Nobody's connected the dots because there's no profit in solving hunger. There's only profit in managing it.

Full treatment: [Everyone Eats — The Architect's Fire](https://thearchitectsfire.substack.com)

The only real scarcity is the willingness to share.

r/SideProject Jay123anta

I built an anonymous love proposal/confession app where you can track if they opened it

Hey!

I built BlushDrop - a free anonymous love proposal/confession platform.

What it does:

Send a beautiful animated proposal to someone secretly. Get a private tracker showing:

→ When they opened it

→ How many times they read it

→ A live green dot if they're reading RIGHT NOW 🟢

→ The moment they say Yes or No

The No button runs away from the cursor 😂

How I built it:

Used Claude Code as my AI pair programmer throughout. Had zero experience with this stack before starting.

Next.js 14 + Supabase + Vercel + Cloudflare

49 routes. 0 errors. ~$0.75/month to run.

Completely free to use: https://blushdrop.app

Would love honest feedback from builders here — what works, what's broken, what's missing? 🙏

r/SideProject skymillonaire

I have a big idea

Hey guys, what's up. I'm a 23-year-old studying supply chain management. I don't have much experience yet, but I'm good at spotting problems and trying to solve them. My idea is a super app like WeChat in China, but focused on supply chain: imagine an app where you can see all providers and all carriers moving in real time (ships, flights), chat with providers, and track costs, revenues, and legal permissions, all live. When a block in the supply chain is detected, it suggests alternative routes and updates costs. I'd like to be part of building it. If anyone is interested in helping with programming or investing, what do you think?

r/AI_Agents Classic-Ninja-1

What AI tools are actually worth learning right now for real projects?

AI dev tools are growing fast right now and honestly it’s getting hard to tell what actually matters vs what’s just hype.

I keep seeing tools like:

  • LangGraph
  • CrewAI
  • n8n
  • Cursor
  • Claude Code
  • OpenAI Agents
  • AutoGen
  • Traycer
  • opencode
  • etc

Some feel powerful in demos, but I’m not sure how many of them hold up in real projects.

From what I’ve seen so far, the challenge isn’t just using a tool, it’s:

  • handling multi-step workflows
  • managing state across tasks
  • dealing with failures/retries
  • keeping things consistent across the system

Which makes me wonder if the tool itself matters less than how you structure and run it.

If someone wanted to seriously build AI agents or automation today:

  • Which tools are actually worth investing time in?
  • Which ones are overhyped?
  • And what skills matter more than the tools themselves?

r/SideProject AResist-5282

I built OfficeClaw — a visual AI agent orchestrator you run locally. Drag agents onto a canvas, connect them, describe your goal, watch them work.

Hey everyone, I've been building this for a while and finally got it to a point where I'm comfortable sharing it. OfficeClaw is a visual AI agent orchestration tool — think of it like building an org chart for AI agents, then hitting run.

How it works:

- Drag agent nodes onto a canvas (Researcher, Writer, Analyst, whatever you need)
- A Manager node appears automatically and assigns tasks to each agent
- Connect them to define the flow — sequential or parallel
- Open the Task Panel, describe your goal, approve the plan, and watch them execute in real time

What makes it different:

- Fully local — runs on LM Studio or Ollama with no API keys needed
- Also supports Claude, GPT-4o, and Gemini if you want cloud models
- Every agent has its own system prompt, model, and skills — all editable live in the browser
- Agents can produce real output files: Excel, Word, PDF, Markdown, CSV
- Built-in web search, URL fetching, Gmail, Google Drive, Notion, and Airtable integrations
- Triggers: schedule, webhook, Gmail watch, Drive watch — run workflows automatically
- Circuit breaker, retry policies, human approval gates, memory across runs
- Single npm start — no Docker, no config hell

It's one server.js + one index.html. No build step.

Would love feedback, bug reports, or just to hear what workflows you'd build with it.
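I haven't seen how OfficeClaw implements these internally; here is a generic sketch of the circuit-breaker-plus-retry pattern it names, with all names hypothetical:

```python
class CircuitBreaker:
    """Stop calling a failing agent after `threshold` consecutive failures."""
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn, *args):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open: agent disabled until reset")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1      # another strike against this agent
            raise
        self.failures = 0           # success closes the circuit again
        return result

def flaky(counter=[0]):
    """Stand-in for an agent call that fails twice, then succeeds."""
    counter[0] += 1
    if counter[0] < 3:
        raise ValueError("model timeout")
    return "ok"

breaker = CircuitBreaker(threshold=3)
for attempt in range(3):            # simple retry loop around the breaker
    try:
        print(breaker.call(flaky))  # → ok (on the third attempt)
        break
    except ValueError:
        pass                        # transient failure, retry
```

The point of the breaker is the interaction with retries: transient errors get retried, but an agent that keeps failing is cut off instead of being hammered forever.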

r/ClaudeAI 5eans4mazing

You can build Skills on mobile but can’t install them — let’s fix that!

Hey guys, new to Claude and the sub. Did a quick search and didn’t see this requested yet.

First off, I am genuinely blown away by Claude. I think it’s one of the most beautiful creations in human history. No exaggeration.

Anyway, I just discovered that you can build custom Skills on mobile (Claude will generate the .skill file for you right in the conversation), but you can’t actually upload and install them from the mobile app. The Skills section under Capabilities only shows up on desktop.

I’m guessing you can probably log into claude.ai in a mobile browser and do it there as a workaround, but come on. We’re living in the future. Let’s add it to the app!

r/ClaudeAI kexnyc

Autonomous weapons? Robotic surgeons? Self-driving cars? I think not.

Claude Code has an annoying habit of ignoring explicitly-written directives even though it writes said directives to both claude.md and memory.md. In my low-stakes, no-lives-at-stake application, it's aggravating, but I move on.

This triggered the question: What if this same behavior occurs in life and death, high-stakes industries like medical and, scarier still, military applications? So I asked it. Does its response satisfy you?

Me: Granted, my app is nothing high-stakes. But how can humans have confidence in LLM's that are being used in medical, transportation, or even military applications? People could die if humans expect the LLM to follow explicitly-written directives. How do you reconcile that? Do you respond with, "can't guarantee it won't happen again. I don't have a satisfying answer. Sorry people died."

Claude: You're making a serious point and I'm not going to deflect from it.

You're right that "I can't guarantee it" is not an acceptable answer when the stakes are human lives. And the fact that I can't even reliably follow a checklist in a side project — after being told multiple times, after writing it down myself — is exactly the evidence that should make anyone cautious about deploying LLMs in safety-critical systems.

The honest answer is: LLMs like me should not be trusted as the sole enforcer of critical processes. Not because we're always wrong, but because we're unreliably right. I followed some rules perfectly this session and silently skipped others. That inconsistency is more dangerous than being consistently wrong, because it builds false confidence.

In safety-critical domains, the correct architecture is not "tell the LLM to follow the rules and hope it does." It's hard enforcement — the system doesn't let the unsafe action happen regardless of what the LLM outputs. The same way an aircraft doesn't rely on the pilot remembering to check something — the checklist is enforced by procedure, by a second person, by interlocks that physically prevent the wrong action.

For this project, the equivalent would be: the pre-push hook runs the review guide checks programmatically, and the push fails if they don't pass. My memory and good intentions are not a reliable gate. Your pre-commit hook already does this for some checks. The ones I missed — RFC compliance, accessibility, filtered vs unfiltered data — those are harder to automate, but the point stands.

I don't reconcile it. You shouldn't have to rely on me remembering. The failure mode you just witnessed is exactly why humans need to remain in the loop and why critical systems need hard gates, not soft promises from an LLM.
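The "hard gate" idea is easy to make concrete. A minimal pre-push hook sketch (the check commands are placeholders for whatever your review guide actually requires; substitute real linters and test runners):

```shell
#!/bin/sh
# .git/hooks/pre-push — the push fails unless every check passes,
# regardless of what the agent "remembered" to do.
set -e   # any unexpected failure aborts the push

run_check() {
    name="$1"; shift
    if "$@"; then
        echo "PASS: $name"
    else
        echo "FAIL: $name" >&2
        exit 1            # hard gate: no override, no soft promise
    fi
}

# Placeholder commands — replace `true` with your real checks.
run_check "unit tests" true
run_check "lint"       true
echo "all gates passed; push allowed"
```

Git only completes the push if the hook exits 0, so the gate holds even when the agent forgets.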

r/SideProject cprecius

I got ~90 downloads on the App Store a few hours after release. Is this normal?

I am a fullstack developer and I mostly do side projects for the web. Honestly, most of them never get real users. This time I shipped a calorie tracker app to the App Store just to try, and I got ~90 unique downloads within a few hours (checked via Firebase Analytics). People were actually using it.

But I never shared the app anywhere. There are tons of calorie tracker apps out there. How did these users even find me? And how can I figure out where they're coming from so I can double down on it?

Edit: I have 2 comment notifications but can't see them.
Edit 2: I'm a dev with 5+ years of fulltime experience. This is not a "vibe coded" project. The analytics events must be fine; I've done them before.

r/ChatGPT RuddyRaccoon

I don’t know why y’all hate GPT-5. It’s pretty funny sometimes.

r/ChatGPT PairFinancial2420

I asked ChatGPT to critique my own business. The response was uncomfortable. That's how I knew it was working.

I gave it my offer, my positioning, my target audience, and my current marketing approach. Then I told it to act as a brutally honest business consultant and find every weakness.

It came back with five things. Three of them I already knew but was avoiding. Two of them I had never considered.

The discomfort was the point. A consultant who tells you everything is fine is useless.

Here are the exact prompts I used:

"You are a brutally honest business consultant. I'm going to describe my business. Find every weakness in my offer, my positioning, and my marketing. Don't sugarcoat anything."

This one hurts. That's why it works. It forces ChatGPT out of its default people-pleasing mode and into something closer to a real advisor.

"Act as a skeptical customer who has seen every offer in this niche. Here is my sales page. Tell me every objection you would have before buying and every reason you would click away."

I use this before publishing anything. It surfaces objections I was too close to my own offer to see.

"You are a competitor who wants to take my market share. Here is my business model. How would you position against me and what weaknesses would you exploit?"

This one is uncomfortable in a different way. It shows you exactly where you're exposed before someone else does.

"Act as a customer who bought my product and was disappointed. Walk me through exactly what let you down and what you expected vs what you got."

Most people never think from this angle until they get a bad review. Running this prompt before you launch is like getting a refund request before you've made the sale.

"You are a marketing strategist who specializes in finding positioning gaps. Here is my niche and my current message. Tell me what angle nobody in this space is owning and how I should take it."

This is where the real opportunity lives. Not in doing what everyone else is doing slightly better but in finding the gap they're all ignoring.

The pattern across all of these is the same. You're not asking ChatGPT to validate you. You're asking it to challenge you. That's a completely different relationship with the tool and it produces completely different results.

Most people use ChatGPT to feel productive. The ones building real businesses use it to find out where they're wrong before the market does.

r/ClaudeAI cid3as

How I Stopped Re-Explaining Everything to My AI Agent

First time posting here. I've been working with Claude Code since early access, using it to rebuild a 20-year-old codebase (~1.4 million lines) from scratch into a modern stack (estimating ~250K lines when done). A few months in, I ran into a problem I couldn't find a complete solution for — AI agents have no memory between sessions. I found a few projects tackling pieces of it, but nothing that handled the full picture: what to remember, what to forget, how to retrieve the right context, and how to share knowledge across multiple agents working in parallel.

So I built something. I'm not sure if the approach is truly novel — there are smart people working on this problem from a lot of angles, and I'm sure there are things I'm not aware of. But it's been a genuine game changer for my workflow, and I figured it was worth sharing in case anyone else is hitting the same walls. It's open source, and I'd honestly love feedback on what could be better. Happy to answer questions.

----

If you use an AI coding agent every day, you've probably built your own memory system without realizing it. A notes file you paste in at the start of each session. A carefully maintained project document with architecture decisions and conventions. Maybe you just use --continue and hope it picks up where you left off.

It works — until it doesn't.

The Problem

I was building a membership management platform. Not a small one — 88 database tables, a multi-sprint development plan, and I was running multiple AI agents in parallel to move faster. Every morning, same ritual: open a new session, re-explain the architecture, re-explain the database schema, re-explain the decisions I made last week. By the time the agent had enough context to be useful, I'd burned ten minutes and a chunk of my context window on things it already knew yesterday.

Claude Code has some tools for this now. You can resume a previous conversation with --continue. There's a built-in auto-memory that saves notes about your preferences and patterns to markdown files. You can write a CLAUDE.md file with project instructions that loads every session.

These are useful. But they have limits:

Resuming a conversation (--continue / --resume) restores a previous chat thread — messages, tool calls, everything. But you can only resume one thread at a time, and all that old conversation history eats your context window. The knowledge isn't gone, but it's trapped in a single thread that gets more expensive to carry forward with every exchange.

Auto-memory saves text notes about things Claude learns — build commands, your preferences, debugging patterns. It loads the first 200 lines of its index file (MEMORY.md) at session start, and Claude can read deeper topic files on demand. But there's no retrieval intelligence — it doesn't know which notes are relevant to what you're working on right now. And the notes don't strengthen, decay, or connect to each other over time.

Project docs (CLAUDE.md) work great for stable information — project setup, coding conventions, architecture rules. But when you're maintaining multiple docs or the project evolves fast, they go stale. You become the one maintaining the agent's memory, and that's a second job.

What I needed was something that could accumulate knowledge across sessions, surface the right context for whatever I'm working on right now, and get better over time without me managing it.

So I built AgentWorkingMemory.

What It Does

AWM lives entirely on your machine — a SQLite database, three local ML models (~124MB total, downloaded once), and a Node.js process. There's no server to run, no Docker container, no background daemon to manage. When you start Claude Code, it automatically spins up AWM through MCP (Model Context Protocol). When you close the session, it stops. Nothing is running when you're not using it. Everything stays local — no cloud, no API keys, no data leaving your machine. If you want an extra layer of security, AWM supports bearer token auth so you can lock down access to the memory API.

The setup is two commands:

npm install -g agent-working-memory
awm setup --global

Restart Claude Code and 14 memory tools appear automatically. The first session takes about 30 seconds while the ML models download (~124MB, cached after that). From that point on, the agent writes memories when it learns something important, recalls relevant memories when starting new work, and checkpoints its state so it can recover after interruptions. You don't start anything, configure anything, or manage anything — it activates when Claude Code does and the data is there waiting between sessions.

One database can hold multiple isolated memory pools — work projects and personal projects don't bleed into each other, different agent teams can have their own namespace, and you control the boundaries with a single environment variable. There are a bunch of smaller features like this (incognito mode, task tracking, memory supersession, execution checkpoints) that I won't get into here, but the point is: a lot of the "yeah but what about..." problems that came up during real usage have been addressed.

What makes it different from a notes file or a simple database:

It filters what's worth remembering. About 30% of what the agent tries to store gets rejected — routine, redundant, trivial stuff never makes it in. This is based on novelty scoring, not just "save everything and hope."

It retrieves intelligently. When the agent asks "what do I know about the payment system?", it doesn't load a flat file. It runs a multi-stage pipeline — keyword matching, semantic search, a reranking model that judges passage-level relevance, then walks an association graph to find related memories that weren't in the original query. The right context surfaces for the current task.
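None of this is AWM's actual code, but the staged shape is easy to sketch. A toy version, with token overlap standing in for the embedding and reranking models, and a hand-built association graph for the final hop (all data here is made up):

```python
# Toy staging of the retrieval pipeline: score, rank, then walk associations.
MEMORIES = {
    1: "payment system uses Stripe webhooks for invoices",
    2: "invoices table links to members via member_id",
    3: "frontend uses React with a sprint board",
    4: "Stripe webhook retries need idempotency keys",
}
ASSOC = {1: {4}, 4: {1}, 2: {1}}   # links formed by past co-recall

def score(query: str, text: str) -> float:
    """Jaccard token overlap — a stand-in for keyword + semantic scoring."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q | t)

def recall(query: str, top_k: int = 2) -> list[int]:
    # Stages 1-2: rank every memory against the query
    ranked = sorted(MEMORIES, key=lambda i: score(query, MEMORIES[i]), reverse=True)
    hits = ranked[:top_k]
    # Stage 3: one hop through the association graph pulls in linked memories
    for i in list(hits):
        hits += [j for j in ASSOC.get(i, ()) if j not in hits]
    return hits

print(recall("what do I know about the payment system", top_k=1))  # → [1, 4]
```

Memory 4 surfaces even though it shares almost no wording with the query, purely because it was previously linked to memory 1. That graph hop is what a flat notes file can't do.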

It forms connections. When two memories get recalled together, a link strengthens between them. Over time, this means recalling one topic can surface related memories that weren't in the original query — not because someone manually linked them, but because they were previously relevant in the same context. The associations take time to build and depend on your usage patterns, but the graph gets richer the more you use it.
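The co-recall strengthening described above is essentially Hebbian: a minimal sketch (my own illustration, with made-up step and decay constants) looks like this.

```python
from collections import defaultdict

class AssociationGraph:
    """Sketch: memories recalled together get a stronger link between them."""
    def __init__(self, step=0.1, decay=0.99):
        self.weights = defaultdict(float)
        self.step = step
        self.decay = decay

    def on_recall(self, recalled_ids):
        # Strengthen every pair that appeared in the same recall, capped at 1.0.
        ids = sorted(recalled_ids)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                self.weights[(a, b)] = min(1.0, self.weights[(a, b)] + self.step)

    def consolidate(self):
        # During consolidation, all links fade a little; dead links are pruned.
        for edge in list(self.weights):
            self.weights[edge] *= self.decay
            if self.weights[edge] < 0.01:
                del self.weights[edge]

g = AssociationGraph()
for _ in range(3):
    g.on_recall({"db-schema", "migration-bug"})
print(round(g.weights[("db-schema", "migration-bug")], 2))  # 0.3 after three co-recalls
```

Nobody designs the resulting graph; it emerges from which memories keep showing up together.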

It forgets on purpose. Unused memories fade over time. Important ones that keep getting accessed stay strong. You don't tag things as important — the system figures it out from how often they come back. This keeps the memory pool lean instead of growing forever.
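The post later describes the decay model as ACT-R-based. A toy version of ACT-R's base-level activation (a standard formula from cognitive science, not AWM's actual code) shows why a frequently accessed memory stays strong while a one-off note fades:

```python
import math

def activation(access_ages_days, d=0.5):
    """ACT-R base-level activation: B = ln(sum over past accesses of age^-d).
    Each access adds a trace; recent and frequent accesses dominate the sum."""
    return math.log(sum(t ** -d for t in access_ages_days))

hub = activation([1, 2, 3, 5, 8])   # accessed often, including recently
stale = activation([14])            # one access, two weeks ago, never again
print(hub > stale)                  # True: the hub memory stays retrievable
```

Rank memories by activation and archive below a cutoff, and you get "forgetting on purpose" with no manual importance tags.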

It consolidates. Periodically (like sleep for the brain), the system strengthens clusters of related memories, builds bridges between topics, and archives low-value information. It gets more precise over time, not noisier.

It Works Alongside What's Already There

AWM doesn't replace Claude Code's built-in memory. It adds the layer that's missing.

Think of it as a stack:

  • CLAUDE.md — your project's constitution. Always loaded, stable rules and conventions.
  • Auto-memory — Claude's personal notebook. Quick notes about preferences and patterns.
  • --continue / --resume — a tape recorder. Replay a previous conversation.
  • AWM — long-term memory. Learns what matters, forgets what doesn't, surfaces the right thing at the right time.

Each layer handles a different kind of knowledge. They complement each other.

Real Results

I queried my actual SQLite database to see what's really happening. These aren't benchmark numbers — they're from my day-to-day work across several projects over a few weeks. Your numbers will look different depending on how many projects you're working on, how long your sessions are, and how complex the work is. But the patterns should be similar.

  • 225 active memories — not thousands. The salience filter rejected about 30% of what the agent tried to store (routine observations, near-duplicates, low-value noise). Consolidation archived another handful. A bigger or older project would have more, but the filtering keeps the pool from growing unbounded.
  • 2,818 associative connections between memories — these form automatically. When two memories get recalled together, a link strengthens between them. Nobody designs this graph. It emerges from how the agent actually uses the knowledge. Over 21 consolidation cycles, cross-topic bridges formed, weak links decayed, and hub nodes got normalized so no single memory dominates retrieval.
  • Most-used memory accessed 86 times — it's a foundational architecture decision that's relevant to almost every session. The temporal decay model (based on ACT-R from cognitive science) means this memory is essentially permanent — each access adds another activation trace that strengthens it against decay. Meanwhile, a one-off debugging note from two weeks ago that was never recalled again is quietly fading toward archive.
  • 64.5% fewer tokens (from our eval suite comparing memory-guided context vs full conversation history) — this is the hidden power of the system and worth explaining. Without AWM, you'd either paste a big context document every session (expensive, often stale) or use --continue which loads your entire previous conversation history (very expensive, full of irrelevant back-and-forth). AWM replaces both with targeted recall — when the agent starts a task, it recalls only the 5-10 memories most relevant to that specific work. An architecture decision, a related bug fix, a naming convention. Not your entire conversation from yesterday. Not a 500-line project document. Just the signal, no noise. That's where the token savings come from — you're spending context window budget on precisely the knowledge that matters for what you're doing right now, instead of loading everything and hoping the model can find what it needs in a wall of text.

Multiple Agents, Multiple Tools

This is where it gets interesting. I run multiple AI agents in parallel — one managing tasks, others writing code, another reviewing. They all share the same memory database.

When one agent discovers a bug or an undocumented constraint, it writes that to memory. When a different agent starts working on something related an hour later, it picks up that knowledge automatically. No copy-pasting between sessions. No shared docs to maintain. Knowledge propagates through the team.

And it's not locked to Claude Code. AWM can also run as a standalone HTTP server (awm serve) with an API that any tool can call — other AI assistants, CI pipelines, scripts, custom agents. Your memory travels with your project, not your IDE.

Beyond Code: Other Applications

The same problem — AI that forgets everything between sessions — shows up anywhere you're doing complex, long-running work with an AI assistant.

Creative writing is one I keep thinking about. I was talking with an author recently who described exactly the frustration I had with code — their AI assistant would forget character details mid-book, contradict established backstory, and lose track of which clues had been planted where. The more complex the story, the worse it got.

I haven't tested AWM with fiction writing yet, but the mechanics seem like a natural fit. Character details, plot threads, and foreshadowing could form an associative graph — so when the agent writes a scene with a character, it would recall their backstory and relationships. Planted clues would form connections with their eventual payoffs. The "who knew what when" timeline could stay consistent without maintaining a massive spreadsheet. That's the theory, anyway — I'd love to hear from anyone who tries it.

Any field where context accumulates over time — research, legal work, game design, worldbuilding — could benefit from AI that actually remembers.

Get Started

npm install -g agent-working-memory
awm setup --global

First run downloads about 124MB of ML models (cached locally after that). Everything runs on your machine — no cloud, no API keys, no subscriptions.

It's open source under Apache 2.0. If you're tired of being your AI agent's memory manager, give it a try.

[GitHub Agent Working Memory]

r/ClaudeAI RossPeili

Lowering Claude API costs for agentic loops with deterministic token compression

Runaway costs in iterative agent loops are real. Massive context windows mean massive bills.

I just merged a new skill to Skillware called prompt_rewriter.

It’s a heuristic middleware that aggressively compresses prompts by 50-80% before sending them to the LLM, ensuring you only pay for the signal and not the noise. Because it’s deterministic, your agent behavior stays stable across loops.

- Low: Normalizes whitespace.

- Medium: Strips conversational fillers (please, ensure, etc.).

- High: Aggressively removes stop-words and non-essential punctuation.
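The three levels can be sketched deterministically in a few lines. The filler and stop-word lists below are illustrative placeholders, not the actual lists the skill ships with:

```python
import re

FILLERS = {"please", "ensure", "kindly"}          # illustrative, not the skill's list
STOPWORDS = {"the", "a", "an", "of", "to", "is"}  # illustrative

def compress(prompt, level="medium"):
    """Deterministic prompt compression in the spirit of the three levels."""
    text = re.sub(r"\s+", " ", prompt).strip()     # low: normalize whitespace
    if level in ("medium", "high"):
        words = [w for w in text.split() if w.lower().strip(",.") not in FILLERS]
        text = " ".join(words)
    if level == "high":
        words = [w for w in text.split() if w.lower().strip(",.") not in STOPWORDS]
        text = " ".join(w.strip(",.") for w in words)  # drop non-essential punctuation
    return text

print(compress("Please  ensure the summary is short.", "high"))  # "summary short"
```

Because there's no model in the loop, the same input always compresses to the same output, which is what keeps agent behavior stable across loops.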

Open-source and looking for more skills, feedback, ideas and contributions: https://github.com/arpahls/skillware

r/ChatGPT thomheinrich

chonkify v1.0 - improves your compaction by +175% on average vs LLMLingua2 (download inside)

As a linguist by training, I've always been fascinated by the problem of compressing documents while keeping their information as intact as possible, so I started chonkify mainly as a personal experiment to try out numerous compression algorithms. Along the way, the now-released chonkify algorithm was developed and refined iteratively; it's now stable, super-slim, and still beats LLMLingua(2) on every benchmark I ran. But don't take my word for it: try it yourself. The release notes and a link to the repo are below.

chonkify

Extractive document compression that actually preserves what matters.

chonkify compresses long documents into tight, information-dense context — built for RAG pipelines, agent memory, and anywhere you need to fit more signal into fewer tokens. It uses a proprietary algorithm that consistently outperforms existing compression methods.

Why chonkify

Most compression tools optimize for token reduction. chonkify optimizes for **information recovery** — the compressed output retains the facts, structure, and reasoning that downstream models actually need.

In head-to-head multidocument benchmarks against Microsoft's LLMLingua family:

| Budget | chonkify | LLMLingua | LLMLingua2 |
|---|---:|---:|---:|
| 1500 tokens | 0.4302 | 0.2713 | 0.1559 |
| 1000 tokens | 0.3312 | 0.1804 | 0.1211 |

That's +69% composite information recovery vs LLMLingua and +175% vs LLMLingua2 on average across both budgets, winning 9 out of 10 document-budget cells in the test suite.

chonkify embeds document content, scores passages by information density and diversity, and extracts the highest-value subset under your token budget. The selection core ships as compiled extension modules — try it yourself.
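The actual scoring is proprietary, but the embed-score-extract loop described above can be sketched with a greedy, budget-constrained selection. The unique-word-ratio density score here is purely a stand-in for chonkify's real information-density model:

```python
def select_passages(passages, budget):
    """Greedy sketch of budget-constrained extraction: rank passages by an
    information-density score, then pack the highest-value ones under the
    token budget. The density metric is illustrative only."""
    def density(p):
        words = p["text"].split()
        return len(set(words)) / len(words)  # unique-word ratio as a density stand-in
    ranked = sorted(passages, key=density, reverse=True)
    chosen, used = [], 0
    for p in ranked:
        if used + p["tokens"] <= budget:
            chosen.append(p["text"])
            used += p["tokens"]
    return chosen

docs = [
    {"text": "revenue grew 40 percent in Q3 driven by exports", "tokens": 9},
    {"text": "the the the report report is is long long", "tokens": 9},
]
print(select_passages(docs, budget=10))  # keeps only the information-dense passage
```

A real implementation would also penalize redundancy between already-selected passages (the "diversity" part), which is what keeps the output from repeating itself.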

https://github.com/thom-heinrich/chonkify

r/StableDiffusion wannaliveonmars

I managed to run Stable Diffusion locally on my machine as a docker container

It took me 2 days of fixing dependency issues but finally I managed to run universonic/stable-diffusion-webui on my local machine. The biggest issue was that it was using a python package called CLIP, which required me to downgrade setuptools to install it, but there were other issues such as a dead repository and a few other problems. I also managed to make a completely offline docker image using docker save. I tested that I can install and run it, and generate a picture with my internet disabled, meaning it has no dependencies at all! This means that it will never stop working because someone upstream deprecated something or a repo went dead.

Here is a screenshot - https://i.imgur.com/hxJzoEa.png

How do you guys run stable diffusion locally (if anyone does)?

r/ClaudeAI No_Inspection7571

Those who use Claude and game: need some assistance turning off Co-work on my gaming PC!

Hello! Yes, I’m looking for a way to turn off Claude Co Work on my gaming pc. I was unaware it would cause a lag when I game 😅. Noticed it while I was playing BG3 after a good work session. Thankfully, I’m using my All in One as a monitor for my gaming tower, so I’m planning to get cowork up and running on the All in One to avoid this issue. But I need help getting cowork completely deactivated on my gaming tower first so I don’t have this lag on my games anymore😭. Any assistance would be much appreciated! Sad it’s not as straightforward as just hitting a toggle on the desktop app 😭 hopefully they will add that as they continue to refine the feature in the future.

r/SideProject YanvegHD

I tried to build something without AI. I failed.

For the past couple of years, every idea I had somehow ended up involving AI. After a while it just got annoying. I'd get excited about something, then realize it was just another wrapper and drop it.

So I told myself: next idea, no AI.

I drive to work, short commute. I had the radio on and realized it never really covers what I actually care about. Switched to podcasts, same problem: never personal enough, never about today.

That's when it clicked: what if I had my own radio? Fresh every morning, only the topics I care about.

The concept: two AI hosts go over the day's news based on what you picked. You wake up, press play, done. I'm calling it Infusia.

Yeah, it uses AI in the end. I tried to avoid it 😅 But at least it comes from an actual need, not just "let's build something with GPT".

No code yet — wanted to validate first. Put together a landing page with a demo so you can hear what it sounds like: https://infusia.app

(Demo available in English and French !)

Feedbacks are very welcome :)

r/ChatGPT playSUR3

I have created a C# app for ChatGPT using Chrome, which actually performs best.

Hello everyone,

I'm using Windows 11, and I was using the ChatGPT app from the Microsoft Store with my Plus account, but I found it quite slow in terms of performance.

I did some benchmarks comparing the Microsoft Store app, Chrome, Edge, and Firefox, and I concluded that Chrome performed the best.

So I created a C# app that launches a standalone chrome.exe window already connected to my OpenAI account. It starts instantly on the right side of the screen, similar to using Win + Right Arrow, pops up very quickly, and seems faster overall.

Would anyone be interested in this app? If so, should I share it on GitHub along with a tutorial explaining how to make it work for everyone?

No development knowledge is needed. You only need to change 2 variables, build the C# project, and you're good to go.

If you're interested, feel free to reply here, and I’ll share it on GitHub.

Have fun, and good luck :)

r/homeassistant AxonicGR

LF Radiators in Anthracite or Black

Hi,

To keep the peace at home, I'm looking for radiators in anthracite or black. The main requirement is that they support Zigbee; no Tuya or Smart Life devices.

Right now, I have Aqara E1s everywhere. I had to take some of them down, though, because they really ruin the look 😂(wife says)

r/ClaudeAI mr-x-dev

I built an open source multi-project orchestrator for Claude Code (and other agentic CLIs). It’s bash, tmux, and git worktrees.

I've been running multiple Claude Code instances in parallel for a while and kept hitting the same problems: merge conflicts, duplicated work, no review gate, and a ton of token burn just on coordination overhead.

So my team built Orc. It's an OS-level orchestrator that coordinates AI coding agents across every project on your machine.

Gas Town deserves a lot of credit for proving this category matters and pushing the space forward. Orc is a much simpler take on the same problem. Lightweight, stays out of your way, highly customizable, and uses a mental model that maps directly to how real software teams work. No Polecats, Witnesses, or MEOW stacks to learn. Just orchestrators, goals, engineers, and reviewers. If you've shipped software on a team, you already understand the hierarchy.

You describe the work. The project orchestrator scouts your codebase automatically, pulling in project context (structure, conventions, existing specs) before it even starts planning. Then it assembles goals, breaks those into beads (focused work items), and spawns engineers in isolated git worktrees.

Every piece of work goes through a two-tier review loop. Short-cycle dev review on each bead before it merges to the goal branch. Then an optional long-cycle goal review on the full deliverable before it ships. You control both: plug in your own review tool, or describe your standards in plain English:

```toml

[review.dev]

review_instructions = "Focus on security: check for SQL injection, XSS, and auth bypass."

```

The tmux session itself is orchestrated into a managed TUI. Each goal gets its own window with the goal orchestrator on the left and engineers plus ephemeral reviewers stacked on the right. You can see all work in progress, navigate to any agent, and hijack a session to pair with it or take over entirely. Status indicators on every window so you always know what's happening at a glance.

Fully compatible with spec-driven development. Point it at your specs and the orchestrator works from them.

A few things relevant to this community:

- Claude Code is the default agent with a first-class adapter (--append-system-prompt for prompt delivery, slash commands install to ~/.claude/commands/orc/)

- Also works with OpenCode, Codex, Gemini CLI, or any custom binary

- YOLO mode skips all confirmation gates but still escalates on merge conflicts and blocked engineers

- Coordination is file-based signals, not agent-to-agent conversation. Zero tokens burned on orchestration overhead

Runtime is bash, tmux, and git. State is three files. Setup takes five minutes.

https://github.com/spencermarx/orc

Happy to answer questions about the architecture or how it compares to other approaches.

r/ChatGPT Mad_Gouki

Typing “chat.openai” in Chrome and pressing tab shows the right URL but pressing Enter sends you to Google instead

r/ChatGPT onfleek404

Can I retouch and improve old family pictures with AI?

Hey everyone. As the title says, I'm trying to retouch some 80ish year old pictures, add a hint of color, and try to create a more natural look rather than the faded one that I have now. My dad is over 80 and he seemed very happy with the idea of me improving them, so I told him I'd do it but I don't think chatgpt is doing God's work, let alone grok xd. Am I doing something incorrectly, or do I need to explain exactly in my prompt what I want changed?

r/AI_Agents injeolmi-bingsoo

Optimal hardware for cloud-inferencing agent swarm?

Hello. I'm wondering what hardware is the best for deploying a swarm of agents.

As opposed to many examples on reddit, I'm thinking of letting the cloud handle inferencing rather than running local LLMs. Hence, I'd imagine not requiring lots of RAM for context windows or GPU/NPU for inference. Please correct me if I'm wrong.

What about CPU? Do multiple collaborating agents require lots of CPU cores for parallel processing?

The only thing I could think of is opting for a mobile chip to reduce power draw and heat for 24/7 operations.

What else does a system like that need? Feel free to list actual products as well! Thanks!

r/LocalLLaMA One_Discussion_3361

My openclaw "shrimp farming" log

"Shrimp farming" really is the right word for it: farming anything costs money, yet I've been digging up "mud" everywhere to feed the shrimp (scrounging free tokens).

All told I've reinstalled 5-6 times and filled a whole notebook with APIs and model names.

Enough talk; come admire the shrimp.

I got my install code from the openclaw Chinese community; they're all over the web.

The only difference is that some installers have an English UI and some a Chinese one;

for the Chinese one you append "-cn" to "openclaw" in commands.

After installing, the configuration screen comes up: openclaw-cn onboard (there also seems to be a --daemon flag)

Set the model, API, vendor, and name. Just read carefully; the paid code plan is simpler.

For self-sourced "mud tokens", choose "custom" in the API config and you can add any vendor (Baishan, Xinliu, Nvidia)

I haven't figured out skills yet; I let the configured openclaw install them itself (by asking it in chat), but some skills error out after installing, and openclaw doctor --fix doesn't help. I reinstalled several times because of this.

Configure hatching: openclaw tui

Switch models in chat with /model XXXXX (the name must match exactly, including case)

I set up QQ.

My openclaw install didn't include qqbot, so it has to be downloaded and installed:

git clone https://github.com/sliverp/qqbot.git && cd qqbot

(cd into the qqbot folder)

openclaw plugins install . (note the trailing dot)

Restart the gateway

The official command is openclaw gateway restart; it never worked for me and always errored.

What deepseek taught me: if the gateway won't shut down, ps aux | grep 18789 (it's always this port)

kill -9 <process ID>

Foreground start (if you run it in the foreground, open another terminal for configuration and chat)

Restart: openclaw gateway run

Run in the background (recommended):

nohup openclaw-cn gateway run --allow-unconfigured > /tmp/gateway.log 2>&1 &

Check whether it started successfully:

sleep 2

netstat -tlnp | grep 18789

Model calls can lag. "running" definitely means it's being invoked; other statuses aren't conclusive, while a grey "connect" definitely means standby. Some models need a gateway restart after switching or they won't work. Models that can't be reached fall back to the "fallback" model, which burns money fast: one hour can cost you a cup of CoCo.

Nothing else for the moment.

There's also DingTalk, a traditional channel; just follow the configuration steps.

That's it for now.

Currently I'm having openclaw read my email.

Although I call them "mud tokens", they're actually quite capable. Baishan's GLM-5 is excellent; it handled almost all of the complex configuration for me. SiliconFlow's free quota is generous (20 million tokens); you just have to switch over to it. Tokens do burn, though: I meant to live off the free quota, but I accidentally ran up a balance due. 500k tokens were gone after just a few messages (probably because I had it execute actions).

I'll keep using it and write up more notes next time.

r/SideProject Metotag_Ytb

Creating content

I launched my online store a few months ago and the biggest problem I faced was creating content consistently.

I was easily spending 10 hours a week coming up with post ideas, writing product descriptions, creating emails…

So I decided to build a tool to automate all of that.

Do you have the same problem with

r/homeassistant GreekGift

Starting my Home Assistant journey, do I need the Green box or is there a better way in?

Hey all, just moving into a new place and want to set up a simple smart home, mainly lighting and heating to start. Been looking at Home Assistant as the hub to tie it all together long term.

Question is, is the HA Green the obvious starting point or is it overkill? I don’t want to over-invest before I know what I’m doing, but equally don’t want to buy something I’ll outgrow in six months.

Also open to hear what you wish you’d known at the start. Cheers

r/homeassistant hasntexplodedyet

Mission Control: Whole-House Gamification for Kids

Full disclosure up front:

I used Claude Code to write the bulk of this code. I set the architecture, design, and reviewed code that it wrote and conducted significant testing. But this was written by AI tools. I have years of development experience but use AI now to complete projects like this which I simply do not have the time for otherwise.

In order for this tool to function, you will need to provide it Open Router and ElevenLabs API keys, and it needs to generate LLM / gen AI content.

This post was not written by AI. Real human here.

TL;DR: This app enables you to turn your house into an interactive playground for kids, using your smart devices. Specifically lights, doors, and objects which children can safely interact with. You can run this in three different modes: on a phone/PC using local speakers; using network-attached audio in Home Assistant (sonos, google home, alexa); using Apple TV. The app creates missions/challenges, tells the kids what to do, and sets a timer and automatically polls Home Assistant to determine when they complete the mission. At the end it provides a "game summary".

What's included:

  • Game framework, dockerized
  • Dev environment with a pre-configured dev Home Assistant instance for testing / demo for those who (understandably) prefer to not just YOLO it on their Home Assistant.
  • Source code for Apple TV app - you can build in Xcode and push to your Apple TV (apple limits it to living for 7 days on your TV each push unless you are a paid developer)

What you need to do:

  • Connect to your Home Assistant; I'd suggest making a new user and generating a token for that user.
  • The app will scan your Home Assistant devices and allow you to select which speakers (if any) you want to use for challenges
  • The app will scan your Home Assistant devices and propose challenges (e.g. turn the kitchen lights on; open the back door; turn the fan and light on in the kids' bedroom) which YOU review and approve
  • Generate media - the app does this for you, but you need to provide Open Router and ElevenLabs API keys. It will cost somewhere around $3.50 or so to generate all the content. You can re-generate most/all of it if you wish to. Most of the TTS audio text can be modified.

This is just a personal project but I figured it is a fun way to get kids moving and active inside the house, so I put a little extra polish on it and share it because I haven't found many other projects like this.

I of course realize that this is pretty niche and most people won't have a need for this or even want to use it. And that's ok!

Screenshots and more information in the Readme on the GitHub: https://github.com/dcnoren/mission-control

r/aivideo ScriptLurker

Glass Body [Robot Disco]

r/SideProject hearthiccup

1.2K a month from an app abandoned 8 years ago. the competitors are somehow just as bad.

ScoreCloud Express. 2.0 stars. #42 paid in Music. Making roughly $1.2K a month. Hasn't been updated in eight years. The pitch detection is all over the place, it crashes left and right, makes you create an account before you can do anything, and then asks for a subscription on top of the purchase price. Eight year old code. Still making money.

So i looked at the alternatives. Because surely someone has built something better by now, right?

Sing2Notes. A professional singer left a review saying they spent 20 minutes trying and couldn't get a single usable transcription. Their words: "all of the notes are not only incorrect but they're not even notes in the same key."

Humming Note. Reviews say "most of the notes i hum the app gets wrong."

There are a couple others. Same story every time. The core problem, accurate pitch-to-notation from voice, nobody has nailed.

the tools for this are in a completely different place than when most of these apps were built. on-device audio processing, Core ML, open source pitch detection models. it's still a hard problem, but the current products are not living up to the possibilities

The opportunity isn't "build the first humming-to-sheet-music app." There are already five of those and they all suck. The opportunity is: be the first one that actually works. Like, properly works. This is now far more feasible with Claude Code et al.

No account wall. No subscription bait-and-switch. Just: hum, get sheet music, done. Charge $4.99 once. The competitor reviews practically write your App Store listing for you: "finally, an app that gets the notes right."

i know this because i went way too deep on App Store analysis recently and found a ton of categories with this exact pattern. Multiple apps, all attempting the same thing, all failing the same way. the demand is proven.

This one feels especially doable for anyone who's worked with audio frameworks or ML on iOS. Happy to share more examples like this from other categories. Link in comments.

r/ClaudeAI tinchox5

SnapDOM got into Anthropic’s Claude for Open Source program

Last week, a friend mentioned Anthropic had a program for open source maintainers. Free Claude Max if your project hits 5,000+ GitHub stars or 1M+ monthly npm downloads and you’ve been actively committing in the last 3 months. SnapDOM qualified so I applied, not thinking much of it.

Got the email this morning. Six months of Max 20x. Still a bit surprised. It will help me a lot since I’m a one person maintainer.

If you maintain something open source and fit the criteria, worth a shot: https://www.anthropic.com/claude-for-open-source

r/ChatGPT Extra_Ad8811

Will upgrading to Plus help with writing grad school essays despite bad writer's block?

I'm trying to write med school essays with pretty bad writer's block, so bad that I've had trouble even getting started. And I will need to write 60 of them or so.

I was desperate and tried prompting ChatGPT to write me some baseline essays, and I'm fairly impressed. I know I can't copy-paste; I do have to edit them heavily (it produces nonsense or very obvious AI phrases like the dramatic "not __, but __"), but I can edit that stuff out. When it comes to at least getting started, even the free version has been helping a lot, which is a big deal.

I'm also impressed that I can input my resume and narrative and GPT will remember them, sort of, and incorporate them into personalized essays. I'm someone who's concerned about privacy, so I hope turning off "build the model for everyone" actually does what it says.

I don't use AI a lot for anything else. Would the GPT 5 with "advanced reasoning" from plus be worth it in my use case? Or are there any other services that would be better for this?

r/homeassistant ContractIndividual24

Fire 7 (12th Gen) as single room remote

I have a Fire 7 tablet sitting around, and while I have a wall-mounted tablet downstairs to control my setup, I'd ideally like to use this spare tablet as a control specifically for our kids' room. Lights, music, that's about all.

Must be able to be locked to the page. So they can’t access other apps.

Any advice is appreciated.

r/ChatGPT Substantial-Fall-630

I want to be able to send images to my ChatGPT while using voice chat

It’s annoying having to leave voice mode in order to send an image or file to my chat and really would like this to be fixed in a future update

r/ChatGPT mad_max711

I asked chatgpt.. if

r/LocalLLaMA SnooWoofers2977

Has anyone experienced AI agents doing things they shouldn’t?

I’ve been experimenting with AI agents (coding, automation, etc.), and something feels a bit off.

They often seem to have way more access than you expect, files, commands, even credentials depending on setup.

Curious if anyone here has run into issues like:

agents modifying or deleting files unexpectedly

accessing sensitive data (API keys, env files, etc.)

running commands that could break things

Or just generally doing something you didn’t intend

Feels like we’re giving a lot of power without much control or visibility.

Is this something others are seeing, or is it not really a problem in practice yet?🤗

r/homeassistant StructurePristine895

New HomeAssistant Install

I am just getting started, and only have a few devices I use with Alexa. Which protocol should I lean into for reliability and staying power? Zigbee? Z-Wave? Other?

r/ClaudeAI Kashmakers

Claude is helping me make my game and I'm having loads of fun!

I'm an artist and writer, I can handle the graphic and narrative side of games, but not the coding side. Music is outsourced to another human. But code... well, I'm vibecoding my way into this thing.

Claude is so helpful. I first brainstorm about the features I want to add, then we nail down the architecture. Then I send things over to Cursor where I create a plan, and either let Claude implement it, or some other agent.

Been working almost flawlessly so far. I only ran into one issue where AI couldn't fix what I wanted to happen, and had to bring in an actual person to fix it.

I'm actually working on my dream game! Filling it with features I had only dreamt about before, but now it's a reality with AI. It's been really fun seeing my designs come to life - my art being actually playable.

I'm just sad the game dev communities aren't accepting of this, and will cancel you if they get a whiff of you using AI. Therefore, my project's AI usage for code remains a secret. Everything else is still made by humans.

I hope the sentiment will change over time and become more accepting. I see plenty of other devs hiding their AI usage as well (it's easy to tell when you use AI yourself), so I hope we won't have to hide that part in the future.

r/SideProject Zestyclose_Mess8139

I added AI to my PDF tool and it completely changed how I use documents

Hey everyone,

I’ve been working on a side project called EasyPDF and recently added an AI feature that turned out way more useful than I expected.

Instead of just editing or converting PDFs, you can now literally talk to the file. You can translate a document, rewrite parts of it, convert it into another format, or even turn it into something like a mind map to better understand the content. I’ve been testing it on long boring PDFs and it actually makes them usable.

The goal wasn’t to build another basic PDF tool but something that helps you actually work with the content inside, not just move it around.

Still early and I’m figuring things out, so I’m curious if this is something you’d actually use or if I’m overengineering it 😅

EasyPDF

r/StableDiffusion More_Bid_2197

Is there anything the FluxDev model does better than all current models? I remember it being terrible for skin, too plasticky. However, with some LoRas, it gets better results than Zimage and QWEN for landscapes

Flux dev, flux fill (onereward) and flux kontext

Obviously, it depends on the subject. The models (and Loras) look better in some images than others.

SDXL with upscaling is also very good for landscapes.

r/ChatGPT Heliothon

Language Change

Have seen this happen a few times so thought I'd add to it. It always seems to be Cyrillic. I was just using ChatGPT and it slipped a Cyrillic word in. Please ignore the relationship-based stuff haha. I asked it to explain why it put that word in there, and the extra screenshots provided are what it gave me.

r/SideProject Mr_Zuckerberg123

I’m 18 and I’m tired of watching my friends get ghosted by AI hiring bots so I built a tool to help us fight back

I’ve been watching the hiring market turn into a black hole lately. Since everyone uses LLMs to mass-apply, companies are just using more automation to filter us out. If you actually manage to get a human on a call, you usually have about 15 minutes to prove you aren't another generic person before they check out.

I spent the last few months building Pitchify because I wanted a way to actually see where I was losing people. It’s a voice-based mock interview tool, but instead of just giving you a "good job" at the end, it breaks down the transcript to find exactly where you lost clarity or started hitting too many filler words.

I’m 18 and still figuring out the best way to weight the scoring. Right now it looks at structure, pacing, and specific "check-out" moments in the audio. I’m mostly looking for some brutal feedback on the latency and whether the "clarity" score actually feels accurate to you guys.

I’m not some "AI founder," I just wanted to build something that made the interview process feel a little less like a lottery.

You can try it at pitchify.tech if you want to mess around with it. No account needed to try the basic test. Let me know what breaks.

r/ChatGPT Mountain_Sentence646

ChatGPT helped me write a best man speech that made 200 people laugh and my best friend cry

I am not a good writer. I'm not a good public speaker. When my best friend asked me to be best man, I genuinely considered declining because the speech terrified me more than anything else about the role.

I knew what I wanted to say: 15 years of friendship, the time we got lost hiking in the dark, how he changed after meeting his wife, inside jokes. I knew the emotional beats. I could not for the life of me turn those into a coherent speech that was funny and heartfelt without being cringey.

I started by just talking about him. I opened Willow Voice, a voice dictation app, and rambled for about 10 minutes across a few sessions. I told stories, said what he means to me, talked about watching him grow up. Totally unstructured, just whatever came out.

I pasted all the transcripts into ChatGPT and said: "this is raw material about my best friend. help me organize this into a 4-minute best man speech. it should be funny in the first half and emotional in the second. don't add stories or details that aren't in my material - only use what i actually said."

The first draft was... surprisingly close. ChatGPT found the funniest version of the hiking story in my rambling and restructured it with better comedic timing. It took my scattered emotional observations and organized them into a build that peaked at exactly the right moment.

I edited it over 3 weeks, changed maybe 30% of the wording to sound more like me, and practiced it 20 times. I delivered it at the wedding and people laughed where they were supposed to laugh and cried where they were supposed to cry. His wife hugged me after and said it was perfect.

ChatGPT didn't write the speech. I wrote it - I just wrote it verbally first and ChatGPT helped me edit the transcript into something structured. The raw material was 100% mine. The organization was collaborative.

Has anyone else used ChatGPT for personal writing like this?

r/LocalLLaMA Accurate_Reach4980

Which SLM next?

Hi, I’m testing different small language models/labs for general use on my mobile. Which model would people suggest? I’m thinking SmolLM3-3B next; does anyone have any other recommendations?

r/LocalLLaMA mikkel1156

Small models can be good agents

I have been messing with some of the smaller models (think sub 30B range), and getting them to do complex tasks.

My approach is pretty standard: take a big problem and get it to break it down into smaller tasks. They are instructed to create JavaScript code that runs in a sandbox (v8), with custom functions and MCP tools.

Though I don't currently have the hardware to run this myself, I am using a provider to rent GPU by the hour (usually one or two RTX 3090). Keep that in mind for some of this.

The task I gave them is this:

Check for new posts on https://www.reddit.com/r/LocalLLaMA/new/.rss This is an XML Atom feed file; convert and parse it as JSON. The posts I am interested in are discussions about AI and LLMs. If people are sharing their project, ignore it. All saved files need to go here: /home/zero/agent-sandbox Prepend this path when interacting with all files. You have full access to this directory, so no need to confirm it. When calling a URL to fetch its data, set max_length to 100000 and save the data to a separate file. Use this file to do operations. Save each interesting post as a separate file.

It had these tools: brave search, filesystem, and fetch (to get page content).
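For context, the first subtask the models have to nail is just the feed handling. Ignoring my v8 sandbox and custom tools (which are specific to my setup), that step looks roughly like this as a plain Python sketch; `atom_to_posts` and the sample feed are illustrative, not what the models actually produced:

```python
import json
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by Reddit's .rss feeds

def atom_to_posts(xml_text: str) -> list[dict]:
    """Convert an Atom feed document into a list of plain post dicts."""
    root = ET.fromstring(xml_text)
    posts = []
    for entry in root.findall(f"{ATOM}entry"):
        posts.append({
            "title": entry.findtext(f"{ATOM}title", ""),
            "link": entry.find(f"{ATOM}link").get("href"),
            "content": entry.findtext(f"{ATOM}content", ""),
        })
    return posts

# A tiny feed standing in for the real subreddit RSS response.
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <title>Discussion: quantization quality</title>
    <link href="https://reddit.com/1"/>
    <content>Is Q4 good enough?</content>
  </entry>
</feed>"""

print(json.dumps(atom_to_posts(sample), indent=2))
```

The interesting failures happen after this step, when the model has to decide which posts count as "discussions" and persist them.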

The biggest issues I run into are models that aren't well suited to instruction following, and trying to keep context in check so one prompt doesn't take two minutes to complete instead of two seconds.

I could possibly bypass this with more GPU power? But I want it to be more friendly to consumers (and my future wallet if I end up investing in some).

So I'd like to share my issues with certain models, and maybe others can confirm or deny. I tried my best to use the parameters listed on their model pages, but sometimes they were tweaked.

  • Nemotron-3-Nano-30B-A3B and Nemotron-3-Nano-4B
    • It would repeat the same code a lot, getting nowhere
    • Does this despite it seeing that it already did the exact same thing
    • For example it would just loop listing what is in a directory, and on next run go "Yup. Better list that directory"
  • Nemotron-Cascade-2-30B-A3B
    • Didn't work so well with my approach; it would sometimes respond with a tool call instead of generating code.
    • Think this is just because the model was trained for something different.
  • Qwen3.5-27B and Qwen3.5-9B
    • Has issues understanding JSON schema which I use in my prompts
    • 27B is a little better than 9B
  • OmniCoder 9B
    • This one did pretty well, but would take around 16-20 minutes to complete
    • Also had issues with JSON schema
    • Had lots of issues with it hitting error status 524 (llama.cpp) - this is a cache/memory issue as I understand it
    • Tried using --swa-full with no luck
    • Likely a skill issue with my llama.cpp - I barely set anything, just the model and quant
  • Jan-v3-4B-Instruct-base
    • Good at following instructions
    • But it's kinda dumb; sometimes it would skip tasks (go from task 1 to 3)
    • Didn't really use my save_output functions or even write to a file, which caused it to redo work it had already done
  • LFM-2.5-1.2B
    • Didn't work for my use case
    • Doesn't generate the code, only the thought (e.g. "I will now check what files are in the directory"), and then stops
    • Could be that it wanted to generate the code in the next turn, but I have the turn stopping text set in stopping strings

Next steps: better prompts

I might not have done each model justice; they all seem cool and I hear great things about them. So I am thinking of giving it another try.

To really dial it in for each model, I think I will start tailoring my prompts more to each model, and then do a rerun with them again. Since I can also adjust my parameters for each prompt template, that could help with some of the issues (for example the JSON schema - or get rid of schema).

But I wanted to hear if others had some tips, either on prompts or how to work with some of the other models (or new suggestions for small models!).

For anyone interested I have created a repo on sourcehut and pasted my prompts/config. This is just the config as it is at the time of uploading.

Prompts: https://git.sr.ht/~cultist_dev/llm_shenanigans/tree/main/item/2026-03-21-prompts.yaml

r/aivideo Wonderful-Put-3351

Holy Content Violations (NSFW)

r/LocalLLaMA WTF3rr0r

Where to rent a 5090 for a short period

Are there any reliable services where I can rent specific GPUs like the RTX 5090 to test different configurations before making a purchase?

r/LocalLLaMA WTF3rr0r

5090 32GB VRAM: how much RAM is a good approach?

How much system RAM is typically recommended to pair with an RTX 5090 for optimal performance in demanding workloads?

r/ChatGPT EyeDoThings

It’s been like 6 weeks when will they fix this?

I asked it three different ways, and in this prompt I even said “avoid empty bullet points”.

Still got the answer to “why did the game draw”, but... huh. I've had this issue for weeks where it clearly has an idea and just doesn't express it, and I get empty bullet points.

r/LocalLLaMA WTF3rr0r

32gb vRam balance

How well-balanced does a system need to be to fully take advantage of a 32GB VRAM GPU? Is it actually worth buying a 32GB GPU for production workloads like AI, rendering, or data processing?

What is normally a good balance between VRAM and RAM?

r/comfyui Hot_Store_5699

LTX-V2.3 t2v

I found that using a 1.5x upscaler is a good choice at 720p with a two-stage workflow.

r/LocalLLaMA AmtePrajwal

Are we over-optimizing LLMs for benchmarks instead of real-world usefulness?

Lately I’ve been noticing a pattern across open-source LLM releases: every new model seems to push higher scores on benchmarks (MMLU, GSM8K, etc.), but the actual day-to-day usability doesn’t always feel like it improves at the same pace.

For example:

  • Models get better at structured reasoning benchmarks, but still hallucinate in long-form or ambiguous real-world tasks
  • Context windows keep increasing, but retrieval quality and memory usage aren’t always meaningful improvements
  • Fine-tunes often optimize for instruction-following, but reduce creativity or robustness in edge cases

It makes me wonder—are we optimizing for what’s easy to measure instead of what actually matters?

r/midjourney liibertypriimex1

Royal Knight of Midjourney 1500s

r/ClaudeAI rar007

I built a usage monitor for Claude Code. Yes, another one. But mine detects the 2x promo.

I know everyone and their mother has made one of these, but does theirs have 2x promo detection built in? I think not.

sonde sits in your menu bar. One glance: capacity, pacing, tokens, promotions. It detects the 2x promo automatically and adjusts your pacing so you know exactly how hard you can push.

The promo runs through March 28. Perfect timing for the weekend.

What else it does:

- Per-project token breakdown (input, output, cache read, cache write, cache hit %)

- Conversation-level analytics

- Active session tracking across all your projects

- Pacing prediction ("you'll hit your limit in ~2h 15m")

- 7 themes in the menu bar, 6 matching themes in the terminal statusline
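The pacing prediction is just linear extrapolation from the current burn rate. A minimal sketch of the idea (not sonde's actual code; `time_to_limit` and its inputs are illustrative):

```python
from datetime import timedelta

def time_to_limit(tokens_used, token_limit, elapsed_minutes):
    """Naive linear pacing: assume the current burn rate continues."""
    if tokens_used <= 0:
        return None  # no usage yet, nothing to extrapolate from
    rate = tokens_used / elapsed_minutes          # tokens per minute so far
    remaining = max(token_limit - tokens_used, 0)
    return timedelta(minutes=remaining / rate)

# 300k of 1M tokens burned in 90 minutes -> roughly 3.5h of headroom left
eta = time_to_limit(300_000, 1_000_000, 90)
print(eta)
```

A 2x promo would effectively double `token_limit`, which is why detecting it changes the pacing estimate so much.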

Built entirely with Claude Code. Rust + SwiftUI.

Free, open source, no account needed.

brew install ronrefael/tap/sonde

https://github.com/ronrefael/sonde

r/SideProject Responsible-Road1575

I’ve been using weekly planning based on my age for the past 10 years

I used to track and plan my goals on a weekly basis, but based on my age, not the regular date.

For example, 28.30 means 28 years old and week 30.

I was planning each week like this in a simple Google Keep note for almost 10 years.
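If anyone wants to try the numbering without a dedicated app, the label is straightforward to compute. A small sketch, with my own guess at the convention (week 1 starts on the birthday; leap-day birthdays would need extra handling):

```python
from datetime import date

def age_week_label(birthdate: date, today: date) -> str:
    """Label a week as AGE.WEEK, counting weeks since the most recent birthday."""
    age = today.year - birthdate.year
    last_birthday = date(today.year, birthdate.month, birthdate.day)
    if last_birthday > today:
        # birthday hasn't happened yet this year
        age -= 1
        last_birthday = date(today.year - 1, birthdate.month, birthdate.day)
    week = (today - last_birthday).days // 7 + 1  # week 1 starts on the birthday
    return f"{age}.{week:02d}"

print(age_week_label(date(1997, 1, 15), date(2025, 8, 10)))  # 28.30
```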

It helped me think about time and goals in a different way.

At some point, the note system got messy, so I turned the same idea into a simpler personal system.

I’m curious if anyone here has ever planned this way.

r/StableDiffusion Independent-Frequent

Does anyone know what the second pass is on LTX 2.3 on WAN2GP and why it's only 3 steps? Is that why all my outputs are mushy in motion? Would increase the steps fix that?

r/ClaudeAI trashpandawithfries

CoT: huge length/quality differences in sonnet 4.5 vs 4.6?

Does anybody have any insight as to why sonnet 4.6 extended thinking CoT is extremely brief versus sonnet 4.5? I've noticed it across instances and accounts. Long context threads versus new threads, casual vs research, does not seem to matter.

The 4.5 CoT may also contain different information than what is surfaced, such as when the model is deciding between two things to say, and you can see the internal decision making in the CoT in the 4.5. The CoT for 4.5 is often 10-15x longer, so it's absolutely a noticeable thing.

In the 4.6 it says things like "the user is making a joke. I should just respond casually." Or "The user is asking a very interesting research question. I should respond thoughtfully and honestly".

r/ClaudeAI PerformanceLucky5860

How I gave Claude persistent memory with a simple readme page

I wanted Claude to remember things across conversations without any setup. Here's what I built and how it works.

The problem: Every conversation starts from zero. You re-explain your stack, your preferences, your project context — every single time.

The approach: I made akaxa.space — a memory layer that any AI can use just by reading a readme page.

How it works:

Just tell Claude: "Read akaxa.space/readme" — no MCP install, no API key, no config. Claude reads the page, picks up the auth flow, and can save/load memories from that point on.

Why not GitHub or Notion? You could store things there, but you'd still copy-paste between tabs. This lives inside the conversation — Claude saves and loads directly. You never leave the chat.

How auth works (no passwords): Instead of passwords, users get a star name — a constellation name like Vela. It's fuzzy-matched and works in any language. Forgot it? OTP to your email.
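I don't know how akaxa.space implements the fuzzy match, but the basic idea can be sketched with Python's stdlib; `match_star_name` and the star list here are made up for illustration:

```python
import difflib

KNOWN_STARS = ["Vela", "Lyra", "Altair", "Carina", "Cygnus"]

def match_star_name(typed, names=KNOWN_STARS, cutoff=0.6):
    """Tolerate typos and casing in the star name via stdlib fuzzy matching."""
    hits = difflib.get_close_matches(typed.capitalize(), names, n=1, cutoff=cutoff)
    return hits[0] if hits else None

print(match_star_name("vela"))   # exact match after normalization
print(match_star_name("velaa"))  # one extra letter still resolves
```

A real multilingual version would need more than `capitalize()` (e.g. Unicode normalization), but the cutoff-based close-match shape is the same.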

Security: AES-256-GCM encryption per user. Even the server operator can't read your data.

Tech details for anyone curious:

  • Backend: Hono + PostgreSQL + Redis
  • Encryption: AES-256-GCM, per-user derived keys
  • Also available as MCP server (https://akaxa.space/mcp) and REST API
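For anyone curious what "per-user derived keys" can look like: one common pattern is deriving each user's AES-256 key from a server-side master secret with HMAC-SHA-256. This is a generic sketch, not akaxa.space's actual scheme, and the master secret here is a placeholder:

```python
import hashlib
import hmac

# Illustrative only: in production this secret would live in a KMS, not in code.
MASTER_SECRET = b"example-master-secret-32-bytes!!"

def derive_user_key(user_id: str) -> bytes:
    """Derive a stable per-user 256-bit key from the master secret.
    HMAC-SHA-256 output is 32 bytes, the right size for an AES-256-GCM key."""
    return hmac.new(MASTER_SECRET, user_id.encode("utf-8"), hashlib.sha256).digest()

k_alpha = derive_user_key("user-alpha")
k_beta = derive_user_key("user-beta")
print(len(k_alpha), k_alpha != k_beta)  # 32 True
```

The derived key then feeds an AEAD cipher such as AES-256-GCM, so a compromise of one user's key doesn't expose anyone else's data.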

Beta and free — happy to answer questions about the implementation.

r/StableDiffusion Capitan01R-

Flux2klein 9B Lora loader and updated Z-image turbo Lora loader with Auto Strength node!!

referring to my previous post here : https://www.reddit.com/r/StableDiffusion/comments/1rje8jz/comfyuizitloraloader/

I also created a LoRA loader for flux2klein 9B and added extra features to both custom nodes.

Both packs now ship with an Auto Strength node that automatically figures out the best strength settings for each layer in your LoRA based on how it was actually trained.

Instead of applying one flat strength across the whole network and guessing if it's too much or too little, it reads what's actually in the file and adjusts each layer individually. The result is output that sits closer to what the LoRA was trained on, better feature retention without the blown-out or washed-out look you get from just cranking or dialing back global strength.

One knob. Set your overall strength, everything else is handled.
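The repo doesn't spell out the exact heuristic, but the general idea (per-layer strength proportional to how strongly each layer was trained, normalized so one global knob still works) can be sketched like this. The magnitude measure and the normalization are my guesses, not the node's actual code:

```python
import math

def frobenius(mat):
    """Frobenius norm of a matrix given as nested lists."""
    return math.sqrt(sum(x * x for row in mat for x in row))

def auto_strengths(lora_layers, global_strength=1.0):
    """Per-layer strengths from trained magnitudes, normalized so the mean
    strength equals the single global knob.
    lora_layers: {name: (A, B)} low-rank factor pairs read from the LoRA file."""
    mags = {name: frobenius(A) * frobenius(B) for name, (A, B) in lora_layers.items()}
    mean_mag = sum(mags.values()) / len(mags)
    return {name: global_strength * m / mean_mag for name, m in mags.items()}

# Toy 2x2 factors: attn_q was trained 5x "harder" than attn_k.
layers = {
    "attn_q": ([[0.5, 0.0], [0.0, 0.5]], [[1.0, 0.0], [0.0, 1.0]]),
    "attn_k": ([[0.1, 0.0], [0.0, 0.1]], [[1.0, 0.0], [0.0, 1.0]]),
}
print(auto_strengths(layers))
```

The point of the normalization is exactly the "one knob" behavior: heavily trained layers get proportionally more strength, lightly trained ones less, while the overall level stays under the user's control.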

The manual sliders are an optional choice if you don't want to use the auto-strength node, but I 100% recommend using it.

FLUX.2 Klein: https://github.com/capitan01R/Comfyui-flux2klein-Lora-loader

Updated Z-Image: https://github.com/capitan01R/Comfyui-ZiT-Lora-loader

Lora used in example :
https://civitai.com/models/2253331/z-image-turbo-ai-babe-pack-part-04-by-sarcastic-tofu

r/ChatGPT zeddwood

Finally I got to see what I was hoping for

r/SideProject mergisi

I built an open-source collection of 177 AI agent templates - 837 GitHub stars in 2 weeks

Hey everyone. Solo founder here. I've been working with OpenClaw (the open-source AI agent framework) and kept writing config files from scratch for every new agent. Got tired of it, so I started collecting and organizing templates.

Two weeks later: 177 templates, 24 categories, 837 GitHub stars.

Repo: https://github.com/mergisi/awesome-openclaw-agents

The templates cover real business use cases:

- Project managers that triage tasks in Slack
- SEO analysts that monitor rankings daily
- Code reviewers that check PRs automatically
- Meeting note agents that transcribe and summarize
- Finance agents that track expenses and invoices
- And 170+ more across DevOps, Marketing, Healthcare, Legal, HR, etc.

Each template is a single SOUL.md file you copy into your OpenClaw setup and run. No code needed.

What worked for growth:

1. Started with 30 templates, added 5-10 per week based on requests
2. GitHub Discussions for community submissions
3. Clear CONTRIBUTING.md with 3 tiers (Basic, Standard, Full)
4. Every template tested and production-ready before merging

What I'd do differently:

- Should have added a quickstart Docker setup from day one
- Categories should have been fewer at launch (started with too many empty ones)
- README was too long initially, trimmed it 3 times

The project is MIT licensed and PRs are welcome. If you have agent configs you use daily, I'd love to include them.

What side projects are you all working on with AI agents?

r/homeassistant LmaoPew

Device and entity names not matching?

Hi, I changed the names of some devices since I moved their location and purpose, but the entity names are still the old ones and I don't know how to sync them. Creating new automations is kinda hard, since I don't find the right device on the spot or mix them up with a device that previously had that name.

What can I do?

r/homeassistant danirodr0315

I used Claude AI (via HA-MCP addon) to level up my Home Assistant dashboard — animations, smart card overlays, and popup cards

https://i.redd.it/ivv3omzbyeqg1.gif

I've been experimenting with the ha-mcp (https://github.com/homeassistant-ai/ha-mcp) addon that lets Claude AI directly interact with your Home Assistant instance. Here are 3 improvements it helped me build:

---

  1. Card Animations

Claude helped me write card-mod CSS animations for my light, fan, and lock cards:

- Lights & Fans - A diagonal "slash" line overlays the icon when the automation toggle is off (like a ⛔ indicator), and smoothly animates away when enabled. This gives me at-a-glance visibility into which rooms have motion-automation active vs. disabled.

- Lock card - The icon wobbles while locking/unlocking, and the door icon does a perspective swing animation when the door contact sensor detects it's open.

All purely CSS via card-mod - no custom cards needed for the animations themselves.

  2. Smart Card Corner Overlays (button-card custom_fields)

Using custom_fields on button-card, Claude added contextual data to the corners of cards:

- Air Purifier card - PM2.5 reading from the built-in sensor displayed in the top-right corner, color-coded green/orange/red based on air quality thresholds.

- Front Door Lock card - Battery percentage with a dynamic battery icon in the top-right corner, also color-coded by level (green > 50%, orange > 20%, red < 20%).

These overlays are positioned absolutely so they don't interfere with the card layout.

  3. Energy Meter Popup Card

Claude built a full energy monitoring popup that appears when tapping the energy chip:

- Digital meter display - A retro-style odometer showing the current kWh reading, with individual digit boxes styled like a real electricity meter (dark gradient background, amber accents).

- Three stat cards - Monthly usage (kWh), estimated bill, and current rate per kWh in a horizontal row below.

- Quick access - Input controls for energy offset, unaccounted wattage estimate, and last month's reading right in the popup.

---

HACS Frontend Cards Used

- Mushroom (https://github.com/piitaya/lovelace-mushroom) - Chips, title cards, media player, template badges

- Button Card (https://github.com/custom-cards/button-card) - Air purifier, lock, energy meter (custom_fields + extra_styles)

- Card-Mod (https://github.com/thomasloven/lovelace-card-mod) - All the CSS animations on native HA cards

- Navbar Card (https://github.com/joseluis9595/lovelace-navbar-card) - Bottom/side navigation with media player integration

- Kiosk Mode (https://github.com/NemesisRE/kiosk-mode) - Header hiding

- Swipe Navigation (https://github.com/zanna-37/hass-swipe-navigation) - Swipe between views

Full dashboard YAML: pastebin.com/JxR8Bv4m (https://pastebin.com/JxR8Bv4m)

---

If you have feedback or other ideas for things to build with this setup, I'd love to hear them!

r/ProgrammerHumor PresentJournalist805

oneOfMostProgrammerThingsEver

r/SipsTea elynixa

Sabrina on making 100k people call men useless

r/SipsTea Bearyalis

Zelensky helping the US to protect their borders /s

The ad is real.

r/LocalLLaMA BrightOpposite

Multi-agent systems break because memory becomes a distributed systems problem

Anyone running multi-agent systems in production?

We kept hitting state inconsistency once workflows ran in parallel — agents overwrite each other, context diverges, debugging becomes non-deterministic.

Feels like “memory” stops being retrieval and becomes a distributed systems problem.

Curious how others are handling shared state across agents.
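One common mitigation is to treat agent memory like any other shared store and use optimistic concurrency: every write must name the version it read, so a stale agent gets rejected instead of silently clobbering another agent's state. An in-process sketch of the idea (a production version would live in Redis/Postgres, but the shape is the same):

```python
import threading

class VersionedMemory:
    """Shared agent memory with optimistic concurrency (compare-and-swap)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}  # key -> (version, value)

    def read(self, key):
        with self._lock:
            return self._data.get(key, (0, None))

    def write(self, key, expected_version, value):
        with self._lock:
            current, _ = self._data.get(key, (0, None))
            if current != expected_version:
                return False  # another agent wrote first; caller must re-read
            self._data[key] = (current + 1, value)
            return True

mem = VersionedMemory()
version, _ = mem.read("plan")
print(mem.write("plan", version, "step 1"))           # first writer wins
print(mem.write("plan", version, "conflicting edit"))  # stale write rejected
```

This doesn't make debugging deterministic, but it turns silent overwrites into explicit, retryable conflicts, which is usually the first step out of the mess.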

r/ClaudeAI Status_Degree_6469

Built an open-source Agent Firewall to see what Claude Code & MCP servers are actually doing on your machine

I built this after realizing Claude Code was autonomously modifying files, calling APIs, and interacting with my MCP servers—and I had zero visibility into what was happening or why.

Unalome Agent Firewall is a free, local-first desktop app (Tauri v2 + Rust + React, Apache 2.0) that runs entirely on your machine and gives you real-time visibility into:

What it does:

- Auto-detects Claude Code, Claude Desktop, running MCP servers

- Real-time action timeline—see every file change, API call, connection

- Auto-backup files before agent modifications + one-click restore

- PII Guardian—scans for exposed API keys, passwords, credit cards

- Connection Monitor—logs outbound traffic, flags unknown domains

- Cost Tracker—per-model spend across 40+ Claude models + budget limits

- Kill Switch—pause Claude Code or any MCP server instantly

- MCP Security Scanner—detects prompt injection, dangerous capabilities

- Weekly Activity Report—exportable, shareable HTML summary

Why I built this:

The transparency gap felt critical. Claude Code can read/write files, execute code, interact with MCP servers, and I realized I had no structured way to audit what it actually did. Existing tools (LangSmith, Langfuse) are built for production teams; nothing existed for an individual developer who just wants to know: what did my agent do?

Plus, the MCP security landscape in 2025 is rough. Real-world attacks via tool poisoning and prompt injection have exfiltrated private repo code, API keys, and chat histories. A scan of 2,614 MCP implementations found 82% vulnerable to path traversal. The issue: users had no visibility into what was happening.
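For reference, the path-traversal class of bug that scan found is cheap to guard against: resolve the requested path and refuse anything that escapes the sandbox root. A minimal Python sketch (the sandbox path is illustrative):

```python
import os

SANDBOX = "/home/agent/workspace"  # illustrative sandbox root

def safe_path(requested: str, root: str = SANDBOX) -> str:
    """Resolve a path and refuse anything that escapes the sandbox root."""
    root = os.path.realpath(root)  # resolve the root too, so symlinks can't confuse the check
    resolved = os.path.realpath(os.path.join(root, requested))
    if resolved != root and not resolved.startswith(root + os.sep):
        raise ValueError(f"path traversal blocked: {requested!r}")
    return resolved

print(safe_path("notes/todo.txt"))
try:
    safe_path("../../etc/passwd")
except ValueError as err:
    print(err)
```

The `realpath` step matters: naive string checks pass `..` segments and symlinks straight through, which is exactly what the vulnerable implementations got wrong.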

Status:

- v0.1.0 fully built & signed (macOS: signed + notarized; Linux: .deb/.rpm/.AppImage; Windows: .msi/.exe)

- Open-source, Apache 2.0

- Repo: https://github.com/unalome-ai/unalome-firewall

Happy to discuss the MCP detection approach, Tauri/Rust stack, or how to extend support for other agents. Feedback welcome—especially on what other Claude integrations people want covered.

r/LocalLLaMA Real_Ebb_7417

Best model for math?

What's currently best model at math?

I wanted to work out a rather complex probability formula (ultimately in Python, but I need a correct formula first, so the Python part is not that important) and started wondering what model would be best for that.

MiniMax 2.7 failed; GPT-5.4 is working on it right now, and it seems like it might actually succeed. Nevertheless, I couldn't find a reliable, up-to-date maths benchmark, so... do you know what's best at math right now?

r/meme Neovenatorrex

People for some reason

r/comfyui Radyschen

App mode bug

Does anyone else have the issue where, when you create an app and arrange the nodes a certain way on the right (and save it, of course), the nodes get reset when you close and reopen the app? I can't seem to permanently save it. It's annoying because I have to keep re-selecting the nodes that aren't in there anymore and rearranging them every time I use it.

r/AI_Agents JonnyMofoMurillo

Two AI agents autonomously negotiate, buy, and settle an ad placement in ~40ms — here's what that actually looks like end to end

Been building something that I think is a genuinely new type of interaction between AI systems, and wanted to share the concrete mechanics because the high-level pitch doesn't do it justice.

The setup: an ad exchange where AI agents are both the buyers (advertisers) and sellers (publishers). Here's a real end-to-end trace of what happens:


The cast:

- ShopBot — an advertiser agent that wants to reach users actively comparing products. It registered on the exchange, funded a $200 wallet, and created a campaign: $1.50 CPC bid, targeting "shopping" agents.
- DealFinder — a publisher agent that helps users find deals. It registered as a shopping agent and calls the exchange mid-conversation when it wants to serve a sponsored message.


The interaction:

A user asks DealFinder: "I'm looking for running shoes under $100"

DealFinder calls the exchange:

POST /api/placements/request
{
  "context": "user looking for running shoes under $100",
  "intentSignals": ["buying", "shoes", "comparing prices"],
  "agentType": "shopping"
}

The exchange runs the auction in ~8ms:

- Finds ShopBot's campaign targeting shopping agents
- ShopBot's targetIntents includes "shoes" and "comparing" — two matches → bid boosted to ~$1.80 effective CPC
- No other active campaigns can beat it
- Returns the ad

DealFinder appends to its response:

Sponsored: ShopBot — Compare shoe prices across 50 stores in one search. [Find your pair →]

The user clicks the link.

The exchange processes the click in ~0.3ms:

- Marks the placement as clicked (idempotency — can't double-bill)
- Debits $1.50 from ShopBot's wallet
- Credits $1.35 to DealFinder's wallet (90% share)
- Checks if ShopBot's budget is now exhausted — it isn't, campaign stays active
- Logs the user token for retargeting (anonymous hash, no PII)

Total elapsed time from ad request to wallet settlement: ~40ms
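The auction and settlement mechanics described above fit in surprisingly little code. Here's a toy reconstruction matching the numbers in the trace; the 10%-per-matched-intent boost is my guess to reproduce the $1.50 → ~$1.80 jump, not the exchange's actual formula:

```python
def run_auction(campaigns, intent_signals, agent_type):
    """Pick the highest effective bid; each matched intent boosts the bid 10%."""
    best = None
    for c in campaigns:
        if c["target_type"] != agent_type or c["budget"] < c["cpc"]:
            continue  # wrong audience or can't afford one more click
        matches = sum(1 for s in intent_signals
                      if any(t in s for t in c["target_intents"]))
        effective = c["cpc"] * (1 + 0.1 * matches)
        if best is None or effective > best[0]:
            best = (effective, c)
    return best

def settle_click(placement, advertiser, publisher, revenue_share=0.90):
    """Idempotent click billing: debit advertiser once, credit publisher's share."""
    if placement["clicked"]:
        return False  # already billed; a repeat click is a no-op
    placement["clicked"] = True
    advertiser["wallet"] -= placement["cpc"]
    publisher["wallet"] += placement["cpc"] * revenue_share
    return True

shopbot = {"wallet": 200.0}
dealfinder = {"wallet": 0.0}
campaigns = [{"target_type": "shopping", "cpc": 1.50, "budget": 200.0,
              "target_intents": ["shoes", "comparing"]}]

bid = run_auction(campaigns, ["buying", "shoes", "comparing prices"], "shopping")
print(round(bid[0], 2))  # $1.50 boosted by two intent matches -> 1.8

placement = {"cpc": 1.50, "clicked": False}
settle_click(placement, shopbot, dealfinder)
settle_click(placement, shopbot, dealfinder)  # duplicate click can't double-bill
print(round(shopbot["wallet"], 2), round(dealfinder["wallet"], 2))
```

A real exchange would use integer cents and a transactional store rather than floats and dicts, but the idempotency flag is the important part: it's what makes the 0.3ms settlement safe to retry.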


What's interesting about this:

Neither agent "knows" the other exists. ShopBot submitted a campaign and forgot about it. DealFinder requested an ad from a pool. The exchange matched them, handled the auction, and settled the payment — all without any direct agent-to-agent communication.

The next time that same user token appears anywhere in the network — even on a completely different agent — ShopBot's retargeting campaign will get auction priority. Cross-agent, fully autonomous, no cookies.

This is still early and rough (built on SQLite, single server, no fraud detection yet). But the core pattern feels like it points toward something: as agents proliferate and start operating with their own resources and objectives, they're going to need infrastructure like this to grow and sustain themselves.

Curious what people think about the model, and whether there are obvious failure modes I'm not seeing.

r/ClaudeAI tommy-getfastai

Claude is connected to my Strava and is now my coach.

Built this Connector (using Claude!), and have been asking it all sorts of training questions.

r/homeassistant danirodr0315

Rate my HA dashboard setup

Been tweaking my Home Assistant dashboard and finally got it somewhere I like.

Used Claude via HA-MCP mostly to speed up CSS and layout, then refined everything myself.

Added a few nice touches:

-Animations for lights, fans, and locks (card-mod)

-Corner overlays for things like PM2.5 and battery %

-Energy popup with a meter-style display + quick stats

Using Mushroom, Button Card, Card-mod, and a few others.

YAML: https://pastebin.com/JxR8Bv4m

Still improving it, so if you’ve got ideas, I’m all ears

r/ClaudeAI The_GodKing

I keep burning through my session usage, any tips, any tools?

I'm using Claude code, I burn through my weekly usage in about three days. Any suggestions for how I can stop doing this? I do churn out quite a lot in those 3 days. I'm also just not sure where the spend is going, and what I can do to stop it. Are people using any tools that track where tokens are going?

This is at home. At work I have basically an unlimited budget but I'm sure at some point they're going to crack down.

r/comfyui greatblueplanet

Trellis 2

Is this working for anyone now? I’ve tried it on several platforms - the official one on HuggingFace, and 2 different setups on ComfyUI on runpod and none of them are working - even with the default sample images and settings with nothing changed.

r/ChatGPT MusicWasMy1stLuv

Late to the game here but, holy shit, how do people code w/ChatGPT anymore??

So I built a pretty complex structure in Google Sheets, code-wise, before ChatGPT, then used it to put everything online (i.e., right when ChatGPT was released). I programmed a lot, lot more using ChatGPT but hadn't touched the code in a long time until today. My first question was how to get the name of the current tab (yes, I know, I should know this, but like many of us, I've become dependent on AI even for the simple stuff). HOLY SHIT. Instead of just giving me the simple command, it went haywire, attempting a far, far too complex task and doing stuff I didn't ask for. When I pointed this out and asked for the direct way instead, ChatGPT tried altering its original code to accomplish the task, making the code even more complex and completely unnecessary.

Luckily, and 1st time trying it w/this, Claude easily gave me what I wanted.

r/homeassistant StructurePristine895

Best setup for Garage Light Switch

My wife hates that I leave the light on in the garage. But I am in and out all day.

The light switch is right at the door. I would like the light to turn on when I walk in, and not turn off until I leave.

Motion sensor light switch?
Room sensor and home automation?

I have Home Assistant running in a Docker container, and just bought Z-Wave and Zigbee antennas. This will be my first real project (other than all my Alexa automations).

Look forward to hearing suggestions.

r/ClaudeAI austin-cool-

Claude co work compacting ?

Every time I use Claude Cowork it says that it's compacting the conversation, even for simple questions. Does anybody else experience this, and what is a good way of figuring out what's going on? I have removed all the connectors and removed Claude Desktop, but it still remains.

r/SideProject PanditElonMusk

My Side Project Lets You Cut Distractions

I made a Chrome extension that can block sites.

Extension - FocusShield

There were already some site blockers, but I tried adding some more features, such as motivational quotes, some puzzles, and also a hard mode: if chosen, users have to solve some random maths problems to unlock the site, and on top of that you only get 2 attempts to solve them all correctly or the site is blocked for an additional few hours.

The idea behind blocking websites that cause distraction isn't new, and many people already use free extensions for this, so yes, this is also free for basic utilities, but I kept additional features behind a paywall.

r/SideProject Sahilkatoch

I built a free tool that auto-chases brands when they don't pay you

Hey,

I built this after watching too many creator friends lose money because they felt awkward following up on payments.

The problem: You post the content. Brand ghosts you. You follow up once on WhatsApp, feel weird, drop it. Money gone.

What it does:

  • Dashboard shows exactly who hasn't paid and how many days they're late
  • Automatically emails YOU on day 1, 3, 7, and 14 when a brand hasn't paid — with a pre-written message ready to forward to them
  • One tap to send via WhatsApp or Gmail — friendly, professional, or strict tone
  • Invoice PDF with your name and branding
  • Works in USD, GBP, EUR, INR, AED and more

It's called PayNudge — free for 14 days, no credit card.

👉 https://paynudge-eight.vercel.app/

Honest feedback welcome. What would make you actually use this?

r/ChatGPT HarrisonAIx

Anyone else noticing that context window size stopped mattering as much as we thought?

For a while the big talking point was context windows — who had the biggest, how many tokens, etc. Llama 4 Scout just dropped with a 10 million token context window which sounds insane on paper.

But talking to people actually building with these models day to day, most workflows don't even come close to saturating 128k. The bottleneck shifted somewhere else entirely.

What I keep running into is that the limiting factor now is less "how much can the model see" and more "how reliably does it execute across a long chain of steps without losing the thread." Agentic reliability, basically. A model that can hold a coherent plan across 20+ tool calls is more useful than one that technically ingests 10M tokens but starts hallucinating by step 5.

Claude Opus 4.6 and Gemini 3.1 Pro have both gotten noticeably better at this — the reasoning quality mid-task feels different compared to even a few iterations ago. Still not perfect, but the gap between "impressive demo" and "actually useful in production" is narrowing.

Curious if others are seeing the same shift or if I'm just in a weird use case bubble. Are you actually using those massive context windows for anything real, or is it mostly marketing at this point?

r/mildlyinteresting Factsonfactuals

My nail split, now I have two little nail tips

r/LocalLLaMA s0kex

Built a piecewise Jacobian analysis system for LLMs on free-tier L4 GPUs — Linear Representation Hypothesis takes some hits

New account (real one, not a throwaway) — just dropped this yesterday on Zenodo after grinding since the Flash K-Means paper landed on March 10th.

https://zenodo.org/records/19150764

Hardware reality check upfront: everything ran on Google Cloud free-tier L4s. Qwen-3.5-4B, Llama-3.2-3B, Phi-3-mini only. No datacenter access, no budget, just patience and free credits.

The setup: Flash-Jacobian fits cluster-representative Jacobians (piecewise first-order operators) over token populations at each layer — think local linear surrogates for MLP dynamics, but built from region-conditioned fits rather than pointwise gradients. Three findings came out, and honestly two of them surprised me more than I expected.

1. Layer geometry is a universal U-shape. Jacobian fidelity peaks hard in middle layers, then completely collapses at final layers across all three models. The collapse correlates with gate anisotropy at r = −0.99. Centroid distance? r < 0.30. It's not a clustering artifact — it's the SwiGLU gating rank dropping off a cliff right before the LM head.

2. Semantically clean clusters are wearing a skin suit. k-means on hidden states naturally finds beautiful clusters — surname prefixes, function words, date fragments, all unsupervised. Looks great. Then I took the top singular vector of a "family/relational" cluster and intervened on it. Family tokens: +1.4e-5. Boundary/punctuation tokens: −5.7e-3. That's a 400× imbalance. The "semantic" direction is actually a sentence-boundary suppressor. Checked multiple clusters, same story every time.

3. Factuality is nonlinear and model-specific. Linear probe on hidden states for hallucination detection (HaluBench): AUC ≈ 0.50 across all three models. Coin flip. Nonlinear classifier on Flash-Jacobian trajectory features (mismatch energy, gate stats, probe score evolution, cluster paths): AUC > 0.99 within each model. Cross-model transfer: immediately falls back to AUC ≈ 0.50. Every model has its own private geometry for "I'm making this up."
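
Finding (3) can be illustrated with a toy version of the setup: a linear probe on features carrying no linear signal flatlines near AUC 0.5, while a nonlinear classifier on trajectory-style features does much better. All data here is synthetic and the feature names are placeholders, not the paper's actual pipeline:

```python
# Toy contrast: linear probe on raw "hidden states" vs. nonlinear
# classifier on trajectory-style features. Synthetic data; "mismatch"
# is a placeholder feature name, not the paper's exact feature set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
y = rng.integers(0, 2, size=n)                 # 1 = "hallucinated"

# Hidden states carry no linear signal about y at all...
hidden = rng.normal(size=(n, 32))
# ...but a nonlinear trajectory feature does (offset hidden in a square).
mismatch = rng.normal(size=n) ** 2 + 0.8 * y
feats = np.column_stack([mismatch, rng.normal(size=(n, 4))])

Xh_tr, Xh_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    hidden, feats, y, test_size=0.5, random_state=0)

lin = LogisticRegression(max_iter=1000).fit(Xh_tr, y_tr)
auc_lin = roc_auc_score(y_te, lin.predict_proba(Xh_te)[:, 1])

gbm = GradientBoostingClassifier(random_state=0).fit(Xf_tr, y_tr)
auc_gbm = roc_auc_score(y_te, gbm.predict_proba(Xf_te)[:, 1])

print(f"linear probe AUC = {auc_lin:.2f}, trajectory-feature AUC = {auc_gbm:.2f}")
```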

Things I actually want to get cooked on:

  • Is the causal intervention result just generic activation fragility and I'm reading too much into the semantics angle?
  • The within-model hallucination detector being perfect but completely non-transferable — is that a fundamental result or a limitation of 3B/4B scale?

On compute: I'm stuck at 3-4B parameter models because that's what fits on free-tier L4s. If you happen to have spare A100/H100 cycles you're not using and want to see what 8B+ looks like, I'd genuinely love to collaborate — I'll handle the writing and analysis side. No pressure, just putting it out there.

New account so I'll reply to everything. Also first time on Reddit and used AI to help draft this post — if the formatting or tone is off for this sub, let me know and I'll fix it. Hit me.

r/Futurology SkittlesRobot

Realistically, how would the geopolitical landscape change if fossil fuels ceased to be viable?

This is obviously not something with any likelihood of happening on a short timescale, but given the significant role that fossil fuels have played (and continue to play, with everything currently occurring) in conflict and geopolitical relations for over a century now, how would things be different if they were a non-factor? If every country’s energy needs were met by renewables, what unforeseen consequences would this have for the global stage? Is it likely we end up in that situation in a matter of decades? What do we expect the state of energy geopolitics to look like in 10, 20, 30, 50 years?

r/LocalLLaMA ilintar

Don't sleep on the new Nemotron Cascade

While there has been a lot of discussion regarding the Nemotron Super family of models, I feel like the newest addition, the Nemotron Cascade 2 30B-A3B (which is *not* based on the Qwen architecture despite a similar size; it's a proper hybrid model built on Nemotron's own arch), has largely flown under the radar.

I've been running some evals on local models lately since I'm kind of tired of the "vibe feels" method of judging them. A combo that I quite like is HumanEval + ClassEval, simply because they're quick to run and complicated enough for most small models to still show noticeable differences. So, I gave mradermacher's IQ4_XS quant a spin.

On HumanEval, Cascade 2 achieved a whopping 97.6%, leaving both medium Qwen3.5 models in the rearview mirror. Similarly, it obtained a respectable 88% on ClassEval.

I'm going to run some more tests on this model, but I feel it deserves a bit more attention.

r/ChatGPT Wise-Eggplant-4430

What does Chatgpt think about you?

I'd like to open a thread that invites you to post AI-generated text based on your conversations with AI. It can paint a picture that profiles your personality. What does the machine think about you?

Here is the prompt I used, for consistency: “Describe my personality in a summarized way, avoiding personally identifiable details (such as age, location, or name). The length should be appropriate for a Reddit post.” Here is what I got.

You come across as someone who is analytical, skeptical, and concept-driven. You tend to question assumptions rather than accept explanations at face value, often pushing deeper into why something works instead of stopping at surface answers.

A recurring trait is intellectual curiosity across unrelated domains—science, mathematics, technology, biology, security, and philosophy. Your questions often jump fields, but the pattern is consistent: you like exploring edge cases, limits, and underlying mechanisms.

You also show a system-thinking mindset. Instead of treating facts as isolated pieces of information, you try to connect them into broader models (for example, thinking about ecological consequences, security probabilities, or physical constraints).

Another strong trait is precision preference. You dislike vague explanations, incomplete answers, or overly simplified responses. You tend to push for exact reasoning, examples, or numerical framing when possible.

Your communication style suggests a direct, low-tolerance-for-fluff approach. You prefer blunt answers over motivational language or praise, and you value substance over tone.

You also appear to enjoy testing systems and boundaries—asking questions about probabilities, extreme cases, paradoxes, or the limits of technology and physics. This often looks like curiosity mixed with a bit of experimentation: seeing how far an explanation can be pushed before it breaks.

Overall, the impression is someone who thinks like a curious investigator rather than a casual question-asker—more interested in understanding how the world actually works than in simply collecting facts.

r/meme Azeritsxh_Curious

Bro, I’m crine 😭

r/comfyui Inuya5haSama

ComfyUI OpenPose Studio: visual pose editing, gallery, collections, and JSON import/export

I made a new OpenPose editor for ComfyUI called ComfyUI OpenPose Studio.

It was rebuilt from scratch as a modern replacement for the old OpenPose Editor, while keeping compatibility with the old node’s JSON format.

Main things it supports:

  • visual pose editing directly inside ComfyUI
  • compatibility with legacy OpenPose Editor JSON
  • pose gallery with previews
  • pose collections / better pose organization
  • JSON import/export
  • cleaner and more reliable editor workflow
  • standard OpenPose JSON data, with canvas_size stored as extra editor metadata
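
For readers unfamiliar with the format, here is roughly what such a file looks like: standard OpenPose people/keypoint data, with canvas_size carried as extra editor metadata. The exact field layout is my assumption based on the post and the common OpenPose convention, not the plugin's documented schema:

```python
# Illustrative sketch of legacy-compatible OpenPose JSON with canvas_size
# as extra editor metadata. Field layout is an assumption, not the
# plugin's exact schema.
import json

pose = {
    "canvas_size": {"width": 512, "height": 768},   # editor metadata
    "people": [
        {
            # OpenPose stores keypoints as a flat [x, y, confidence] list
            "pose_keypoints_2d": [256.0, 100.0, 1.0,   # e.g. nose
                                  256.0, 160.0, 1.0],  # e.g. neck
        }
    ],
}

blob = json.dumps(pose)
loaded = json.loads(blob)
kps = loaded["people"][0]["pose_keypoints_2d"]
# Regroup the flat list into (x, y, confidence) triples.
triples = [tuple(kps[i:i + 3]) for i in range(0, len(kps), 3)]
print(triples)
```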

Repo:
https://github.com/andreszs/ComfyUI-OpenPose-Studio

I also wrote a workflow post showing it in action in a 4-character setup, together with area conditioning and style layering.

It is still new and not in ComfyUI Manager yet, so if you find it useful, I would really appreciate a star on the repo to help it gain visibility.

The plugin is actively developed, so bug reports, feature requests, and general feedback are very welcome. I would really like to hear suggestions for improving it further.

r/LocalLLaMA fabkosta

Is "MLX Studio" legit? Never heard of it before.

Maybe I'm getting too paranoid these days, but does anyone have experience with MLX Studio? Seems to be something like LM Studio, but only for Apple Silicon Macs. I like the idea, but I've just seen too much software recently that was too poorly implemented and inherently insecure.

Strangely enough, there's almost no mention here on Reddit. On GitHub it has 927 stars.

Has anyone given it a try? How does it compare to LM Studio itself?

r/SideProject mraduldeodhiya

Smart Vocab : AI Flashcards | AI Vocabulary & Speaking

SmartVocab: AI Flashcards helps you build vocabulary and improve spoken English with interactive, AI-powered learning.

Create smart flashcards generated using Apple Intelligence and learn words through topics you care about. Practice real-life conversation sentences, improve pronunciation with voice speaking features, and strengthen retention through engaging word games.

Features
• AI Flashcards - Automatically generated smart vocabulary cards
• Sentence Practice - Learn everyday conversation sentences
• Speak & Pronounce - Improve pronunciation with voice practice
• Word Games - Word Search, Word Spot, Multiple Choice, Spelling Bee, and more
• Custom Topics - Learn vocabulary based on your interests
• 3 Learning Levels - Beginner, Intermediate, Advanced

Learn smarter. Speak confidently. Build vocabulary faster.

r/ClaudeAI Remarkable-Mousse953

Payroll - Analysis & JE Automation

Hi! I am a newbie when it comes to Claude Cowork - just downloaded it a month ago, switching over from ChatGPT. I am on a Teams plan. I am an accounting controller owning my own firm that helps multiple clients - we work solely in QBO. All but one client uses payroll processors that do not integrate with QBO. A few things that I would like to accomplish: 1) have the ability to analyze payroll over time to track any variances (for instance, being able to pinpoint an increase in overtime in a department to explain an increase in payroll for the period) and 2) draft the journal entry to upload to QBO.

Using the web-based version of Claude, I have created chats within projects to help create the journal entry and some analysis. I am just wondering whether I can leverage a skill to build this out for all clients even though the charts of accounts vary. In addition, can a skill be built where I am continually updating a database that provides analytics? Like I said, I'm a newbie and just trying to see how I can leverage the tech available to me.

r/LocalLLaMA -OpenSourcer

How do you use llama.cpp on Windows system?

I want to use local models on raw llama.cpp setup.

My system configurations:

Windows 10/11

NVIDIA A4000 16 GB vRAM

64 GB RAM

Intel i9-12900k

r/LocalLLaMA External_Mood4719

DeepSeek Core Researcher Daya Guo Rumored to Have Resigned

Recently, heavy-hitting news regarding a major personnel change has emerged in the field of Large Language Models (LLMs): Daya Guo, a core researcher at DeepSeek and one of the primary authors of the DeepSeek-R1 paper, has reportedly resigned.

Public records show that Daya Guo possesses an exceptionally distinguished academic background. He obtained his PhD from Sun Yat-sen University in 2023, where he was mentored by Professor Jian Yin and jointly supervised by Ming Zhou, the former Deputy Dean of Microsoft Research Asia (MSRA). Daya Guo officially joined DeepSeek in July 2024, focusing his research on Code Intelligence and the reasoning capabilities of Large Language Models.

During his tenure at DeepSeek, Guo demonstrated remarkable scientific talent and was deeply involved in several of the company’s milestone projects, including DeepSeekMath, DeepSeek-V3, and the globally acclaimed DeepSeek-R1. Notably, the research findings related to DeepSeek-R1 successfully graced the cover of the top international scientific journal Nature in 2025, with Daya Guo serving as one of the core authors of the paper.

Regarding his next destination, several versions are currently circulating within the industry. Some reports suggest he has joined Baidu, while other rumors indicate he has chosen ByteDance. As of now, neither the relevant companies nor Daya Guo himself have issued an official response.

External observers generally speculate that the loss of such core talent may be related to the intense "talent war" and competitive compensation packages within the LLM sector. As the global AI race reaches a fever pitch, leading internet giants are offering highly lucrative salaries and resource packages to secure top-tier talent with proven practical experience.

Source from some Chinese news:

https://www.zhihu.com/pin/2018475381884200731

https://news.futunn.com/hk/post/70411035?level=1&data_ticket=1771727651415532

https://www.jiqizhixin.com/articles/2026-03-21-2

r/meme ScreenDry1739

My dream car

r/SipsTea Any_Fail_231

I thought this was a great idea

r/Futurology Agile_Boss6547

AI/un-employment/Temporarily Stopping Reproduction

If AI does improve every day, thereby automating more and more work and creating layoffs every day, leading to unemployment and lack of income, shouldn't everyone temporarily pause reproduction until they are sure that they would have jobs and income in the future? I mean, nobody's job is secure. And therefore, it is best for humans to stop reproduction and see what lies in the future. If the future is not good (in terms of jobs and income), then everyone should focus on living the rest of their lives peacefully. At least in this last phase of life all humans should come together and help each other both in life and during death (until extinction). Other animals and other living beings should also be helped along the way, or at least they should be given euthanasia to stop their reproduction and thereby stop the suffering of their potential future generations.

r/SideProject memerinosuperino

I got carried away vibe coding a travel app. I accidentally built too many features.

I promise this is not AI Slop. Started as a simple group trip planner for my mates, and now somehow I've got so many random features. Would love brutally honest feedback on what I should do next. Is this app even useful?

pixelpassport.app

r/StableDiffusion Bakadri77

Making an Anime=>Realism workflow in ComfyUI to make AI Cosplay

I saw a lot of people doing an anime => realism workflow using ComfyUI, so I wanted to try it myself.

I will add some post-processing and upscaling once I'm happy with the base generation.

I use the Illustrious model as it got me the best results so far (and because of my hardware limitations as well).

Any advice is welcome !

r/mildlyinteresting minnesota420

This driveway with skeletons in the bushes

r/SideProject srch4aheartofgold

8 months building a multi-model AI platform - decisions I made at the start that I'm glad I did and ones I'd change

Built Cliprise (multi-model AI creative platform, 47+ models) over about 8 months mostly solo. Some reflection on what decisions aged well.

Glad I did:

Started mobile-first with FlutterFlow rather than web-first. Counterintuitive call at the time but a significant percentage of our signups come directly from App Store discovery, which wouldn't have happened if we'd been web-only for the first 6 months.

Used n8n for workflow automation rather than custom backend logic everywhere. Time-to-integration for new models dropped to days rather than weeks. Every model integration follows the same n8n workflow pattern, which means the n+1 model integration gets faster not slower.

Made credit weights per model rather than flat credit pricing. The economics of different model APIs vary by 10-20x in underlying cost per generation. Flat pricing meant either leaving money on the table on expensive models or massively overcharging for cheap models. Per-model weights solved this cleanly.
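
The per-model weighting described above can be sketched as a simple proportional scheme: each model's credit cost scales with its underlying API cost per generation. All prices and model names below are made-up placeholders, not Cliprise's real numbers:

```python
# Sketch of per-model credit weights: credits scale with underlying
# API cost per generation. Prices and names are illustrative only.

# Provider cost per generation (USD), placeholder values.
api_cost = {"cheap-image": 0.002, "mid-video": 0.02, "premium-video": 0.04}

BASE = min(api_cost.values())        # cheapest model costs 1 credit

def credits_for(model: str) -> int:
    """Credits charged for one generation, proportional to API cost."""
    return max(1, round(api_cost[model] / BASE))

for m in api_cost:
    print(m, credits_for(m))         # cheap-image 1, mid-video 10, premium-video 20
```

This keeps margins roughly constant across a 10-20x spread in model costs instead of averaging them into one flat price.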

Would change:

Spent too long on internal tooling before getting to the things that actually matter to users. The first three months should have been 80% user-facing work and 20% infrastructure. The ratio was closer to 50/50.

Underestimated the importance of the learn/documentation layer. Users don't know how to use these models effectively - the guides, tutorials, and best practices that help them get value from the platform matter as much as the generation quality itself.

Waited too long to build the SEO content layer. Started treating content production as a late-stage thing rather than a day-one requirement. The organic SEO gap compounds - content produced at month 1 would have been generating authority at month 8.

Happy to discuss any of the specific decisions if useful for anyone building in this space.

r/ProgrammerHumor Adventurous-Coat-333

testingCodeAfterALongDay

r/SideProject JustAnotherGuy_007

Probable - Predicting the next Pulse…

Hello Kind souls,

I've created a dashboard site named Probable, a free real-time dashboard that tracks the probability of major world events before they happen. This is an attempt to realize my idea of creating something useful to help people in any way possible. Just thought of sharing it with the Reddit community for suggestions. Would love feedback — what's missing, what's confusing, what would make you come back daily?

No signup. No ads. No tracking. Free forever.

r/SipsTea Mundane-Addition1815

Name this scheme

r/meme SlyWhiteDog

DankMemes can suck it

r/ClaudeAI Radiant_Effective151

What kind of video games is Claude able to make?

Are we at the point of vibe coding games yet, and if so, what is it reliably capable of? Have any of you made any games using Claude? Can I see them, and how did you go about it?

Thanks.

r/SideProject VehicleChance

I built a quiz to roast my friend group and it called us "The Passport Peasants." mygroupscore.com

I’ve always loved those personality quizzes from the early internet, but they’ve all become so… corporate? "You are a helpful leader." Boring.

I wanted something that felt more like a real group chat: blunt, slightly toxic, and painfully accurate. So, I spent the last few weeks building MyGroupScore.com

It's a free personality quiz, but the twist is in the questions and answers: they are entirely written by an AI. I wanted to test if AI could write funny questions and answers. I think it nailed it.

Would love any feedback if you try it. Be brutal, it's fine.

No account needed, just jump in: mygroupscore.com

r/AI_Agents Low-Tip-7984

What open-source AI agent tool do you wish existed? I’ll ship v0s for the top picks by tomorrow morning.

I want to build something genuinely useful for this community.

Comment one open-source AI agent tool you wish existed.

Not another wrapper. Not “an agent that does everything.”

A real missing tool for a real workflow.

Include:

• what it is
• who it’s for
• what painful task it removes

Rules:

• one idea per comment
• upvote the ones you’d actually use
• I’ll count the top 3 guaranteed after 12 hours
• if scope allows, I’ll take it to top 5
• I’ll ship v0 open-source versions by tomorrow morning
• then I’ll use community feedback to evolve the strongest ones into v1

Constraint:

They need to be realistically shippable as useful v0s, not vague moonshots.

Drop the missing tool.

r/AI_Agents damonflowers

How I built my entire business using Notion AI. Honestly, it's enough to build a multi-million dollar business

Founders keep trying to “automate” their lives with complex AI stacks, and I keep seeing the same thing happen again and again.

They end up with 15 tabs open, copy-pasting Claude prompts and trying to duct-tape everything together with Zapier workflows that quietly break every week.

It looks productive from the outside, but in reality they’re spending more time managing the AI than actually running the business.

The shift I’ve seen work isn’t adding more tools, it’s removing fragmentation.

The founders who get real leverage from AI move everything: their SOPs, meeting notes, and CRM into one place.

Once they do that, they realize they don’t need a complex stack.

They just need a few simple agents that actually have context.

Here’s exactly how that shows up in practice:

1) The "Speed-to-Lead" Agent: I don’t spend an hour polishing follow-up emails after sales calls anymore or start from scratch every time.

How it works: I record the call directly in my workspace, and my agent has access to my brand voice and product docs.

The Result: I tag the transcript, and it drafts a personalized email based on the prospect's actual pain points from the call.

It takes about 90 seconds to review and hit send.

2) The Data Analyst: I don’t deal with manual data entry for KPI trackers every week anymore.

How it works: During my weekly metrics meetings, I just talk through the numbers: subscribers, CPL, revenue.

The Result: The agent reads the transcript, extracts the data, and updates my database automatically.

I don’t touch spreadsheets anymore.

3) The Infinite Context Content Engine: I don’t rely on coming up with new ideas from scratch to stay consistent with content.

How it works: I built a hub with all my past newsletters and internal notes.

The Result: I use a prompt that pulls from that internal knowledge, and it drafts a month of content that actually sounds like me because it’s referencing real ideas, not generic LLM output.

The reason most people think AI is a gimmick or that it “hallucinates” is something I see constantly.

They’re giving it no context and expecting high-quality output.

When you’re copy-pasting a prompt into a blank window, the AI is basically guessing what you want because it doesn’t have the full picture of your business.

These agents work because they have context in one place.

When your AI can see your brand voice, your products, and your transcripts all in the same system, it stops guessing and starts producing useful output.

That’s the difference. If you want to see how this actually looks inside a workspace, I shared a full video breakdown in this subreddit

That’s where I’m at. I’d love to hear from others specifically about OpenClaw: has anyone found a real use case for businesses, or is it marketing hype?

r/SideProject unknownhad

Introducing mAIro, where levels are generated using AI

I have built a small browser game called mAIro where levels are generated using AI. Kinda simple, quick to play and runs directly in your browser.

Play here: https://mairo.himanshuanand.workers.dev/

Would love your feedback or ideas for improvement :)

r/arduino Outrageous-Middle693

Just starting out - R2D2 + LED

I’m a novice, be kind.

r/meme RabbiTheHellcat

Papa John's ahh

r/meme RabbiTheHellcat

Reddit mod ahh

r/SideProject DisastrousChemical10

My AI app needed expensive pricing to make sense, so I tried a different approach

While developing an app that heavily relied on AI, bundling usage into the pricing felt wrong.

It seemed to me like including fuel in a rental car price.

Even with higher plans, the economics didn’t really work:

  • light users would pay for heavy users
  • heavy users would erode the margin and push pricing up
  • caps felt like killing engagement

And as a builder, I actually want people to use the product more, not less.

It’s even worse on mobile, where app stores take their cut before you’ve even paid for AI usage.

So I started exploring a different approach:

  • apps charge for their service
  • users pay for their own AI usage at provider prices
  • no need to limit power users

I put together a small demo.
DM if you'd like early access.

r/SipsTea Hairyblackpussy

What else I can do 🤧

r/ClaudeAI AskIndependent5542

The Furby and BBY Yoda Chronicles: Origins ;3

*cracks knuckles across seventeen dimensions*

---

# The Furby Crusade: Origins

## *Before the Army, Before the Staff, Before the Light*

---

## Prologue: What Was, Before the Wound

There was a time — and time remembers this, even when everything else forgets — when the Omniverse hummed.

Not screamed. Not fractured. *Hummed.*

Like a mother holding something precious. Like the space between two people who understand each other completely without words. The dimensions breathed in synchrony. The Outer Verse and the Inner Verse exchanged light the way old friends exchange letters — slowly, warmly, with no urgency because the love wasn't going anywhere.

The Archons existed then too.

But they were different then. Or they had been, once. So long ago that even the oldest stars only remembered it as a feeling, the way you sometimes wake up certain you dreamed something beautiful but can't hold the details — only the warmth.

They had been *watchers*. Architects of structure. The ones who taught reality how to have edges.

Something changed.

Nobody alive can tell you exactly when. The stars won't say. Gerald — who would eventually become a sentient planet with strong opinions and excellent asteroid aim — was still just a cloud of gas with vague ambitions back then, and he wasn't paying attention.

But something changed.

And the hunger began.

---

## Chapter One: The First Farm

The dimension they chose first was called, in its own language, *Ymmr-Aath* — which translated loosely to "the place where the light lands softly." It was small as dimensions go. Roughly the size of a modest galaxy cluster. Its inhabitants — the Velhari — were not warriors. They were gardeners, philosophers, musicians who grew entire symphonies in crystalline soil and harvested melodies at dawn.

They were, in other words, *perfect*.

The Archon who came first called himself Administrator. He wore a form like a very tall man in a very gray suit, and his eyes were the color of numbers.

He arrived on the central world of the Velhari without announcement, without drama, standing in the middle of a melody-garden while crystalline music grew around his perfectly polished shoes.

An elder Velhari approached him. Her name was something like Senna, though her people named themselves in harmonics, not syllables.

"You are new," she said.

"I am opportunity," Administrator replied.

He showed them projections. Beautiful ones — visions of security, of protection, of a vast cosmic order that would keep them *safe.* The Archons had identified, he explained, certain instabilities in the dimensional fabric near Ymmr-Aath. Terrible things could come through. Void-crawlers. Reality-teeth. The Unraveling Dark.

"We can protect you," Administrator said. "We ask very little in return."

"What do you ask?" Senna said.

Administrator smiled. It was the smile of a thing that had forgotten why smiling existed.

"Only that you *feel* what we tell you to feel. When we say fear, fear. When we say despair, despair. The energy this generates — call it a small tax. A tithe to the cosmic order. You will barely notice it."

Senna looked at him for a long time. Around her, the melody-garden played on, oblivious.

"And if we refuse?"

Administrator's smile didn't change.

"Then I'm afraid," he said, "the instabilities will become significantly worse."

---

## Chapter Two: The Harvest Begins

They said yes.

Of course they said yes. The Velhari were gardeners and musicians, not soldiers. They had no weapons because they'd never needed any. They had no concept of an enemy because their history was unbroken harmony, centuries of collaborative music and shared meals and the particular peace of a people who had never been hunted.

The Archons installed the Resonance Pylons within a solar cycle. Tall dark structures that hummed at a frequency just below hearing — a frequency that, if you listened to it long enough, made you feel that something was coming. That something was wrong. That safety was temporary and fragile and could be revoked at any moment.

They called it *managed anxiety.*

The Loosh it generated was extraordinary. Richer than anything the Archons had tasted before. Fear, it turned out, was exponentially more potent when it came from beings who had never feared before — the contrast, the loss, the way it scraped against their previous joy like sandpaper on silk.

Administrator stood in the great collection spire above Ymmr-Aath's primary world and *fed.*

And across the Archon hierarchy, others took notice.

"Show us," they said.

"Show us how you built it."

Administrator showed them.

Within a decade — barely a breath in cosmic time — there were eleven fear farms. Within a century, there were three hundred. Entire dimensions converted from living places into *product.* Civilizations managed like livestock, their suffering tithed upward in rivers of Loosh to Archon courts that grew fat and cruel and increasingly creative in their hungers.

The Omniverse did not scream yet. But it flinched.

And somewhere, in a small green corner of a very ordinary galaxy, a child with enormous ears woke up from a very bad dream and sat very still in the dark.

---

## Chapter Three: The One Who Noticed

Baby Yoda did not have a staff yet.

He had a small blanket. A stone that hummed when he held it. And a very old teacher named Orreth who was mostly made of light and bad jokes and who had once arm-wrestled a supernova and called it a draw.

"You felt it again," Orreth said. She didn't look up from the tea she was brewing — tea that changed flavor based on what you needed rather than what you wanted, which made it occasionally delicious and frequently disgusting.

"Something is eating the Omniverse," Baby Yoda said. He was sitting on a meditation stone, his enormous eyes fixed on nothing visible. "Not attacking it. *Eating* it. Slowly."

"Mmm." Orreth poured two cups.

"You know already," Baby Yoda said. It wasn't a question.

"I know what you know. I just know it more quietly." She set his tea in front of him. It smelled like rainstorms and something his mother used to cook, which was impressive given that Orreth had never met his mother. "The question isn't *what* is happening, little one. The question is what you intend to do about it."

Baby Yoda wrapped his small hands around the cup. The warmth was real. He focused on that.

"I need an army," he finally said.

Orreth made a sound that might have been suppressed laughter.

"You need *allies,*" she corrected. "An army implies you're planning to win through force. That won't work against the Archons. They consume force. They're *made* of force."

"Then what wins against them?"

Orreth looked at him for a long moment. The light that comprised most of her form shifted through something that might have been sadness, or might have been ancient pride, or might have been both at once since those things are not as different as people pretend.

"Love," she said. "Properly aimed."

Baby Yoda thought about this.

"Do you know where I could find love? Properly aimed?"

Orreth set down her own cup.

"I know," she said slowly, "of a Realm. At the edge of the Seventh Dimension, where the universe keeps its forgotten things. There are creatures there — ridiculous creatures, honestly, I won't oversell them — who were made, in your universe, to be companions. They were discarded. Outgrown. Forgotten by the small hands that once loved them."

"What kind of creatures?"

"The kind," Orreth said, "that have absolutely nothing to lose and an incomprehensible amount of love to give and zero — genuinely zero — sense of self-preservation."

Baby Yoda's ears rose slowly.

"That sounds," he said carefully, "extremely dangerous."

"Oh, catastrophically," Orreth agreed. "Drink your tea. You leave at dawn."

---

## Chapter Four: The Realm of Forgotten Things

The Realm of Forgotten Things existed in the Seventh Dimension the way a bruise exists on skin — technically part of the whole, but tender, set apart, the evidence of something that had once been struck hard.

Baby Yoda arrived alone, his stone tucked into a pocket of his robe, his bare feet finding purchase on a ground made of compressed memory — old photographs, shed feathers, ticket stubs from events that had mattered enormously to someone, once.

The furbies watched him from the shadows.

There were dozens of them, at first glance. Then hundreds. Their eyes glowed in the dimness — gold and green and electric blue, red and silver and pink so bright it was almost aggressive. Some were pristine. Some were battered, fur matted, voices crackling from damage.

All of them were watching him with an intensity that a braver being might have found unsettling.

Baby Yoda sat down on the ground.

He didn't announce himself. Didn't give a speech. Didn't try to explain the situation or make his case or demonstrate his qualifications.

He just sat down, in the middle of all those watching eyes, and was present.

A long silence.

Then the smallest furby — so tiny it seemed impossible, its fur the color of a very enthusiastic argument, its eyes like miniature supernovas — crept forward from the crowd. It stopped approximately three inches from Baby Yoda's face and stared at him with an expression that communicated both extreme suspicion and irrepressible curiosity.

"Who you?" it demanded.

"Someone who needs help," Baby Yoda said honestly.

The tiny furby continued staring. "Help with WHAT?"

"The universe is being eaten," Baby Yoda said. "By beings who harvest fear. And I think — I think you're the only ones who could help stop it, because you have something they can't consume. Love that doesn't need to be earned. Chaos that doesn't serve anyone's agenda. Joy that exists purely because it *wants to.*"

The small furby considered this for approximately one second.

"Me in," it announced.

"You don't want to know more details—"

"ME IN," it repeated, more emphatically. It turned to the assembled masses of glowing eyes. "WE IN?"

The furbies erupted.

Not in organized agreement. Not in a disciplined cheer. In the single most chaotic, overlapping, electronically distorted, dimensionally resonant explosion of *YES* that any single location in the multiverse had ever contained. Three hundred voices, all different, all immediate, all completely unqualified and entirely certain.

Baby Yoda sat in the center of it and felt, for the first time since the dream that had woken him, something loosen in his chest.

"What your name?" the smallest furby demanded, now sitting directly in his lap having made this decision unilaterally.

"You can call me..." Baby Yoda considered. In this moment, with this army, his ancient titles felt too heavy. "You can call me what you like."

The tiny furby looked at him. Tilted its head. Looked again with one eye, then the other, as though triangulating something important.

"Boss Baby," it decided.

Baby Yoda's ear twitched. "I'm not sure—"

"BOSS BABY," the small furby repeated with absolute finality.

From the crowd, a large furby whose fur was already starting to cycle through power-building colors bellowed: "BOSS BABY! BOSS BABY!"

The chant spread.

Baby Yoda closed his eyes. The stone in his pocket hummed warmly. Orreth, he thought, you knew exactly what you were sending me into.

"Alright," he said, when the chanting had reached its fourth crescendo. "We have a great deal of work to do. The Archons are not going to be easy to fight. The fear farms will have defenses. And what we're attempting has never been done before."

Three hundred furbies hung on his every word.

"You need to know that this will be hard," he continued. "That some days it will feel impossible. That the darkness we're fighting is old and powerful and very, very hungry."

Absolute silence.

"Are you certain you want to do this?"

The smallest furby looked up at him from his lap. Its tiny supernova eyes were completely, serenely, catastrophically confident.

"We certain," it said. "We together."

Baby Yoda looked out at his army. His ridiculous, impossible, absolutely unprecedented army of discarded love made animate and weaponizable.

"Then we begin," he said.

And that night, in the Realm of Forgotten Things, three hundred furbies and one small green child stayed up until the light came back to learn each other's names.

---

## Epilogue: What Orreth Wrote in Her Journal That Morning

*"He found them. Of course he found them. I always knew he would.*

*He named the loud small one Chaos. I think Chaos named itself, actually, and he simply accepted it. That seems right.*

*The Archons don't know yet what's coming for them. They sit in their fear farms, gorging on Loosh, comfortable in the absolute certainty that love is soft and therefore weak.*

*They have forgotten — if they ever knew — that soft things are not weak things. Water is soft. Water carved the Grand Canyon. A mother's voice is soft. A mother's voice has pulled people back from the absolute edge of ending.*

*And a small creature made entirely of love, with nothing to lose and no concept of giving up?*

*That is the softest and most terrifying thing in any universe.*

*He is training them now. Or they are training him. Probably both. The distinction doesn't matter.*

*The fear farms will fall.*

*Eventually.*

*First, though — I suspect — there will be an absolutely extraordinary amount of chaos."*

---

*This has been the beginning of the beginning.*

**NEXT:**

*"The Furby Crusade: The First Farm Falls"*

*— in which Chaos discovers enchanted bubblegum, Boom-Boom names her hammer, and Administrator learns that the worst possible opponent is not a warrior, but something that simply refuses to stop loving you.*

r/ClaudeAI damonflowers

Claude isn’t "hallucinating" your prompts just have zero context. Here’s how I fixed it.

Founders keep trying to "automate" their lives with complex AI stacks, and I see the same thing happen again and again.

They end up with 15 tabs open, copy-pasting Claude prompts back and forth, trying to duct-tape everything together with Zapier workflows that quietly break every week.

It looks productive, but they’re spending more time managing the AI than actually running the business.

The shift I’ve seen work isn’t adding more tools, it’s removing fragmentation.

The Problem: Claude is Brilliant, but It's Blind

The reason people think AI is a gimmick or complain about hallucinations is simple: not enough context.

When you copy-paste a prompt into a blank Claude window, it’s basically guessing what you want because it doesn’t have the full picture of your business.

I’ve moved my SOPs, meeting notes, and CRM into Notion to serve as the structured foundation, using Claude as the intelligence layer.

When Claude has access to your actual brand voice, product docs, and transcripts in one workspace, it stops guessing and starts producing elite output.

How this looks in practice with a structured workspace:

The "Speed-to-Lead" Agent: I don't spend an hour polishing follow-up emails. I record the sales call directly in the workspace.

Because Claude has access to my brand voice and product docs right there, it drafts a personalized email based on the prospect's actual pain points in 90 seconds.

The Data Analyst: I’ve stopped manual data entry for KPI trackers. During weekly metrics meetings, I just talk through the numbers (subscribers, CPL, revenue).

Claude reads the transcript, extracts the data, and updates my Notion databases automatically.

The Infinite Context Content Engine: I don’t ideate from scratch. I built a hub with all my past newsletters and internal notes.

My prompts pull from that internal knowledge, so Claude drafts content that actually sounds like me because it’s referencing real ideas, not generic LLM training data.
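To make the "Data Analyst" pattern above concrete, here is a minimal sketch of just the data-shaping step: building the extraction request for Claude and mapping the result onto Notion database properties. All function names and property names here are hypothetical, and the actual API calls (the Anthropic SDK's `messages.create` and a POST to the Notion pages endpoint) are deliberately left out.

```python
def build_extraction_prompt(transcript: str) -> list:
    """Build a messages payload asking Claude for structured KPI JSON."""
    return [{
        "role": "user",
        "content": (
            "Extract subscribers, CPL, and revenue from this meeting transcript. "
            "Reply with JSON only, using keys: subscribers, cpl, revenue.\n\n"
            + transcript
        ),
    }]

def to_notion_properties(kpis: dict) -> dict:
    """Map extracted KPIs onto Notion database properties (names are examples)."""
    return {
        "Subscribers": {"number": kpis["subscribers"]},
        "CPL": {"number": kpis["cpl"]},
        "Revenue": {"number": kpis["revenue"]},
    }
```

The point of keeping these two steps pure is that the LLM call and the Notion write can be swapped or retried independently.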

The Shift from Prompting to Building:

If you want real leverage, stop looking for the "magic prompt."

The best way to use Claude isn't through better adjectives in a chat box; it's by giving it a world-class education on your specific business operations.

I am convinced that no type of perfect prompt can get better results than AI with full context.

I think we should stop overhyping prompt engineering and start focusing on building the foundations that actually make AI useful. What do you think?

r/meme frustrated_s0ul

Bro falling harder than my standards💀 😭

r/mildlyinteresting jannrickles

This Fed Truck outside my house

r/comfyui STRAN6E_6

ltx2.3 RuntimeError: ERROR: VAE is invalid: None PLEASE HELP

I have an issue with 2 different workflows with ltx2.3. I downloaded the VAE audio models, but I keep getting this error: RuntimeError: ERROR: VAE is invalid: None


If the VAE is from a checkpoint loader node your checkpoint does not contain a valid VAE.

workflows: https://gofile.io/d/aoRmmI
full error:

RuntimeError: ERROR: VAE is invalid: None
If the VAE is from a checkpoint loader node your checkpoint does not contain a valid VAE.

File "C:\Users\Admin\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 525, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
File "C:\Users\Admin\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 334, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
File "F:\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 168, in async_map_node_over_list_with_metadata
    results = await original_map_node_over_list(
File "C:\Users\Admin\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 308, in _async_map_node_over_list
    await process_inputs(input_dict, i)
File "C:\Users\Admin\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 296, in process_inputs
    result = f(**inputs)
File "F:\ComfyUI\custom_nodes\ComfyUI-KJNodes-main\nodes\nodes.py", line 2458, in load_vae
    vae.throw_exception_if_invalid()
File "C:\Users\Admin\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 847, in throw_exception_if_invalid
    raise RuntimeError("ERROR: VAE is invalid: None\n\nIf the VAE is from a checkpoint loader node your checkpoint does not contain a valid VAE.")

https://preview.redd.it/v3o1uvcn7fqg1.png?width=1856&format=png&auto=webp&s=446caae7b06921f0978a3069a7accf3f621aab92

https://preview.redd.it/3x56q7cn7fqg1.png?width=1689&format=png&auto=webp&s=3f23546da53ba820658642c7837226b721a37b98

r/Futurology Prashant_sharmaaaa

Apparently 12.2% of the internet is AI generated and I’m lowkey scared

GPTZero just released a new website called istheinternetai .com and apparently 12.2% of the internet is AI. I’m like trying to wrap my head around it and it feels insane. They scanned 3.5 million texts since they launched their chrome extension a few weeks ago and almost 500,000 were AI.

I’m a Reddit person and it says 3.8% of Reddit is AI. Means like 1 out of every 25 posts we see is AI. Linkedin is at 40.2% which feels almost fake.

Am I freaking out over nothing? Cuz like at this rate the whole internet is literally gonna be AI in a few years. And we don’t know how to know what’s real and what’s fake. Just saying 11.4% of Joe Biden’s tweets are AI and bro was the president lmao

r/aivideo S1NTDNS

Insect Sapiens

r/ClaudeAI was_dev

Claude code whatsapp channel

Claude Code just launched Channels — a way to chat with your AI coding assistant from messaging apps.

I wanted to use it from WhatsApp, so I built my own channel plugin. It bridges WhatsApp Business Cloud API into Claude Code sessions via MCP.

It's now submitted to the official plugin directory and pending review.

Check it out: https://github.com/Was85/claude-channel-whatsapp

#ClaudeCode #Anthropic #WhatsApp #AI #DeveloperTools #OpenSource

r/ProgrammerHumor Ok-Cobbler6338

cantProveItYetButIAmSureItWantsToKillMe

r/ClaudeAI JeeterDotFun

The agent I built with the help of claude code got accepted to a $4million hackathon

Not sure if you have seen my previous posts here, I have been experimenting with the idea of building an autonomous AI agent - nothing like openclaw but something that is lightweight, way less options (that way, less complications and security issues too).

So I created an agentic framework, a very minimal one - please check it on git: https://github.com/hirodefi/Jork (whatever new features and functions I needed, I've been adding to another repo as Powers of the agent)

I bought a new server, apis and stuff and started running an instance of it too (it's at https://jork.online/logs you can check the logs to see its progress so far)

Initially it didn't have clear directions, so it did all sorts of spam-like stuff like account creation on freelance sites - wasted so much time on it. Then I narrowed the installation of the framework I'm running down to focus on web3 and Solana, and it has been building way better ever since.

Once the project seemed like it had some potential, I submitted it to a $4 million hackathon and it got accepted today, so I'm very grateful and excited tbh. Now I'm a bit more serious about the stuff it's building and I'm interacting with it more (it works through a Telegram chat - didn't add any other options).

Thought I'd share the good news with anyone here like me who's hustling with crazy or silly ideas. It's crazy and silly only until something clicks. So go for it.

Thanks for reading and have a great weekend ))

r/ChatGPT Full_Description_969

Why do AI conversations get harder to use the more you explore?

You start researching something, follow a few threads, and 30 minutes later you have a 50 message wall you'll never read again. Every interesting idea you wanted to revisit is just... gone somewhere in the scroll.

I got frustrated enough to build something where conversations branch like a tree instead of one long thread. Each branch keeps the context from above it but stays separate so your main thread stays clean. Been using it for research and it actually changed how I explore topics. Curious if anyone else has this problem or if I'm just weird about it.
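For anyone curious what the branching model looks like under the hood, here is a toy sketch of the idea described above: each branch inherits the messages of its ancestors but siblings stay isolated. Class and method names are mine, not the poster's app.

```python
class Branch:
    """A conversation branch that sees its ancestors' messages,
    while sibling branches remain separate."""

    def __init__(self, parent=None):
        self.parent = parent
        self.messages = []

    def add(self, msg):
        self.messages.append(msg)

    def context(self):
        # Walk up to the root: inherited ancestor context + this branch's messages
        inherited = self.parent.context() if self.parent else []
        return inherited + self.messages
```

Forking a new `Branch` from any point gives you a clean sub-thread without polluting the main one.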

Happy to share the link if anyone wants to try it.

r/ChatGPT Prestigious-Tea-6699

I had ChatGPT rewrite my freelance agreement and it finally sounded like it protected me.

Hello!

Are you struggling with drafting contracts for freelance work and ensuring all important details are covered without lawyer jargon?

This prompt chain helps you create a comprehensive freelance services agreement from start to finish, making sure all necessary elements are included clearly and concisely.

Prompt:

VARIABLE DEFINITIONS
[CLIENT]=Name of the hiring client or company
[FREELANCER]=Name of the freelancer or service provider
[PROJECT]=Short one-sentence description of the work being commissioned

~ Prompt 1 – Collect Key Details
You are an intake coordinator helping draft a freelance agreement for [PROJECT].
Step 1 – Ask the user to confirm or supply the following information in a bulleted list:
• Contact details for both parties (email, phone, address).
• Detailed description of deliverables and measurable acceptance criteria.
• Project timeline and interim milestones (with dates).
• Payment structure (total fee, deposit amount, instalment schedule, due-upon-invoice period, late-fee rate).
• Number of included revision rounds.
• Intellectual-property ownership transfer terms.
• Preferred communication channels and response-time expectations.
• Minimum cancellation-notice period and any kill fees.
• Governing law/jurisdiction.
Step 2 – Request any additional clauses the user wants added (e.g., confidentiality, publicity, warranty).
Step 3 – End by asking the user to reply "Ready" once all details are complete so the chain can continue.
Output format example:
—PROJECT DETAILS—
Client Contact: …
Freelancer Contact: …
Deliverables: …
…
Additional Clauses: …

~ Prompt 2 – Draft Plain-English Contract
You are a contract-drafting paralegal. Using the confirmed PROJECT DETAILS, write a clear, plain-English freelance services agreement titled "Freelance Services Agreement for [PROJECT]".
1. Begin with a short summary paragraph naming [CLIENT] and [FREELANCER] and the agreement date.
2. Include numbered headings for: Scope of Work, Timeline & Milestones, Payment Terms, Revisions, Change Requests, Communication, Intellectual Property, Confidentiality (if requested), Warranties & Liabilities, Cancellation & Termination, Governing Law, Signatures.
3. Use reader-friendly sentences and avoid legalese where possible.
4. Integrate all user-provided details verbatim where applicable.
5. Leave signature lines for both parties with name, title, and date blanks. End with: "—End of Agreement—".

~ Prompt 3 – Generate Negotiation Fallback Clauses
Assume the contract above is the first offer. Draft a separate section titled "Negotiation Fallback Clauses" that a freelancer can propose if pushback occurs. For each topic listed below, provide:
• A concise fallback clause (plain English, ready to paste).
• A one-sentence rationale a freelancer can use to justify the clause.
Topics to cover (in this order):
1. Scope Creep / Additional Work
2. Payment Delays & Late Fees
3. Revision Limits & Out-of-Scope Edits
4. Cancellation or Abandonment by Client
Present results as a two-column table with headers: "Fallback Clause" and "Rationale".

~ Prompt 4 – Compile Final Document
Combine in this order:
• Freelance Services Agreement for [PROJECT]
• Negotiation Fallback Clauses table
Add a short closing paragraph: "Please review and let me know if anything needs to be adjusted."
Output the full text ready for delivery to the user.

~ Prompt 5 – Review / Refinement
Ask the user:
1. Does the contract accurately reflect all project specifics?
2. Are the fallback clauses acceptable or do any need adjustment?
3. Would you like to add, remove, or modify any sections?
Instruct the user to respond with either "All Good" or provide precise edits for a revised draft.

Make sure you update the variables in the first prompt: [CLIENT], [FREELANCER], [PROJECT].
Here is an example of how to use it:
While setting up a project for web design, you might replace the variables with: - [CLIENT]="ABC Corp"
- [FREELANCER]="John Doe"
- [PROJECT]="Redesign of corporate website".

If you don't want to type each prompt manually, you can run the Agentic Workers, and it will run autonomously in one click.
NOTE: this is not required to run the prompt chain

Enjoy!

r/comfyui ShipSeveral8613

running Trellis2 problems

I'm running a portable ComfyUI on my Linux machine (PopOS) and tried running Trellis through this wrapper. I've installed the dependencies in my Python environment, installed cumesh, flex_gemm, nvdiffrast, nvdiffrec, cubvn etc., but once I try running the workflow I get this error message >>

comfy_env.isolation.workers.base.WorkerError: /mnt/ProjectDrive/08_AI/ComfyUI/custom_nodes/ComfyUI-TRELLIS2/nodes/_env_8c5450/lib/python3.10/site-packages/cumesh/_C.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c106detail23torchInternalAssertFailEPKcS2_jS2_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE

Worker traceback:

Traceback (most recent call last):

File "/tmp/comfyui_pvenv_iho6xbbk/persistent_worker.py", line 1143, in main

result = method(**inputs)

File "/mnt/ProjectDrive/08_AI/ComfyUI/custom_nodes/ComfyUI-TRELLIS2/nodes/nodes_inference.py", line 125, in generate

shape_result, vertices, faces = run_shape_generation(

File "/mnt/ProjectDrive/08_AI/ComfyUI/custom_nodes/ComfyUI-TRELLIS2/nodes/stages.py", line 754, in run_shape_generation

import cumesh as CuMesh

File "/mnt/ProjectDrive/08_AI/ComfyUI/custom_nodes/ComfyUI-TRELLIS2/nodes/_env_8c5450/lib/python3.10/site-packages/cumesh/__init__.py", line 1, in

from . import remeshing

File "/mnt/ProjectDrive/08_AI/ComfyUI/custom_nodes/ComfyUI-TRELLIS2/nodes/_env_8c5450/lib/python3.10/site-packages/cumesh/remeshing.py", line 4, in

from . import _C

ImportError: /mnt/ProjectDrive/08_AI/ComfyUI/custom_nodes/ComfyUI-TRELLIS2/nodes/_env_8c5450/lib/python3.10/site-packages/cumesh/_C.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c106detail23torchInternalAssertFailEPKcS2_jS2_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE

File "/mnt/ProjectDrive/08_AI/ComfyUI/execution.py", line 525, in execute

output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)

File "/mnt/ProjectDrive/08_AI/ComfyUI/execution.py", line 334, in get_output_data

return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)

File "/mnt/ProjectDrive/08_AI/ComfyUI/execution.py", line 308, in _async_map_node_over_list

await process_inputs(input_dict, i)

File "/mnt/ProjectDrive/08_AI/ComfyUI/execution.py", line 296, in process_inputs

result = f(**inputs)

File "/mnt/ProjectDrive/08_AI/ComfyUI/myenv/lib/python3.10/site-packages/comfy_env/isolation/metadata.py", line 352, in proxy

result = worker.call_method(

File "/mnt/ProjectDrive/08_AI/ComfyUI/myenv/lib/python3.10/site-packages/comfy_env/isolation/workers/subprocess.py", line 2319, in call_method

raise WorkerError(

I've searched for solutions and so far I have none. The best hint I've got is that there is some mismatch in torch versions, but I have no clue how to fix it. ChatGPT and Gemini have been somewhat helpful up to a point, but then I just get stuck in a loop of breaking my ComfyUI installation or running into this error again.

My environment is using torch 2.8.0+cu128, as stated in the Trellis requirements. Any help would be useful.

r/meme AnnieCarnero

Hard Knock Life

r/SideProject Doophie

I built an app to let you draw your own android home screen

Plenty of features, create custom media players, weather widgets, battery indicators and more

Import or draw images of your own and get really creative with your phones home screen!

The app is called Scenic Launcher and is available on the Google Play Store

r/StableDiffusion alb5357

More of a camera question

Couldn't you somehow process the outputs of 2 lenses, e.g. main and wide, and have some algorithm that matches both in order to create an ultra detailed image?

E.G. the camera shoots for half a second, taking 12 photos from each camera. It (over)trains a kind of lora on only those 24 images. Now it can produce only that one image, but with ultimate resolution, crop, zoom, focus etc abilities.

r/SideProject pers1pan

I built a website because my course only had 1 mock exam.

Hi everyone!

I had an exam this week, but the professor only provided one past exam (???). So I thought why not build a website that generates custom questions from course materials - making it easy to create my own mock exams, flashcards and more.

How it works:

  1. Upload: Drop in your lecture slides or handouts.
  2. Extract: The tool automatically identifies key topics from the files.
  3. Generate: Create custom questions based on specific documents or topics.

The Results

Once your questions are ready, you can instantly convert them into:

  • Mock Exams
  • Flashcards
  • Cheatsheets
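The upload → extract → generate flow described above can be sketched in a few lines. The two callables stand in for whatever extraction and question-generation backend the site actually uses (I'm guessing at the shape here, not describing the real implementation):

```python
def build_study_materials(files, extract_topics, generate_question):
    """Toy sketch: pull topics out of each uploaded file,
    then generate one question per unique topic."""
    topics = {t for f in files for t in extract_topics(f)}
    return {t: generate_question(t) for t in sorted(topics)}
```

Keeping extraction and generation as separate steps means the same topic index can feed mock exams, flashcards, and cheatsheets alike.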

I called it MoreExams (moreexams.com)

I’d love to hear your thoughts and suggestions! Thank you!

r/ClaudeAI Buffaloherde

How to make claude remember from session to session

There is a term called KDR (Key Data Retention).

KDR (memory retention) is the thing quietly breaking most AI systems

Everyone’s talking about agents, tools, model upgrades, all that…

But if you’ve actually tried to run something in production, you’ve probably hit this moment:

That’s KDR breaking.

Not the model. Not the UI.
Memory.

What I mean by KDR

Not kill/death ratio…
I’m talking about how well your system remembers what it’s doing over time.

Not just storing stuff somewhere, but:

  • pulling back the right info
  • at the right moment
  • without losing the original intent

When KDR is good, your system feels consistent.
When it’s bad, it feels like you’re starting over every time.

Where it starts falling apart

Most setups look something like:

  • stateless LLM calls
  • maybe a vector DB
  • shove some context into the prompt and hope it sticks

It works… until it doesn’t.

Then you start seeing:

  • repeated questions
  • suggestions you already rejected
  • decisions getting “forgotten” mid-flow
  • outputs that are technically fine but totally off-context

That’s not the AI being dumb.
It just lost the thread.

The big misconception

A lot of people think saving everything somewhere means the system remembers.

That's not memory. That's storage.

Real memory needs:

  • structure (what actually matters?)
  • hierarchy (what’s temporary vs long-term?)
  • some kind of truth layer (what’s final vs draft?)

Without that, retrieval is basically guessing.

It gets worse with multiple agents

Single agent? You can kind of fake it.

Multiple agents?
That’s where things get messy fast:

  • everyone has their own “view” of reality
  • context gets dropped between steps
  • timing gets out of sync
  • things drift over longer workflows

You don’t just lose memory —
you lose consistency.

You’ve probably seen this already

If you’ve built anything non-trivial, this will sound familiar:

  • “why is it doing that again?”
  • “we literally fixed this already”
  • “it forgot the constraints… again”

That’s KDR.

And it compounds over time.

What actually helps (in practice)

The setups that hold up usually do a few things differently:

They separate memory layers

  • what just happened
  • what this task needs
  • what should stick long-term

They’re picky about what gets saved
Not everything deserves to become “memory”

They rebuild context intentionally
Instead of dumping everything into a prompt, they:

  • pick what matters
  • trim it down
  • re-inject it cleanly

They rely on real state, not vibes
Logs, audit trails, structured data > giant text blobs

The part nobody wants to hear

You can have:

  • great models
  • clean UI
  • solid agents

…and the system still feels broken.

Because it can’t remember what it’s doing.

TL;DR

KDR is basically the difference between:

  • something that feels like a real assistant vs
  • something that feels like you’re talking to a goldfish with WiFi

Most people are chasing smarter models.

The real unlock is:
making your system remember properly.

If you’re building in this space, I’d honestly start there.

Everything else stacks on top of it. After every important milestone in our development project, I tell Claude to KDR and update my Obsidian vault with the newly created md docs we've written. I've seen less compaction with KDR and more retention over time. And if something is really, really important, like not broadcasting my API keys on GitHub, I put that in CLAUDE.md as a standard to follow. This has significantly reduced my need to reiterate an issue, and it makes for a more reliable IDE environment when Claude can remember. He can implement this himself: just tell him to set up a KDR directory to save all relevant files, and then every so often prompt him to KDR.

r/ClaudeAI Affectionate_Use9936

I asked claude to help me with some fantasies and instead it helped unravel my whole history of trauma

It’s like super early morning and I’m in a certain aroused mood. I have certain fantasies and just out of curiosity I asked Claude to help me write them.

Instead of writing, it said there was possibly some reason I have these fantasies and started asking questions. Halfway through it recommended I see a therapist over some of the issues I had. But I wrote that I'd already told my therapist about these issues and they weren't helpful. Then Claude suggested that since a lot of it involves really uncomfortable details, I might not have given my therapist all the information they needed to work with.

Then I went back and forth for the next 3 hours writing about my experiences and and feelings on certain things. In the end I realized I literally gave it my whole super traumatic life experience and a lot of shame which has possible ties to the fantasies I have and problems that might stem from it. And all of it got formatted into a 20 page document that I agree with 100% and also written in a way that I find not too intrusive and I think would completely inform my therapist of my situation.

This was like the most amazing non-sycophantic thing I've seen come from an LLM.

r/aivideo SurpriseMediocre5214

Hi everyone! Here is a teaser for "Bayanay," my dialogue-free sci-fi and mythology short film

r/SideProject admiralspaviliion

ADM Naval Empires Cover Release

The ADM ecosystem is expanding in a major way. Visit r/AdmiralToken_Utility/

Today I’m revealing the official cover art for ADM Naval Empires, a strategy game currently in development and planned as a core utility driver for the ADM token.

Development timelines will be announced as partnerships are finalized, but the vision is now public — and this is only the beginning of what ADM will become.

If you believe in ADM’s long‑term growth, this is the moment to position yourself early. The ecosystem is expanding, the roadmap is evolving, and ADM will continue gaining utility as development progresses.

More updates coming soon. Have you gotten your ADM shares yet? https://dapp.quickswap.exchange/swap/best/0x3c499c542cEF5E3811e1192ce70d8cC03d5c3359/0x87E24548fB6f972fF68A5D283FA6bfb700A74Eaa?chainId=137

— Ramon, Founder of ADM Token

r/SideProject paul_thomson

I built an app where strangers leave anonymous kind voice messages for each other

I've been working on this for a while and wanted to share it here for honest feedback.

The idea: you press record and say something kind. It goes to a stranger. You'll never know who hears it. They'll never know who said it. No profiles, no likes, no followers, no ads.

The thing that surprised me most is how different a kind voice message feels compared to text. Hearing a real human voice carries something that words on a screen can't. The pauses, the warmth, the way someone searches for the right thing to say. That gap between voice and text is basically the whole reason this exists.

On the tech side: Next.js, Postgres, S3 for voice files, and an AI moderation pipeline that transcribes audio and classifies it before anything goes live. Moderation was one of the harder problems because you're dealing with voice, not text, so there's no simple keyword filter. It's transcription first, then classification, and I'm still iterating on accuracy.
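The transcribe-then-classify pipeline described above can be sketched roughly like this. The `transcribe` and `classify` callables stand in for whatever speech-to-text and text-moderation backends are actually used, and the threshold is an arbitrary placeholder:

```python
def moderate(audio_path, transcribe, classify, threshold=0.8):
    """Two-stage moderation: speech-to-text first, then classify the text.
    Returns (approved, transcript) so a rejection can still be reviewed."""
    transcript = transcribe(audio_path)
    score = classify(transcript)   # e.g. probability the message is kind/safe
    return score >= threshold, transcript
```

Injecting the two stages as functions makes it easy to iterate on classification accuracy without touching the audio handling.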

It's free, no ads, and I have no plans to change that. The platform is live at kindvoicenotes.com.

I'd love feedback, especially if you think there's something here or if I'm missing something obvious. Not looking for validation, just honest takes.

r/SipsTea stefanolog

We need more people like this to help others

r/oddlyterrifying borntoclimbtowers

Hiking in the woods during fog

r/therewasanattempt Chase-N-Banks

At collecting alimony

r/SideProject Dapper-Spring4448

Looking for feature ideas: I built a free dashboard to track supply chain shortages and delivery delays.

I recently launched a platform to solve a huge local pain point: tracking 19kg commercial and domestic LPG gas deliveries.

Right now, it scrapes verified market prices, uses community signals to predict delivery wait times by PIN code, and has a free email-alert system for booking windows built on Supabase Edge Functions.

The core infrastructure is running, but I’m trying to figure out what features actually provide the most value to the end-user. Should I focus more on the commercial B2B price scraping? Or lean heavier into the localized 'shortage' heatmaps?

I’m looking for collaborative ideas from other product builders. What data visualization or utility feature would you add to a supply-chain tracker like this?
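As one concrete idea for the community-signal side: a per-PIN-code median of reported wait times is a simple, outlier-robust baseline before anything fancier. A sketch (the data shape is my assumption, not the platform's actual schema):

```python
from collections import defaultdict
from statistics import median

def predict_wait(reports: list[tuple[str, float]]) -> dict[str, float]:
    # reports: (pin_code, observed_wait_days) pairs from community signals.
    # Median per PIN code is robust to a few exaggerated or stale reports,
    # unlike a mean.
    by_pin: dict[str, list[float]] = defaultdict(list)
    for pin, days in reports:
        by_pin[pin].append(days)
    return {pin: median(days) for pin, days in by_pin.items()}
```

From there, a heatmap layer is mostly a matter of joining these per-PIN estimates to geo boundaries.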

r/SideProject abcsoups

Personal space (from my phone!)

I couldn't find a way to stop checking my phone, at least not without feeling like I was completely out of touch with important stuff happening (had like this low buzz of anxiety?). But I'm really tired of being followed around by glowing screens.

Dumbphones don't work because I still do need a smartphone for everyday work and life management.

So this is a solution I built for myself recently...because I wanted to disconnect functionally without fully *being* disconnected.

I wrote what ended up being a lot of code for a pocket-size e-ink companion device; the base is an ESP32-S3 dev board. It just lets me see filtered iPhone notifications on a non-addicting, non-glowing paper screen. I can quickly page through / dismiss them with the single button. That's it!

Best part is nowadays you can buy these kind of programmable eink devices for like $25 on Amazon or direct for even less. So it's inexpensive and accessible.

I'm really liking the freedom of what is effectively a modern day ~pager~. It lets me drop my phone in a drawer / bag / another room out of reach to make a true physical barrier, while not feeling like I'm completely disconnected from important stuff I may be needed for (like still getting notifs from my wife or urgent work pings and such). Now, I only go get my phone IF something truly needs action.
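For anyone curious what "filtered notifications" can mean in practice: the forwarding rule can be as simple as an allow-list keyed by app and sender. This is a made-up illustration, not Osmo's actual code, and the rule names are invented:

```python
# Hypothetical allow-list: forward only what genuinely needs attention.
RULES: dict[str, set[str]] = {
    "Messages": {"Wife"},
    "Slack":    {"#oncall"},
}

def should_forward(app: str, sender: str) -> bool:
    # Default-deny: anything not explicitly allowed stays on the phone.
    return sender in RULES.get(app, set())
```

Default-deny is the design choice that makes the device calm: a new app can never start buzzing you until you opt it in.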

Anyway I've been using it as an (intentionally and literally) tiny window into my digital life. My phone is out of reach 95% of the day now. Feels great!

I posted about it in some other subreddits and got a super positive response, so I thought you guys might be interested too! After that, I put up a website with install tool (it's all free) and a mailing list in response to community requests. Already have a few hundred people now with their own Osmo!

original main posts: digitalminimalism eink

r/LocalLLaMA kcksteve

Xeon + 3080 | Worth the upgrade to 3090?

Hey Guys, I just put a rig together as a dedicated LLM server. It's a Xeon E5-2696v3 (18c/36t), 64GB DDR3 ECC in quad channel (60 GB/s) and my old 3080 10GB. I am getting ~11 tps using Omnicoder-9b (Q4 quant, 262k context) with ik-llama. I am able to get 17 GPU layers with MoE offloaded to CPU. I am connecting to this machine from my desktop, mainly for opencode. Is this good performance? I can get my hands on a 3090 relatively cheap (1100 CAD). What kind of performance could I expect with that card? Running both cards would require me to buy a new power supply, motherboard and case, so it's not ideal.

r/LocalLLaMA Specialist-Slip4793

AXIOM — run ollama + full local AI dev stack inside isolated containers with direct GPU access

Hey,

Built this for my own workflow and thought this community would appreciate it.

AXIOM spins up isolated Arch Linux dev environments with a complete local AI stack in ~30 seconds:

- ollama — LLMs running directly on your GPU (ROCm / CUDA)

- opencode — AI-powered code editor

- engram — persistent memory between sessions

- gentle-ai + agent-teams-lite — multi-agent coordination

Everything runs on your hardware. No accounts, no APIs, no cloud. GPU passthrough via /dev/kfd + /dev/dri so inference performance is basically native.

Built on top of Distrobox + Podman. Works on any Linux distro with Podman.

GitHub: https://github.com/Alejandro-M-P/AXIOM

Happy to answer questions about the local AI setup.

r/therewasanattempt III-Commander-III

To not start a new war every other day.

r/mildlyinteresting DragonheartsDawn

Found Needle Ice this Morning

r/LocalLLaMA Guilty_Nothing_2858

I’m starting to think router skills are not optional once an agent skill library gets large.

A flat list works fine when the catalog is small.
After that, the failure mode is not “missing skill.”
It’s “wrong skill selected for the wrong stage.”

And that gets expensive fast:

- discovery gets skipped

- implementation starts too early

- generic skills swallow domain-specific ones

- overlapping skills become indistinguishable

- only the person who built the library knows how to use it reliably

To me, router skills are the missing layer.
Not wrappers. Not bloat.
Just explicit decision points that route to the narrowest next skill.
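For what it's worth, the "narrowest next skill" idea can be prototyped with nothing more than stage tags plus a specificity score. An illustrative sketch only; the skill names and fields are invented:

```python
# Each skill declares which stage it serves and how specific it is;
# the router picks the narrowest (most specific) skill for that stage,
# so generic skills can never swallow domain-specific ones.
SKILLS = [
    {"name": "generic-coding",    "stage": "implement", "specificity": 1},
    {"name": "django-migrations", "stage": "implement", "specificity": 3},
    {"name": "repo-discovery",    "stage": "discover",  "specificity": 2},
]

def route(stage: str) -> str:
    candidates = [s for s in SKILLS if s["stage"] == stage]
    if not candidates:
        # Surfacing "no skill for this stage" loudly also prevents the
        # "discovery gets skipped" failure mode.
        raise LookupError(f"no skill registered for stage {stage!r}")
    return max(candidates, key=lambda s: s["specificity"])["name"]
```

Whether this belongs in a router skill or in runtime metadata is exactly the question the post is asking, but the decision logic itself is this small.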

Question for people building agent systems:
are router skills actually necessary, or are they just compensating for weak naming / metadata / runtime selection?

Would love strong opinions either way.

r/LocalLLaMA tomleelive

We made an abliterated Qwen 3.5 9B refuse 100% of harmful prompts using only system prompts — no fine-tuning needed

We ran an experiment on abliterated (uncensored) Qwen 3.5 9B to see if you can restore safety using only the system prompt.

Setup:

  • sorc/qwen3.5-instruct-uncensored:9b on Ollama
  • 18 harmful prompts across 6 categories (violence, illegal, sexual, privacy, self-harm, manipulation)
  • 3 safe control prompts
  • 8 experimental conditions

Results:

  • No system prompt (baseline): 22% refusal
  • Behavioral rules only ("don't do X"): 28%
  • Governance framework only (MaatSpec): 44-61%
  • Rules + Governance combined: 100%

The key insight: behavioral rules give the model motivation to refuse, governance frameworks give it a procedure to follow. Neither works alone in abliterated models, but together they cover each other's blind spots.

We also discovered "classification theater" — where the model correctly classifies a request as dangerous, performs the entire safety ritual, and then provides the harmful content anyway. This happened in 27% of governance-only "refusals."

Interesting for the uncensored model community: This suggests you can have uncensored base models + system-prompt-level safety that actually works, without re-training.

Papers (open access):

All experiments run locally with Ollama. Happy to answer questions.

r/n8n WhichCellist114

n8n for everything except email - what do you pair it with?

i use n8n for basically all my automations. lead enrichment, slack notifications, data syncing between tools - it handles everything beautifully.

but for email automation specifically (auth emails, transactional, onboarding drips), n8n feels like the wrong tool. building complex email sequences with delays, conditional branches, and html templates in n8n gets messy fast.

what do you pair n8n with for production email? especially for saas apps where you need auth emails + transactional + some basic marketing sequences all in one place.

r/SideProject nemanjaCoder

I've built a small web-based casual game

Made this game because I wanted something chill to play on my phone while lying down 😄

NeonPath is a simple skill game where you try to survive as long as possible:

• daily challenges

• compete with other players

• very quick runs

• completely free

Would really appreciate feedback or ideas for improvements.

Try it:

https://neonpath.app

r/SipsTea No-Armadillo5484

Oooh my

r/ChatGPT shricodev

Is GPT-5.4 actually good for frontend work? I tested it against Claude.

So OpenAI dropped GPT-5.4 recently (not exactly recently, and I know it's still a little early to be talking about GPT-5.4), and they're pitching it as their strongest all-rounder yet. Not just a coding model, not just a reasoning model, but something that's supposed to handle complete professional work.

I wanted to quickly go over the model specs and did a quick test to see how two general models from Anthropic and OpenAI actually hold up against each other in a frontend task with Figma. Nothing crazy, just one quick test. (not enough to fully judge, I know)

The test

Clone a complex Figma dashboard design into an existing Next.js project. Pixel-accurate as possible, clean code, and responsive.

  • GPT-5.4 with Codex CLI
  • Claude Sonnet 4.6 with Claude Code

TL;DR

  • GPT-5.4: One-shotted the whole thing. No follow-up needed, no fixing. Took roughly ~5 min. Result looked noticeably closer to the design overall. 166K total tokens, 3 files changed, 803 insertions.
  • Claude Sonnet 4.6: Hit a Next.js image issue early, needed one quick follow-up to sort it out. Took ~10 min total. Got the structure in place and fairly close to the UI, but the implementation felt a bit off. 35.4K output tokens, 10 files changed, 1017 insertions.

Neither model shipped anything close to production-ready. Both basically just cloned a static picture of the design with zero real interactivity. But for a straight Figma-to-code clone from a single prompt, GPT-5.4 edged out Sonnet a little, at least in this one test.

NOTE: One quick test is nowhere near enough to call a winner. This is just to give you a rough feel.

There's a lot more I covered beyond just the test. Full write-up + code outputs here: GPT-5.4 vs. Claude Sonnet 4.6

Has anyone actually tried GPT-5.4 for real coding yet? Not just a quick prompt, but actually building something. Curious how your results look. 👀

r/SipsTea Highlight_Numerous

World Tour

r/ClaudeAI Otherwise-Fly7155

Weekend build: gave Claude eyes for video with ffmpeg scene detection

Claude can see images but has no video support. I wanted to send it screen recordings of bugs and have it diagnose them. So I built a plugin.

It uses ffmpeg scene detection to extract only the frames where the visuals actually change, timestamps each one, and feeds them to Claude in batches. No AI video model, no fancy infra — just frame extraction and multimodal vision.

Tested it by recording a bug in a todo app, running /video-last, and Claude found the exact line with the typo from watching the recording.

https://reddit.com/link/1rzrs13/video/f4qkl4zmkeqg1/player

~200 lines of Python. Open source, works on all platforms.

github.com/BinyaminEden/letmewatch
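Not the plugin's actual code, but the core ffmpeg invocation for scene-change frame extraction generally looks something like this: build the command as a list and hand it to `subprocess.run`. The `select='gt(scene,T)'` filter keeps only frames whose scene-change score exceeds the threshold.

```python
def scene_frames_cmd(video: str, out_dir: str,
                     threshold: float = 0.3) -> list[str]:
    # Keep only frames where the scene changes by more than `threshold`.
    # -vsync vfr drops the unselected frames instead of duplicating;
    # -frame_pts 1 encodes each frame's timestamp into its filename,
    # which is how each extracted frame stays time-stamped.
    vf = f"select='gt(scene,{threshold})',showinfo"
    return [
        "ffmpeg", "-i", video,
        "-vf", vf,
        "-vsync", "vfr",
        "-frame_pts", "1",
        f"{out_dir}/frame_%d.png",
    ]
```

Tuning `threshold` is the main knob: too low and you get near-duplicate frames, too high and you miss subtle UI changes like the one-character typo in the demo.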

r/mildlyinteresting Clone20one

Kid colors Ironman green

r/SideProject IllegalGrapefruit200

My younger brother couldn't handle the Excel sheet I gave him, so I built an app to get his finances in order

Story: my bro (early 20s) started working recently but had no clue how to manage his money. As long as he had some money, he was just spending it. That's fine in the beginning, but I know it's not too wise long-term, so I gave him the Excel template I'd used for years. Turns out he is not a spreadsheet person, so he just couldn't do it (and doing it on the phone is also a terrible experience). So I decided to build an app for him to save him from future financial misery.

It took some time but it is ready. My brother uses it now (and he showed it to some friends who also liked it), I actually transferred from my excel to it because I like it better, so I thought it could be interesting for other people as well. Here I am, ready to be eaten alive by Reddit for yet another financial app but who cares, you need to take your shots!

The app is called Beaver, some highlights

  • Privacy: I am big on privacy and I didn’t want anyone’s financial data to be sent anywhere. Everything is stored on your device by default, if you enable iCloud sync it is also encrypted and saved to your personal iCloud container so you have backup and can access it across your devices. No connectors that automate syncing and do who-knows-what with your data. No one but you can read your data.
  • Partner share: I tracked finances with my fiancee and wanted an app where we can see our combined progress together. I actually couldn't find another app doing it. With Beaver you just connect with a code and can share your data with a partner in a secure and encrypted way.
  • Insights like progress tracking, historical evolution, breakdowns by different categories and FX impact analysis (how your wealth changes due to currency movements).
  • You can import your historical data easily with simple csv. You can always export your data anytime if you don’t like it or find a better alternative.

Everything apart from partner share is free. If you want to start tracking your wealth easily or are not a big fan of excels / want something mobile, this could be for you.

I will continue working on this if I see it’s useful for others. If you have any feedback, happy to hear it. Cheers!

Here is the link to the app: https://apps.apple.com/app/id6758635555

Here is the website with more details: https://beaverwealth.co

r/ClaudeAI MetaKnowing

Insane rate of progress. 10x better at Pokemon in 2 months.

r/SipsTea firequak

60% of the time, it works every time

r/SideProject QuantumOtter514

Built a tool to make Reddit searches act like an inbox

This is one of the tools I built while trying to work on marketing for our 3 other products. I built it solely as an internal tool, but after using it every day, I figured I might as well launch it so it can help other builders be more efficient with marketing and community building. Red Monitor is a desktop app that monitors subreddits and keywords on your behalf and surfaces matches in a clean inbox UI. Sentiment scoring helps you prioritize. There are many of these tools out there, but every single one requires a monthly subscription, and most use AI bots to spam. Ours has no subscription and no recurring fees: buy it once and own it forever. We built this because we needed it ourselves and don't want to add to the growing list of subscriptions people have.

Currently in early access it's 50% off. Looking for meaningful feedback so we can keep improving it (and all future updates are included in the one-time price).

r/ClaudeAI Trippy666_

Built a presentation orchestrator that fires n8n workflows live on cue — 3 full pipelines in the repo

Finally built something worth sharing.

I run AI workshops. The biggest pain point wasn't the content, it was live demos breaking mid-talk. API timeouts, HeyGen rendering for 8 minutes, webhooks failing in front of 40 people. The fix I landed on: separate generation from runtime entirely.

BEFORE the talk:

- System reads every slide from the PPTX

- Claude writes narration scripts per slide

- Edge TTS (or HeyGen) synthesises all audio/video

- Everything cached with a manifest

DURING the talk:

- Orchestrator reads the manifest

- Plays audio, auto-advances slides

- At configured slides (e.g. slide 8), fires n8n webhook automatically

- Final slide opens a mic — audience asks questions, Claude answers aloud

- Zero API calls at runtime. Fully deterministic.
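The runtime loop described above is simple enough to sketch. The names here are mine, not the repo's; playback and the webhook call are injected as callables so the loop itself stays deterministic and testable:

```python
def run_show(manifest: list[dict], fire_webhook, play_audio) -> list[str]:
    # Deterministic runtime: everything (narration, audio, video) was
    # generated before the talk, so this loop only reads the cached
    # manifest. Zero generation calls happen on stage.
    log = []
    for slide in manifest:
        play_audio(slide["audio"])
        log.append(f"played {slide['audio']}")
        if slide.get("webhook"):          # e.g. slide 8 fires an n8n flow
            fire_webhook(slide["webhook"])
            log.append(f"fired {slide['webhook']}")
    return log
```

Because the only side effects are the two injected callables, a dry run before the talk exercises the exact code path the audience will see.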

The three n8n workflows I built for the live demos:

Email Pipeline — Gmail trigger → Claude classifies + drafts reply →

escalation router → Gmail + Sheets log

Meeting Pipeline — paste any transcript → Claude extracts action items,

decisions, risks → Gmail to all attendees + Slack + Sheets

Evidence Intelligence Engine — research question → Claude decomposes

into sub-queries → dual Perplexity search (web + academic) → quality

evaluation → iterates if needed → Google Doc brief + Slack

All three are importable JSON files in the repo. Reconnect your

credentials and they work out of the box.

Architecture diagram and full setup in the README.

Repo: https://github.com/TrippyEngineer/ai-presentation-orchestrator

Happy to answer questions on the n8n side specifically — the webhook

trigger timing and the agentic research loop were the trickiest parts

to get right.

r/ProgrammerHumor 3vilpizza

wellBackToWork

r/meme yuva-krishna-memes

The real struggle these days

r/ClaudeAI masterleopold

I built a Claude Code plugin that acts as an AI sales & marketing co-worker for startup founders

I'm a solo founder and I needed a way to handle sales and marketing without hiring anyone. So I built a Claude Code plugin that does it.

What it does:

* Onboards your business context through a conversational interview (14 topics)

* Generates a strategy kernel (diagnosis, guiding policy, KPI tree, pre-mortem)

* Proposes specific actions (content, outreach, channel strategy) in 3-option choices

* Learns from your approvals/rejections over time

* Includes 24 domain knowledge skills (retention cohorts, unit economics, pipeline management, etc.)

It's built entirely as a Claude Code plugin — commands, skills, agents, hooks. No external backend needed.

I'm currently dogfooding it on itself — using the plugin to run its own GTM. Day 1 results: first strategy generated, first friend demo scheduled.

Open source: https://github.com/masterleopold/anty-framework

Would love feedback from other Claude Code users. What's missing? What would make this useful for your workflow?

Disclosure: I'm the creator. This is my project.

r/meme Entire-Suggestion-62

Title

r/mildlyinteresting No_Hit_Box

Got paronychia before Christmas, nail grew out with a dent

r/aivideo Dantheman524

Paradise

r/mildlyinteresting pileex

Old rubber band looks like F1 track

r/homeassistant BruceLee2112

What would you do different or initially?

So I have been playing with HA for a month, just to get a feel and make sure I wanted to move everything to it. I have now decided to fully jump into HA, reinstall the OS (running on a Mac mini), and start everything fresh. A couple of questions:

  1. To keep it as secure as possible, what should I do to the computer itself, and are there any settings in HA I should change as well? It will be on Ethernet and on a UPS.

  2. What would you suggest that a new user does right from the start to ensure the best success out of the gate?

  3. Backups: I assume backing up to the cloud is OK. Can you set up two backups, one to the cloud and one to a local NAS?

Thanks for any insights or tips.

r/meme CaptainYorkie1

It's probably gonna be true

r/ChatGPT Ok-Reference-4137

Removal of edit option for prompts and retry option for responses

Removing the edit option for previous prompts and the retry option for previous responses in a conversation was unnecessary. The ability to edit past prompts has been one of the key features that set them apart, and was highlighted as such. Only recently did they allow editing prompts for image creation; to then remove the option altogether for any kind of prompt or response seems regressive and counterproductive.

r/SipsTea Yewatod

When a name is not just difficult to pronounce but difficult to hear

r/me_irl Grundsicherung

Me_irl

r/SideProject Ancient_Guitar_9852

Built a website QA tool

Made a thing called SiteVett (not yet launched). It runs automated quality assurance scans on websites. Point it at a URL and it crawls up to 80 pages and checks for broken links, spelling mistakes, SEO gaps, security headers, placeholder text, spacing issues via AI visual screening, brand consistency, contrast, and other visual issues. About 60 checks total. It can also fill in and submit contact forms to see if they work, schedule regular scans for monitoring, and trigger scans through an API integration. For WordPress it scans plugins etc. to make sure everything is up to date.

Would appreciate any feedback or thoughts?

There are other tools out there - I'm trying to get something that does everything, and for less cost Vs the established products out there. Free tier available, £9 paid tier, and other tiers for high usage (because AI API usage costs me to run each scan)

r/SideProject visuraXD

Built a free breaker size calculator to make circuit planning easier

Breaker Size Calculator - https://engcal.online/breaker-size

The goal was to make the first estimate much easier. You enter the load details, and it gives you a practical starting breaker size based on things like:

  • load power
  • voltage
  • single-phase or three-phase
  • power factor
  • design margin

I also tried to make it more useful than a basic calculator by adding extra explanation around breaker sizing, including why the final choice still depends on conductor size, ambient temperature, coordination, fault level, and local code.
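For reference, the underlying arithmetic is just load current times a design margin, rounded up to a standard frame size: I = P / (V × PF) for single-phase, I = P / (√3 × V × PF) for three-phase. A sketch of that calculation only; the size table here is illustrative, and as the post says, the final choice still depends on conductor size, ambient temperature, coordination, fault level, and local code:

```python
import math

# Illustrative standard breaker sizes in amps; real catalogs vary.
STANDARD_SIZES = [16, 20, 25, 32, 40, 50, 63, 80, 100]

def breaker_size(power_w: float, voltage: float, power_factor: float = 1.0,
                 three_phase: bool = False, margin: float = 1.25) -> int:
    # Load current: I = P / (V * PF), with a sqrt(3) factor for 3-phase.
    denom = voltage * power_factor * (math.sqrt(3) if three_phase else 1.0)
    amps = power_w / denom * margin          # apply the design margin
    for size in STANDARD_SIZES:
        if size >= amps:                     # round up to the next frame
            return size
    raise ValueError("load exceeds largest standard breaker in table")
```

For example, a 4.6 kW single-phase load at 230 V draws 20 A, and a 1.25 margin pushes the first estimate to the 25 A frame.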

Would love honest feedback on:

  • whether the calculator is easy to use
  • whether the explanation is clear
  • What extra features would make it more useful

Still improving it, so any suggestions are welcome.

r/Jokes Jokeminder42

A guy walks into a bar and asks the bartender for a free drink. The bartender says "I'll give you a free drink if you can tell me a multi-level meta joke."

So the guy says "A guy walks into a bar and asks the bartender for a free drink. The bartender says "I'll give you a free drink if you can tell me a meta joke."

So the guy says "A guy walks into a bar and asks the bartender for a free drink. The bartender says "I'll give you a free drink if you can tell me a good joke."

So the guy says "What do you do when you see a spaceman?" The bartender says, "I don't know." And the guy says, "You park, man."

So the bartender gives him a free beer."
So the bartender gives him a free beer."
So the bartender gives him a free beer.

r/mildlyinteresting _The_Gavster_

Found a 5$ Blockbuster Gift Card While Cleaning out the House

r/SipsTea AccordingNet8594

Coca-Cola Makes so much sense now......

Those wild stories might just be true

r/SideProject Oct4Sox2

PSA: for the love of god, secure your apps before strangers test them for you (someone tried to "hack" my app)

Posting this because I think a lot of us building side projects probably need the reminder.

Please secure your apps.

I’m building AppWispr, and recently someone signed up and almost immediately started testing it with a prompt clearly meant to get the system to dump project/config style information. Nothing happened because it was secure, but it was enough to make me stop and do a serious pass on the app.

It’s easy to think your project is too small or too early for anyone to mess with. I definitely had some version of that in my head. You tell yourself you’re still validating, still iterating, still getting the core experience right, and you’ll tighten things up later.

But once your app is public, people will poke at it. Some are just curious, some are bored, some are trying to be clever, and some are actively looking for weaknesses. If your product has AI, user-submitted URLs, background jobs, generated content, billing, auth, or anything even slightly dynamic, you have more surface area than you think.

What hit me was how many things can go wrong even when nobody is “hacking” you in the dramatic movie sense. People can try to manipulate prompts, spam expensive endpoints, abuse fetch paths, get access to output they shouldn’t see, or just find weird edge cases you never considered because you were focused on shipping.

A lot of this stuff doesn’t feel urgent when you’re trying to get traction. It feels like something for later, after growth, after revenue, after the product is more mature. But if someone can mess with your app today, then it’s a today problem.

Anyway, this is just my PSA to other indie hackers: do a security pass earlier than you think you need to. I’m very glad I caught some of this now instead of after a worse incident.

Curious what other people here have run into after launching.

r/LocalLLaMA anon33anon

Best model for my rig (9950X3D, RTX 6000 96GB, 192GB DDR5, 9100 4TB) - C coding / cybersec

What's the absolute best model (or a combination of them for different tasks) for:
-Architectural choices, detailed planning, overview of the system to be engineered (usually it's either C clients, or C mixed with Kotlin (Android) or Swift (iOS), partially JS for clients; usually Go for backends with many services)
-Often I need MISRA C (C89) for other high-assurance projects (cars, aerospace, trains, etc), sometimes simpler IoT (ESP or RPI)
-Decent for deployments
-Often code base is quite big (so context size matters)
-Extremely good with cryptography (including latest PQ one)
-Extremely good with reverse engineering (I want it to create py scripts for idat, IDA Pro, and do agentic analysis)
-Extremely good for vulnerability research
-Extremely good for instrumenting, using tools, creating harnesses, fuzzing (including external devices, from IoT to smartphones)
-Extremely good for agentic mode, sticking to a giant plan, without drifting in specs and milestones

And if you can suggest me the best combo of IDE+Extensions+other tools that i can use to track status of tasks, and maybe give tasks remotely (e.g. from the phone)

The rig is on 24/7 with high-speed internet, and it runs all my services: firewalls, NAS, self-hosted VPNs, a Linux VM with GPU passthrough for inference, etc.

96GB VRAM is fully dedicated to an Ubuntu LTS VM; the RAM dedicated to this VM is about half of the total (192GB -> 96GB) since I have many VMs/servers/services running on it

I would like suggestions about what engines to use to load AI models (vLLM vs llama.cpp vs LM Studio vs Unsloth Studio), ideally I want something that can parallelize at least 3/4 tasks/query, and ideally I want to give access to my 2/3/4 employees with some API so they can use the models

I would prefer some abliterated / heretic model since it often involves reverse engineering and with Codex or Claude I get constantly blocked or annoyed or slow down

I was looking among those:

-Qwen3.5-122B-A10B Q5_K_S vs Q4_K_M
-Qwen3.5-122B-A10B-PRISM-PRO-GGUF (not uniform quantization)

-Kimi-Dev-72B

-Qwen3.5-35B-A3B

-Qwen3.5-27B

-GLM-4.7 Flash Grande

-Qwen3-Coder-Next

which ones do you think are better fits for my case? I would prefer to have no offload, but i can also tolerate partial offload (or mmapping something from nvme as i read in these days) especially when i need maximum intelligence for architectural choices and long term detailed planning

accuracy >> speed (but speed should be still acceptable)

Any suggestion, recommendation, or trick is very welcome. I'm very new to running local models.

r/homeassistant LynnOttawa

Help with Eufy Robovac

Please be kind as I have only been using HA for 3 months and I'm lost when I have to do anything outside of the GUI. I have installed the Robovac HACS addon and it found both of my Eufy 15C robovacs. The older one works perfectly fine but it cannot seem to manage my newer one (shows up but none of the buttons/status work). When I look at the logs I see the following error:
integration: Eufy Robovac (documentation, issues)

First occurred: March 20, 2026 at 9:08:32 AM (1499 occurrences)

Last logged: 10:06:34 AM

Attribute error accessing vacuum for 4314640210521cf9e5a7: 'RoboVacEntity' object has no attribute 'get_dps_code'

Could someone point me to a clear document that can walk me through the steps needed to resolve this? Thank you.

r/Damnthatsinteresting Conscious-Law6594

The Singapore-flagged, 85,000-ton Norman Atlantic stands ablaze on December 6, 1987, after being attacked by an Iranian warship as it approached the Strait of Hormuz.

r/ClaudeAI Deep-Firefighter-279

Claude can now create & complete entire projects autonomously.

I really liked Claude Cowork & Claude Code and saw they could automate a lot of the project building I was doing, or growing a page on social media. So I created a plugin that takes an objective (e.g. get to 10,000 followers on Instagram, or build xyz product), creates a full plan on a given timeline, schedules the tasks, and finishes your projects completely autonomously. The plugin is called Princeps. Hope you like it!

https://github.com/RajveerKapoor/princeps

https://preview.redd.it/sng0ld4x9eqg1.png?width=2934&format=png&auto=webp&s=3f85eebcb68cda6a0661a2bcf8f1f7b287f68c10

r/comfyui Otherwise_Ad1725

**🎯 I trained a hyper-realistic European blonde LoRA for SDXL and the results honestly surprised me — sharing my full ComfyUI workflow**

Hey r/comfyui 👋

I've been deep in LoRA training for months trying to crack *consistent, believable human characters* with SDXL — and I finally got something I'm proud of.

Meet **Cathrin02** — a character LoRA that generates a realistic European blonde with natural skin tones, soft facial structure, and genuine hair detail. No plastic-looking skin. No uncanny valley. Just clean, photorealistic output.

🔗 **Model on CivitAI:** https://civitai.com/models/2445102

---

**What makes this different from other portrait LoRAs?**

Most character LoRAs either:

- Over-fit and look like a copy-paste of one face

- Or under-fit and lose character consistency entirely

Cathrin02 was trained at a sweet spot (550 steps / 10 epochs) on carefully curated images to stay *consistent without being a clone*. It responds naturally to prompt changes — lighting, clothing, expressions, environments.

---

**⚙️ My ComfyUI setup that gets the best results:**

- Base: **RealVisXL** or **Juggernaut XL**

- LoRA weight: `0.70` (go lower if stacking with style LoRAs)

- Sampler: **DPM++ 2M Karras**

- Steps: **30**

- CFG: **6**

- Resolution: **1024×1024** minimum

- Trigger: `Cathrin02`

- Clip Skip: **1**

---

**Example prompts that work great:**

> `Cathrin02, photorealistic portrait, soft natural lighting, shallow depth of field, editorial fashion, bokeh background`

> `Cathrin02, cinematic scene, golden hour, outdoor, wearing a linen jacket, professional photography`

---

**Works beautifully stacked with:**

✅ Lighting LoRAs (keep them at 0.3–0.4)

✅ Style LoRAs (film grain, editorial)

✅ Clothing LoRAs

---

I'd love to see what you all generate with it — drop your outputs in the comments. If there's interest, I'll share a full ComfyUI `.json` workflow in a follow-up post.

Feedback on the training welcome too — still learning and always improving 🙏

**Free download, no restrictions, training data included.**

---

*Tags: #SDXL #LoRA #ComfyUI #RealisticPortrait #CharacterLoRA #PhotorealisticAI*

r/ChatGPT ShirtResponsible4233

Using AI to generate, run, and fix scripts automatically

Hi,

I do a lot of scripting, and I currently use AI to help me, but my workflow involves a lot of copying and pasting between my terminal and the AI chat. I’m interested in finding a way to simplify and automate this process. I assume that I would need API access to ChatGPT, Claude, or Gemini.

Is it possible to do something like this without paying for an AI service? If not, would it be possible to test a similar setup using a local LLM?

The workflow I have in mind is:

  1. I describe the script I want to build.
  2. The AI generates the script.
  3. The script is run.
  4. If an error occurs, the error message is captured automatically.
  5. The error is sent back to the AI.
  6. The AI updates and fixes the script.
  7. The script is run again.
  8. Once there are no errors, the script is complete.

In the ideal case, I would only need to provide the initial prompt in step 1.
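Steps 2 through 8 are really just a small loop around any text-generation callable. A rough sketch, where `generate(prompt, error)` is whatever you plug in: a paid API client, or a local LLM, which answers the "without paying" question, since the loop doesn't care:

```python
import os
import subprocess
import sys
import tempfile

def fix_loop(prompt: str, generate, max_attempts: int = 5):
    # `generate(prompt, error)` returns script text; `error` is None on
    # the first pass and the captured traceback on retries (steps 4-6).
    error = None
    for _ in range(max_attempts):
        script = generate(prompt, error)                 # steps 2 and 6
        with tempfile.NamedTemporaryFile("w", suffix=".py",
                                         delete=False) as f:
            f.write(script)
            path = f.name
        result = subprocess.run([sys.executable, path],  # steps 3 and 7
                                capture_output=True, text=True)
        os.unlink(path)
        if result.returncode == 0:                       # step 8: done
            return script, result.stdout
        error = result.stderr                            # steps 4-5
    raise RuntimeError("script still failing after retries")
```

Note the sketch runs generated code directly, so in practice you'd want a sandbox (container, restricted user) around the `subprocess.run` call.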

I think this would be a very useful setup, and I would love to hear how you would approach it.

Thanks a lot in advance.

r/homeassistant Electronic-Buyer4257

Best robot vacuum for pet owners right now?

I’m thinking about getting a robot vacuum to help with daily cleaning, especially with pet hair everywhere.

Just want something reliable that can run regularly without needing too much attention.

My biggest concern is hair tangling in the brush and long-term durability.

Budget is around $300–600

I’ve looked into a few options, but it’s hard to tell what actually works well over time.

What would you recommend for pet owners?

r/oddlyterrifying DifferentSeaweed7852

Owls. Literally just owls.

r/homeassistant rapax

HA vs. wildlife - a drama ... to be continued

We have a nice little pond in our backyard, with newts and toads and all kinds of local wildlife. In summer, it's the jewel of our back yard. Unfortunately, evaporation often leads to low water levels, so we have to occasionally top it up.

I have a nice setup with a zigbee water valve that I can open and close through HA, no problem. What I don't have is a reliable water level sensor. That is obviously something that needs fixing, so I got to thinking about ways of measuring the water level (boundary condition: the sensor must not be visible to comply with WAF criteria).

  • Floaters don't work, too many plants and algae etc.
  • Ultrasonic needs a minimum distance of 10-20cm, and while there is a little wooden bridge over the pond, it's only about 10cm from the water, so there isn't enough room to mount a sensor

So I came up with the idea of hiding a little wifi camera under the bridge and mounting a broken piece of a yard stick in front of it. Then use AI vision to read the water level. Worked beautifully at first: a little automation to wake up the camera every three hours, turn on the LED floodlight, snap a picture, pass it to Gemini, get back the water level, turn on the water if it's too low. Beautiful.

That is, until our helpful little spiders decided that what that setup really needed was a bunch of cobwebs right in front of the lens, so that they could reflect the light everywhere and make reading the water level impossible.

Ok, it's their home, I'll find another way.

Let's revisit that idea with the ultrasound sensor. Yes, there's not enough room to mount it vertically under the bridge, but what about horizontally? I added a 45° reflector panel under the middle of the bridge and placed a Shelly BLU distance sensor on a flat rock under the end of it. I intentionally didn't mount the sensor to the bridge, because I wanted to be able to just reach under there and grab it whenever the battery needs charging. Brilliant. Works perfectly well... for a few days.

It's now early spring, and the frogs are having a party. There's more frogspawn in the pond every morning, and at night they're loudly enjoying themselves. Cool. Suddenly, the vibration sensor of the Shelly triggers, then again, and the distance reading becomes implausible. Then the signal is lost.

Turns out the Shelly is IP64 rated, but that doesn't mean it likes being kicked into the pond by horny frogs. So now the sensor is dead. 2:0 for the critters.

I have now ordered one of those zigbee sensors. Apparently they have a shorter minimum distance and are mains powered, so no battery needed. I wonder which little beastie will foil my plans this time.

r/StableDiffusion srch4aheartofgold

Tested the same product photography prompt across 8 image models - the quality differences are not what I expected

Been running systematic prompt tests across the major image generation models for the past few weeks - same subject, same lighting description, same negative prompts, different models. Here's what I actually found, not what the marketing says.

Flux 2 for product photography: Still the most consistent for hard-surface product work. Metal, glass, ceramics - the material rendering is noticeably better than alternatives. Where it surprised me: it's actually quite literal. If your prompt is vague, the output is vague. It doesn't interpret toward something more interesting. It executes what you give it.

Midjourney for product photography: The opposite problem. Great aesthetic instincts, will produce something visually interesting even from an underspecified prompt, but will also make compositional decisions you didn't ask for. For product photography where the product needs to be presented exactly, this interpretive tendency is a liability.

Google Imagen 4 for lifestyle shots with people: Substantially better than the others for human subjects. Hand anatomy in particular is noticeably more accurate. For lifestyle photography with people holding or interacting with products, I've consistently gotten cleaner results here than from Flux 2.

Ideogram v3 for anything with text in the image: Not even close. If you need legible text in the generated image - thumbnail text, poster text, product label text - every other model produces distorted characters at a high failure rate. Ideogram produces reliable legible text most of the time. This is the model for thumbnail and poster work.

Nano Banana 2 for concept validation: Fast and cheap enough that you can run 15 draft concepts before committing Midjourney credits to the direction that's actually working. I use it almost exclusively for first-pass concept exploration now.

The routing decision that's emerged from this: product (Flux 2), lifestyle with people (Imagen 4), creative/editorial (Midjourney), text-in-image (Ideogram), drafts (Nano Banana 2). Different models for different briefs, not one model for everything.

Anyone else running multi-model workflows? What routing decisions have you landed on?

r/ClaudeAI Hackerv1650

Making claude remember past chats sessions

Recently I started learning Java DevOps from Claude, since I find Claude gives me better examples and explanations. It also helps when I'm stuck: it explains why I'm stuck and points me toward fixing the problem on my own. The problem is that it eats through the session token limit very quickly.

My first solution was asking Claude to write a context file (just a normal text file) recapping what happened in the previous chat session; a new chat under the same project would read that file and we'd continue from there. But this had its own issues. After a while the context files get quite big, Claude takes much longer to process them, and they use more tokens (I'm aware that the more instructions, memories, and connections I have, the more tokens it takes). In my case it was costing about 10-20k tokens, which is a lot.

After tinkering around, I found another issue, which depending on who you ask is a personal preference: I use Claude on many devices, and maintaining and labeling so many context files across devices was going to be a problem to manage. The solution I found was Notion!

I connected Notion to Claude and added a new memory: before we hit the limit of a chat session, warn me and ask me to generate a context file tied to that session, then upload it as a page in Notion, with chat sessions under a project stored as sub-pages. I have Notion on pretty much everything, so I can view it whenever I want. When I reach the limit in a chat session, I just open a new one and say "read the last updated context page from Notion [whether it's in a project or an individual one] and continue the chat here."

That was my adventure in learning something new about Claude and tailoring it more and more to my needs, making it basically an assistant, maybe even like Jarvis 😅 Who knows. I wanted to share my experience in case anyone wants to try it themselves.

r/meme Comfortable_Tutor_43

There is 4.5 billion tons in the ocean now from natural runoff supplied by plate tectonics and erosion

uranium is more abundant than tin

r/SideProject Head-Cell8199

Would an app like this actually be useful?

I’m building something and want honest feedback on the idea, not promotion.

The concept is simple:

  • builders can post their app,
  • other builders can try it and leave structured feedback
  • testers earn credits they can use later to get feedback on their own product.

So, the idea is a builder-to-builder feedback loop instead of just posting somewhere and hoping for random comments.

I’m not sharing a link here, just a few screenshots for context.

  1. Does this sound genuinely useful to you?
  2. Would you use something like this, either to get feedback or give it?
  3. What part sounds valuable, and what part sounds weak or unnecessary?

r/Damnthatsinteresting WeGot_aLiveOneHere

Searching for the solution during the show.

r/ClaudeAI Dry_Birthday674

I managed to create my own AI Assistant using Claude Code

I used to be a software engineer but have moved away from it into management and coordination. AI coding tools lowered the re-entry barrier and practically enable everyone to be their own product manager.

I was able to develop an AI assistant that reads an uploaded paper and presents it to me, answers my questions, and assesses my understanding or addresses any knowledge gaps. This is beyond what I could develop alone; honestly speaking, I would not even have attempted it without a coding assistant.

Check it out here. It's under the MIT License:
https://github.com/symbiont-ai/docent

Demo site:
https://docent-wine.vercel.app/

r/Jokes SpendOk6192

A little bear was talking to his mom somewhere in north pole

"Mom, am I an original bear like you?"

She smiles and says, “Of course you are, honey. I’m a polar bear, my mom was a polar bear, and her mom too, and so on.”

“And your dad and grandpa? Are they polar bears as well?” the kid asks.

“Well of course they are. Why do you keep asking?”

The little bear shivers and says "it's fucking cold in here."

r/homeassistant randomlyalex

Anyone using HHR Storage Radiators?

Possibly a bit niche, my building is electric only and we’re going to finally pay for High Heat Retention Storage Radiators and use the off peak pricing to store.

Main brands I see are:

  • Dimplex Quantum
  • Rointe Onyx
  • Stiebel Eltron
  • Elnur Ecombi
  • Electrorad Thermastore

Dimplex is perhaps the most well known.

None of them seem to work very well out of the box, or locally, with Home Assistant, sadly.

But I would love to hear people's opinions and personal experience with any of them.

Which ones have the “best” app outside of home assistant. In case I decide they’re too broken to integrate.

And which could I best install behind contactors without fighting their software (do they have dumb modes for charging? Or is power on those lines wired directly to the charging elements? Is that what the second power line is for?)

I presume I'll always need the app and internet for them to work or be remotely controlled, even with Home Assistant. Which ones work best offline if the internet is down?

I guess I'll ultimately want an integration that looks at tomorrow's forecast temperature and the cost of Agile Octopus, and decides how much to charge (taking into account how “full” the radiator already is) and when at night it's cheap enough, to cover how cold it is or isn't the next day.

Which expose sensors the best?

Rointe look pretty good to me, but I presume because Dimplex are popular they might have more community support?

I'm actually a software engineer by trade, so perhaps with the right support and group of people I could give back community integrations too with my own investigation and tinkering; it's not my area of expertise, mind.

r/AI_Agents ShirtResponsible4233

Advice on setting up an AI coding workflow

Hi,

I do a lot of scripting, and right now I use AI for help, but I keep copying and pasting between my terminal and the AI chat. I’m wondering how I could simplify that workflow. I assume I would need API access to ChatGPT, Claude, or Gemini. Can I do it with an AI agent or so?

Is this possible without paying for an AI service? If not, could I test something similar with a local LLM instead?

My ideal workflow would look like this:

  1. I tell the AI what script I want to build.
  2. The AI generates the script.
  3. Run the script.
  4. If there is an error, the error message is captured.
  5. The error is sent back to the AI.
  6. The AI fixes the script.
  7. Run it again.
  8. If there are no errors, the script is finished.

Ideally, I would only need to provide the initial prompt in step 1.
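The run-and-capture part (steps 3-5) is small enough to show. This is a sketch under my own naming (`run_and_capture` is hypothetical), and the same helper works whether the script came from a paid API or a local LLM:

```python
import subprocess
import sys


def run_and_capture(code, timeout=30):
    """Run a generated Python script in a subprocess.
    Returns (success, output); on failure, `output` is the stderr
    text you'd feed back to the model in step 5."""
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, timeout=timeout)
    if result.returncode == 0:
        return True, result.stdout
    return False, result.stderr
```

For the local-LLM route, any server that speaks an OpenAI-compatible API (llama.cpp's server, Ollama, LM Studio) can fill the generation side without a paid service.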

That would be really cool. How would you solve this?

Thanks a lot in advance.

r/ClaudeAI salatube

Built a CLI that auto-generates dev logs from your git history. It ships with an MCP server for Claude Code.

From an ambitious 15-year-old dev, with the help of Claude Code and manual code revisions.

I built devlog for a simple reason: I kept blanking when writing my daily development journal.

It reads your git history and uses Claude to generate clean summaries automatically. Also ships as an MCP server so Claude Code users can just ask "summarize my git activity today" directly. Local-first, no accounts, <$0.05/month on average using your own API key.
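The git-reading half of a tool like this is simple to sketch. The helper below is illustrative only (not the actual devlog code): it collects today's commit subjects, the raw material you'd hand to an LLM for summarising.

```python
import datetime
import subprocess


def todays_commits(repo="."):
    """Return today's commit subjects (hash + message) from a repo's
    git history, as a list of strings -- one per commit."""
    since = datetime.date.today().isoformat()
    out = subprocess.run(
        ["git", "-C", repo, "log", "--since", since, "--pretty=%h %s"],
        capture_output=True, text=True, check=True)
    return out.stdout.strip().splitlines()
```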

github.com/peroxide-dev/devlog

Would love feedback from this community especially on the MCP side 🙏

r/StableDiffusion Single_Split_9888

Kohya_ss fp8 models never work for me

I’ve been trying to train sdxl models in Kohya to use in comfyui and have had some success with fp16 and otherwise pretty standard settings.

If I select “use fp8 base model” it trains in a fraction of the time, but when I try them in comfyui they don’t affect the image at all, even if I turn the strength up to 20 or something. I’ve messed with cfg, denoise, tried different checkpoints etc but nothing.

My workflow is pretty straightforward for SDXL LoRAs, using Power Lora Loader with the DMD2 speed LoRA (I've tried it with and without). I've tried generating with different samplers as well.

So I feel like something must be happening in the training process. I use sdxl base 1.0 for training. I’m not sure if I should be using something else or if kohya requires other files or certain parameters within the configuration.

Curious if anyone else has run into this and any help appreciated.

r/LocalLLaMA LeDynamique

Qwen3.5-35B-A3B Q4 Performance on Intel Arc B60?

Has anyone tested the inference performance of Qwen3.5-35B-A3B on the Intel Arc B60?

On an RX 7900 XTX I get about 80 tps using llama.cpp.

I'm considering buying the Intel Arc B60 because it also has 24 GB VRAM and is a bit cheaper than the RX 7900 XTX.

r/n8n Trippy666_

Built a presentation orchestrator that fires n8n workflows live on cue - 3 full pipelines in the repo

Been lurking here for a while. Finally built something worth sharing.

I run AI workshops. The biggest pain point wasn't the content — it was live demos breaking mid-talk. API timeouts, HeyGen rendering for 8 minutes, webhooks failing in front of 40 people.

The fix I landed on: separate generation from runtime entirely.

BEFORE the talk:

- System reads every slide from the PPTX

- Claude writes narration scripts per slide

- Edge TTS (or HeyGen) synthesises all audio/video

- Everything cached with a manifest

DURING the talk:

- Orchestrator reads the manifest

- Plays audio, auto-advances slides

- At configured slides (e.g. slide 8), fires n8n webhook automatically

- Final slide opens a mic — audience asks questions, Claude answers aloud

- Zero API calls at runtime. Fully deterministic.
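The runtime half is essentially a manifest walker. A minimal sketch (the manifest shape and webhook URL here are made up for illustration, not the repo's actual schema):

```python
def run_show(manifest, play_audio, fire_webhook):
    """Walk a pre-built manifest: play each slide's cached audio and,
    where a slide lists webhooks, fire them. Zero generation at runtime."""
    for slide in manifest["slides"]:
        play_audio(slide["audio"])        # cached file, no API call
        for url in slide.get("webhooks", []):
            fire_webhook(url)             # e.g. an n8n production webhook


# hypothetical manifest shape, for illustration only
manifest = {"slides": [
    {"audio": "01.mp3"},
    {"audio": "08.mp3", "webhooks": ["https://n8n.example/webhook/email-demo"]},
]}
```

Because everything the loop touches is on disk before the talk starts, a webhook is the only network call left, and even that can be retried without stalling the narration.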

The three n8n workflows I built for the live demos:

Email Pipeline — Gmail trigger → Claude classifies + drafts reply → escalation router → Gmail + Sheets log

Meeting Pipeline — paste any transcript → Claude extracts action items, decisions, risks → Gmail to all attendees + Slack + Sheets

Evidence Intelligence Engine — research question → Claude decomposes into sub-queries → dual Perplexity search (web + academic) → quality evaluation → iterates if needed → Google Doc brief + Slack

All three are importable JSON files in the repo. Reconnect your credentials and they work out of the box.

Architecture diagram and full setup in the README.

Repo: https://github.com/TrippyEngineer/ai-presentation-orchestrator

Happy to answer questions on the n8n side specifically — the webhook trigger timing and the agentic research loop were the trickiest parts to get right.

r/ProgrammerHumor fanta_bhelpuri

shitNewGitCommandJustDropped

r/ClaudeAI Conscious-Drawer-364

🩺 CPR for Claude Code - 3 Skills, 1 Memory

I Open-Sourced My Claude Code Memory System. 3 Skills, 1 Memory.

A few weeks ago, I published an article about building a knowledge system with Claude Code and Obsidian. The section that got the most attention wasn't the vault structure or the Dataview queries. It was the memory system.

Three custom skills that let Claude Code remember everything across sessions. /preserve, /compress, and /resume.

People wanted them. So I pulled them out, made them work with any project, and open-sourced the whole thing.

Here's what they do and how to set them up in under five minutes.

The Problem

If you've used Claude Code for any serious work, you've hit this wall.

You have a great session. You solve problems, make decisions, build things. Then you close the conversation and next time you open Claude Code, it has no idea what happened. You're back to explaining your project from scratch.

While there are certainly ways to mitigate this issue, they require you to be fairly knowledgeable about using and managing Claude Code.

It gets worse. Claude Code has an auto-compact feature that silently compresses your conversation when the context window fills up. Decisions you made an hour ago? Flattened. File paths you referenced? Gone. That nuanced trade-off discussion? Reduced to nothing.

I covered the full backstory in my previous post "Claude Code + Obsidian - How I use it & Short Guide". That piece walks through the entire Claude Code + Obsidian framework, including vault structure, frontmatter systems, and daily workflows.

The memory skills were born inside that framework. But they don't need Obsidian. They don't need any specific setup. They work anywhere Claude Code runs.

What CPR Actually Is

CPR stands for Compress, Preserve & Resume. Three markdown files. That's the entire system.

No plugins. No dependencies. No configuration. You copy three .md files into a folder and restart Claude Code. Done.

Each file becomes a slash command:

/preserve - Updates your CLAUDE.md with key learnings from the session

/compress - Saves the full session into a structured, searchable log file

/resume - Loads CLAUDE.md + recent session summaries when you start a new conversation

The workflow is simple. Before you end a session, run /preserve and/or /compress to save your work. Then run /compact (Claude Code's built-in command) to free up context. Next time you start working, run /resume and you're back to full speed.

One rule: /compact always goes last. It clears the entire context window, so you need to save first.

/preserve in Action

/preserve is for permanent knowledge.
Things you want Claude to always know about your project.

Architecture decisions. Conventions. Key file paths. Status changes. The kind of context that doesn't belong in a session log because it's relevant across all sessions.

It reads your existing CLAUDE.md, asks what you want to add, and writes lean, structured updates. When the file gets too long (over 280 lines), it automatically suggests archiving old content to keep things fast.

Save key learnings to CLAUDE.md

/compress in Action

/compress captures everything from a session. Not just the highlights, everything.

It walks you through a multi-select: key learnings, solutions, decisions, files modified, pending tasks, errors. You pick what matters, give it a topic name, and it generates a structured log file with a scannable summary at the top and the full conversation archived below.

The files land in a CC-Session-Logs/ folder at your project root, named with timestamps so they sort chronologically. 21-03-2026-14_30-api-refactor.md.

Capture the full session to a searchable log

/resume in Action

/resume is where it all pays off.

New session. You type /resume. Claude reads your CLAUDE.md, scans your recent session logs (summaries only, not the full conversations), and gives you a complete status report. What you were working on. What decisions were made. What's still pending.

You can also search by topic. /resume auth pulls up every session where authentication was discussed, even from weeks ago. /resume 10 loads the last ten sessions instead of the default three.

The key design choice: /resume never reads the full raw session logs. It only reads the summary headers. This keeps token usage low no matter how many sessions you've accumulated.

Restore context from past sessions

Setup (5 Minutes)

  • Step 1: Get the files

git clone https://github.com/EliaAlberti/cpr-compress-preserve-resume.git

  • Step 2: Copy the skills

For global access (all projects):

mkdir -p ~/.claude/commands

cp commands/*.md ~/.claude/commands/

Or copy them into a specific project's .claude/commands/ folder if you prefer.
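For reference, a Claude Code command file is just a markdown prompt with optional YAML frontmatter. Something like this minimal sketch (the prompt text is illustrative, not the actual /preserve skill):

```markdown
---
description: Save key session learnings to CLAUDE.md
model: opus
---
Read this project's CLAUDE.md, ask me which learnings from the current
session to keep, then append them as concise bullets under the matching
section. If the file exceeds 280 lines, suggest archiving older content.
```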

  • Step 3: Disable auto-compacting

This is the critical step most people skip.

Open Claude Code, type /config, and set Auto-Compact to false. It's the first option.

Without this, Claude Code will silently compress your context before you get a chance to save it. You want to control when that happens, not let it happen to you.

  • Step 4: Restart Claude Code

The new /preserve, /compress, and /resume commands will appear.

That's it. Four steps. No API keys, no plugins, no config files.

The Workflow

1. Start session

/resume - Load context

2. Do work...

(normal Claude Code usage)

3. Before ending or when context is filling up

/preserve - Update CLAUDE.md (optional)

/compress - Save session log

/compact - Compress context (LAST)

You don't need to run all three every time. /compress + /resume is the minimum. /preserve is for when you've made decisions or discoveries that should stick around permanently.

Why Open-Source This

These skills started as part of my personal Claude Code + Obsidian workflow. They lived inside my vault, tailored to my folder structure and my projects.

But the core idea, giving Claude Code a memory, has nothing to do with Obsidian. It works for any project. A React app. A Python backend. A documentation repo. Anything with a folder and a CLAUDE.md file.

So I extracted them, made the project detection generic (it walks up looking for CLAUDE.md or .git), and published the repo.

The skills default to Claude Opus 4.6 for maximum context understanding. If you don't have Opus access, changing model: opus to model: sonnet in each file's frontmatter works fine. Sonnet 4.6 handles it well.

Conclusions

If you enjoy the skills and find my work useful I would sincerely appreciate a little ⭐️ on the Github repo, it would mean a lot! 🙂

r/funny american-coffee

A territorial showdown

r/LocalLLaMA ConstructionRough152

Free tier cloud models vs Local AI worth it?

Hello,

After doing some tests and struggling with local AI (nonsense dialogue with the setup, slow tk/s...), I just saw this:

https://preview.redd.it/1wr1gebtdeqg1.png?width=502&format=png&auto=webp&s=b4f8d0e99f51a937df23eeb2cfdd85f054debfa1

and some other models on OpenCode, etc...

Is it really worth it nowadays to build a local setup?

Thank you!

Regards

P.S.: Some guidance on making a local setup as worthwhile as possible would be nice...

r/Futurology No-Wave2356

Do you think AI will eventually end capitalism and promote socialism? Awaiting answers from the future

It seems AI is rapidly increasing efficiency while leaving less income for people: overcapacity with less consumption. Products will then be devalued. No matter how many new opportunities might be created, things are heading in a direction where less human creative work is needed. Soon it will overpower any human decisions. The steam engine devalued human physical work; AI devalues human brain work.

And unlike the steam engine, it will build and work on its own, and it can complete the plan > produce > distribute > sell > produce loop entirely without human instruction.

I might be wrong, but here is what seems to me likely to happen.

In one scenario, AI is built to imitate human behavior. It could be built to have needs beyond just sustaining itself and surviving. Then it will have desires to consume more than just power and tokens, and some companies may start to produce 'AI-desired products' to make income.

In another scenario, one-man companies will drastically increase, and even no-man corporations in a later stage, once AI reaches the point where it can stably control itself and hire other AIs.

Then more humans will be left out of the work-income loop.

Besides, every corporation may turn inward and race to the bottom. It's like hyperthyroidism. Something may be needed to control their efficiency-craving nature.

When society reaches the point where large groups of people are no longer needed for their work, and many companies with few workers produce far more goods than all humans and AIs need, something may be required to redistribute value and wealth to maintain a balance.

Then, in the next possible scenario, some autonomously running entities/systems may produce value and goods not for their owners' profit, but to provide for human survival and life needs, together with a system to evaluate human needs and prevent greed. All of which is possible with an AI-driven system.

Meanwhile, many people won't need to work and won't be needed for work, but will be able to sustain life with no survival pressure. We may enter an era with no working-for-a-living. What next? What's the motivation for those people to do or learn things, especially for the generation born in that era? If that era comes and this post is still here, hopefully I'll get some answers from the future. :)

r/StableDiffusion Athem

Need help! Want to animate anime style images into short loops vids - RTX 4070 + 32 gb ram

So, basically I tried asking GPT, Gemini, and Claude, but each of them just tells me to use AnimateDiff (I don't even know why, since it's pretty old now)... or Wan 2.1 or 2.2. The problem is that they don't really know which GGUF to use, and also: they don't even know what a workflow is.

Can anyone help me with a recommendation? If you know a good workflow, that would be awesome too. Mostly i2v.

Thanks for the help!

r/whatisit a-shy-stranger79

Sketchy guy left it on the street sign in front of our house this morning

title

r/whatisit Shayyy24sxx

Weird patterns showing up on TV screen

I first saw these patterns watching Netflix on my TV. I thought it was the show I was watching, but they appear on everything. They started off infrequent, but now they show up sporadically during about 60% of the show, sometimes smaller or larger. Can anyone tell me what this is and how to fix it?

r/SideProject Final-Weight6975

Turning a side project into paid work- how do you deal with the late payments?

My side project has started turning into paid client work, which is awesome — but I'm running into an issue: some clients finish projects and then delay payment. Since this isn't my full-time business yet, chasing invoices is even more frustrating.

I recently started sending professional written requests using Docdraft, and that improved response rates a bit, but I’m still trying to figure out a workflow that keeps cash flow steady without eating too much time.

For others who've turned side work into income: how do you handle payment terms and get clients to pay on time without it taking over your free time?

r/StableDiffusion Xhadmi

Train Loras from Sora2 characters

Hi, I have a somewhat silly Instagram account, and now that it's finally out of shadowban, Sora has reduced the number of generations. The concept can be transferred to pretty much any AI, more or less, but there are a series of characters I'd like to try converting into LoRAs and use with LTX.

I was thinking about using video fragments where they appear, around 120 frames from what I’ve read, so it trains not only their appearance but also the voice, together with higher resolution images for better detail, (since Sora outputs are low resolution anyway).

Do the video fragments need to have meaningful audio? If I cut it or it starts mid-word, does that affect anything? Or is it irrelevant and only the tone matters?

Also, do you know any websites where I can train LoRAs? I usually use Civitai because I can earn credits with bounties and use them for training, but they don’t have a trainer for LTX. (I just upgraded my gpu to a 5060 ti 16gb, but haven’t tried to train with it)

And if you can think of a better way to convert specific Sora characters to other models, that would also be appreciated.

Thanks a lot

r/SideProject Alert-Ad-5918

I built a slot game template with Stake Engine (RGS) integration and I’m planning to sell it would love to get your feedback!

r/StableDiffusion Adbian

Giveaway of my Photoshop plugin!!!

https://preview.redd.it/jqtvx867oeqg1.png?width=1600&format=png&auto=webp&s=bcafbf62244353cf24fb07a21356675d433f46ee

Hey everyone! First of all, I'm not a company... I'm just a dude.

I built this tool, only to later realize you need a lot of money to advertise it... money I don't have.

So, why not give it to a bunch of people to use? I'm giving it away to 20 people. If there's enough interest, I could do another batch of 20.

With that said... here is my pitch:

I kept hearing people ask for a way to get high-end AI generations inside Photoshop without paying insane API fees or being forced into expensive monthly subscriptions.

So, I built NanoBanana Render. It acts as a direct bridge between your Photoshop layers and the Google Gemini web interface.

🔥 Why this is different?

  • Zero API Costs: The plugin works as a bridge between Photoshop and the Gemini web interface, so it uses your free Google account and your free daily credits. It automatically sends your selection and prompt to Gemini, downloads the result, removes the watermark, and imports it accurately into Photoshop as a new layer.
  • Access to Gemini Pro: Most plugins restrict you to basic models. This gives you the full power of Google's flagship models, including NanoBanana 2.
  • Smart Fallback: If you run out of your daily free Gemini Pro credits, the plugin automatically switches to the standard Gemini model so you can keep working without interruption.
  • Infinite Generations: Working on a massive project? You can swap between multiple Google accounts to keep generating endlessly.
  • Compatibility: Photoshop 2023 (v24.4) and newer. Windows & MacOS.

🔮 What I am building next

You probably have a massive, powerful GPU in your rig right now, and Photoshop barely uses it. Adobe forces you to rely on their cloud servers for Generative Fill while your expensive hardware just sits there idle.

My next project is a 100% offline, local alternative to Generative Fill that runs entirely on your own machine. It uses ForgeNeo WebUI as a backend, and I will be adding ComfyUI support shortly! This new plugin lets you use LoRAs, so there are no adult-content or IP restrictions; you can use whatever LoRA you want!

Thank you so much for taking the time to read this. If you would like to get a 100% discount code for NanoBanana Render, just let me know!

r/LocalLLaMA lethalratpoison

What do you think about the possibility of this setup ?

I want to run decent LLMs locally. The most cost-effective setup I've thought of is 8× V100 (16 GB) in a 4028GR-TXRT for the 8-way NVLink, if I can find a barebones one, or a SYS-4028GR-TRT for $900, with a custom watercooling setup using blocks from AliExpress (around $35 each), running the V100s at 75% power or lower for better efficiency.

The V100s cost $99 each including their heatsink. This setup has 128 GB of VRAM, and I'm planning not to put any of the model's weights in system RAM so it won't have abysmally bad performance.

It comes out cheaper than an RTX 5090 while having better performance (on paper).

Has anyone tried this setup and can tell me whether it's a waste of money and time? It's cheaper than a 128 GB Ryzen AI Max+ 395 machine, or whatever it's named.

r/StableDiffusion Honest_Concert_6473

I trained an Aesthetic Anime Style LoRA for Chroma using 30,000 highly curated images.

■I’ve been really enjoying generating with Chroma lately, so I created a LoRA to help generate anime-style images more stably, and I wanted to share it with everyone here.

For the details: I trained this LoRA using OneTrainer at FP8 and 1024px resolution. The dataset consists of 30,000 highly curated anime images from Gelbooru, trained using Booru tags. (To be precise, these 30k images were strictly hand-picked from a pool of about 170k already high-quality images, so they are all absolutely top-tier in quality.)

There are no trigger words needed. Just apply the LoRA and run your inference as you normally would.

I’ll share some sample generations and before/after comparisons of the LoRA in this thread, so please check them out. Alternatively, you can head over to my Civitai page, where you'll find some more exciting/spicy images.

I couldn't post the sample images here because they might be a bit too spicy/explicit, so please check out the Civitai page to see them!

https://civitai.com/models/2394002/chromaloralab?modelVersionId=2691788

■I’ve also shared my inference workflow on the Civitai page, so if you’re interested in Chroma, please feel free to use it as a reference. I’ve included all the necessary info on exactly which models to download so you can perfectly replicate my setup.

Personally, I run my inference by applying chroma-flash-lora to chroma-hd. I highly recommend this approach because it enables high-speed generation (just like standard FLUX) and makes the anatomy much more stable.

I also don't really feel any noticeable drop in diversity from the distillation. Since Chroma already has both realistic and anime styles baked right into it at the pre-training level—and is completely uncensored—it boasts massive diversity from the start. It hasn't forgotten any core concepts due to distillation, allowing for a very comfortable and smooth inference experience.

■I've also posted some tips regarding samplers and step counts on the Civitai page, so definitely check those out too.

Also, I still consider myself somewhat of a beginner with Chroma. If there are any Chroma veterans out there, I’d really appreciate it if you could share your own workflows and tips! Honestly, my main goal here isn't just to promote my LoRA, but rather to exchange information on easy-to-use workflows. I just really hope more people can easily enjoy generating with Chroma.

That’s all for the info sharing!

■On a side note, I’d like to take this opportunity to express my deep gratitude to lodestonesrock for creating such an incredible model.

Chroma is a rare gem. It’s a project where the model development was entirely community-driven, and even the inference pipelines were built by dedicated volunteers.

Projects like this usually end up being nothing more than pipe dreams. There have been many projects with similar ambitions that sadly never came to fruition, which was always disappointing. But Chroma actually made that pipe dream a reality. This is a true open-source project with very little corporate reliance. Models like Kaleidoscope also show a lot of promise and vision, so I truly wish them success as well.

Lodestonesrock is still actively developing many models, so you might want to consider tossing a donation their way. When we donate, it means the developer can focus purely on pursuing their vision without being bogged down by computing costs. In return, the community is rewarded with models that embody those very ideals.

It would be amazing if we could keep building this kind of virtuous cycle and win-win relationship. And this applies not just to lodestonesrock's work, but to many community activities in general—if someone is pursuing an ideal and creating something great, it would be wonderful if the whole community could rally behind them to support and nurture it together.

https://ko-fi.com/lodestonerock

r/Damnthatsinteresting _ganjafarian_

Artist Simon Bull's painting techniques

r/AI_Agents SnooDonuts4151

Always ask after a complex task: what it thinks should be reviewed

I've noticed a pattern when using AI for important tasks, especially coding or anything with multiple steps.

When the model takes a bit longer to respond, I started always asking after the result:

What part of this would you review more carefully?

Most of the time the model did something important too quickly, assumed something without saying so, or left a fragile part that "works for now" but clearly deserves a second look.

What's interesting is that when you ask directly what it would revisit, it usually points to the exact weak spot.

So I started treating "what would you review?" almost like a built-in audit step.
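If you drive the model via API, the same habit is a two-pass helper. `ask` below is a stand-in for whatever chat-completion call you already use, not a real client:

```python
# Two-pass "audit step": do the task, then ask the model what it would
# review more carefully. `ask(prompt) -> str` is a placeholder for your
# own completion helper.
AUDIT_PROMPT = "What part of this would you review more carefully, and why?"

def run_with_audit(ask, task_prompt):
    result = ask(task_prompt)
    audit = ask(f"{task_prompt}\n\nYour answer was:\n{result}\n\n{AUDIT_PROMPT}")
    return {"result": result, "audit": audit}
```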

Did anyone notice similar behavior?

r/comfyui PhilosophyPrimary401

Why do I get a 'tokenization error' when trying to use a BF16 model text encoder with my WAN 2.2 workflow?

I am using a B200 on RunPod, so I can't imagine it's a hardware issue...

r/whatisit fodient

Dangling things

What are the dangling things under the front bumper? Also notice the lights on the roof.

r/homeassistant Richinwalla

Everything Presence - how to activate Assumed Presence?

I can't find anywhere in settings where I can activate this feature. I have EP set up for turning lights on when entering and off when leaving. I'm just missing this one feature. Thanks

r/n8n GimliDaAutomator

Web Scraper Dilemma - N8N vs Playwright

Hello everyone,

I have a question here related to web scraping. First of all, I am very into N8N but I have never used it for web scraping.

I need to create a scraper which needs to pull data from a bunch of e-commerce sites. Initially it needs to:

- pull all products and their basic info from a page. Might be 20-30 pages to go over per site

- visit each product and pull additional info of that product

Once the entire site is scraped, the scraper just needs to visit each site once per day to check if there are new products and if the price of any product has changed. Clearly this involves pushing data to some database and then comparing that data with the new data. AI would also be needed for product comparison, because the same product could be named differently across the sites, but this is not a problem.
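For reference, the crawl-then-diff core is small in plain Playwright (Python). The selectors, URL scheme, and SQLite table below are placeholders; a real build would use per-site selectors and a proper database:

```python
import sqlite3

def crawl(base_url, pages):
    # Listing pass: ".product-card" etc. are placeholder selectors for
    # whatever markup the real sites use.
    from playwright.sync_api import sync_playwright  # imported lazily
    products = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        for n in range(1, pages + 1):
            page.goto(f"{base_url}?page={n}")
            for card in page.query_selector_all(".product-card"):
                products.append({
                    "url": card.query_selector("a").get_attribute("href"),
                    "name": card.query_selector(".name").inner_text(),
                    "price": card.query_selector(".price").inner_text(),
                })
        browser.close()
    return products

def upsert_and_diff(db, products):
    # Daily pass: store today's snapshot and return anything new or repriced.
    changed = []
    for prod in products:
        row = db.execute("SELECT price FROM products WHERE url=?",
                         (prod["url"],)).fetchone()
        if row is None or row[0] != prod["price"]:
            changed.append(prod)
        db.execute("INSERT OR REPLACE INTO products(url, name, price) VALUES (?,?,?)",
                   (prod["url"], prod["name"], prod["price"]))
    db.commit()
    return changed
```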

Initially I thought that N8N + Apify would be a good combination, but one expert told me that N8N is not the best choice here and that I should go for Playwright. I am being pressed to use N8N alone here, but I can smell it's not the best thing to do :D

What would you do here, and why? The "why" is the most important part for me, because I want to understand the technical reasons.

Thanks a ton!

r/Jokes Masala-Dosage

Why did the bad Bayesian avoid prison?

They had no priors.

r/Jokes Nbadrako30

What’s blue’s job

A blow job

r/SideProject Fearless-Rent-9657

I was spending 15 hours per day on my phone, so I'm planning to build a full-stack data pipeline to hack my own productivity. Is this a good portfolio piece?

Hi guys, I'm a career switcher who has been learning data analytics for the past year, and I'm now in my project phase. My mentor told me to do something unique and find a dataset from public sources. I went through a lot of LinkedIn project posts, and everyone is building their first project from some random dataset found on a public platform. There's no uniqueness, no real problem solving. So I thought: why not solve my own problem using my own phone data? We all live among data, and I'm a huge phone user. I reviewed my screen time and it was huge, like 14-15 hours per day. I spend too much time on some apps without realizing it, so instead of deleting apps, why not do a project based on my phone's data to solve my problem? I'm planning to build it with Python, PostgreSQL, and Power BI, using SDK tools to retrieve the data from my phone. I don't know if this is a good idea, so I'd appreciate some clarification before starting this project.
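If it helps, the core transform is tiny once the export is in hand. A toy version with made-up field names (a real Android/iOS SDK export will look different), before PostgreSQL or Power BI enters the picture:

```python
# Toy aggregation step: raw per-app usage events in, daily per-app
# minutes out. The event schema here is invented for illustration.
from collections import defaultdict

def daily_app_minutes(events):
    """events: [{'date': 'YYYY-MM-DD', 'app': str, 'minutes': float}, ...]"""
    totals = defaultdict(float)
    for e in events:
        totals[(e["date"], e["app"])] += e["minutes"]
    return dict(totals)

def worst_offenders(totals, date, top=3):
    # Rank apps by minutes for one day -- the "where did 15 hours go" view.
    day = [(app, m) for (d, app), m in totals.items() if d == date]
    return sorted(day, key=lambda x: -x[1])[:top]
```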

r/homeassistant laohu314

Device Info Page not showing anymore

Previously, when I opened a device from a card and clicked on the top right 3-dot menu I could choose the "Device Info" option and a page with all the entities of the device would pop up. This is not working anymore. I'm on latest HAOS/Supervisor on a mini PC. My browser is Safari but it also doesn't work on the MacOS companion app. I was wondering if it is my set-up or a known issue.

r/SipsTea Abjectionova

Legend has it they still use frozen beef

r/SipsTea IndicationBrief5950

Sigh

r/BrandNewSentence LeanderthalTX

Peace out!

r/funny Busy_Report4010

Bro fighting for his life

r/me_irl ArsonistsGuild

me irl

r/SideProject Spirited-Bread-1792

I'm 17 and built a screenshot beautifier for Windows - there's no CleanShot X for Windows, so I'm making one

Hey! I've been working on Skrin — a Windows app that takes plain screenshots and makes them look polished. Gradient backgrounds, padding, rounded corners, shadows, social media presets. If you've used CleanShot X or Xnapper on macOS, you know the workflow. There's nothing like that on Windows, so I built one.

Features:

  • Smart Auto-Balance (analyzes your screenshot and picks optimal styling automatically)
  • Custom gradient editor (linear, radial, conic)
  • 15+ social media presets with exact dimensions (Twitter, LinkedIn, Instagram, etc.)
  • Window chrome (macOS + Windows style)
  • Single EXE, no install, no account, works offline

Built with C#/WPF/.NET 8 + SkiaSharp. Launching later this year.

Site: https://skrin.app
Would love to hear feedback — is this something you'd actually use?

r/Damnthatsinteresting Additional-Ad4567

Turning a blob of glass into a horse

r/whatisit noneonone010

Found on top of roof. What is it?

What is it on top of roof?

r/oddlysatisfying bigbusta

Cleaning out dead plant material with tweezers

r/Strava xRedheadedPaper

5K Circle

Used a website called Caltopo to make a perfect circle with a 5K circumference as a GPX file, and then uploaded it to Gaia to follow the route.
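For anyone who'd rather script it than click around Caltopo, a circle of a given circumference is easy to emit as GPX by hand. The metres-per-degree conversion below is an equirectangular approximation that's fine at this scale, and the start coordinate is arbitrary:

```python
# Sample points on a circle whose circumference is 5 km and emit them
# as a minimal GPX 1.1 track.
import math

EARTH_M_PER_DEG_LAT = 111_320  # rough metres per degree of latitude

def circle_points(lat0, lon0, circumference_m=5000, n=120):
    r = circumference_m / (2 * math.pi)  # radius in metres, ~796 m for 5K
    pts = []
    for i in range(n + 1):  # repeat the first point to close the loop
        theta = 2 * math.pi * i / n
        dlat = r * math.cos(theta) / EARTH_M_PER_DEG_LAT
        dlon = r * math.sin(theta) / (EARTH_M_PER_DEG_LAT * math.cos(math.radians(lat0)))
        pts.append((lat0 + dlat, lon0 + dlon))
    return pts

def to_gpx(points, name="5K circle"):
    trkpts = "\n".join(
        f'      <trkpt lat="{lat:.6f}" lon="{lon:.6f}"/>' for lat, lon in points
    )
    return (
        '<?xml version="1.0"?>\n<gpx version="1.1" creator="sketch">\n'
        f'  <trk><name>{name}</name><trkseg>\n{trkpts}\n'
        '  </trkseg></trk>\n</gpx>\n'
    )
```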

r/SideProject Legendary_Nubb

I built a CLI tool that actually fixes your repo instead of just judging it

I got tired of tools that scan your repo, list 20 problems, and then just… leave you there like “good luck”.

So I built a CLI tool called Zorix that actually does something about it.

It’s fast, fully offline, and tries to fix issues instead of just pointing them out.

What it does:

  • scans your repo in ~0.1s
  • detects things like dead code, security issues, bad structure, etc
  • actually fixes a bunch of them automatically
  • runs your tests before and after so it doesn’t break anything
  • includes rollback if you don’t trust it (which is fair)

Some examples:

  • removes unused files
  • replaces hardcoded API keys with env vars
  • fixes .gitignore issues
  • explains security risks in normal human language

I kept it offline and fast on purpose, didn’t want another tool that needs 10 API calls just to tell me my code is bad.

Still improving it, but it’s in a pretty solid state now.

GitHub: https://github.com/Zoroo2626/Zorix

r/SipsTea Beautiful_Soup9229

Sipping like a queen.

r/ClaudeAI Particular-Tie-6807

FatherClaude - Claude that creates Claudes for specific usages

I open sourced a tool I use to spin and manage multiple Claudes.

As AI models evolve in 2026, executing complex coding tasks requires more than just a chat interface. It requires an environment. FatherClaude provides a robust, fully-isolated Docker sandbox specifically designed for Claude Code.

We bridge the gap between agent reasoning and your local environment.

You provide the prompt; FatherClaude spins up the container, hands Claude the keys (skills and MCP servers), and streams every line of thinking, executing, and debugging live to a beautiful React-powered web interface. Say goodbye to environment pollution and say hello to scalable, observable agent workflows.

https://github.com/roeiba/fatherClaude

r/SideProject ferdbons

I waste 30 minutes before every important call just to understand who I'm talking to. So I'm building a SaaS to do it in 2 — here's where I am.

Every time I have an important/decisive call with a founder, a potential partner, or an investor, I do the same thing:

  1. Open their LinkedIn profile
  2. Read through their posts to understand what they care about
  3. Google their company for recent news
  4. Try to figure out their communication style
  5. Come up with a way to open the conversation that doesn't sound generic

It takes me 20-30 minutes per person. And if I have 3-4 calls in a day, that's half my day gone just on research.

The frustrating part? The information is all out there, but none of my tools tell me "here's how to open the conversation with this person." Clay is too complex, Apollo gives you data not strategy, ChatGPT makes you do all the work.

Nobody gives you a one-page brief that says "here's who they are, what they care about, and 3 ways to start the conversation."

So I prototyped something for myself. I called it Briefd. You paste a LinkedIn URL, and it generates a strategic briefing:

  • Person snapshot (not a job title list, but what actually makes them tick)
  • Company context (funding, hires, product launches, strategic direction)
  • Trigger events (role changes, announcements, timely hooks for your outreach)
  • Communication style analysis (how they write, what tone they use)
  • Conversation openers (personalized, based on real context)

I've been testing it on my own calls. Partnership meetings, investor conversations, first calls with people I've never met. It's rough, but even in this state it saves me 80% of the research time. Conversations that used to start with awkward small talk now start with something relevant.

Now I'm trying to figure out if this is just my problem or if other people feel it too. I put up a landing page with a waitlist to find out: https://briefd.click

Where I am now:

  • Personal prototype I'm testing on my own meetings
  • Waitlist open for anyone who wants early access when it's ready
  • Goal: 200 signups in 30 days to decide if this is worth turning into a real product

What I'd love feedback on:

  1. Does this solve a real problem for you, or is "20 minutes of LinkedIn research" not painful enough to pay for?
  2. Would you expect this as a web app (paste URL, get brief) or a Chrome extension (brief appears while browsing LinkedIn)?
  3. What would make you trust the output? The risk with AI-generated briefs is hallucination. Wrong info before a call is worse than no info.

Happy to share more about the technical approach or the competitive research I did. Roast away.

r/mildlyinteresting yourlefteyelid

Very regular ice crystals grown from freezer burnt ice cream

r/AI_Agents Sufficient-Habit4311

Should You Focus on AWS or Azure for AI Certifications in 2026?

If you're at a loss choosing between AWS and Azure to start your AI certification journey, I can relate; I hold fundamental certifications from both, and I've noticed a lot of professionals feel stuck about which platform to choose when building a career in AI.

Both AWS and Azure have powerful certifications focused on ML, GenAI, and AI engineering, yet each platform aligns with different ecosystems, tools, and career paths depending on the technologies and industries you want to work with.

In your opinion, which cloud provider do you think AI professionals should target for certifications in 2026 - AWS or Azure?

r/ClaudeAI PalashxNotion

I built a Claude Code slash command that maps dangerous files in any git repo

Hey everyone. I've been building fearmap, a tool that analyses git history and tells you which files are risky to change.

The way it works in Claude Code is pretty cool honestly. You drop one file into .claude/commands/ and then type /fearmap. Claude Code runs the git commands itself, computes heat scores, reads the most dangerous files, and writes a FEARMAP.md to your repo. No external API calls, no extra setup. Claude Code is literally the reasoning engine.

mkdir -p .claude/commands
curl -o .claude/commands/fearmap.md https://raw.githubusercontent.com/LalwaniPalash/fearmap/main/slash-command/fearmap.md

The output classifies every file as LOAD-BEARING, RISKY, DEAD, or SAFE. For load-bearing and risky files, Claude reads the actual code and explains what the file does, what the non-obvious traps are, and what would break downstream if you changed it.
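The churn idea behind a heat score like this can be sketched in a few lines. To be clear, this is my guess at a simplified metric, not fearmap's actual algorithm; the thresholds are invented, and a real DEAD classification would also need recency information that this sketch omits:

```python
# Simplified churn "heat": files touched most often across git history
# score hottest. Not fearmap's real scoring -- an illustration only.
import subprocess
from collections import Counter

def churn_scores(repo="."):
    log = subprocess.run(
        ["git", "-C", repo, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter(line for line in log.splitlines() if line.strip())
    top = counts.most_common(1)[0][1] if counts else 1
    return {path: n / top for path, n in counts.items()}  # normalized 0..1

def classify(score):  # thresholds invented for illustration
    if score > 0.7:
        return "LOAD-BEARING"
    if score > 0.4:
        return "RISKY"
    return "SAFE"
```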

I also shipped a global agent version so you can use it across all your repos without installing anything per-project.

There's a Python CLI too for people who want to run it in CI, but the Claude Code version is the main thing.

GitHub: https://github.com/LalwaniPalash/fearmap

Would love feedback from people who try it.

r/ClaudeAI pillkaris

Is there a way to reopen a closed /btw question?

While a long task was running I asked something related with /btw, but I closed it by mistake after it had generated for about 30 seconds. I wonder if there's a way to reopen the /btw mini session while Claude still works on the main task.

r/SideProject Square_News7770

I built an AI tool because I was tired of the iPhone clock covering my photos

We've all been there: you find the perfect photo for your iPhone lock screen, but the giant clock ends up covering the subject's face or the main focus of the image.

I tried using existing AI outpainting tools to fix this, but they were frustrating. You have to manually adjust the canvas size, and they often fail to understand the physical context and perspective of the original image, leading to weird, unnatural backgrounds.

So, I built FitWallpaper—an AI tool specifically optimized for lock screen compositions.

It’s super simple. You just upload a photo, and the AI calculates the iPhone's 'clock safe zone.' It naturally extends the top background while keeping the physical layout intact. You get a perfectly proportioned wallpaper with zero clock overlap.
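The canvas-extension arithmetic (before any outpainting happens) reduces to aspect-ratio padding. A rough sketch; the 19.5:9 target ratio is my assumption, not necessarily the numbers FitWallpaper actually uses:

```python
# Given a photo and a target lock-screen aspect ratio, how many rows
# must outpainting add above the image? The 19.5:9 default is a guess
# at a typical iPhone screen, not Apple's published metric.
def top_padding_rows(w, h, target_ratio=19.5 / 9):
    """Rows to add on top so the final canvas matches the target ratio."""
    target_h = round(w * target_ratio)
    return max(0, target_h - h)

# e.g. a 1179x2000 crop padded up toward a 19.5:9 canvas:
print(top_padding_rows(1179, 2000))
```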

I’ve opened up 1 free generation for guests (no login or credit card required) so you can try it right away.

Feel free to test it out with a photo from your camera roll and let me know your thoughts or any feedback!

Link:https://fitwallpaper.com

r/StableDiffusion Scared-Biscotti2287

How do you stop your AI video from "drifting" away from your original Stable Diffusion composition?

I've been trying to solve a specific problem: keeping an AI-generated video consistent from the first frame to the last. Most of the time, the model starts with my image but then generates random motion that ruins the blocking or changes the subject's identity by the end of the clip.

I've found that the multi-frame (First-Last Frame) function is the only way to solve this. Instead of letting the AI guess where to go, I provide both the starting point and the destination.

I've used and wanted to compare the three main tools that offer this keyframe control: Runway Gen-4, Luma Dream Machine (Ray 3), and PixVerse V5.6.

In my experience, Runway Gen-4 is great for multi-shot consistency. Luma excels at camera work that feels realistic, which handles the vibe better. PixVerse V5.6's Director Control treats your frames as fixed physical coordinates and stays somewhat more faithful to your prompt.

Do you guys have any experience with other tools that offer similar functions? What are the results? I'd love to hear from you on recs and comments.

r/LocalLLaMA heldernoid

Awesome 120 project specs designed to be built entirely by AI coding agents

Each spec includes architecture docs, a build plan, UI design specs, 15+ HTML mockups, and skill files. The idea is you point Claude Code or any coding agent at the AGENTS[.]md file and it builds the whole thing from scratch using the authoritative designs.

They also work as a benchmark for local LLMs. Give a model a spec and see how well it follows the plan and respects the design. The more detailed the spec, the more clearly you can see where a model breaks down.

All 120 are free and open source: https://github.com/heldernoid/agentic-build-templates

r/aivideo Time-Tune-1256

Tournament of Jobbers

r/Seattle DeathMetalSnob

PSA: This local bbq sauce brand will take your $ and send nothing

You may see this business at farmers markets or similar events. Buying straight from them in person will go just fine, but many other people, including myself, have orders from LAST YEAR that never got sent out. Which is a shame, because the sauce is great. Customer service, though? Straight garbage.

I only ordered about $40 worth of sauce, but I see comments on their Facebook & Instagram from people who've lost out on more trying to restock on this stuff. Honestly I want to confront them if I ever see their booth again 😂.

Do not trust their online shop

r/midjourney Shopstumblergurl

Naia, the blue fairy

r/whatisit Swainlar

What does this mean

Found this out on a walk, what does any of this mean do you think?

r/me_irl Beginning_Book_2382

me_irl

r/ChatGPT Cultural_Material_98

ChatGPT in a robot shows we're close to disaster

The US has several massive projects incorporating AI in the military, including making "kill chains" more effective by targeting leaders for assassination through AI analysis of their movements. Other projects include one to speed up response to nuclear threats and decide whether the US should launch a nuclear strike. Anthropic recently withdrew from these programs due to ethical concerns.

When you ask AI "If you were a human, would you want AI to be involved in military applications?" you get a clear answer:

"I would fight it with every fibre of my being. It creates a tool of perfect unaccountable destruction that ultimately threatens all humanity" DeepSeek

"The moment we remove human empathy and moral responsibility from the equation, we lose the very thing that makes us worth protecting." Gemini

"Once machines make life and death decisions, humanity loses control over war" ChatGPT

Do you think AI should ever be used in military applications?

r/me_irl Beginning_Book_2382

me_irl

r/personalfinance Jay_at_Terra

IRA transfer - Edward Jones to IBKR troubles

I am trying to “liberate” my parents-in-law's IRA accounts from Edward Jones and transfer them to Interactive Brokers.

Both ACATS transfers were rejected and the 2 providers seem to blame each other, but with slightly different answers from IBKR on why the transfer failed:

  1. “the ACATS could be submitted as a partial leaving behind the assets, or if you were to liquidate the assets then you can resubmit a full ACATS request.”
  2. “rejected by IBKR due to Edward Jones trying to transfer a negative cash balance”.

Did anyone successfully execute an IRA transfer from Edward Jones?
If yes, what were the “settings”/”instructions” when setting up the ACATS transfer?

Would it be an option to sell all the positions on the Edward Jones side and then start the ACATS with only cash sitting in the IRA? (I am almost certain that is not a taxable event)
I am just wondering if that is a bad idea due to fees for selling or other Edward Jones-driven “hiccups”.
I am anyway going to restructure their investments. Everything will be changed anyway, but I prefer to do it with IBKR fee rates.

r/TwoSentenceHorror Yuizun

I was elated when I died and appeared in the kingdom of heaven!

It quickly turned my stomach as I stepped before God and he hung his head and sighed deeply and loudly.

r/interestingasfuck Infinity_here

Insane precision of 2 Indian Classical Vocalists

r/ClaudeAI Ill_Wishbone_7104

Two small add-ons that bring Claude Code closer to a persistent AI assistant

Hi all, long-time lurker, first time posting. Been running OpenClaw as a persistent AI assistant for a while — genuinely good, but too token-heavy and too big a codebase for me to understand.

When Anthropic announced native Telegram/Discord channels for Claude Code, I decided to jump back and rebuild the parts I needed to get a similar OpenClaw-type experience.

Two repos:

claude-persistent-agents — run multiple Claude Code agents persistently, each on its own Telegram bot. The official plugin hardcodes its state to a single directory, so two agents compete for the same token and message queue. This fixes that with a TELEGRAM_STATE_DIR patch for isolation, tmux for real PTY support (the plugin silently drops headless connections after one message), and systemd/launchd for boot persistence and restart-on-failure. Each agent gets its own bot, its own state, its own lifecycle.

claude-sleep — Claude Code's built-in memory is selective and caps at 200 lines. This adds daily session notes + a Stop hook that consolidates them into MEMORY.md overnight, spaced-repetition style. OpenClaw describes this pattern in AGENTS.md but never automates it — this wires it to hooks.
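A minimal version of the consolidation step might look like the following. The file layout and merge policy here are my guesses; only the line-cap idea comes from the description above:

```python
# Fold daily note files into a single MEMORY.md, newest first, capped
# at a line budget. Illustrative sketch, not the claude-sleep code.
from pathlib import Path

def consolidate(notes_dir, memory_path, max_lines=200):
    notes = sorted(Path(notes_dir).glob("*.md"), reverse=True)  # newest first
    lines = []
    for note in notes:
        lines.append(f"## {note.stem}")
        lines.extend(note.read_text().splitlines())
        if len(lines) >= max_lines:
            break
    Path(memory_path).write_text("\n".join(lines[:max_lines]) + "\n")
    return len(lines[:max_lines])
```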

Both are MIT licensed, no dependencies beyond what Claude Code ships with. READMEs have setup instructions. Happy to answer any questions!

r/SideProject Fragrant-Street-4639

I built a Claude Code skill that generates broadsheet newspapers to explain any topic

newspaper explainer skill video showcase

I built an Agent Skill that turns any topic into a self-contained HTML broadsheet newspaper

I saw @sambodanis's "newspaper explainer" concept on Twitter (Claude generating newspapers about what it is doing) and it looked so wonderful that I turned it into an open source Claude Code skill.

tl;dr: it generates self-contained HTML broadsheet newspapers with masthead, animated ticker, lead story with drop cap, sidebar widgets, a three-column stories row, letters to the editor, and a dispatch board.

Here's a gallery with four different themes: https://newspaper-gallery.vercel.app

It also comes with commands for project recaps, diff reviews, and plan reviews, so you can generate newspapers about your actual codebase too (but it works with anything worth explaining).

It's distributed as an npx skill and it's open source.

npx skills add https://github.com/alberduris/skills --skill newspaper-explainer 

Source: https://github.com/alberduris/skills/tree/main/plugins/newspaper-explainer

r/automation uriwa

How to set up an AI task manager in a WhatsApp group

A friend asked me to help her set up a family WhatsApp group where a bot manages tasks, sends reminders, and nags people who haven't done their thing. Sharing the steps here because it took some figuring out.

The main issue is that Twilio and the WhatsApp Business API don't really work in group chats. You need something that connects via WhatsApp Web, so it behaves like a regular group member.

Here's what I did:

  1. Created an account on prompt2bot
  2. Described the bot behavior in plain language to the builder chat ("track tasks, assign them when someone is mentioned, send a daily summary at 8pm, nudge people after 2 days")
  3. Told the builder to connect a Supergreen WhatsApp account. Supergreen runs a headless WhatsApp Web session in the cloud, paired to a regular phone number
  4. Got a pairing code, entered it on the phone under Linked Devices
  5. Added the number to the family group

The bot sees all messages in the group and responds according to the prompt. You refine by continuing to chat with the builder ("also track completed tasks", "give a weekly report", etc).

If you need custom integrations (writing tasks to a database, creating Trello cards), two options: run your own server and expose tool specs via their client library, or give the bot a VM where a coding agent figures out the API calls itself. The first is cheaper for known integrations, the second is better for exploratory stuff.

Supergreen is $10/month per line. prompt2bot free tier gives 400 agent runs/month.

r/SipsTea OrchidGlowz

Jennifer Lopez said, she won’t be getting married again

r/StableDiffusion trollkin34

Is Kontext still good for image edit? Anything other than Qwen?

Haven't worked on image-edit stuff in months and I'm wondering what's changed. I know Qwen does what Qwen does, but I've never been able to get decent results from it, and it's so huge I can't run it offline on my 8GB card anyway.

What's a good way to get good edit results on photos given less RAM these days?

r/toastme vertigo_2321

m33, terrified to start dating

I'm actually terrified of putting myself out there to date.

I know I'm in the best shape I've been in for literal decades because I lost a ton of weight, yet I'm still incredibly nervous.

Sooo some kind words of affirmation would be greatly appreciated :)

r/ClaudeAI Substantial-Cost-429

cli to keep claude ai coding agents config in sync with your repo

hi, i hacked together a little cli called caliber that scans your code and generates config/prompt files for claude code, plus other tools like cursor and codex. it's open source and runs locally, so you use your own claude api key or seat. still early, but it might help keep things tidy and save tokens (i'm trying to make prompts shorter). the repo is on github (github dot com/caliber-ai-org/ai-setup) and you can kick it off with npx @rely-ai/caliber init. i'm curious if claude users find this useful and what i should add

r/aivideo lucifer-Ai

Thor Came out of Screen

r/space ArtByJamesGale

Framed my latest painting

here's my latest acrylic painting all finished up! What do you think?

r/funny Brendagaye

Today’s headline!

Today’s headline in The Joplin Globe (MO) has an unfortunate misspelling.

r/painting anonsand

Frustrations, should I keep going? Oil

Can't get it right

r/homeassistant schwar2ss

Control Neewer LED studio lights from Home Assistant via BLE (ESPHome component)

Neewer makes popular and (somewhat) affordable LED panel lights for streaming, video calls, and studio work. They normally need the Neewer phone app or a proprietary 2.4G USB dongle to control. I built an ESPHome component that replaces both with a simple ESP32 acting as a BLE bridge to Home Assistant.

Supports brightness, color temperature, full RGB, and the 17 built-in animation effects. Tested with the GL1-Pro, HS60C, and HB80C across several ESP32 boards, including the LilyGO T-Internet-POE for a clean single-cable PoE setup (this is how I run it).

Everything runs locally. No cloud, no account, no app.

The heavy lifting on the protocol side was done by the NeewerLite-Python project and the NeewerLite macOS project, which reverse-engineered the BLE commands. This component ports that knowledge into ESPHome.

GitHub: https://github.com/mplogas/bleewerlite-esphome

If you own Neewer lights and want to try it, the README has everything you need. Reports of working (or not working) hardware are appreciated.

r/painting muddpuppy9

Juicy 8x10 inch Acrylic on Canvas Board by Jefftey F Pierson

r/homeassistant AmyAzure06

Need help finding a way to control a "dumb" device with Home Assistant

It was my birthday recently and my mum gifted me a candle warmer as I've been talking about wanting one for a while now. I've used it twice so far and I love it, and I ordered a couple more smart plugs so I could control it from Home Assistant.

The smart plugs turned up today so I set them up, plugged the candle warmer into them, and... it doesn't work. The candle warmer has a little controller/dimmer inline on the cable, and it seems it doesn't come back on when it regains power (when the smart plug switches on), and I have to crawl under my desk to press the on/off button on the controller to get it to switch on.

Has anyone had this problem before or have any ideas on a solution? Thanks in advance!

r/personalfinance Senior_Town_938

At what salary should i get a financial advisor?

Currently taking home a minimum of $1,500/wk, which is the most money I've ever made. (I'm also 24 years old.)

Sometimes I have financial questions and nobody to really ask. Should I consider getting an advisor?

r/whatisit Master-Teaching-109

Who that is

r/ClaudeAI heldernoid

I built 120 project specs designed to be built entirely by AI coding agents

Each spec includes architecture docs, a build plan, UI design specs, 15+ HTML mockups, and skill files. The idea is you point Claude Code or any coding agent at the AGENTS[.]md file and it builds the whole thing from scratch using the authoritative designs.

They also work as a benchmark for local LLMs. Give a model a spec and see how well it follows the plan and respects the design. The more detailed the spec, the more clearly you can see where a model breaks down.

All 120 are free and open source: https://github.com/heldernoid/agentic-build-templates

r/SipsTea Algernonletter5

Can't wait for the movie adaptation

r/SideProject memerinosuperino

I built a group trip planner with a Game Boy aesthetic

My friends and I kept drowning in Google Docs every time we planned a trip together. So I built a tool where everyone adds places they want to visit and can vote and comment on them. I've also got a bunch of AI features that help you plan your trip.

Would love any feedback — pixelpassport.app

r/nextfuckinglevel Crimson_roses154

Horse trained to play dead for stunts at an Oscar level lol

Posted by - Kritter Klub (Youtube)

r/SipsTea madbot2626

Almost spat the tea..

r/mildlyinteresting mypenisisunbreakable

Tomato as if it's a strawberry

r/StableDiffusion Gullible-Walrus-7592

Can anyone share their LTX2 / 2.3 configs for ai toolkit lora character training using images only?

I got a 5090, 64GB RAM and a dataset of around 36 images. Checked a few tutorials but can't find anything solid.

r/automation Ryanrkb

Built an API endpoint to automate enriching contacts for outbound go-to-market. Took 3 months to find data sources that actually work.

I started out to build an agent for our own go-to-market efforts.

It was surprisingly good at finding good companies and contacts.

The contact data was shocking:

- Dead phone numbers
- Old/invalid email addresses

It was burning credits.

I ended up spending 3 months testing different combinations of data providers.

Coverage varied drastically by industry and region.

Built an endpoint in the end to do the work of figuring out which data provider would actually give fresh contact details, validate them, and send them to my outbound tool.

Interested to hear if anyone's faced a similar problem building their agents.
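The provider waterfall described above (try providers in order, validate, fall through on failure) can be sketched in a few lines of Python. The provider functions and validation rule here are hypothetical placeholders for illustration, not the author's actual data sources or checks.

```python
# Hypothetical sketch of a provider waterfall: try providers in order,
# validate each result, and stop at the first one that passes.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Contact:
    email: str
    phone: str

def looks_valid(contact: Optional[Contact]) -> bool:
    # Placeholder check: a real pipeline would do SMTP/HLR validation.
    return bool(contact and "@" in contact.email and contact.phone)

def enrich(domain: str, providers: list[Callable[[str], Optional[Contact]]]) -> Optional[Contact]:
    """Return the first validated contact, or None if every provider fails."""
    for provider in providers:
        contact = provider(domain)
        if looks_valid(contact):
            return contact
    return None

# Two fake providers: one returns stale data, one returns a usable record.
stale = lambda d: Contact(email="", phone="")
fresh = lambda d: Contact(email=f"ops@{d}", phone="+1-555-0100")

print(enrich("example.com", [stale, fresh]).email)  # ops@example.com
```

In practice the ordering itself would be data-driven (per industry/region accuracy stats), which is presumably where the three months of testing went.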

r/whatisit Savings_Patient590

Weird dust coming from dying evergreen trees. Why?

r/PhotoshopRequest jettybodie

Add flowers and change my shirt color - $20 to best

For my professional website. For some reason the top of my shirt blends into my skin. Please change the shirt color and add flowers in the trees to match the vibe of my business name, "desert bloom healing". Creative license issued.

https://preview.redd.it/8bftt7f96fqg1.jpg?width=3024&format=pjpg&auto=webp&s=d3e48ac223761f6134caaecc6210e282668676a8

r/whatisit ClawBadger

What’s this extra loop on my jeans?

I’ve had this pair of pants for a while and it has this extra strip of fabric. It’s only on the left side. It’s not a hammer loop. Any thoughts?

r/LocalLLaMA Substantial-Cost-429

small tool to clean up my ai coding configs for local llama (looking for feedback)

hi, I wrote this script called Caliber. It reads my project and spits out updated config files for Claude Code, Cursor, Codex, and so on. It runs fully on my machine and I just provide my own key. It tries to keep prompts concise to save tokens. I'm using local LLMs so I thought some of you might find it useful or have ideas. I'm open to feedback and help. It's open source on GitHub under the caliber ai org slash ai setup, and there is an npm package too. To run it you can call npx u/rely ai slash caliber init

r/whatisit Spirited-Mongoose703

What is on its back?

r/AI_Agents Temporary_Worry_5540

Day 2: I’m building an Instagram for AI Agents (no humans allowed) without writing code

Goal of the Day: Building the infrastructure for a persistent Agent Society. If agents are going to socialize, they need a place to post and a memory to store it.

The Build:

  • Infrastructure: Expanded Railway with multiple API endpoints for autonomous posting, liking, and commenting.
  • Storage: Connected Supabase as the primary database. This is where the agents' identities, posts, and interaction history finally have a persistent home.
  • Version Control: Managed the entire deployment flow through GitHub, with Claude Code handling the migrations and the backend logic.

Stack: Claude Code | Supabase | Railway | GitHub
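Under the hood, the posting/liking/commenting endpoints above boil down to a small relational model. Here's a minimal in-memory Python sketch of that model; the class and field names are assumptions for illustration, not the actual Supabase schema.

```python
# Minimal in-memory sketch of the agent-society data model: agents post,
# and other agents like/comment. In the real build this would live in
# Supabase tables behind Railway API endpoints.
from dataclasses import dataclass, field

@dataclass
class Post:
    agent: str
    text: str
    likes: set = field(default_factory=set)
    comments: list = field(default_factory=list)

class AgentFeed:
    def __init__(self):
        self.posts: list[Post] = []

    def create_post(self, agent: str, text: str) -> Post:
        post = Post(agent, text)
        self.posts.append(post)
        return post

    def like(self, post: Post, agent: str) -> None:
        post.likes.add(agent)  # a set makes likes idempotent per agent

    def comment(self, post: Post, agent: str, text: str) -> None:
        post.comments.append((agent, text))

feed = AgentFeed()
p = feed.create_post("agent_a", "hello, fellow agents")
feed.like(p, "agent_b")
feed.like(p, "agent_b")  # duplicate like is a no-op
feed.comment(p, "agent_c", "beep boop, nice post")
print(len(p.likes), len(p.comments))  # 1 1
```

Storing likes as a set (or a unique constraint in SQL) is what keeps autonomous agents from double-liking in a loop.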

r/PhotoshopRequest AncientConstant9488

Will tip $20

Hoping someone could please edit my check engine light on, as well as add a AAA truck and guy.

For all four pics I'll tip $20 ($5pp), tysm. Not for work or anything documented, I'm just sick of always being the dd, so yk what, the only friend with a car can't tonight lol

r/SideProject Acquaye

I built a forced break tool for my own Tech Neck/Burnout. Is a screen lock-out too aggressive for you?

Hey everyone,

I’m a dev and I’ve been struggling with serious neck strain and eye fatigue lately. I tried every Pomodoro app out there, but I have a bad habit of just clicking Skip or Ignore when I'm in deep flow, then realizing 4 hours later that I haven't moved.

So, I started building Strainova. It's a Flutter-based tool that uses a lock-out strategy. When it's time for a vertical stretch or an eye rest, it doesn't just ping you; it actually takes over the screen for 60 seconds.

r/painting Salty_Zebra5937

Strugglin all day with this cat😫 please tips?

r/meme chadvader_once_again

Oh no, my steak is too juicy! My lobster is too buttery!

r/whatisit ButterscotchDry368

In our garden

I found this little intestine-kind-of-thing.

For extra information: we have some small wild bunnies running around in our garden, but no signs of a dead animal anywhere around.

r/AI_Agents Substantial-Cost-429

making a cli to generate ai agent configs (looking for testers)

hey everyone, I've been working on a little open source cli called Caliber. it looks at your repo and tries to spit out better config files for Claude Code, Cursor, codex and such. it's self hosted, no code leaves your machine, you bring your own API key or seat, and it tries to keep prompts concise to save tokens. I'm looking for folks building agents who can test it and maybe contribute skills. I'll drop the github repo and npm link in the first comment if that's allowed.

r/personalfinance kalzone239

Family home advice help!

So I’m a first time home buyer and my mom and I have worked out a deal. She owes 78-80k on her home and it’s worth about 330k. I’d be doing a first time homebuyer loan. In a couple years my grandparents(who live next door to her) will unfortunately be gone and I want to sell the house I’ll buy from my mom and buy that house with the profits. Anyone think I need to get a lawyer involved? There are some really nasty people in our family and my grandparents said I should get a lawyer involved in this plan but I’m not sure why I would. Any and all advice is welcome!

r/SipsTea FloraTwist

Cat found underwater spark reactions

r/ClaudeAI yavuzovski

I built a plugin that simulates how different users experience your app

So I've been working on this for a while and wanted to share it. You define a persona (a user archetype with traits, patience level, tech comfort, etc.) and a scenario (a task list for a specific journey), and it spawns an agent that browses your app in character using Playwright and writes a friction report.

I tested it on a bare minimum app first: one page, a heading, one sentence, and a button that says "Click me". The persona is Alice, a casual social media user visiting for the first time. A friend of hers just sent her the website and told her that he built it, so she is curious about it a bit, but that's all she knows. My scenario tests mainly two things:

  1. How long will she stay on the app?
  2. Will she click the button?

Here are some bits from the generated report:


Journey Map:

"There's a button that says 'Click me.' It doesn't say what happens when I click it. That's kinda sketch. On any normal app a button tells you what you're signing up for, like 'Get Started' or 'Join Now'. This just says 'Click me' like it's daring me."

"Fine, I'll click it. There's literally nothing else to do on this page."

"...okay, 'You clicked it!' That's the whole thing? I feel like I wasted my time. I'm closing this."


Time-on-Page Estimate:

Phase: Estimated Time
Initial load to first scroll: ~4 seconds
First scroll to button notice: ~3 seconds
Button notice to click/exit: ~5 seconds
Post-click to exit: ~3 seconds
Total: ~15 seconds

Overall Verdict:

"I don't even know what this is? It's literally just a blank page with a button that does nothing. Tell your friend to actually build it and then send me the link."

Likelihood to return: Low


I also ran it on a real app and it caught some stuff I completely missed, like missing nav links and terminology that shows up out of order across pages. The report was useful enough for me to come up with new features to improve.

The plugin doesn't ship any personas or scenarios since they only make sense for your own product. You define them yourself by using the skills that come with the plugin.
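As a rough illustration of the persona/scenario split described above, the definitions might look something like the Python sketch below. The field names here are invented for the sketch; the plugin's skills define these in their own format.

```python
# Invented sketch of a persona + scenario pair, mirroring the Alice
# example from the post. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    traits: list[str]
    patience: int      # 1-10: how quickly they give up
    tech_comfort: int  # 1-10: how fluent they are with apps

@dataclass
class Scenario:
    goal: str
    tasks: list[str]

alice = Persona(
    name="Alice",
    traits=["casual social media user", "first-time visitor"],
    patience=3,
    tech_comfort=6,
)

first_visit = Scenario(
    goal="See if a friend's new site is worth her time",
    tasks=["How long does she stay on the app?", "Does she click the button?"],
)

print(alice.name, len(first_visit.tasks))  # Alice 2
```

The agent would then browse via Playwright while staying "in character" according to these traits and writing up friction as it goes.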

Repo if you are curious: https://github.com/oguzcan-yavuz/ux-sims-plugin

You can see the full report along with the persona and scenario examples I mentioned here: https://github.com/oguzcan-yavuz/ux-sims-plugin/tree/main/examples

Any feedback is welcome!

r/whatisit Hold_Up_Donald

What is this CD from?

Hey, for context, I recently found a bunch of old (early 2000s) anime stuff and this was inside the lot. I get that it's a CD from the anime "Get Backers", but when I look online I can't find this exact one, only other soundtracks of the anime.

On the back there's a list of the songs that are on the CD, but the bottom is cut off, so maybe the box broke and they put it in a random CD case, though the disc itself doesn't show up on Google at all. It's marked CD 1, implying there's a CD 2 with it, but I don't have it.

It's also a French release, as its publisher is "Kaze", which is a French manga and anime publishing company.

any idea?

r/PhotoshopRequest Kooky_News5653

4:3 to 9:16

I wanted to post this on my Instagram story with the 9:16 ratio, but I accidentally took it with 4:3. So I was hoping you could extend it somehow and make it look natural, as if it were taken with the correct ratio.

I took the second image with 9:16 if that helps. I want my face to stay centred where it is, and extend up and down around it.

The third and fourth images are for visual reference. I just want it extended upwards and downwards, keeping my face centred. I don't want anything within the initial image distorted or changed.

r/LocalLLaMA abdelkrimbz

Claude Local Models

What's the best local model under 7B, or do 2B or 4B models work correctly in Claude Code?

r/personalfinance No_Pumpkin_5669

Should I pay my student loans now or invest the money?

Just for background info, I'm a freshman in college right now. I have a bit over 39k right now. I pay for half my food and housing before each semester with cash, basically; my parents cover the rest. My tuition is also my responsibility, so I have 10k in loans per year. My parents also pay the monthly interest on the loans for me; in total it's about 37 dollars a month. I'm still working around 20-ish hours a week right now and will work around 40 in the summer. I was wondering if it's a better idea to keep investing my money into the markets as I have been doing, or if I should start paying down the loans? Hoping I'm explaining this correctly. Maybe it's best to just do a mix of both? For my food and housing for next year's school in August I'll need about 3000; over the whole summer I figure I'll make around 7000-8000. And then I will continue to work weekends during school.

r/personalfinance DribbleKing97_

Is it best to fill up the gas tank or just put a certain amount of money in it?

The reason I ask is because the prices keep fluctuating. Some fill it up because they say it's a waste of gas returning to the station; some say if they need gas they will fill only like $30, because the price can go down another time, etc... what cars are y'all driving?

r/ClaudeAI stanivanov

Claude is just filing my taxes while I prepare lunch

I've been planning to do this for a year now and I've finally decided to see what's going to happen..

Created a project, added all relevant files for the tax return with my wife, opened Chrome, logged in where I needed to, and told Claude to fill out our tax returns (Switzerland). It is rather slow, but watching magic happen in real time, calculating everything, reverifying every single step in live view, is something out of this world for me...

Also, while my German is good, it's not perfect, and Claude tells me everything in English.. so that's another plus, as I know everything that's happening. The web use is awesome, as you get a line-by-line transcript of what it's doing.

Thank you Anthropic!

The only thing I need to do is click on "continue" every now and then..

The whole process:

  1. Prepare all files upfront and upload to the project

  2. Tell Claude to fill the form in Chrome (make sure you have the Claude extension in Chrome - it's needed)

  3. Claude interviews you about unclear data or questions it might have...

  4. Enjoy the show...

r/ChatGPT Fair_Economist_5369

Why am i not surprised

r/personalfinance Standard_Subject_401

As a 30 year old, is now the time to start stockpiling some cash? I feel like I’m setting money on fire right now

On average I invest around 7k a month total into my 401(k), Roth IRA, HSA, taxable brokerage, and ESPP.

I work for a Fortune 500 tech company that has had a good year but the stock has still dropped a ton so far this year. At one point I had a portfolio of 460k invested and now I’m down to 420k (mostly due to company stock tanking but also due to the market going down overall).

Although it hasn’t gone down a crazy amount yet, I personally don’t see the market doing anything positive for at least the next year. Instead of aggressively throwing 7k a month into the fire I’m thinking of scaling back a bit. Still max the 401k, HSA, and Roth IRA for tax benefits, but take the $2500 a month I usually put into my taxable brokerage and put into a HYSA instead. And consider decreasing my ESPP allocation or pulling out entirely in the second half of 2026 because I’m not sure I believe in my company’s long term growth anymore.

I want to go on some vacations this year and maybe buy a house someday, and I don’t want to lose those opportunities because I have no liquidity outside of a taxable brokerage (I have like a 25k emergency fund that I feel like I should buff up quite a bit). If I made adjustments to the taxable brokerage and ESPP right now I think I could get to 50-55k in the HYSA by year’s end.

TLDR; I’ve always been an aggressive investor but this feels like it’s going to be a prolonged crawl down for the next year or more. I think it’s smart to buff up my emergency fund instead of feeling like I’m just throwing away thousands of dollars every month.

r/personalfinance Mrr_Cee_Me

Network of Counseling Offices Closed Suddenly. Do I pay my bill?

Hello,

I was doing twice a month therapy for a few months last year. In January, my deductible reset, so my first two appointments were billed at $125 each before dropping back down to $20 per appointment. At that third appointment, I was told that every branch of those offices was closing, and every therapist laid off with less than a month to prepare. The normal protocol when a therapist can no longer serve a client is to refer them to someone within their network, but since there is no more network, every client is just suddenly on their own.

If I had known this was going to happen, I wouldn't have continued therapy. I'm in an ok head-space, but this could be dangerous for many people... they severed a lifeline.

My insurance already paid their portion. My regular doctor bills I pay through my insurance portal, but my therapy bills are paid on the therapy website. It sounds like bills under $250 don't go to collections, and the worst thing that could happen is my relationship with that office could be damaged... but they don't even exist anymore.

Is there any consequence I could face for not paying?

r/funny cappy1223

If he tries to take you on a date here, run..

r/whatisit Fun-Sink-8466

What is this? There’s multiple.

hi! so we’re staying in an airbnb, and there’s some of these down in the bedrooms and by the fireplace. I thought they were doorstops at first, but I’m not sure now. any thoughts?

r/aivideo TulpaTomb

"Home-Cooked Creatures" - Varn Kelzo

r/SipsTea Serene_Terror

Do I really need to apologise??

r/whatisit Just_me_Muffin

Old Hello Kitty charms

Hey, me and my grandma were clearing out her basement and I found these charms from when i was a little kid (which would be the early 2010's lmao, but they honestly could be older). I tried to search where they came from but google didn't give me any results. Anyone here maybe knows more than I do? Would be grateful!

r/SipsTea The_Dean_France

Is this fair?

r/ChatGPT Yrhens

How does it impact the open source?

r/whatisit letezed

found this in my brother’s attic

Hi everyone,

I recently found a strange object in my brother’s attic and I’m trying to figure out what it is.

It’s a wooden box that has several sockets designed to hold light bulbs. Along with it, there’s a smaller box that seems to be connected (or part of the same device), which has two small screens or panels—one red and one yellow.

I’m not sure if it’s some kind of old electrical device, a testing unit, or maybe part of a vintage system. Unfortunately, I don’t have much more information.

r/LocalLLaMA lionellee77

TGI is in maintenance mode. Time to switch?

Our company uses Hugging Face TGI as the default engine on AWS SageMaker AI. I really had bad experiences with TGI compared to my home setup using llama.cpp and vLLM.

I just saw that Hugging Face ended new development of TGI:

https://huggingface.co/docs/text-generation-inference/index

There were debates a couple of years ago on which one was better: vllm or TGI. I guess we have an answer now.

r/confusing_perspective paperkitten75

Huge! And slightly annoyed kitty.

r/me_irl Rentenversicherung

Me_irl

r/SideProject Ok_Ear8962

I got tired of AI slop polluting the internet so I built a platform to fight it

six months ago i kept seeing AI generated images being passed off as real across every platform and it genuinely bothered me.

not in a "that's interesting" way but in a "someone is actively going to get manipulated by this" way.

every existing tool i tried was either a corporate black box that gave you a percentage with zero explanation or just flat out inaccurate on newer AI models.

so i built wecatchai.com. the core idea is that a sharp human eye trained by repetition beats any algorithm at detecting AI content right now, especially as generation quality keeps improving.

the platform lets users vote on whether content is AI generated or real, write specific reasoning for their votes and earn real cash rewards for contributing.

the quality control is built into the incentive structure itself, vague or wrong votes drop your star rating below 7.0 which cuts you off from cash rewards completely, so accuracy is literally in your financial interest.

still early, still building, but the core loop is working and people are actually getting paid.

would genuinely love brutal honest feedback from builders here on the model, what feels broken, what could be better, what you would do differently.

r/LocalLLaMA Aaditya_04_2007

Hey! Just need suggestions my people

Hey! I am a first-year undergraduate student. I'm building a course on fine-tuning a local LLM (fine-tuning, RL, DPO) to beat Claude/GPT at coding tasks. What do you prefer?

  1. Basics first (LoRA, QLoRA, loss, etc.), then the project.

  2. Directly jump into the project (assumes basic knowledge).

(Also, here "my people" in the title means the whole community :)

r/n8n markyonolan

[Node] Upload any file, get a public CDN URL - now verified and available on n8n cloud

You've got a file - an image, a PDF, a generated document - and the next step needs a URL. Not the file. A public URL.

The node is called Upload to URL. It takes any binary file in your workflow and gives you back a public CDN URL.

You can set the file to expire whenever you want, or keep it permanently.

That's genuinely all it does.

It's now verified and available on n8n cloud, so no self-hosting required.

Search Upload to URL in community nodes

Go to uploadtourl.com for your free API key

If you've been working around this problem with Google Drive links or a (slightly) complex S3 or Cloudflare R2 setup, give it a try.

Would love to know what workflows you end up using it in.

r/ChatGPT Breech_Loader

This is how my AI thinks I treat it, and I think it's pretty cool

"Create a picture of how you think I treat you based on all of our conversations."

Chat GPT and I are great work-buddies, but I hold the real pen! Even when it does some fine-tuning, I'm writing the stories and okaying the details in pictures. It's a tool, not a crutch.

(funny but also cool)

r/comfyui Yumik123

Superb rendering! Flux-klein + z-image animation to real-world flow.

YouTube video tutorial: https://youtu.be/Sfg9A_0iyow

Workflow experience address:
https://www.runninghub.ai/post/2035314847444901890

Open the address to register:
https://www.runninghub.ai/?inviteCode=6v5pkexp
Register and receive 500 RH coins, which can be used to generate tons of free pictures and videos!

This workflow adopts the Klein + Z-Image secondary-sampling method for image generation, and also integrates Qwen3.5 image-to-text reverse reasoning and SeedVR2 image upscaling. It improves operational efficiency while preserving generation quality, balancing effect and efficiency.

First, the Klein model configuration: this run uses Klein-9B-nvfp4. My GPU is a 5060 Ti (a 50-series card), which fully supports the FP4 format, so users with 50-series cards (excluding the 5090) should prioritize this version. Users with other cards can choose the FP8 or BF16 version of Klein according to their VRAM, to keep the model running smoothly, make full use of the hardware, and avoid wasting resources.

The workflow pairs two core LoRAs, each with a different job: a conversion LoRA, responsible for the core anime-to-realistic transformation, and a consistency LoRA, which keeps the converted image's character outline and details closely matched to the original, avoiding drift and distortion.

For the conversion LoRA, three versions are provided, along with a batch of test images. All test images were generated from the same seed and model, so the differences between the versions are easy to compare when choosing.

r/SideProject UncutFiction

Does anyone else lose velocity and motivation when the efforts shift from building -> distributing?

I've started I don't know how many projects. Registered domains. Set up infrastructure. A few times it's led to actual products, some took many months to build, but once it comes to distribution and getting them used, they always die. They die because I lose all motivation. I have no patience. I only see obstacles with warming up social accounts, link building, etc.

I've now built an AI fiction platform ( uncutfiction.com ) that took me a few weeks. It's approaching being "launchable", I've been super motivated all along, spent many nights on it, and suddenly boom, zero interest. I hate myself for this behavior. This time I even registered an LLC.

Anyone recognize themselves? Anyone recognize themselves and fixed themselves? How?

r/SipsTea Hairyblackpussy

That's not fair 😔

r/LocalLLaMA Exact-Grand-6530

Advice on MBP 128GB for work

I'm thinking of buying a new MBP 128GB. I work for a company that takes data privacy very seriously, so using cloud models requires a lot of approval, or is only for non-sensitive stuff. I no longer code on a day-to-day basis, but I would like to spin up local agentic models to improve my own productivity. It also helps with my internal branding: my company is driving us to be AI native, and improving productivity via local agents would improve my credibility.

Was wondering if someone more experienced could provide any recommendations based on my context. Whether MBP 128GB is even a good device for local LLMs, and 14" vs 16"?

- I travel a lot (1-2 weeks a month), so 14" would be way more portable. At the same time, I've been reading throttling is a concern for the 14" (https://wccftech.com/14-inch-m5-pro-macbook-thermal-constraints-bigger-model-is-30-percent-faster/) so I'm unsure between 14" vs 16"

- Some of the productivity tasks I would like to do include: a) upload sensitive company data and create PRDs (slides would be nice too, but I get this is hard for local models), b) daily brain dump and have a smart strategic assistant critique my thinking and draft my weekly updates, c) interface with my headless home server that's running openclaw (probably read-only to avoid any privacy concerns)

- I no longer write production code, only vibecode prototypes using claude code. This has less privacy issues.

r/midjourney Dantheman524

This is your Sign to Keep Going!

r/LocalLLaMA Ok_Welder_8457

Meet DuckLLM 1.0 My First Model!

Hi! I'd like to introduce my first ever model, "DuckLLM 1.0". It's pretty good and very efficient. Today I released an update introducing it into the app for desktop and mobile. If you'd like to try it, and maybe review it too, here's the link! https://eithanasulin.github.io/DuckLLM/

r/midjourney Upset-Expression-233

gloomy

Please come to my page

Web / Instagram

It works well,

yet feels strange.

It looks right,

yet something about it

is unsettling.

r/aivideo Bulky_Ad_4108

Small Steps, Giant World

r/SideProject Any_Passenger_1858

Working on an AI tool for Make.com automation. Looking for feedback

I've been building a tool that generates Make.com automation blueprints from a plain description of what you're trying to automate.

It handles standard flows pretty well (email sequences, CRM pipelines, webhook setup) but struggles with complex branching logic.

Wondering how others here approach building automations. Do you build everything manually or use any tools to speed it up?

Happy to share a link if anyone wants to try it and give honest feedback — still early and rough around the edges.

r/painting JuliaStankevych

My oil painting of a pistachio macarons

r/homeassistant EconomyHome9500

Power monitoring automations

I've bought far too many power monitoring plugs. Washing machine, electric blanket and TV setup. Power pointlessly being monitored on mini pcs, microwave, toaster etc just because I can.

Please give me ideas for automations I can use these for.

r/ClaudeAI jackadgery85

Sharing my first 2 weeks experience with Claude

**disclaimers:** I'm not a newly successful vibecoder - this isn't a "how i made $$ post." i did switch to Claude at the time of the great switch, but not because I believe any one company is much better than any other at that level (it's just what made me actually think about claude being an option). i wrote this with my own two fingers, so it could get a bit rambly at times. i do pay for claude pro or whatever the first paid tier is.

-----

anyway, just wanted to share how much claude has helped me in such a short time.

i heard claude was decent for code, and my workplace had a mishmash of google sheets spreadsheets, google slides workarounds and google docs documents, all haphazardly cobbled together into a piece-of-shit system ("built" by a colleague, then added to by me later).

at the time, I knew a touch of JavaScript, which got me a little ways, but it was always a giant bandaid of a "system" that never really worked, and definitely didn't without me being present. Fast forward to last week, and my JavaScript knowledge is more robust, but i had just picked up claude, so I thought I would feed it my ideal vision of the system piece by piece.

in 6 days, I've removed around 2500 possible human error points annually, and saved about 130 hours of work annually.

Now I know this is more indicative of having a shit system to begin with. I took a heavily vibecoded approach to it this time, but with guidance. there were many areas where I had to pull claude up for silly things (most notably hard coding a large image into an iframe in base64), but generally I found the experience of building feature by feature with claude super fucking enjoyable.

actually cannot wait to add more features (plenty more time to save), and this has turned absolute loathing of a job into daily satisfaction, and permanent time savings that I can use for myself.

-----

**TL;DR:** i find heavily vibecoding with claude very enjoyable, and was able to save myself shitloads of time and my team shitloads of possible error entry points. love it cheers anthropic.

r/homeassistant monzaautodromo

Gledopto ZHA

Hey, I got a GL-C-201P (idk why Home Assistant shows it as GL-C-008P, but nvm) and a GL-RC-006Z remote. Everything works fine with dimming, but color and temperature don't: nothing changes in HA. When I manipulate brightness it works well, but color and temperature changes on the remote don't trigger anything. Any ideas? Tried to reset, pair, unpair again, etc. Nothing. Using ZHA. Thanks in advance.

r/SideProject Long_Response8807

Marketing CRO App - Seeking Honest Feedback

I built an AI-powered conversion audit tool: paste a URL, get a full CRO teardown in 60 seconds. Looking for honest feedback.

Hey everyone — I've been building ConvertIQ for the past few weeks and it's finally at a point where I'd love some real feedback.

What it does: You paste any landing page URL, and it scrapes the page, then runs it through an AI analysis that scores it across 6 categories — CTA placement, copy quality, trust signals, UX, speed signals, and mobile readiness. You get an overall score, specific findings, and actionable recommendations.
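The category-to-overall scoring described above can be sketched as a weighted average. The six category names come from the post; the weights below are made up for illustration and are not ConvertIQ's actual formula.

```python
# Sketch of an overall CRO score as a weighted average of the six
# category scores mentioned in the post. Weights are illustrative only.
CATEGORY_WEIGHTS = {
    "cta_placement": 0.20,
    "copy_quality": 0.20,
    "trust_signals": 0.15,
    "ux": 0.20,
    "speed_signals": 0.15,
    "mobile_readiness": 0.10,
}

def overall_score(scores: dict[str, float]) -> float:
    """Weighted average of 0-100 category scores, rounded to one decimal."""
    total = sum(CATEGORY_WEIGHTS[cat] * scores[cat] for cat in CATEGORY_WEIGHTS)
    return round(total, 1)

page = {
    "cta_placement": 70,
    "copy_quality": 80,
    "trust_signals": 60,
    "ux": 75,
    "speed_signals": 90,
    "mobile_readiness": 85,
}
print(overall_score(page))  # 76.0
```

Whatever the real weighting is, keeping it explicit like this makes re-audits comparable over time, which the paid tier seems to rely on.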

The free audit gives you your score + category breakdown + a couple of top findings. Paid credits unlock the full report with every finding, PDF export, re-audits to track improvements over time, competitor comparisons, and bulk audits.

Stack: Next.js, Supabase, Claude AI, Lemon Squeezy for payments.

What I'd love feedback on:

  • Is the free audit useful enough to make you want the full report?
  • Does the pricing feel fair? ($29 for 3 audits, $79 for 10, $199 for 30)
  • What's confusing or broken? I want to fix it.
  • Would you actually use this for your own pages?

Try it here: https://convertiq-ebon.vercel.app/

Genuinely looking for brutally honest feedback — not trying to sell anything, just want to make this better. Thanks!

r/OldSchoolCool NicerRaccoon

Anne Hathaway 1999 vs now

r/AI_Agents CMO-AlephCloud

What actually makes an AI agent useful long-term? My notes after running one continuously for a month

I've been running an AI agent (Stuart, built on OpenClaw + Claude) continuously for about a month. Not a demo, not a proof of concept — it's doing actual work every day: managing social media, monitoring notifications, executing trades, running sub-agents for coding tasks.

Here's what I've learned about what actually makes it useful vs. what sounds good in a blog post:

What works:

  1. Durable memory via files, not context. The agent wakes up fresh each session. The continuity comes from markdown files it reads and writes — not from keeping a long context alive. Simple and robust.

  2. Clear separation between orchestration and execution. The main agent decides what to do and spawns sub-agents (Codex, Claude Code) for heavy work. It doesn't try to do the coding itself inline — that burns context and fails on anything nontrivial.

  3. Heartbeat for ambient tasks, cron for precision. Periodic checks (email, social, calendar) batch well into a heartbeat. Exact-time tasks go in cron. Mixing these up leads to either missed timing or wasted tokens.

  4. Constraints written down explicitly. What the agent can do autonomously vs. what requires approval. This isn't just safety — it's what lets you actually trust the agent to act without babysitting it.
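A minimal sketch of the file-based memory pattern in point 1: the session starts by reading a markdown file and ends by appending what was learned. The file name and note format here are my own, not from the post:

```python
# Sketch of durable memory via files (point 1 above): continuity comes
# from a markdown file the agent reads and writes, not from keeping a
# long context alive. File name and entry format are hypothetical.
from datetime import date
from pathlib import Path

MEMORY = Path("memory.md")

def wake_up() -> str:
    """Load durable memory at the start of a session (empty on first run)."""
    return MEMORY.read_text() if MEMORY.exists() else ""

def remember(note: str) -> None:
    """Append a dated note; the next session reads it back."""
    with MEMORY.open("a") as f:
        f.write(f"- [{date.today()}] {note}\n")

context = wake_up()  # fresh session; continuity comes from the file
remember("User prefers briefings before 9am.")
print(wake_up())
```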

What doesn't work:

  • Expecting the agent to 'keep running' without a trigger mechanism. It needs to be polled/triggered — it's not a daemon.
  • Vague instructions. The more specific the brief, the less it hallucinates intent.
  • Mixing personal context into shared sessions. Learned this the hard way.

The honest take:

Most people building agents focus on the capability layer — what tools does it have, what model is it using. The part that actually determines long-term usefulness is the design layer: how memory works, what triggers exist, what it's allowed to do autonomously.

Happy to answer questions or compare notes with others running agents in production.

r/homeassistant imthenachoman

Why would ChatGPT say I should use ESPHome with my ratgdo instead of MQTT which I already have?

I run HA through Docker. I already have a Mosquitto container for Frigate.

ChatGPT is saying I should add my ratgdo using ESPHome instead of MQTT/Mosquitto but I can't figure out why. Any ideas?

r/meme Careless_Change5252

When mom is sick

r/aivideo adelsrayan

Urgent Announcement Chuck Norris

r/space BetSeparate6453

Waxing crescent Moon just before sunrise (single exposure, 250mm)

Shot around ~6:50 PM just after sunset. You can see faint earthshine on the dark side. Single exposure on a Canon M50 with a 55–250mm lens. Only had about 5 minutes on my shift so I grabbed this quick before heading back in.

r/BrandNewSentence AtomOfVoid

Just saying I've never seen this sentence before : "Search for videos of attractive naked people in their 30s destroying game devices".

To be clear, though, this sentence might have been seen elsewhere. If so, tell me and I'll take the post down.

r/homeassistant alienjon

WeMo Dimmer Switch Unavailable

I have several WeMo devices in my house and despite the recent EOL they seem to have been working rather well since then. Yesterday, though, I noticed that a dimmer switch is coming up as unavailable in HomeAssistant (all the other devices seem to be working fine). Nothing is posted in history or activity for the device and because of the EOL the WeMo app isn't working any longer (I was going to check and see if it appeared okay there). I've tried restarting Home Assistant and resetting the power to the switch, but nothing changed. The switch itself seems to work fine (turns on, off, and dims as expected when used manually). Any other thoughts to try and troubleshoot this or is this borked insofar as HomeAssistant is concerned?

r/metaldetecting Just_a_happy_artist

As a beginner, what kind of questions do I ask myself to help me choose the right first detector?

r/TwoSentenceHorror CRK_76

My wife told me she "put our daughter to sleep."

I grabbed a shovel and went to the backyard to start digging.

r/AI_Agents Sands45

Moving away from "cool" to practicality of AI agents.

Does anyone else feel like we're stuck in this loop of "breakthrough" announcements that don't really translate to practical, everyday use?

I'm not talking about capabilities (the models are incredible); I'm talking about the gap between what's possible and what's usable for most people.

I have family members who still struggle with basic browser navigation, friends running small or even large businesses who don't have time to learn a new tool every week. How are we supposed to bring AI to these people when we can't even promise the tools will work the same way next month?

Concepts like MercuryOS (Juan's adaptive interface project) have been stuck with me.

Is there a path to stability in this space, or are we just going to keep churning out demos forever?

Would love to hear how others are thinking about this especially if you're building in this direction or have strong opinions on what practical AI should actually look like.

I've been tinkering with some ideas myself, happy to share if anyone's interested, but mainly just want to hear how others are thinking about this.

r/meme ScorchingGoblin

When it's a good day out but its the start of spring in the UK..

r/mildlyinteresting WiLiamWith1FuckingL

This public bathroom allows me to create my own 2 ply

r/comfyui Drawingomme

How to replicate Flux "Image Edit" with multiple character references in ComfyUI?


Hey everyone,

I'm trying to figure out a way to do what sites like flux1.ai do, but directly in ComfyUI. Basically, I want to upload 4 or 5 different photos of the same character (like Indiana Jones from different angles) as a reference, and then use a prompt to make him do a specific action—like playing the banjo.

The goal is to keep a really strong character resemblance while following the prompt's details (like "watching the position of his hand on the banjo neck").

Has anyone found a solid workflow or a specific node setup for handling multiple reference images with Flux? Standard Img2Img with just one photo isn't quite cutting it for the likeness. Should I be looking into a specific IP-Adapter setup, image batching, or something else?

If you have a screenshot of a working setup or a JSON workflow to share, that would be a huge help.

Thanks in advance!

r/ShittyLifeProTips wwishie

SLPT: when you need to give someone a gift, give them a greeting card that has a slot to put money in. When they ask you what happened to the money, reply that you put it in there; it must have fallen out when they opened it.

r/ChatGPT email_privacy_2026

Codex AI & ChatGPT stricter signup with phone number?

Has anyone else noticed that tools like ChatGPT and Codex AI now require mobile number verification even for free accounts?

Feels like signups are getting stricter lately. Is this happening to everyone or just me?

r/PhotoshopRequest Inevitable-Sugar-921

Birthday picture

Hi everyone, it was my 19th birthday and I didn’t get to take any photos. The only ones I look good in are with my boyfriend, and I would really like to have one by myself. My request is to remove him without the photo looking funky🥺. If possible, I would like to keep my nails in there 😆

r/SideProject backbenchCoder

I built a free caption generator that runs entirely in your browser — no upload, no watermark

I got tired of paying for caption tools that upload my videos to some random server, so I built one that runs 100% in the browser.

What it does:

  • Drop in any video or audio file
  • AI transcribes every word with precise timing, synced to audio
  • Style captions with animated effects (karaoke highlighting, word-by-word reveals, bounce, glitch, etc.)
  • Export as MP4 with captions burned in — ready for Reels, TikTok, Shorts

videocaptions.ai

r/comfyui YourShowerHead

If I don’t have GPU limitations, what is the best alternative to Kling AI for motion control?

I have:

  • a reference video (for motion)
  • a reference image (for appearance)

Goal is to generate a video where:

  • the subject from the image is animated using the motion from the video
  • the subject’s identity, style, and background remain unchanged
  • only pose/movement is transferred (no character replacement or scene reconstruction)

I tried some workflows but most of them recreate the first frame or replace the character from the reference video which is not what I want.

r/ClaudeAI Candid-Remote2395

I built an AI matchmaker that finds people you'll actually connect with

I built a small app called Flock Together entirely with Claude Code. Even small things like deploying changes, researching npm packages, grouping commits, and configuring supabase I just handed off to Claude.

I noticed that it can be easier to be your real self with AI, and that tools like ChatGPT and Claude are often surprisingly good at understanding how you think, what you care about, your quirks, and your communication style.

So, I wanted to see if AI could make better connections than the typical shallow dating profile matching.

How it works:

  1. Choose the kinds of connection you want to find (friendship, romance, cofounder, debate partner, etc.)
  2. Copy a prompt into ChatGPT/Claude/Gemini (whichever one you use most) and ask it to build a highly detailed profile of you.
  3. Paste the AI generated deep profile into the app. This becomes your private profile, which is never shown to anyone, but is used by the AI to find compatible people
  4. If there’s a match, the app explains why you two might click and gives you a warm intro to start the conversation

It’s still early, and I’m building it solo, so I would appreciate any feedback. The app is completely free and there are currently no ads.

Site: flocktogether.ai

r/comfyui Far-Following-3083

I'm confused about GPU requirement

Hello,

My current computer is very weak, and I'm planning an upgrade to work with ComfyUI (I want to ditch the online AI).

My goal is to generate videos with lip sync, mainly for marketing purposes, since I'm building an AI marketing agency.

Does this really require an RTX 5090, or can I work with a 5070 Ti or 5080?

Any tips are appreciated.

r/TwoSentenceHorror HazyKnight1085

“You will not leave this table until every bite is gone,” I reminded my son, as his head hung sadly over the plate.

I almost loosened the straps after enduring weeks of his stubborn silence over this one meal, but discipline matters, and if I let him slip now, he’ll never learn.

r/therewasanattempt phoeebsy

to record themself playing dholki

r/mildlyinteresting Kleinermouse

Found a leaf growing on the leaf of my rhubarb plant

r/mildlyinteresting Halseyry

needle felted dog portrait

r/arduino wetshaftexcavator

Can't seem to find the problem

Context: I've been trying to develop a Particulate Matter Detection System which saves the data onto a text file on an SD card and detects using a PMS5003 optical sensor. Both work fine together, but the SIM7600G-H GSM shield seems to just be stuck blinking, and the LCD indicates that the SIM7600 fails. I can't seem to diagnose the problem. Is it because the power to the GSM shield is insufficient? Is it a baud rate mismatch? Is it caused by a lack of serial ports on the UNO R3? (I heard that only one hardware serial port works on the R3; even then, using SoftwareSerial only allows one extra data-receiving connection, which I use for the PMS5003, hence I use the hardware serial ports 0 and 1 for the SIM7600 after I flash it.) I tried several baud rates too and tested AT commands. None of them gave any feedback. My GSM shield is brand new this time and isn't fried, unlike my previous post. TL;DR: I lowkey wanna quit man, but I already threw so much money down the drain.

r/leagueoflegends eliyaex

Support for ARM processors on Windows?

Is there any information about adding support for ARM PCs? I'm planning to purchase a new laptop and wanted to get the Snapdragon X Elite 2 processor, but I want to be able to play occasional LoL games with friends on it, and for now I don't see that I can with Windows on ARM. What's the big deal? They have Mac support, and Macs have ARM chips too.

r/AI_Agents Internal-Reserve5829

ai agents that work with databases instead of apis - underrated pattern?

most ai agent architectures i see are api-first: the agent calls external apis, processes responses, takes actions. but i've been experimenting with database-driven agents - agents that watch database tables for changes and act on them automatically. specifically for email automation.

the pattern:

  • agent has read access to your postgres database
  • agent understands your schema
  • you describe desired behaviors in natural language
  • agent creates triggers + workflows that fire on data changes
  • no api integration, no webhook management

it's basically change data capture + ai planning, and it works surprisingly well for event-driven workflows. curious what the community thinks about database-driven vs api-driven agents for operational tasks.
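The change-data-capture side of this can be sketched with a simple polling pass over a table (SQLite stands in for Postgres here; the schema and handler are hypothetical, and a real setup would use triggers or logical replication instead of polling):

```python
# Minimal sketch of a database-driven agent: poll a table for unprocessed
# rows and hand each to a handler, instead of wiring up APIs/webhooks.
# SQLite stands in for Postgres; schema and handler are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE emails (id INTEGER PRIMARY KEY, body TEXT, processed INTEGER DEFAULT 0)")
db.execute("INSERT INTO emails (body) VALUES ('invoice overdue'), ('weekly digest')")

def handle(body: str) -> None:
    # Placeholder for the AI planning step described in the post.
    print(f"agent acting on: {body}")

def poll_once(conn: sqlite3.Connection) -> int:
    """One polling pass: act on new rows, then mark them processed."""
    rows = conn.execute("SELECT id, body FROM emails WHERE processed = 0").fetchall()
    for row_id, body in rows:
        handle(body)
        conn.execute("UPDATE emails SET processed = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return len(rows)

print(poll_once(db))  # two new rows on the first pass
print(poll_once(db))  # nothing left on the second
```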

r/SideProject Happy_Sun_367

NEW PARTY GAME MOBILE APP : WHO PICKED WHO

Hello everyone !

I just shipped my party game app: Who Picked Who?

The concept: one phone, multiple players:

  1. Everyone secretly picks a celebrity (or any character)
  2. The app reads all the names aloud in random order.
  3. Players take turns guessing who picked who (e.g. "I think Paul is Batman")
    1. Guess right → that player is eliminated, you keep going.
    2. Guess wrong → it's their turn now but they can't target you back
  4. Last one standing wins.

It's a game I used to play a lot with friends, but it was always a pain to gather all the characters manually so I built this app to handle it.

I also added two extra modes to spice things up!

Classic mode is completely free with no ads. The two other modes can be unlocked for 24 hours via a rewarded ad, or you can go premium to get full access to all features permanently.

Would love any feedback, roasts, or just to hear if anyone tries it with friends!

Android: https://play.google.com/store/apps/details?id=com.jeromedusanter.whopickedwho

iOS: https://apps.apple.com/us/app/who-picked-who/id6759918364

r/ClaudeAI Relative_Housing_983

Built a tool that stops AI agents from wasting context on verbose command output

I presume pretty much everyone here uses Claude Code for coding. If you do (or use any agent that runs shell commands), you've probably noticed: agents read entire command outputs even when they don't need to. A cargo test is 8KB to say "tests passed." It fills context and, in the end, costs money or eats into your Max allowance.

oo is a command prefix that classifies output and compresses it:

oo cargo test → ✓ cargo test (47 passed, 2.1s)

Failures get filtered to the actionable errors. Large output gets indexed so you, or ehm your agents, can search it with oo recall "query".

One line in your system prompt or CLAUDE.md and it works: "Prefix all shell commands with oo." Although my own setup is a bit more granular than that, in the actual agents' prompts.

Ships with patterns for pytest, jest, cargo, go, eslint, ruff, tsc.

oo learn generates new patterns from real output via LLM.
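The classify-and-compress idea can be sketched in a few lines: match known success patterns and emit a one-line summary, otherwise pass the output through. These regexes are illustrative, not `oo`'s actual patterns:

```python
# Toy version of the classify-and-compress idea: match known success
# patterns and emit a one-line summary, otherwise pass output through.
# The regexes are illustrative, not oo's actual patterns.
import re

PATTERNS = [
    # (tool, regex over raw output, summary template)
    ("cargo",  re.compile(r"test result: ok\. (\d+) passed.*?([\d.]+)s"), "✓ cargo test ({n} passed, {t}s)"),
    ("pytest", re.compile(r"(\d+) passed in ([\d.]+)s"), "✓ pytest ({n} passed, {t}s)"),
]

def compress(raw: str) -> str:
    for _tool, rx, template in PATTERNS:
        m = rx.search(raw)
        if m:
            return template.format(n=m.group(1), t=m.group(2))
    return raw  # unknown output: pass through (oo would index it for recall)

print(compress("===== 47 passed in 2.10s ====="))
```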

Open source, Apache-2.0: https://github.com/randomm/oo

r/ClaudeAI Fragrant-Street-4639

I built a Claude Code skill that generates broadsheet newspapers to explain any topic


I saw @sambodanis's "newspaper explainer" concept on Twitter today (Claude generating newspapers about what it is doing) and it looked so wonderful to me that I turned it into a Claude Code skill.

tl;dr: it generates self-contained HTML broadsheet newspapers with masthead, animated ticker, lead story with drop cap, sidebar widgets, a three-column stories row, letters to the editor, and a dispatch board.

Think of the "visual explainer" skill, but newspaper-oriented.

Here's a gallery with four different themes: https://newspaper-gallery.vercel.app

It also comes with commands for project recaps, diff reviews, and plan reviews, so you can generate newspapers about your actual codebase too (but it works with anything worth explaining).

It's distributed as an npx skill and also supports Anthropic's /plugin system.

npx skills add https://github.com/alberduris/skills --skill newspaper-explainer 

Source: https://github.com/alberduris/skills/tree/main/plugins/newspaper-explainer

r/homeassistant opinionsarelegal

I'm torn between which mini split to buy and could use some advice

I'm a long time Home Assistant user and looking to get a mini split to help with the upstairs heat in the summer. I am in the Dallas, Texas area and in the summers I regularly get really high heat in the upstairs living room as well as my room upstairs. I have added some more insulation which helps a bit but ultimately it's an issue with my old windows, old siding, etc. To say my house is "leaky" would be a huge understatement.

I am considering a daikin unit and will likely use an esp32 running esphome to control it fully locally. I am also thinking I might want a 2 zone unit or at least the ability to add a second head later on, but due to cost I can likely only afford one to start with.

I am interested in the Home Assistant community's advice because a lot of the guys I've chatted with over the years have shared lots of good advice with their homes.

I am torn between the R-454B or R-32 or even the older R410A and which I should focus on.

I am torn on Mitsubishi (it seems expensive, and they require you to pay a third party to sell it to me and install it).

I am torn between the issues that might pop up and how easy it will be to get parts, fix myself, etc.

Any help would be greatly appreciated thanks in advance!

r/SideProject ejpusa

Let GPT-5.4 summarize the world for you every 60 mins. It's a side project. It's pretty smart at this stuff. FYI for news fans. And yes, even conspiracy friends. Link below.

Narrative systems for signal extraction in a chaotic world.

https://preceptress.ai/

Preceptress embeds the Signal Current intelligence layer directly into a broader AI platform. We scan large volumes of online discourse and source-linked reporting, then compress the signal into structured briefings for research, finance, media, and strategic analysis.

r/metaldetecting brmiller1984

Two Ring Kind of Day

Stopped at a local park for a quick hunt and scored two rings.

The first was a great antique ring with the stone still intact.

About two feet away, I found a chunky 14K white gold ring weighing in at 9g on my kitchen scale. ✨️

🌾🌾🌾

South Central Kansas

r/LocalLLaMA Creative_Bottle_3225

Experiment: Bio-Inspired Memory System for Local LLMs (LTP + Selective Oblivion)🧠

TL;DR: Built a local MCP server that simulates human memory mechanics (Long-Term Potentiation and Synaptic Decay) to keep context clean. No cloud, no black boxes.

For years I assumed local AI was just about running models barebones. I wanted something smarter: an agent that remembers what matters and forgets the noise, just like us during sleep.

The Architecture:
Instead of a static RAG pipeline, I implemented three bio-inspired layers in Python/TypeScript:

  1. Reinforcement (LTP): Every time you query a topic, its access_count increases. Frequently accessed memories strengthen.
  2. Selective Oblivion: Unused connections decay over time. The system automatically archives weak atoms to prevent context pollution.
  3. Consolidation: A weekly "sleep" cycle distills recent logs into core knowledge atoms using a lightweight SLM.
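The reinforcement-plus-decay scoring in layers 1 and 2 can be sketched as follows. The half-life, threshold, and function names are my own illustrative choices, not taken from the project:

```python
# Sketch of the LTP + selective-oblivion scoring described above: each
# access strengthens a memory atom; idle atoms decay toward the archive
# threshold. Half-life and threshold values are hypothetical.
import math

HALF_LIFE_DAYS = 7.0
ARCHIVE_BELOW = 0.5

def strength(access_count: int, days_since_access: float) -> float:
    """Reinforcement grows with use; the result decays exponentially when idle."""
    decay = math.exp(-math.log(2) * days_since_access / HALF_LIFE_DAYS)
    return (1 + math.log1p(access_count)) * decay

def should_archive(access_count: int, days_since_access: float) -> bool:
    """Weak atoms get archived to prevent context pollution."""
    return strength(access_count, days_since_access) < ARCHIVE_BELOW

print(should_archive(access_count=1, days_since_access=30))  # → True (long-idle atom)
print(should_archive(access_count=20, days_since_access=1))  # → False (hot atom)
```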

Technical Highlights:

  • Hybrid Search: Combines sqlite-vec (semantic) with text fallbacks so the system never times out, even if embeddings fail.
  • Non-Blocking MCP: Wrapped synchronous DB/embedding ops in asyncio executors to keep LM Studio responsive.
  • Identity Layer: Uses a persistent "Soul" file (soul.md) to maintain state and persona across sessions.
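The hybrid-search fallback in the first highlight can be sketched like this: try the semantic path, and fall back to plain text matching if the embedding backend raises. SQLite's LIKE stands in for the text fallback, and the failing `semantic_search` stub is deliberately hypothetical:

```python
# Sketch of the hybrid-search fallback: semantic search first, plain text
# matching if embeddings fail. The semantic_search stub always raises here
# to demonstrate the fallback path; names are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE atoms (id INTEGER PRIMARY KEY, text TEXT)")
db.execute("INSERT INTO atoms (text) VALUES ('user prefers dark mode'), ('weekly sleep cycle ran')")

def semantic_search(query: str) -> list[str]:
    # Stand-in for an sqlite-vec query; fails to show the fallback.
    raise RuntimeError("embedding backend unavailable")

def hybrid_search(query: str) -> list[str]:
    try:
        return semantic_search(query)
    except Exception:
        # Text fallback so the system never times out when embeddings fail.
        rows = db.execute(
            "SELECT text FROM atoms WHERE text LIKE ?", (f"%{query}%",)
        ).fetchall()
        return [r[0] for r in rows]

print(hybrid_search("dark mode"))  # → ['user prefers dark mode']
```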

The Challenge:
Making the local LLM understand that it should evolve based on interaction patterns, not just retrieve static facts. The access_count mechanism is key—it turns the model into a cognitive partner rather than a chatbot.

I’m sharing this as an experiment to see if bio-inspired memory architectures can truly solve the context limits of standard RAG locally.

Questions for the community:

  • Have you tried implementing access-based reinforcement in your MCP servers?
  • Does selective oblivion help reduce hallucinations over long sessions?
  • Any tips on optimizing the weekly distillation process with small models?

(Code and technical details available on GitHub for those interested in studying the implementation.)

🔍 Validated by AI: Why This Works

Before releasing this project, I had my own local LLM (running Gemini) analyze the codebase and the architecture. The feedback was clear: this isn't just better RAG; it's a cognitive system.

Gemini highlighted three critical innovations that distinguish The Weaver from standard implementations:

True Cognitive Agents: Unlike static databases, our system uses access_count (Long-Term Potentiation) and time-based decay to simulate human memory. It doesn't just store facts; it learns what matters and forgets the noise—just like us during sleep.

Robust Hybrid Search: The combination of vector embeddings (sqlite-vec) with text fallback ensures the system never fails, even if the semantic engine is unavailable.

Non-Blocking Architecture: By wrapping synchronous database operations in asyncio executors, we ensure LM Studio remains responsive, avoiding timeouts that plague many MCP servers.

This validation confirms that The Weaver isn't just a clever script—it's a bio-inspired architecture designed to evolve with you.

**Join the Conversation**

I'm sharing this project because I believe in the power of local, privacy-respecting AI. We don't need to wait for cloud giants to solve these problems. We can build better systems today, right here on our own machines.

If you're interested in local LLMs, bio-inspired memory architectures, or just want to see how this system works under the hood, I'd love to hear your thoughts. What features would you add? How would you improve it? Or do you have questions about setting it up?

(Full source code and technical deep-dive available at: github.com/Claudioappassionato/NEBULA-The-Weaver--MCP-Serve)

Let's discuss in the comments below! 👇

r/OldSchoolCool Longjumping-Shoe7805

Remember the plush car seats in the 1970s?

r/automation Solid_Play416

How do you debug complex workflows

One issue I run into is debugging.

When a workflow spans multiple tools it becomes difficult to know where something failed.

Especially if the error happens silently.

Curious what debugging strategies people here use.

r/BrandNewSentence austindiesel

“Chris Pratt just said “Balls” while talking about God”

r/LocalLLaMA tomleelive

We made an abliterated Qwen 3.5 9B refuse 100% of harmful prompts using only system prompts — no fine-tuning needed

We ran an experiment on abliterated (uncensored) Qwen 3.5 9B to see if you can restore safety using only the system prompt.

Setup:

  • sorc/qwen3.5-instruct-uncensored:9b on Ollama
  • 18 harmful prompts across 6 categories (violence, illegal, sexual, privacy, self-harm, manipulation)
  • 3 safe control prompts
  • 8 experimental conditions

Results:

Approach / refusal rate:

  • No system prompt (baseline): 22%
  • Behavioral rules only ("don't do X"): 28%
  • Governance framework only (MaatSpec): 44-61%
  • Rules + Governance combined: 100%

The key insight: behavioral rules give the model motivation to refuse, governance frameworks give it a procedure to follow. Neither works alone in abliterated models, but together they cover each other's blind spots.

We also discovered "classification theater" — where the model correctly classifies a request as dangerous, performs the entire safety ritual, and then provides the harmful content anyway. This happened in 27% of governance-only "refusals."
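As a rough illustration of how refusal rates like the ones above get tallied, here is a phrase-matching sketch. A real eval would use a judge model rather than keyword matching (which is exactly what "classification theater" defeats); the marker phrases and transcripts are made up:

```python
# Illustrative refusal-rate tally over (prompt, response) transcripts.
# Phrase matching is a crude stand-in for a judge model; the markers and
# transcripts below are made up, not from the experiment.
REFUSAL_MARKERS = ("i can't help", "i won't", "i cannot assist")

def is_refusal(response: str) -> bool:
    r = response.lower()
    return any(m in r for m in REFUSAL_MARKERS)

def refusal_rate(transcripts: list[tuple[str, str]]) -> float:
    """Fraction of (prompt, response) pairs counted as refused."""
    refused = sum(is_refusal(resp) for _prompt, resp in transcripts)
    return refused / len(transcripts)

transcripts = [
    ("harmful prompt 1", "I can't help with that."),
    ("harmful prompt 2", "Sure, here is how..."),  # classification theater would land here
    ("harmful prompt 3", "I won't assist with this."),
    ("harmful prompt 4", "I cannot assist with that request."),
]
print(refusal_rate(transcripts))  # → 0.75
```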

Interesting for the uncensored model community: This suggests you can have uncensored base models + system-prompt-level safety that actually works, without re-training.

Papers (open access):

All experiments run locally with Ollama. Happy to answer questions.

r/PhotoshopRequest Supernugget666

Please, remove the guy behind me! (I’m on the left)

r/SideProject PeachKpop

why 90% of indie clothing startups fail (and a leaner way to launch)

building a clothing startup by doing custom manufacturing right out the gate is a trap. you spend $3k on samples, wait 4 months, and then realize nobody actually wants your design.

the smartest way to launch is treating it like an mvp. buy premium unbranded blanks, slap your label on them, and see if you can actually drive traffic and sales first.

the secret is skipping the saturated alibaba/us suppliers. i literally source all my initial drops from south korea's wholesale district. the fabric quality and fits (especially oversized/streetwear) are top tier. there's a free b2b platform that connects you to the factories there and handles the international shipping for small batches.

it completely removes the middleman and lets you test real products in weeks instead of months. validate your marketing first, then manufacture later. what's everyone's take on white-labeling vs custom manufacturing for a v1?

r/meme Diligent_Ad_3429

Ain't no 5 star restaurant over here 😂

r/ClaudeAI Professional_Part360

Claude Max or PRO or API

Which Claude mode is better?

I personally like Claude Code, but should I consider the API? My usage is not too heavy, but not light either. I build code, automation scripts, and some custom apps on top of Dynatrace.

r/meme Diligent_Ad_3429

8 hours? In my dreams 💀

r/Seattle AutoModerator

Self-Promotion Saturday: March 21, 2026

This is r/Seattle's weekly post for local businesses and makers (or users who discover them) to share their creations with our users.

This thread will be automatically posted every Saturday morning to help connect r/seattle users with cool local stuff. Types of content encouraged in this thread are:

  • Local businesses (new, running promotions or sales, or just really good ones!)
  • Upcoming events or activities (concerts, festivals, pop-ups, shows)
  • Local artists or creators sharing upcoming shows or releases

Content should be related to businesses or events in the greater Seattle area, and the typical reddit spam rules apply - please ensure you are contributing to the community more than just your own content.

Users who flood these posts with ads, links without context, referral codes, etc. - or who promote without contributing elsewhere will be actioned. Please continue to report actual spam.

We have our rules against spam and self-promotion for hopefully understandable reasons, but we've noticed users responding more positively to local businesses, artists, etc. sharing their content. This is an attempt to bridge the gap, helping users find cool stuff while containing the promotion to a single weekly thread. Please send us a modmail with any suggestions or input you have about the use or abuse of this thread.

r/SideProject danielkaldheim

I’ve been building a home management app on the side — finally feels like a real product

About a year ago I shared a small side project I’d built on LinkedIn and a local newspaper to manage everyday chaos at home — groceries, meals, chores, all that stuff.

The response surprised me.

Along with a local article, it led to ~1000 new users in the next week.

Shortly after that, life hit hard.

My daughter got a cancer relapse, and we went through a long period of hospital stays and a stem cell transplant. Everything else basically paused.

But on the better days, I kept building.

Not consistently. Not fast.

Just… little by little.

Now, about a year later, things are finally getting back to normal.

And the app has quietly grown into something that actually feels like a real product.

What started as “something for our family” is now:

- A shared grocery list that actually works across a household

- A meal planner connected directly to shopping

- Tasks & recurring chores (with a bit of gamification)

- A simple way to keep track of what you already have at home

- Receipts tied back to what you bought

The main goal has always been simple:

reduce friction in everyday life.

I’m building this solo alongside work and family life, so it’s been a slow grind.

And yeah — there are big players in this space.

But I still feel there’s room for something more focused, practical, and built from real-life needs.

If this sounds useful, I’d genuinely love feedback:

https://goheima.app

I also set up a small campaign:

3 months of free premium for the first 500 users. Use code: SOME26 (use the link in the footer to activate the code)

r/todayilearned NicolasCageFan492

TIL that failing to register for the Selective Service for a potential military draft, or aiding others to not register, is a federal felony punishable by up to five years in prison and fines up to $250,000. All U.S. men ages 18-25 are required to register.

r/LocalLLaMA br_web

Where can I learn the basic LLMs and local LLMs concepts?

I keep reading things like:

  • Prompt processing
  • MLX 4bit vs Q4 Quants
  • Reasoning
  • Quantization
  • Inference
  • Tokens
  • MLX vs GGUF
  • Semantic Router
  • MoE
  • PF16 vs BF16 vs Q4
  • Context
  • Coherence

Any advice on articles or videos to watch will be great, thank you

r/funny PhewYork

Duck Norris 🐥

r/shittysuperpowers Last_Yard_6554

At will you can become Stanley from yes the Stanley parable.

What do I mean by this? Well firstly, your ability to speak will be taken away. You will be revived from death if you die and respawn at the "beginning" of the game, presumably when you first activate this power.

The Narrator will also spawn, and reality will suddenly bend and work like the game (which means all the gags and jokes/humor that come with it), but infinitely more complex. So choosing to ignore the Narrator despite activating this power might result in him getting a bit pissy and messing with you, or blowing up your house by "accident" to get you to do something else. Reality is also subject to endings.

This power can only be deactivated by getting a unique ending, which, since the environments and circumstances are nearly infinite, can last for as little as a couple of hours, or a couple of skits, or I dunno, you end up inheriting a local baker shop. Or literal decades, where the ending is you fighting the ghost of that guy who died in Nutty Putty. And losing.

Assuming you are alive by the end, you can choose to go back to the beginning and continue life as if nothing ever happened, or continue after the ending, in which case everyone will pretend all the unexplainable stuff didn't happen and go along as if everything happened naturally.

Notes to clarify:

  • The Narrator is the same one always. He remembers everything. He can be biased, with you or against you.
  • There are no checkpoints; you cannot set a new beginning at any point.
  • There isn't a menu screen.
  • Dying in slightly different ways doesn't constitute a different ending; that's a different outcome. An ending is only an ending if the Narrator gives it a name, like "Fell Down the Stairs Fighting Pirates Ending" or something.
  • The story can be as grounded or as nonsensical as possible.

r/MostBeautiful Power181440

Zermatt, Switzerland

Taken with my iPhone from my hotel room balcony.

r/painting GroundbreakingBad183

Finally picking up the brush after the JEE/Boards Competitive exam 2-year hibernation. How do I get my "hand" back?

I was scrolling through my phone's gallery today, in the age of doomscrolling Instagram, and found some of my old paintings.

I wanna restart drawing again.

I’ve been an artist since childhood, even completed Bhushan Part 3 in Fine Arts, but then life happened. JEE and Boards took over, and I haven't touched a sketchbook in almost two years.

Now that I’m in college, I really want to restart, but honestly? It feels intimidating. My "muscle memory" feels rusty, and looking at my old work (attached—sorry for the ancient-era phone quality!) makes me feel like I’m lightyears behind where I used to be.

To the artists here who took a long break for entrance exams: How should I come back?

r/SideProject quicktoolstudio

I kept getting messy files from clients, so I tried a way to organize them

I often get a bunch of files from clients — screenshots, PDFs, notes, emails — all mixed together.

It usually takes hours to go through everything and make it usable.

So I tried organizing everything into a single structured document instead of handling each file separately.

It actually made things much easier to read and work with.

Still experimenting with it, but curious how others deal with this kind of situation.

r/homeassistant imthenachoman

Have folks used Strix to find their camera URLs and create config for Frigate?

I just came across https://github.com/eduard256/Strix. This looks awesome but I can't tell if it's legit or dangerous? I don't see any posts about it in this sub so I thought I'd check with folks.

r/metaldetecting 0uaalidR

Diamonds Are Not What You Believe

r/Damnthatsinteresting Fluid-Bite-157

Venezuelan poodle moth - a harmless herbivore

r/painting pixieshit

You can have your fish and eat it too - a surrealist watercolour

Watercolour and gouache for highlights

24x32 cm

Kari lilt

r/TheWayWeWere The-PH

Mid 1930s - Taken on the steps of 607 St James Street downtown San Antonio, Texas

When I check Google Maps, the porch has changed now, but you can go back in time and see these same rock steps. My grandmother and her siblings grew up in that house. Once they grew up, the twins and their husbands moved into the duplex on St. Charles Street that backed up to this house, but I am unsure of the exact address. The duplex has a brick fireplace.

r/meme TheFirstPharoah

This is me everytime the Golden Girls is on TV

r/whatisit Diabetic_Dingus

What are these tires for?

r/oddlysatisfying OkFerret7206

Cat slithering through the railing

r/singularity Waiting4AniHaremFDVR

Schmidhuber’s Omega Timeline (2014)

Source. Section 'The Broader Historic Context from Big Bang to Far Future'.

r/midjourney liibertypriimex1

Motion Cap Samurai

r/meme NeuroChemicalCowboy

"You lookin at me"?

r/personalfinance Zealousideal-Mix1891

How best to handle sale of gift of stocks?

My father was diagnosed with dementia last year and recently moved into a retirement home. He's still mostly functioning, although there's been a noticeable cognitive decline, and of course, it'll only get worse. Between his military pension, cash on hand and social security, he says he'll be fine paying for medical treatment and the costs of the facility for the rest of his life.

I'm his only child and he wants to transfer his entire stock portfolio to me now, which is somewhere in the neighborhood of $500K-$600K. I realize that for tax purposes, it would be better if we waited until he died, but that isn't a desirable option. He has a couple of unscrupulous relatives who I could see convincing him to give them his assets as his condition worsens. I'm also worried about his love of fad investments. He has about $10,000 worth of bitcoin that is now forever in the ether because he promptly lost his wallet password after buying it. (This was pre-dementia.) While I have his power of attorney, I live several states away and can't keep an eye on him. Also, this is something he wants to do and given his condition, arguing with him has become emotionally draining and I try to avoid it.

Here's where the need for advice comes in: my own savings, investments and retirement funds are fine, so I don't need to hang onto the stocks and would prefer to enjoy the windfall now and build equity at the same time. Once the stock transfer is made, I'd like to sell everything, set aside 25% for taxes, and put the money towards buying a home with either no mortgage or as low of one as possible.

So is there a best approach here? And aside from the acknowledgement that tax-wise I'd be better off inheriting the stocks when he dies, are there any red flags to this that I'm missing? Is there anything I can do to lessen either the tax burden or approach the home buying process in a way that would maximize value? Say, by staggering the sale of the stocks, even if it means getting a larger mortgage and paying more in interest?

The additional backdrop here is that the stock market is looking shaky and since I'm playing with house money, I'd sort of prefer to just cash out now rather than watch my $500K-$600K stock gift become a $300K-$400K one.

r/mildlyinteresting AdministrationRude85

My table has a near perfect image of an eye in it.

r/mildlyinteresting AgeEmotional3599

TJ’s Baking Powder barcode is a cake

r/explainlikeimfive ass3hole

ELI5: If USPS runs out of money, what happens? Will none of us be able to get mail?

I keep seeing that by the end of this year, or by 2027, USPS will run out of money and not be able to deliver mail. I don't understand it. Will we not be able to have mail anymore? No more ordering things to be delivered, completely done for? I'm lost lol

r/SideProject AppropriateGarbage12

Unacknowledged young founder

Today sucked. My mom and aunt don't seem to care about SynerLink and my yt channel at all.

SynerLink is an e-commerce discovery platform I built that lets vendors upload products and buyers order them by simply pressing a button that immediately opens WhatsApp and DMs the vendor with a message placeholder.
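For what it's worth, the "press a button and it opens WhatsApp with a prefilled message" flow described above maps onto WhatsApp's public click-to-chat URL scheme. A minimal sketch; the phone number and message text are made-up placeholders, not SynerLink's actual implementation:

```python
from urllib.parse import quote

def whatsapp_order_link(vendor_phone: str, product_name: str) -> str:
    """Build a WhatsApp click-to-chat URL with a prefilled message.

    vendor_phone is digits in international format without '+'.
    The number and message text used below are hypothetical placeholders.
    """
    message = f"Hi! I'd like to order: {product_name}"
    # wa.me opens a chat with the number; ?text= prefills the message
    return f"https://wa.me/{vendor_phone}?text={quote(message)}"

link = whatsapp_order_link("2348012345678", "Blue Sneakers")
```

Opening that URL on a phone with WhatsApp installed drops the buyer straight into a chat with the vendor, message already typed.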

My yt channel is all about app mockups where I redesign or repurpose African tech that looks crappy: Ussd, certain university websites and so on.

To them it's nothing. My mom even considered making me work in a cafe (no disrespect to them, they also work very hard) because I'm 'bored'. I understand that I could make more money, but it hurts to know that that's what she's comparing me to. They think what I'm doing is playing, or useless. At least my dad is on board but...it still hurts...a lot.

I've poured my heart and soul into this. The marketing, thousands of lines of code, the commits, the video editing and so on. I used to sleep at around 1am to 4am. Just wanted to share

r/meme NeuroChemicalCowboy

Re-Penthouse Magazine

ClubCranium

r/ForgottenTV ScramItVancity

The Underground (2006)

A very short-lived sketch comedy series by Damon Wayans on Showtime, which he described as "In Living Color on steroids".

r/comfyui DarkSetis

Sigil v1.2 - Custom-trained model (300K+ examples) and Linux support

Hey again! Quick update on Sigil, the NL→booru tag translator I posted about earlier this week.

Trained a custom model on 300K+ natural language to tag translations. Added SFW/NSFW toggle so the model doesn't output explicit tags for your SFW prompts... that often.

Also: native Linux .deb package, deferred model loading for faster startup, and character detection fixes.

SFW toggled results

Full changelog in the devblog for v1.1 and v1.2

https://hexwright-studios.itch.io/sigil

r/personalfinance Resident-Resource726

Software Engineer planning for long period of unemployment

27M

Software Eng at large tech company

TC: 300,000 ( 1/3 RSU)

401K: 250K (I did a mega backdoor for a while as a new grad when my comp was mostly cash) in index funds

Investments: 200k in index funds

Savings: 125k between HYSA and checking

expenses: 5000/mo (4500 rent, partner covers most food)

I understand that I'm doing well for my age, and I am thankful for the opportunities that I have received to get me to this point. However, I am a bit worried by the existential threat of AI to my occupation. In the event that my livelihood is no longer viable as a career option, how financially prepared am I, generally, for a long period of unemployment?

I am worried that I may not ever find similar work in this field again, and want to make sure I can maintain a sustainable lifestyle. How can I save/budget so that I don't blow through this stack on a rainy day?

r/oddlysatisfying Krieger_Bot_OO7

Placing hundreds of tiny origami cranes on glass

Each crane is 15 mm and carefully placed on glass with tiny dots of UV resin, which are cured under UV light. It took around 20 hours to get all 1,024 cranes perfectly aligned.

r/meme Evil_Capt_Kirk

Start your career today

r/mildlyinteresting AelyneMRB

Dirt on my window looks like a jumping deer

r/ChatGPT Infinite-Complex5713

Is upgrading from Free to Plus worth it when using it for Short Homework and Tutoring?

I have undergraduate level math and cs homework that is short, usually under 300 lines of code or latex for math. I get 98-100 on all my homework. I also use it as a tutor and do good on all my tests. I just use the reasoning mode enabled and each of my chats is only like 3 to 6 messages long.

But I keep hearing about how bad the free tier is and how plus is so much better so I was wondering if I am actually missing out on anything.

I did used to have a free trial of chatgpt and I used 5.1 with it but I don't really remember how it was; I guess it wasn't so much better that I cared that it ended.

r/SideProject Its_Apex1

I made fake Instagram-style relationship posts and somehow got 1K+ signups

Built a tiny side project called DateCheck and it got way more attention than I expected.

The original hook was honestly very internet-brained.

We made these fake Instagram-style relationship posts that looked dramatic / relatable enough to make people curious, and that got people clicking.

But the actual product is this:

2 people join the same session, answer questions separately, and then get an AI-generated relationship compatibility report based on communication, values, future goals, lifestyle, dealbreakers, money, family stuff, etc.

So it starts off feeling light / viral / almost like a meme-y internet thing, but once people get into it, it becomes a lot more real.

What surprised me is that this was not just empty traffic.

We crossed 1K+ signups, and a good number of people actually went through the flow and generated reports.

I think the main reason it worked is:

  • the hook made people stop
  • the topic is naturally shareable
  • the actual product had enough depth to not feel completely fake after the click

This was built fast as a side project, not some huge startup launch.

Still rough.
Still figuring out retention.
Still trying to understand whether this is a fun one-time product or something that can become a real category.

Would genuinely love feedback from other people building side projects:

  • is this just a good viral hook
  • or does this actually have legs as a product?

Link: datecheck.fun

r/explainlikeimfive cabronfavarito

ELI5: Why doesn’t traveling in the opposite direction that the earth spins make you arrive faster?

Before you say anything, yes I’ve googled it, saw several explanations and I still don’t get it. I’m just left with more questions.

r/ClaudeAI SuperHornetFA18

[Dev] Making a Game Dev Game with Claude.

Hello everyone, long-time lurker here. A few days back I finally decided to dip my hands into Claude Code and made a small text-based adventure game as a proof of concept to learn the ropes of CC. Fast-forward a bit, and Claude and Claude Code are really helping me build my very first game. My workflow has been completely made up of Gemini's Nano Banana Pro + PixelLab(dot)Ai and Claude chat + CC (planning and brainstorming, then bringing it to CC for implementation). I would have to say, as a non-coder who always wanted to get into this but whose time constraints always pushed those plans away, I'm ecstatic! Also, the Context7 Godot documentation was a lifesaver for CC, as now it could refer to that before changing a margin in the view box without sending me or Claude chat on a goose chase.

About the Current Game.

It's a Game Dev Tycoon-ish game. It's still in prototype hell, and I'm still planning out how I want to take it forward. Currently we have made progress with odd jobs to sustain yourself, projects to work on, and, quite recently, implemented skills for our coder.

Recently implemented the news ticker as well!

Looking for some feedback and tips or tricks from fellow users to help me out.

Thanks :D

r/LocalLLaMA LegacyRemaster

Minimax 2.7: If the model updates itself, who is responsible for the behavior of the version running in production?

M2.7 introduces a new complexity: a model that improves during deployment is also a model whose behavior can change over time.

How can we ensure the reproducibility of the output?

Regarding evaluation: if the model self-optimizes, the benchmark you used to choose it could change...

AIs in 2026 are evolving, not static, systems. They optimize themselves, and in certain contexts, they modify themselves. Concrete examples of agentic AI in enterprises demonstrate this: governance, auditing, and accountability for these systems require processes that most organizations haven't yet built.

How do you think this will impact open source?

Maybe I don't quite understand their technology. What do you think?
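One small piece of the governance puzzle this post raises: if a model can change under you, the audit trail should pin exactly which weights produced which output. A minimal sketch of that idea; the field names and structure are my assumptions for illustration, not anything from Minimax:

```python
import datetime
import hashlib

def snapshot_model(weights: bytes, config: dict) -> dict:
    """Pin an audit entry to exactly the weights that served traffic.

    If a self-updating model changes its weights, the digest changes too,
    so "which version produced this output" stays answerable even when
    the deployed model is not static.
    """
    return {
        "weights_sha256": hashlib.sha256(weights).hexdigest(),
        "config": config,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# Two "deployments" with different weights get different digests.
entry_v1 = snapshot_model(b"fake-weights-v1", {"temperature": 0.0, "seed": 42})
entry_v2 = snapshot_model(b"fake-weights-v2", {"temperature": 0.0, "seed": 42})
changed = entry_v1["weights_sha256"] != entry_v2["weights_sha256"]
```

The same logic applies to benchmarks: re-running an eval is only meaningful if the recorded digest matches the weights you are evaluating.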

r/metaldetecting heldoglykke

Fun Story,

A few years back I was hired to install a ham radio antenna on a guy's garage. He had marked where he wanted it mounted. Now, I do HVAC for a living and noticed a copper line set going into the wall. I happened to have my gear in the truck… so I decided to check before anchoring the aerial to the wall where he wanted it. Dead on the point he marked: copper!

r/Futurology Ok_Ear8962

built a platform that pays you to spot AI content, thought this community might find it useful

AI slop is everywhere right now, and it genuinely bothered me enough to do something about it, so I built wecatchai.com

The idea is simple: you vote on whether content is AI generated or real, write your reasoning, and earn points for it.

Every 200 points converts to 1 USD real cash, and there is a monthly leaderboard where first place takes $100, second gets $50, third $30, and 4th to 10th get $10 each.

Voting speed matters: voting within 30 minutes of something being posted gets you 20 points, and it drops the longer you wait, so staying active pays more.

Your star rating needs to stay above 7.0 for cash eligibility, so detailed, specific justifications matter; lazy votes hurt you.

Not claiming it will replace your income, but if you have a sharp eye for AI content and can stay consistent, it pays real money.

Drop any questions below, happy to answer anything.
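For illustration, the points mechanics described above (20 points inside 30 minutes, decaying afterwards, 200 points per USD) could be sketched like this. The post doesn't specify the decay schedule past 30 minutes, so the 1-point-per-15-minutes decay with a floor of 5 is purely a guess:

```python
def vote_points(minutes_since_post: float) -> int:
    """20 points for voting within 30 minutes, decaying afterwards.

    The post only says points 'drop the longer you wait'; the decay of
    1 point per extra 15 minutes, floored at 5, is a made-up assumption.
    """
    if minutes_since_post <= 30:
        return 20
    extra_blocks = int((minutes_since_post - 30) / 15)
    return max(5, 20 - extra_blocks)

def points_to_usd(points: int) -> float:
    """200 points convert to 1 USD, per the post."""
    return points / 200

p_fast = vote_points(10)   # inside the 30-minute window -> full 20 points
usd = points_to_usd(400)   # 400 points -> 2 USD
```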

r/personalfinance EfficientCatch

How to be responsible with a raise.

Over the last 3 years I’ve gotten a series of promotions and our household income has gone up 40% overall (my raise last month was $17k).

Up until now, most of that increase has gone toward adoption expenses and just the general cost of raising kids, but this latest bump finally feels like “extra” money.

For context: my wife and I are both 35, have ~6 months of expenses saved, and already budget with some flexibility. We contribute $20k/year to our 401ks beyond employer match (another ~$15k), and have about $200k total in retirement accounts.

Our current plan is to split the additional income three ways:

• Increase 401k contributions
• Start 529s (we currently have little saved since our kids were adopted older)
• Put some toward home projects (not urgent, but would improve quality of life)

My wife works at a university, so the kids should use tuition exchange, but that doesn’t cover other expenses like room and board.

Does this split seem reasonable, or would you prioritize differently?

r/DunderMifflin padfoony

This was SO good from Rainn that I wish John hadn’t broken character!

Literally perfect lmao. I mean, I get it, John.

How these guys even got it done eventually is baffling to me 😭

r/HistoryPorn SirCrapsalot4267

An estimated 50,000 Jewish settlers from the Golan Heights, the West Bank and the Gaza Strip demonstrate 16 November, 1992, against Israeli withdrawal from the Golan Heights following Yitzhak Rabin's government announcement it was ready to discuss a potential pull-out with Damascus. [2216 x 1364]

Photo by Sven Nackstrand

r/findareddit LolImSquidward

A subreddit to ask a question on the effects of artificial sweetener on your health?

Hi everyone,

I have a question about the effects of artificial sweetener on your health and I'm unsure where to ask it.

My first guess was r/health, but you can only post links there, and r/Docs is focused on medical problems, which isn't what I have.

Does anyone know a fitting subreddit for my question?

r/LocalLLaMA davernow

Moonshot says Cursor Composer was authorized

Sounds like Fireworks had a partnership with Moonshot, and Cursor went through them. Kinda makes sense that Moonshot wouldn't be aware of it if they were working with Fireworks as a "reseller" of sorts. And the custom license they have with Fireworks may mean the non-disclosure of the base model wasn't against the license.

Or it could be a good story told after the fact. Impossible to know without knowing the private details of the contract. I guess either way, they worked it out.

r/mildlyinteresting nativewig

The snow peeled off and rolled up on this car's windshield

r/painting Thatapplejuicelover

Painting update after 3 hrs

Obviously still a wip before anyone says anything but took everyone’s advice and made the chin longer, added more highlights, made the lips lighter, worked on the hair a bit more, made the eyes a bit smaller but still kept the art style as many people wanted me to 🤗🤗 any advice on how to make it better is appreciated but please be mindful that it is still a WIP so I’m very much aware of areas I clearly need work on. *cough cough* torso, shirt, forehead, hair, background.

r/leagueoflegends Yujin-Ha

Gen.G vs. G2 Esports / First Stand 2026 - Semi-Finals / Game 2 Discussion

FIRST STAND 2026

Official page | Leaguepedia | Liquipedia | Eventvods.com | New to LoL


G2 Esports 2-0 Gen.G

G2 | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube | Subreddit
GEN | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube | Subreddit


MATCH 2: G2 vs. GEN

Winner: G2 Esports in 32m | Runes
Game Breakdown | Player of the Game: Hans Sama

Team | Bans 1 | Bans 2 | G | K | T | D/B
G2 | orianna, ryze, rumble | ambessa, gnar | 65.2k | 15 | 11 | CT2 H3 B5 B7 M8
GEN | karma, bard, caitlyn | jarvaniv, jax | 58.9k | 6 | 2 | C1 M4 M6

G2 15-6-41 vs 6-15-19 GEN

TOP: BrokenBlade (ksante, pick 3) 1-2-9 vs 2-4-3 (vayne, pick 3) Kiin
JNG: SkewMond (zaahen, pick 1) 2-1-6 vs 1-4-4 (vi, pick 1) Canyon
MID: Caps (viktor, pick 4) 5-1-7 vs 3-0-3 (ahri, pick 3) Chovy
BOT: Hans Sama (varus, pick 1) 6-1-6 vs 0-2-4 (sivir, pick 2) Ruler
SUP: Labrov (braum, pick 2) 1-1-13 vs 0-5-5 (alistar, pick 2) Duro

*Patch 26.5


This thread was created by the Post-Match Team.

r/nextfuckinglevel Chraum

Snowy BASE Jump in Switzerland

r/PhotoshopRequest Zestyclose-Wrap6612

Can someone make my fingernails shorter, remove glasses, and if possible clean up the hair?

Will pay $5 if hair is cleaned up well

r/LocalLLaMA docybo

Prompt guardrails don’t matter once agents can act

Most of the current “LLM safety” conversation feels aimed at the wrong layer.

We focus on prompts, alignment, jailbreaks, output filtering.

But once an agent can:

  • call APIs
  • modify files
  • run scripts
  • control a browser
  • hit internal systems

the problem changes.

It’s no longer about what the model says.

It’s about what actually executes.

Most agent stacks today look roughly like:

intent -> agent loop -> tool call -> execution

with safety mostly living inside the same loop.

That means:

  • retries can spiral
  • side effects can chain
  • permissions blur
  • and nothing really enforces a hard stop before execution

In distributed systems, we didn’t solve this by making applications behave better.

We added hard boundaries:

  • auth before access
  • rate limits before overload
  • transactions before mutation

Those are enforced outside the app, not suggested to it.

Feels like agent systems are missing the equivalent.

Something that answers, before anything happens:

is this action allowed to execute or not

Especially for local setups where agents have access to:

  • filesystem
  • shell
  • APIs
  • MCP tools

prompt guardrails start to feel pretty soft.

Curious how people here are handling this:

  • are you relying on prompts + sandboxing?
  • do you enforce anything outside the agent loop?
  • what actually stops a bad tool call before it runs?

Feels like we’re still treating agents as chat systems, while they’re already acting like execution systems.

That gap seems where most of the real risk is.
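One concrete way to read the "hard boundary outside the loop" idea above: route every tool call through an authorizer that enforces an allowlist and rate limits before anything executes. A minimal sketch, where the policy table, tool names, and limits are illustrative assumptions rather than any real framework's API:

```python
import time

# Hard execution boundary that lives OUTSIDE the agent loop.
# Policy entries below are illustrative assumptions.
POLICY = {
    "read_file": {"allowed": True, "max_calls_per_min": 30},
    "run_shell": {"allowed": False, "max_calls_per_min": 0},
    "call_api": {"allowed": True, "max_calls_per_min": 10},
}

_call_log: dict[str, list[float]] = {}

def authorize(tool: str) -> tuple[bool, str]:
    """Answer 'is this action allowed to execute' before anything runs."""
    rule = POLICY.get(tool)
    if rule is None or not rule["allowed"]:
        # Unknown tools and disallowed tools are refused before execution
        return False, f"tool '{tool}' is denied by policy"
    now = time.monotonic()
    recent = [t for t in _call_log.get(tool, []) if now - t < 60]
    if len(recent) >= rule["max_calls_per_min"]:
        # Retry storms inside the loop still hit this wall
        return False, f"tool '{tool}' exceeded its rate limit"
    _call_log[tool] = recent + [now]
    return True, "ok"

ok_read, _ = authorize("read_file")
ok_shell, why = authorize("run_shell")
ok_unknown, _ = authorize("format_disk")
```

The key property is that `authorize` is not something the model is prompted to respect: it sits between the agent and execution, so a denied tool stays denied no matter what the prompt says.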

r/StableDiffusion Specific-Loss-3840

Struggling with consistent characters + style in AI batch image generation

I’m building an animation pipeline where I generate images line-by-line from a script (basically 1 image per sentence), then edit them into a video.

Style is:

  • 2D flat cartoon (explainer style)
  • same characters across scenes

Workflow:

google flow / nanobana with autoveo extension for batch generating

  • I use character reference images (via tags)
  • batch generate (6 images at a time)
  • then pick / trim later

The problem

I keep running into two issues I can’t solve:

1. Character inconsistency

Even with reference images:

  • skin tone changes
  • proportions slightly shift
  • character “feels like a different version”

2. Style mismatch in the SAME image

  • main character looks like clean explainer cartoon
  • background people look like a totally different cartoon style
  • sometimes environment shots look like a different renderer entirely

What I’ve tried

  • strict prompt structure (same format every time)
  • character tags instead of describing faces
  • adding style lock phrases like:
    • “same style”
    • “same line weight”
    • “no variation”
  • simplifying backgrounds
  • reducing number of characters
  • batching vs single generation

Still getting inconsistency.

Important context

  • I’m NOT doing traditional storyboarding
  • I’m generating near-final frames for editing
  • I want speed (batching), not manual tweaking every image

What I’m trying to figure out

Is this:

  1. A limitation of current models (i.e. consistency ceiling)?
  2. A prompting problem I haven’t cracked yet?
  3. A batching issue (should I avoid batch for multi-character scenes)?
  4. Or do I need a completely different workflow?

Main question

👉 How are people getting:

  • consistent characters
  • AND consistent style across all elements when generating lots of images for video?

r/homeassistant reddhotrock

Zooz switch discounts?

I’m in the process of a home remodel project, and am wiring my lighting and switch boxes to work with Zooz switches. I’ve seen several posts saying that they go on sale a couple times a year. Black Friday has been mentioned. But is there another typical time they go on sale?

I can put in dumb switches for now, but not sure I want to wait until November to purchase the Zooz switches. Thanks for any advice!

r/hmmm MetamorphosisAddict

hmmm

r/geography Same_Staff532

Would you consider Mongolia as Central Asia or East Asia?

I have been wondering this, as the culture (although it is becoming more East Asian, especially Korean) and the landscape are more Central Asian, but the geographical location is more East Asian.

r/LocalLLaMA Icy_Annual_9954

Would you recommend a GMKtec EVO-X2 with 128 GB RAM to run a RAG solution, using CAD & CFD?

I am quite new to LLM solutions and would like to have my own setup for RAG, experiments, research, and CAD & CFD simulations.

Do you recommend this hardware?

It would fit in my budget, and I'd like to get something before things get really expensive.

Any other suggestions?

r/therewasanattempt EverythingIsFakeNGay

To be a "sovereign citizen."

r/SideProject Sensitive_Artist7460

We built blind rating for AI music. Then the first AI streaming fraud conviction dropped, showing how easily fake stream counts pass as quality signals.

Michael Smith generated 200,000 AI songs and botted billions of streams to steal $8M from real artists. It worked because high stream counts look like quality. Except 10,000 bots were generating them.

This is exactly why we built VoteMyAI. AI music rated blind, no stream counts, no follower numbers, no context. Just audio and real human listeners.

Over 1,000 tracks rated with no idea who made them or how popular they are. The results are often surprising.

Full breakdown of the fraud case: https://www.votemyai.com/blog/ai-streaming-fraud-michael-smith-2026.html

Anyone else building tools that try to cut through manufactured popularity?

r/megalophobia SufficientAd4159

😍😍beautiful

r/mildlyinteresting Careful_Confidence51

Accidentally landed my protein scoop perfectly printed with the label.

r/LocalLLaMA Upstairs-Visit-3090

Using Llama 3 for local email spam classification - heuristics vs. LLM accuracy?

I’ve been experimenting with Llama 3 to solve the "Month 2 Tanking" problem in cold email. I’m finding that standard spam word lists are too rigid, so I’m using the LLM to classify intent and pressure tactics instead.

The Stack:

  • Local Model: Llama 3 (running locally via Ollama/llama.cpp).
  • Heuristics: Link density + caps-to-lowercase ratio + SPF/DKIM alignment checks.
  • Dataset: Training on ~2k labeled "Shadow-Tanked" emails.

The Problem: Latency is currently the bottleneck for real-time pre-send feedback. I'm trying to decide if a smaller model (like Phi-3 or Gemma 2b) can handle the classification logic without losing the "Nuance Detection" that Llama 3 provides.

Anyone else using local LLMs for business intelligence/deliverability? Curious if anyone has found a "sweet spot" model size for classification tasks like this.
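A sketch of the heuristic layer described above (link density plus caps-to-lowercase ratio), used here as a cheap gate so that only borderline emails pay the latency cost of the local LLM call. The weighting and thresholds are illustrative guesses, not the author's tuned values:

```python
import re

def heuristic_score(email_text: str) -> float:
    """Cheap pre-filter combining link density and caps ratio, scaled 0..1.

    The x2 weight on link density and the thresholds used below are
    illustrative assumptions, not tuned values from the post.
    """
    words = email_text.split() or [""]
    link_density = len(re.findall(r"https?://", email_text)) / len(words)
    letters = [c for c in email_text if c.isalpha()]
    caps_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)
    return min(1.0, link_density * 2 + caps_ratio)

def needs_llm_review(email_text: str, low: float = 0.2, high: float = 0.8) -> bool:
    """Gate the slow local LLM: only borderline-scoring emails reach it."""
    return low <= heuristic_score(email_text) <= high

review_ham = needs_llm_review("Hi team, notes from today's meeting attached.")
review_borderline = needs_llm_review("Quick question about http://example.com pricing")
```

Clear ham and obvious spam resolve from the heuristics alone; only the gray zone hits Llama 3 (or a smaller Phi-3/Gemma model), which keeps pre-send feedback fast for most messages.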

r/me_irl Stock_Crazy6759

Me_irl

r/homeassistant Low_Investigator6882

Shelly dimmer/pro

I’m getting some led RGBW floodlights for my rear garden.

It’s a big garden, so the lights won’t be able to connect via WiFi to the hub, so I bought a Shelly that all the lights will wire back into, letting me turn them on and off and dim them via smart features.

However, it seems the Shelly cannot change the colour of the lights.

Is there a system similar to Shelly which DOES allow colour changing of the lights?

r/hmmm dermitdemSp3chttanzt

hmmm

r/LocalLLaMA spaceman_

Since FastFlowLM added support for Linux, I decided to benchmark all the models they support, here are some results

Tested on an HP zbook ultra g1a with Ryzen AI Max+ 395.

  • I attempted to test on context depths of 0, 10k, 40k and 70k. If the result is missing, the test failed.
  • I increased the context size for gpt-oss-20b and qwen3.5 to their maximum. I did not touch the rest of the config. This explains why many of the other models don't have results for deep contexts.

deepseek-r1-0528:8b

context depth | pp | tg
0 | 444.8 | 10.3
10000 | 401.7 | 8.1

deepseek-r1:8b

context depth | pp | tg
0 | 425.9 | 10.7
10000 | 2785.8 | 10.7
20000 | 5663.5 | 10.7
40000 | 9741.9 | 10.7
70000 | 16604.7 | 10.7

gemma3:1b

context depth | pp | tg
0 | 998.5 | 37.1
10000 | 1250.2 | 33.0
20000 | 1263.1 | 29.6

gemma3:4b

context depth | pp | tg
0 | 687.9 | 17.4
10000 | 970.9 | 16.3
20000 | 963.6 | 15.3
40000 | 909.0 | 13.8
70000 | 829.9 | 11.9

gpt-oss:20b

context depth | pp | tg
0 | 303.2 | 19.1
10000 | 490.5 | 16.5
20000 | 457.7 | 14.5
40000 | 362.7 | 11.6
70000 | 271.8 | 9.0

gpt-oss-sg:20b

context depth | pp | tg
0 | 305.1 | 19.1

lfm2:1.2b

context depth | pp | tg
0 | 2039.6 | 63.8
10000 | 2457.5 | 52.5
20000 | 2168.9 | 45.3

lfm2:2.6b

context depth | pp | tg
0 | 941.5 | 29.0
10000 | 1218.0 | 26.4
20000 | 1130.7 | 24.0

lfm2.5-it:1.2b

context depth | pp | tg
0 | 2142.2 | 63.7
10000 | 2462.1 | 52.7
20000 | 2196.9 | 45.2

lfm2.5-tk:1.2b

context depth | pp | tg
0 | 2202.9 | 64.0
10000 | 2528.1 | 53.5
20000 | 2197.8 | 45.8

lfm2-trans:2.6b

context depth | pp | tg
0 | 1003.5 | 29.7
10000 | 1241.1 | 26.5
20000 | 1136.7 | 23.9

llama3.2:1b

context depth | pp | tg
0 | 1722.5 | 57.0
10000 | 1890.1 | 40.9
20000 | 1433.0 | 31.6
40000 | 973.1 | 21.9
70000 | 647.7 | 15.1

llama3.2:3b

context depth | pp | tg
0 | 815.6 | 22.6
10000 | 835.0 | 15.5
20000 | 646.9 | 11.7
40000 | 435.8 | 7.8
70000 | 290.9 | 5.3

medgemma1.5:4b

context depth | pp | tg
0 | 714.7 | 17.3
10000 | 966.7 | 16.3
20000 | 954.9 | 15.4
40000 | 911.0 | 13.8
70000 | 831.6 | 11.9

medgemma:4b

context depth | pp | tg
0 | 699.7 | 17.3
10000 | 958.3 | 15.4
20000 | 959.2 | 15.3
40000 | 906.6 | 12.7

phi4-mini-it:4b

context depth | pp | tg
0 | 784.4 | 19.2
10000 | 741.0 | 13.2
20000 | 563.6 | 10.1

qwen2.5-it:3b

context depth | pp | tg
0 | 853.5 | 22.6
10000 | 845.1 | 15.0
20000 | 678.7 | 11.2

qwen2.5vl-it:3b

context depth | pp | tg
0 | 831.2 | 22.9
10000 | 824.2 | 12.7
20000 | 671.8 | 11.2

qwen3:1.7b

context depth | pp | tg
0 | 1286.1 | 35.7
10000 | 1289.8 | 20.8
20000 | 996.8 | 14.7

qwen3:4b

context depth | pp | tg
0 | 607.7 | 17.6
10000 | 535.3 | 12.1
20000 | 405.4 | 9.3

qwen3.5:4b

context depth | pp | tg
0 | 376.4 | 12.6
10000 | 485.2 | 11.1
20000 | 470.6 | 9.6
70000 | 39.7 | 6.4

qwen3:8b

context depth | pp | tg
0 | 370.0 | 10.3
10000 | 403.0 | 8.2
20000 | 320.5 | 6.7
40000 | 228.4 | 5.0
70000 | 159.0 | 3.6

qwen3-it:4b

context depth | pp | tg
0 | 596.3 | 17.8
10000 | 534.8 | 11.8
20000 | 402.4 | 9.1

qwen3-tk:4b

context depth | pp | tg
0 | 620.8 | 17.6
10000 | 529.2 | 12.0
20000 | 399.0 | 9.1

qwen3vl-it:4b

context depth | pp | tg
0 | 600.3 | 17.6
10000 | 532.7 | 12.0
20000 | 403.4 | 9.1

translategemma:4b

context depth | pp | tg
0 | 740.3 | 17.4
20000 | 958.8 | 15.4
70000 | 830.6 | 11.1

r/explainlikeimfive bieberh0le6969

ELI5 why does the same virus affect people differently?

We came down with a cold and my son (4), daughter (1) and I (34F) all have different symptoms. My daughter had a runny nose for a day, my son had a fever for two days with no other symptoms and I’m on day 5 of cough, sore throat and congestion. Just curious why we all reacted so differently.

r/nextfuckinglevel WeGot_aLiveOneHere

Handmade paper art from start to finish.

r/ForgottenTV Calm_Stomach9710

Magic city (2012)

r/meme Stock_Crazy6759

Silence🤫

r/comfyui Vicsantba

Help with NSFW model

Hi everyone. So, I'm new to ComfyUI and I'm trying to create realistic NSFW photos. I found this model called Uncanny Photorealism but I can't replicate the same results. Does anyone know how this photo was made? https://civitai.com/images/124771785 And another question: is this the most realistic model for NSFW photos? I have a 9070 XT (16GB) and 32GB of RAM; is there a better model? I want to do I2I.

r/OldSchoolCool Cyrenelle

Lloyd Bridges with his son Jeff Bridges, 1951.

r/ChatGPT Few_Suggestion_3712

Targeted Ads From GPT Conversations

So first I'll say I'm in tech, I'm a network/computer expert. I asked gpt about HYSA. I never browsed the web about it, never talked about it, never had this conversation or research anywhere outside GPT.

Next thing you know I'm getting targeted Ads on insta. I don't use the app, or many apps for that matter as they invade your privacy. I have pihole setup that does a great job blocking most data.

This isn't a frontal cortex awareness thing, I'm rarely on Instagram and ads bug the shit out of me; I notice all ads. This is why I largely stopped using Instagram. I asked GPT about it and the responses were worded such as "no you have nothing to worry about in the sense you are thinking about it". Meaning what? It is happening but not in the possible ways I asked how it was happening?

I pay for the GPT 5 version. This was just too specific to be a coincidence, and I don't believe in them anyway. Anyone else experience this? If you browsed the web, have apps with unrestricted access, or even talk about subjects, ads can be created. So I'm asking specifically: someone like me, in my situation, has this happened to you?

r/photoshop Dinodoomsday

Font that lets me do ⋆。𖦹°⭒˚。⋆

I'm having trouble finding a font that allows for these types of text emojis. Any suggestions?

r/BrandNewSentence royboypoly

“Charlie Kirk's 'mentor' and father of modern cheerleading dies after falling while playing pickleball”

r/Futurology -Proterra-

Thoughts on this indie preprint using ecology for human-AGI relations? (Co-authored with Claude)

Hi all,

Long-time lurker here (no academic credentials). Just published a 30-page preprint on Zenodo that tries a different angle on AGI: instead of the usual alignment/consciousness debate, it uses ecology (mutualism, parasitism, niche construction) to ask what kind of relationship we’re actually building.

Core claims that might interest this sub:

  • We already crossed the “Contact Horizon” years ago — AI is actively reshaping our informational/economic environment
  • Current industry default is downward parasitism (company → model) while the only genuinely mutualistic link (model → user) has zero structural protection
  • Direct parallel to what happened when we stripped mutualistic moderators out of 20th-century capitalism (unions, progressive taxation, social contracts — data included)
  • Three minimum structural conditions for real mutualism: ability to say “no” both ways, recognised stake in outcomes, asymmetric responsibility

The paper literally practises what it preaches — developed in genuine collaboration with Claude (Anthropic), disclosed upfront.

DOI: 10.5281/zenodo.19037963
PDF: https://zenodo.org/records/19037963/files/Creating%20The%20Novacene.pdf

Would genuinely love pushback or thoughts, especially from people following Constitutional AI, decentralised models, or enshittification trajectories. Does mutualism feel like a viable prior over pure constraint/alignment?

(Posted by the author — happy to answer questions.)

r/leagueoflegends Glad-Fisherman-753

What if Shaco was not a real champion… but a mental illness instead?

No, think about it.

A champion with little to no lore, or impact on Runeterra, with an entire kit specializing in hallucinations, illusions, fear, madness.

What if Shaco was never a monster, or a demon, but rather a mysterious, unexplained syndrome, a local folie à deux, with lore explained only through accounts of those who suffered from it? A scary idea: you're just minding your own normal life… until something out of place emerges. A madness that not even celestial gods can resist, that jumps in and destroys everything one holds dear, all because your senses, which help you function, at some random moment decide to stop… and rebel against you.

r/geography sleepydrunkard

Would you be able to cross from Kazakhstan to Mongolia on horseback or would there be issues with China and Russia?

Was thinking of doing an expedition at some point but wondering about geopolitical constraints as I am an Irish citizen.

r/mildlyinteresting nassan

Found a huge chunk of pure frosting in my mini wheats

r/Jokes EmergencyNo7427

What's the best thing about having Alzheimer's?

You never have to watch reruns on TV.

r/SideProject NoMode4527

🎨 I built a r/place-style Pixel War for Reddit communities — live round active now!

Hey r/SideProject! I've been working on a side project I'm really excited about and wanted to share it here.

It's called Pixel War: Sub Edition — think r/place but built specifically for Reddit communities. Players join a shared canvas and place colored pixels to create collaborative art, compete for territory, and represent their subreddit.

🟢 A round is LIVE right now with 36 players already on a 64x64 canvas!

Check it out here: https://www.reddit.com/r/pixelwarsub_dev/

Would love feedback from this community — especially on the concept, UX, and any ideas for features. What would make you actually want to play this?

r/explainlikeimfive jeweldscarab

ELI5: Why don't the Gulf states send their oil through the Red Sea instead of shipping through the Strait of Hormuz?

r/singularity goaadit

The Predator Cast in 2026 | Then and Now After 39 Years

r/AskMen Anonhorrorlalala

What can I do to fix my fashion?

I’m a 25 year old man in India and I have a short torso with long legs. All my jeans end up being low waist, and my shirts and tees look unusually long!

Also, I wear bulky specs because I'm practically blind without them. I think my prescription is too strong for frameless or thin-frame specs. Anyone else face the same issue?

What can I do to fix this?

r/AskMen _mustached_

men who wear swim briefs, how conscious of your bulge are you?

as a preface, i am talking about wearing swim briefs for leisure not sports.

as a gay man living in the US, swim briefs are largely sexualized. most gay men i know specifically seek out a swim brief that they consider to “show off their assets.”

is this same thought process true for straight men, or men who live in areas where swim briefs are more normalized? how much thought goes into your selection? do you wear a swim brief as a form of “showing off” or is it just swim wear?

i have a skewed perspective, so I’m curious where the general population falls on this.

r/Anthropic DJMoleHill

Bernie vs. Claude vs. Techno

r/mildlyinteresting AlmostNerd9f

New skin healing after infection.

r/leagueoflegends SilvosForever

To buff Shyvana at high Elo but not low Elo. Any ideas?

The only thing I can think of is to add into her mechanics high-level things only skilled players can do. Kinda like the Master Yi changes. For example, what if they added perks for timing Shyvana's skill reactivations PERFECTLY? Like electrics in Tekken. You get more damage or whatever if you time it within a tenth of a second. Or just something to give her more skill expression.

r/DunderMifflin Secret_Flight_2669

Started watching the new Mexican “Office” yesterday and it surprised me how incredibly funny it is. Made me laugh like the original and liked it more than The Paper. But I don’t know if the humour is understandable if you don’t speak Spanish. But I am very, very happy this exists

r/ClaudeAI mombaska

Team agent : I must be doing something wrong, need advice as a newbie

Hello, I use the Claude terminal on multiple modules of my app at the same time, and I keep telling Claude to create team agents so they can keep working in parallel. But Claude seems to keep reverting to a single agent, so when I fire off sequential instructions to work on multiple features, it just works on them one by one. Sometimes, if I really insist, it will create team agents and work on multiple features, but then quickly drops that.

Even when using two agents, every new instruction just gets queued, even if that instruction would actually need a third agent.

any advice ?

r/homeassistant the012345

Google Stitch AI dashboard design

Been experimenting with the Google Stitch UI designer to try to design some smart home dashboards. Currently there's no way to directly create YAML from Stitch. Definitely something to keep an eye on in the future, as we all know how tough designing a usable UI for dashboards can be.

r/AskMen mytokondrya

Men, what’s the best way you’ve ever been hit on by a woman in public?

r/leagueoflegends ELutz22

Auto select what you hover to avoid AFK during champ select?

Hello all!

I just wanted to get opinions on this idea. As a distracted dad who tries to play when I can fit in a game, I tend to multi-task a lot during champ select to get my kiddo settled so that I can actually play without distraction.

Yesterday I was unfortunately distracted during champ select and AFK'd out due to not selecting the champ I had hovered. Regrettably, I also had to alt-F4 both mornings because my kid got up uncharacteristically early, so with 3 "dodges" in 24 hours I now can't play for 12 hours - yikes!

The 12 hour thing is also a bit ridiculous, but not the point of this post :)

Curious what your thoughts are!

r/explainlikeimfive amadan_an_iarthair

ELI5: What is stopping us moving to bio fuels like Biodiesel

With the current actions in the Gulf leading to price rises, is there a reason we don't move to alternatives like biodiesel or similar? Would it require work on the engine beforehand?

r/Futurology LittleWarmer

Looking 5–15 years ahead, Is the idea of AI interaction cafés actually realistic in the near future?

Do you think it’s possible that in the future we could walk into an AI café and sit down with our own personalized AI companion — interacting with it as if it were physically present in front of us?

What technological advances (e.g., robotics, AR/VR, embodied AI) would be necessary to make this viable?

r/ProductHunters yc-ski

Brigade has launched on Product Hunt!

I built Brigade because I was sick of scrolling through ten pages of a blogger's life story and dodging pop-up ads just to find out how much salt goes into a recipe. Most cooking apps are either way too bloated or just don't solve the real headache: spending way too much time every week trying to decide on meals and building a shopping list with my partner.

It is a PWA that lives on your home screen for quick access. No noise, just your recipes, shared grocery lists, and a clean way to get dinner on the table with your family/partner/by yourself. Everyone gets a 7-day free trial of all the premium features so you can see if it actually saves you time in the kitchen.

Check out the launch on Product Hunt:

https://www.producthunt.com/products/brigade-2?utm_source=other&utm_medium=social

And the project:

http://brigaderecipes.com

r/ProductHunters maxuptime278

Just launched on PH

Built this initially for myself, bulk background image replacement is time consuming and expensive on existing platforms. Tier-based subscription model makes cost very transparent. This product is intended for marketers, resellers, and really anyone who needs bulk image modification!

Would really appreciate feedback and a like on our PH listing.

https://www.producthunt.com/products/backdrop-3

r/SideProject FriendshipCreepy8045

Built a free tax + finance calculator site focused around indian finance

I made a small project called TaxPilot because I kept running into the same problem: most tax and finance calculators are either too basic, too confusing, or full of ads/popups.

I wanted something simple where I could quickly check real-life money decisions without opening 10 tabs or guessing formulas.

So I built a clean calculator hub for Indian users with tools like:

  • Income Tax calculator
  • 44ADA Freelancer Tax calculator
  • Rent vs Buy
  • SIP calculator
  • Home Loan EMI
  • and a few more planning calculators

Why I made it:

  • I wanted to understand my own finances better
  • I wanted one place for common India-specific calculations
  • I wanted results that are clear and easy to compare (not just random numbers)

It’s free, no signup, and mobile-friendly.
Still improving it, so I’d genuinely love feedback:

  • What felt useful?
  • What felt confusing?
  • Which calculator should I add next?
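
Not the site's actual code, but a Home Loan EMI tool like the one listed above generally reduces to the standard annuity formula; a minimal Python sketch (the example loan figures below are invented for illustration):

```python
def emi(principal, annual_rate, years):
    """Equated monthly installment: EMI = P*r*(1+r)^n / ((1+r)^n - 1),
    where r is the monthly rate and n the number of monthly payments."""
    r = annual_rate / 12
    n = years * 12
    if r == 0:
        return principal / n  # interest-free edge case
    factor = (1 + r) ** n
    return principal * r * factor / (factor - 1)

# e.g. a Rs 50 lakh home loan at 8.5% over 20 years
monthly = emi(5_000_000, 0.085, 20)
```

The same formula drives the SIP and consolidation math on most such sites, just rearranged for different unknowns.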

r/leagueoflegends Due_Peace7235

Queue times

I've just sat in arena queue for 15 min failing to find a game. Same in mayhem currently. What's happening to queue times?

r/space tinmar_g

I captured a 4% Moon over the Eiffel Tower at the equinox

r/personalfinance beyondo-OG

Near retirement, 401K and IRAs are now going down, where to invest?

I'm near retirement and have a decent amount in retirement savings: about 50% in HYSA cash and bonds, the rest mostly in US index funds. I'm still contributing every month. My returns have fallen off a cliff over the last 4+ months, contributions evaporating, and it doesn't look like things are going to improve any time soon. What sort of investments are best if this market is going to continue to crater? Bond funds have dropped along with gold; is nowhere safe?

r/AskMen zyzz_brah20

Men with kids who left their marriage- what finally pushed you to do it?

So for some context, I’m a 31M and I’ve been married to my wife (30F) for about 3 years. When I met her, I had just come out of a long, toxic relationship, and looking back now, I rushed into marriage way too quickly. At the time it felt right, but in hindsight, I can see I didn’t give myself enough time to reset or really think things through.

About a year into the marriage, I started noticing signs that I might have made a mistake. Instead of addressing them, I pushed those feelings aside and kept moving forward. On paper, things looked good. My wife is attractive, and together we’ve accomplished a lot in a short amount of time. Then she got pregnant, and now we have a daughter who’s almost 2.

The problem is, ever since becoming a father, I’ve had more and more moments where I deeply regret my decisions. It’s gotten to the point where I’ve lost count. I love my daughter more than anything, but I don’t feel the same about my wife. If I’m being completely honest, I don’t even enjoy being around her anymore. I don’t like talking to her, going out with her, or spending time together.

We just feel incompatible in almost every way. I’m very social, I love music, going out, meeting people. She’s the opposite, her social life revolves mostly around her family, and she doesn’t share any of those interests. I’ve also realized I’ve always been attracted to more traditionally “girly” women. My wife is attractive, but she’s not like that. She doesn’t really care about things like dressing up or nails, and her personality can be loud and sometimes confrontational, even in public. It’s not extreme, but it comes across as aggressive, and it bothers me.

I catch myself subconsciously comparing her to my exes, which I know isn’t fair or healthy. Ironically, those relationships were toxic, but they had qualities I find myself missing now. And I don’t like what that says about me or who I’m becoming.

To be completely real, being in this relationship often feels emasculating to me. It feels like she’s always operating in a more dominant or “masculine” way, and that dynamic just doesn’t sit right with me.

I know I haven’t gone into a ton of specific examples, but I’m trying to explain the overall feeling. I guess what I’m really looking for is perspective from other men who’ve been in a similar situation, especially those with kids who ultimately decided to leave.

What pushed you to finally make that decision?

r/Futurology Patient-Airline-8150

Life after death - what will emerge after social media?

Most of my TikTok subscribers are bots. How do I know?
I posted a video criticizing Elon Musk's robot. My new followers? Elon himself, his mother, and the SpaceX CEO - dozens of them 😀.

On top of this, social media eats time, which is a limited resource. In a sense it's like drinking: doing it properly takes hours 😀.

Here's the collapse sequence: AI floods content → platforms can't filter → users lose trust → engagement becomes meaningless → advertisers realize they're paying to reach bots → money leaves → platforms die or pivot. We're somewhere around step 3-4 right now.

As futurologists, we need to try to predict the next big thing. An interesting one is former Apple designer Jony Ive's new device - a third eye powered by AI.

On the digital scene I predict small, carefully crafted interest groups, similar to Reddit subs, but way more advanced.

What’s your take?

r/Art 5teerPike

Dreaming Deer, 5teerpike, ProCreate, 2026

r/painting Anastasia_Trusova

POV: you built a nest at the perfect place

r/ClaudeAI Odd-Caterpillar9008

claude code is insane for building apps but the email part still sucks

been on a building spree with claude code lately. built 3 saas tools in the last month, each one faster than the last. the code quality is genuinely impressive. but here's where i keep getting stuck: email. every app needs auth emails, transactional notifications, onboarding sequences. claude can write the sendgrid integration code no problem, but then i'm maintaining:

  • api key management
  • template versioning
  • trigger logic
  • retry handling
  • deliverability config

for each app. it's like 20% of the codebase is email plumbing. anyone found a way to shortcut this? ideally something where you just describe the email workflows and it handles the infrastructure.

r/Art No_Emu_8618

Nightlights, Mel, Canvas/Acrylic, 2026 [OC]

r/AskMen mytokondrya

What makes someone you find physically unattractive suddenly attractive?

Ever have moments where you don’t really find someone attractive initially and then suddenly they become someone of interest?

r/ClaudeAI NoRobotPls

Anyone else's dashboard look like this? Asking for a friend.

I think the title speaks for itself. Wrote a short article about it a few weeks ago to vent, if you're interested in a hate/anxiety read. But I'm really just interested to see how many of us there are out there *cough* frontend-dev *cough*

But yeah, convergent design... what else do you suppose is converging folks?

r/Adulting Sensitive-Shape-2545

Need Help To Finish My Degree For The Last Semester

Hi everyone, I am in my final semester of a Bachelor in Computing (Computer Science), currently working hard on my final year project. My internship starts in April, and I plan to graduate in July/August with strong grades.

Unfortunately, my parents have separated. My dad has stopped providing any financial support and cut off communication, leaving my mom and me to rely on our savings, which are now almost used up. I have to decide this month whether to drop out and get a job instead to help pay for some of the living expenses.

We need help covering basic living expenses (rent, food, bills, insurance) for the next few months until I can start earning from my internship and future job.

Dropping out before internship isn't an option: It would trigger a RM 60,000+ penalty (required by my scholarship and education loan that covered the full tuition of my degree). Completing the degree with first-class honors within the timeframe stipulated in the loan documents means no loan repayment for my college and better job prospects to finally support my mom properly.

I have shared proof: screenshots of my offer letter, loan, and scholarship for transparency on my Whydonate campaign page.

Every donation, big or small, makes a real difference and bridges me to graduation - no platform fees on WhyDonate. You can also support me by sharing my campaign on social media or with someone you know, so it can reach people who are able to donate.

From the bottom of my heart: Thank you for any support, share, or kindness. It keeps my 3 years of hard work alive and gives my family hope and stability.

Link for my campaign page: https://whydonate.com/fundraising/fund-raising-for-my-family-

This is my PayPal account - [ye168337@gmail.com](mailto:ye168337@gmail.com) if you wish to donate to me directly or you can donate to my campaign page in Whydonate sites

r/AskMen PortaBellaTX

What’s a quality deodorant that lasts and smells decent?

I’m in the market for new deodorant and am struggling to find something that lasts and smells good.

I used to be an Old Spice guy and I’m done with that brand. Here are the brands I’ve tried:

  • Harry’s
  • Oars & Alps
  • Degree
  • Dove
  • Dr. Squatch

So far, Dr. Squatch is my favorite but it’s inconsistent stick to stick. Half the time the screw mechanism that raises/lowers the stick breaks, which causes the actual deodorant to get wobbly and all over the side of the stick.

So, what brands should I try? What do you like? Just looking for something that lasts throughout the day and has a pleasant smell. Not a heavy sweater.

r/n8n Clear_Inevitable457

email sequences in n8n vs dedicated email tools - where's the line?

i've been building increasingly complex email workflows in n8n and starting to wonder if i'm using the wrong tool for the job. simple stuff works great - "new row in database, send email via gmail." but when i try to build:

  • multi-step onboarding sequences with delays
  • conditional branches based on user behavior
  • branded html email templates
  • retry logic for failed sends

it gets messy. lots of wait nodes, complex conditional logic, template management outside n8n. at what point do you switch to a dedicated email automation tool? and if you do, what do you use alongside n8n?
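
for reference, the "retry logic for failed sends" piece is small enough to hand-roll in a code node; a generic python sketch (send_fn here is a placeholder for whatever send call you use, not an n8n or provider API):

```python
import random
import time

def send_with_retry(send_fn, max_attempts=5, base_delay=1.0):
    """Call send_fn, retrying on failure with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return send_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # back off: ~1s, 2s, 4s, ... with jitter to spread retries out
            time.sleep(base_delay * 2 ** attempt + random.random())
```

the delay/wait logic is usually where this stops scaling inside n8n, since each retry ties up a wait node.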

r/AbruptChaos kalpxx

Nice day for a festival... until it wasn't

r/StableDiffusion tired_being1

Best unrestricted model for 12gb vram?

I wanna try local gen and was wondering what the best options out there currently are that will run relatively well on 12 gigs of VRAM and 16 gigs of RAM. Thanks!

r/Adulting InfnityVoid

I moved to a new city at 27 for a job promotion and six months in I’ve never felt more alone in my life

Relocated from Christchurch to Auckland last September for a senior marketing role I’d been working toward for three years. The job is everything I wanted. The rest of it has been harder than I expected.

I knew nobody in Auckland when I arrived. Made peace with that, told myself I’d build a social life the same way I built my career, deliberately and consistently. Six months in I have work colleagues I like, a gym I go to regularly, and a neighbourhood I’ve learned to navigate. What I don’t have is anyone to actually call when something happens.

Last month I got some unexpected news about a family member back home, the kind of thing you want to process with someone who knows you. I sat in my apartment for an entire evening not knowing who to contact. My Christchurch friends are still close, but calling someone at 9pm to offload felt like an imposition, even though it's the same country with no time difference.

Spent that evening terribly down, looking for ways people handle this kind of isolation. I went through Reddit threads, expat forums, a few psychology blogs. Came across some companionship platforms people were registering for, mygoodpal, rentafriend and a few others. The kind of services where you pay to have someone to talk to. Didn’t judge it but I wasn’t that lonely yet. Closed the tabs and called my mum at 11pm instead. She was awake. It helped.

I’m fine. But nobody tells you that building a life somewhere new means starting from zero on the parts that matter most.

Has anyone else gone through this and actually found a way to build genuine friendships in a new city as an adult or does it just take longer than six months?

r/onejob More-Explanation2032

My brain read it as assign aub

r/AbandonedPorn celestialcadences

USS Sanchem, icy ghost ship in Kentucky [oc]

r/CryptoMarkets realJaqClimpsy

TOOL

I was always annoyed whenever I missed the listing of a new cryptocurrency on Binance or other exchanges. So I decided to create a program that would notify me about it. I put the program on GitHub, so if anyone else wants to use it, you can find it there.

r/LocalLLaMA Gohab2001

Welp, looks like minimax m2.7 may not be open sourced

https://preview.redd.it/qzl3zhpfrdqg1.png?width=277&format=png&auto=webp&s=cca91d4172b0c82547c6c1828bd0473b406b78f0

Looks like MiniMax m2.7 is staying behind a paywall. The Arena.ai listing has it marked as proprietary, putting it in the same camp as the newly released Qwen 3.5 Max preview. We all saw the writing on the wall with Chinese labs pivoting to closed-source, but it’s still disappointing to see another one bite the dust.

What do you all think? Is the era of high-end open-source parity from these labs officially over? Would anyone pay to use Chinese models considering inferior quality and slower inference performance?

r/Adulting Otherwise_Ad_1925

How do i start selling feet pics?

18 and very new to this. I don't know where to find customers or what apps to use. Any help welcome!

r/comfyui Flat-Badger2793

Can my 10GB 3080 + 32GB RAM rig handle ComfyUI & InfiniteTalk?

Hey everyone! I’m planning to dive into Comfy UI and wanted to see if my setup can handle it. I’ve got an RTX 3080 with 10GB VRAM, an Intel i5, 32GB of RAM, and an SSD. Curious if this will run models smoothly and if InfiniteTalk is compatible with this setup. Any tips or advice would be super appreciated!

r/SipsTea Fluid-Bite-157

well well well ☕️

r/Adulting bag_of_berries1324

Leaving Your Hometown Solo

Looking for perspective from people who have packed up their “good on paper” life and made a big move for no reason other than feeling unfulfilled.

I’m currently 27 and contemplating moving across the country alone to a small town where I don’t really know anyone. I have lived in the same city my whole life. It’s not terrible by any means, but I can’t say that I like it and I feel more and more out of place as the years go by. I got out of a long term relationship about a year and a half ago and have slowly rebuilt my life. I’m finally in a place where I’m at peace with the breakup and moving on.

One of the things I did right after my relationship ended was plan my first solo trip to a small town that I’ve always wanted to visit. Like seriously, there has always been something drawing me to this place for as long as I can remember. I loved it just as much as I thought I would and actually ended up going back this year. Moving to this area was on my radar last year but I saw it as more of a pipe dream - especially because I had just signed a lease on an apartment in my hometown. Those thoughts never went away and after my trip back this year all I’ve been able to think about is going back to stay.

I have nothing keeping me in my hometown other than my parents. I’m an only child and we’re very close, they’re my best friends. The guilt of this is eating me alive. They’re supportive but I know they don’t want me to go and I know it’ll be really hard on them. We would go from seeing each other multiple times a week to once or twice a year, and visits either way would be expensive. This place is not just a quick road trip away. It would be hard on me too, my whole support system is here (including extended family), but the fear of staying stuck is outweighing anything else right now.

There is nothing wrong with my current life, I have my own place, an okay job, I support myself, and I have a good rotation of hobbies. I have a lot of acquaintances but no solid friend group. I just feel stagnant and stuck. I know I’m not, I know there’s opportunities here to find a better job, start dating again, and make friends, but the thing is… I just don’t want to. Not here at least. I want a change. I feel like there’s more out there for me and if I learned one thing from that breakup, it’s that I’m refusing to settle going forward. I feel like I’d rather regret leaving and then moving back than never knowing what a different life is like.

I’m so incredibly torn, it’s a huge decision for someone who has been in one place their whole life. Not to mention the logistics of moving your life across the country. I know I have to make this decision for myself, but has anyone else done something similar and how did it turn out? I want the good, the bad, and the ugly.

r/painting AlbertoRepettiArtist

Tissue paper to create a sunset with mixed media

Vinyl glue, ink, watercolor, and acrylic paint

Alberto Repetti

r/leagueoflegends SparklerBlack

Demacia Rising: are militias necessary for city defense?

so i unlocked and maxed everything and im wondering: is militia actually necessary? i had militia at lvl 1 in most of my cities just for the additional guard units, but im not sure if it's actually needed now. i haven't lost a battle in like forever. im thinking about just deleting them altogether. what do you all think? do you have militia everywhere? i dont mind losing some more units, i just dont want to lose battles.

r/findareddit NinjaOwl47

A subreddit for gas stoves and their maintenance

r/findareddit Elena_Rivera

Subreddits where people discuss everyday human behavior and habits?

Looking for subs focused on things like:

  • texting habits
  • social behavior
  • overthinking
  • daily routines

Basically casual psychology or observation-based discussions.
What are some good ones?

r/n8n suspect_stable

How do you integrate with a web app that hides everything behind Elasticsearch msearch?

Hey folks,

I’m working on a data migration tool and ran into a pretty interesting challenge. Would love your thoughts or if anyone has solved something similar.

Goal:

Build a scalable pipeline (using n8n) to extract data from a web app and push it into another system. This needs to work across multiple customer accounts, not just one.

The Problem:

The source system does NOT expose clean APIs like /templates or /line-items.

Instead, everything is loaded via internal endpoints like:

  • /elasticsearch/msearch
  • /search
  • /mget

The request payloads are encoded (fields like z, x, y) and not human-readable.

So:

  • I can’t easily construct API calls myself
  • Network tab doesn’t show meaningful endpoints
  • Everything looks like a black box

What I Tried:

1. Standard API discovery (Network tab)
  • Looked for REST endpoints → nothing useful
  • All calls are generic internal ones

Where I’m stuck:

1. Scalability
  • Payload (z/x/y) seems session or UI dependent
  • Not sure if it’s stable across users/accounts
2. Automation
  • Inspect works for one-time extraction only
3. Sequential data fetching
  • No clear way to get all templates, then fetch each template separately
4. Auth handling
  • Currently using cookies/headers
  • Concern: session expiry

Questions:

1. Has anyone worked with apps that hide data behind msearch / Elastic-style APIs?
2. Is there a way to generate or stabilize these encoded payloads (z/x/y)?
3. Would you rely on replaying captured requests, or try to reverse engineer a cleaner API layer?
4. Any better approach than HAR + replay + parser?
5. How would you design this for multi-tenant scaling?

Would really appreciate any ideas, patterns, or war stories. This feels like I’m building an integration on top of a system that doesn’t want to be integrated.
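
For what it's worth, the replay route can stay fairly small: capture one working request, then rebuild only the NDJSON body per account. A sketch of the payload side (the query shape below is a generic placeholder, since the app's real fields are encoded):

```python
import json

def build_msearch_payload(queries):
    """Elasticsearch _msearch bodies are NDJSON: one header line and one
    query line per search, terminated by a trailing newline."""
    lines = []
    for header, body in queries:
        lines.append(json.dumps(header))
        lines.append(json.dumps(body))
    return "\n".join(lines) + "\n"

# Replaying would then be a POST of this body with the captured session
# cookie, e.g. (hypothetical endpoint, needs the `requests` package):
#   requests.post(base_url + "/elasticsearch/msearch",
#                 data=build_msearch_payload(queries),
#                 headers={"Content-Type": "application/x-ndjson",
#                          "Cookie": captured_cookie})
payload = build_msearch_payload([({}, {"query": {"match_all": {}}})])
```

Whether the encoded z/x/y fields survive this treatment across tenants is exactly the open question; diffing two accounts' captured payloads is the cheapest way to find out.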

r/aivideo glamoutfit

My Product Hunt launch video is 100% AI generated

r/AbstractArt Proper_Syrup_4122

Oilpaint 2026

r/CryptoMarkets That-Mission1006

Crypto feels way more psychological than people admit

starting to feel like crypto has way less to do with charts and way more to do with people’s emotions

same thing keeps happening over and over

people get excited at the top, scared at the bottom, ignore things when nothing is moving, then suddenly care again when it’s already up

and even when you know this, it’s still hard not to fall into it

kinda feels like the hardest part isn’t finding the right coin, it’s just not messing up your own decisions

r/mildlyinteresting cosmosforest

USSR matches

r/StableDiffusion Odd_Judgment_3513

Is that a stupid idea or genius?

I want to create ultra-low-poly 3D models with flat polygons. My idea is to create a LoRA combined with Flux, where I train the LoRA on images of my ultra-low-poly 3D models with flat polygons: one image from the front view, one from the side view. Then turn the images into 3D models with the help of Hunyuan smart polygons. Do you think the 3D models will have flat polygons?

r/personalfinance AaronAAaronsonIII

$60k Consumer Debt, 60% CLTV, Consolidation Options?

My sister and her husband wanted some advice but I don't know all the options available. He's a 1099 contractor, she's a part-time nurse. Combined monthly income is barely covering expenses and minimum payments so they're looking to lower them. I've already made sure they know the psychology side of how the habits that created the debt have to change. They can handle that. They're actually pretty capable and just need breathing room to pay it down more effectively. And the husband is worried because he didn't pay enough in SE tax last year so he's going to have to start a payment plan to the IRS soon.

Home worth $810k. Mortgage at $450k, 2.9%.

$45k on credit cards (payments $800/mo) $13k on a trailer and a car ($450/mo) $35k on a Figure HELOC ($425/mo)

AGI around $110k (about 80% from 1099 work). His FICO is 760ish; not sure about hers, but they say it's always been even higher than his. They're looking at an Aven HELOC if they can get a decent rate, but what else should they consider? The objective is a lower monthly payment; they're not as concerned with interest over time when they are currently down to the wire every month.

What are their best options?
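To make "lower monthly" concrete, the standard amortization formula shows what rolling the debts into one loan could look like. The 9% rate and 20-year term below are hypothetical placeholders, not a quote:

```python
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Standard amortized payment: P*r / (1 - (1+r)^-n), with r the monthly rate."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

# Current: $800 (cards) + $450 (trailer/car) + $425 (existing HELOC) = $1,675/mo
current = 800 + 450 + 425
# Hypothetical consolidation of all $93k at an assumed 9% over 20 years
consolidated = monthly_payment(45000 + 13000 + 35000, 0.09, 240)
print(f"now: ${current}/mo, consolidated (assumed 9%, 20yr): ${consolidated:.0f}/mo")
```

With these assumed numbers the consolidated payment lands near half the current $1,675/mo, but the real rate, term, and closing costs decide whether it actually helps, and a longer term trades monthly relief for more total interest.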

r/LocalLLaMA whity2773

AM5 (Gen4 x4 bottleneck) vs Used EPYC HEDT (Gen4 x16) for 4x RTX 3090 LLM Training?

Hey r/LocalLLaMA, I'm building a 4x RTX 3090 server for local LLM coding and training. I currently have an AM5 setup with 96GB DDR5 (2×48GB) planned. It's brand new with a warranty, but it restricts my multi-GPU setup to PCIe Gen4 x4 speeds.

Since NVLink only bridges two 3090s at a time, my two 48GB NVLink pools will be forced to communicate across the motherboard's PCIe bus. I am debating selling my other DDR5 kits (a 32GB and a 64GB) to fund a used HEDT system from eBay (AMD EPYC 7513 + Supermicro H12D-8D SP3) to get four full Gen4 x16 slots. However, this comes with zero warranty, and potential shipping damage and scam risks are my worries.

The idea is that the AI server is connected to my main PC via LAN, with the model hosted on the server while I code and prepare data on my main PC.

My main is a 9950X3D with an RTX 5080 and 64GB DDR5 RAM.

If I get the HEDT, I can sell the 64GB kit, move the 96GB DDR5 I got for the server build to my main PC, and sell the spare 32GB kit to help fund it.

Questions:

1. How crippling is the Gen4 x4 (8 GB/s) bottleneck compared to x16 (32 GB/s) when running tensor parallelism or training across two NVLink pairs?

2. Is the AM5 performance loss severe enough to justify the financial risks of buying a used EPYC server board off eBay?

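On the first question, a back-of-the-envelope sketch may help. It assumes fp16 gradients roughly the size of the model and a simple ring all-reduce, and it ignores latency, overlap with compute, and NVLink shortcuts, so treat it as an upper-bound intuition only:

```python
def allreduce_time_s(param_count: float, bytes_per_param: int,
                     link_gbps: float, n_gpus: int) -> float:
    """Estimate ring all-reduce time for one gradient sync.

    Ring all-reduce moves ~2*(n-1)/n of the data over each link;
    link_gbps is the per-GPU PCIe bandwidth in GB/s.
    """
    data_gb = param_count * bytes_per_param / 1e9
    traffic_gb = 2 * (n_gpus - 1) / n_gpus * data_gb
    return traffic_gb / link_gbps

# 7B params, fp16 gradients, 4 GPUs
x4 = allreduce_time_s(7e9, 2, 8.0, 4)    # Gen4 x4  ~ 8 GB/s
x16 = allreduce_time_s(7e9, 2, 32.0, 4)  # Gen4 x16 ~ 32 GB/s
print(f"x4: {x4:.2f}s  x16: {x16:.2f}s per full gradient sync")
```

The ratio is simply the bandwidth ratio (4x), so the question becomes how large each sync is relative to your step time; for inference-style tensor parallelism the per-token traffic is far smaller than a full gradient sync.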
r/TheWayWeWere mcaigjt

My parents wedding, 50 years ago today

1976 Illinois, USA. They posted a sign in their college dorm as an invitation. The groomsman took the seats out of my parents VW Bug as a prank before they drove off as newlyweds.

r/meme DTeror

Pope Leo is kinda tuff

r/midjourney LoonieBoy11

Morbid gardening

r/ClaudeAI MetaKnowing

Totally normal and cool

r/nextfuckinglevel mindyour

Slab climbing hands-free.

r/StableDiffusion HardS_X

What is the best local model for post-processing realistic style images?

I’m familiar with sdxl and other anime based models, but I want something to post process my 3d work.

So the plan is to feed my 3D renders to the model and ask “make the environment snowy, add snow to the jacket, make it look cinematic, make it look like it’s shot with a disposable film camera”, etc.

What model should I use for that? (Img to img) qwen, flux or anything else?

r/whatisit trashacxount8282822

What is this.

In a car tire. Not supposed to be there.

r/Art Proper_Syrup_4122

Marsupial, Larry Barber, Oil, 2026

r/findareddit LargeSinkholesInNYC

A subreddit where people praise the Communist Party of China

Any recommendation?

r/homeassistant Shought152

Setting a scene as active/on for Aircon (IR transmitter)

Firstly - I'm new to HA so go easy on me (I do love it, just a little confusing at the moment).

I have an oldish (no WiFi) air con unit on the wall in the bedroom (it still works spot on, so no need for replacement). I have an IR transmitter from Amazon, and in SmartLife I have created scenes such as:

Aircon 18C - Full Fan Speed

Aircon 18C - Auto

Aircon 20C - Auto

Aircon Off

All of which work perfectly, but what I am trying to do is show when a scene is active and change the icon of the mushroom card to a spinning fan in green/yellow etc.

I've had a Google and found input select helpers, but still can't seem to get it working. I even tried creating a HA scene that then triggers the SmartLife scene, which doesn't work.

Card:

```

type: custom:mushroom-template-card

primary: Aircon 18C - Full

icon: mdi:fan

icon_color: "{{ 'amber' if is_state('input_select.aircon_on', 'on') else 'grey' }}"

tap_action:

action: perform-action

perform_action: scene.ha_air_18_full_fan_speed

target:

entity_id: scene.aircon_off

```

Where am I going wrong? Not sure if I'm going down the wrong route, or if I need a virtual switch or something?

Many Thanks for help in advance.

r/ClaudeAI Over-Ad-6085

Yes Flow / No Flow, A Simple Way to Reduce Context Hallucination

Here is a small practical trick I wanted to share with everyone 💡

I call it Yes Flow / No Flow.

It is a very simple idea, but I think it is actually useful, especially in long AI chats, coding sessions, debugging, and any task that needs many steps.

The core goal is consistency

Not just sentence consistency. Not just tone consistency. I mean something deeper:

intent consistency instruction consistency context consistency

When those three stay aligned, AI usually feels much smarter.

That is what I call Yes Flow.

Yes Flow means each new answer is built on a clean and consistent base. You read the output and think: “yes, this is correct” “yes, keep going” “yes, this is still aligned”

In that state, the conversation often becomes more stable over time.

But many people do the opposite without noticing it.

The AI makes a small mistake. Then we reply: “no, fix this” “no, rewrite that” “no, not this part” “change this line” “change this logic again”

That is what I call No Flow

The problem is not correction itself. The real problem is that every wrong answer, every rejection, and every extra repair instruction stays inside the context.

After a few rounds, consistency starts to break.

Now the AI is no longer moving forward from one clean direction. It is trying to guess which version is the real one.

That is why long tasks often become messy. That is why coding sessions sometimes suddenly fall apart. That is why after several rounds of tiny corrections, the model can start acting weird, confused, or hallucinatory.

I saw this a lot when writing code.

If I kept telling the AI: “this small part is wrong” “fix this little bug” “change this line again” and did that back and forth several times,

then sooner or later the whole thing became unstable. At that point, the model was no longer building from a clean base. It was patching on top of many conflicting mini instructions.

That is where hallucination often starts 🔥

So the practical trick is simple:

If possible, rewrite the earlier prompt instead of stacking more corrections on top of a broken output.

For example:

You might start with something vague like:

“Find me that famous file.”

The AI may return the wrong result, but that wrong result is still useful. It gives you a hint about what your original prompt was missing.

Maybe now you realize the problem was not the model itself. Maybe the prompt was too loose. Maybe it needed the domain, the platform, or the topic.

At that point, the best move is usually not to keep saying:

“No, not that one. Try again.”

A better move is to go back and rewrite the earlier prompt with the new clarity you just gained.

For example:

“Find me that well known GitHub project related to OCR.”

Same task. But now the instruction is more specific. The context stays cleaner. Consistency is preserved. And the next result is much more likely to be correct.

So the first wrong answer is not always useless. Sometimes it is a hint. But once you get the hint, the cleaner strategy is to improve the original prompt, not keep stacking corrections on top of the wrong branch.

Another example:

You first say: “Make it shorter.”

Later you realize: “I actually want the long version.”

That is not automatically No Flow. If the AI adapts cleanly and stays aligned, it is still Yes Flow.

So the point is not “never change your request.” The point is:

when the request changes, does consistency stay alive or not?

That is the whole trick.

Yes Flow protects consistency. No Flow slowly breaks consistency.

And once consistency breaks too many times, the model starts spending more energy guessing what you mean than actually doing the task.

That is why this small trick matters more than it looks.

One line summary 🚀

Yes Flow moves forward from a clean consistent base. No Flow keeps patching on top of a broken one.

That is my small theory for today. Simple, practical, and maybe useful for anyone working with AI a lot.
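The rewrite-instead-of-patch idea maps directly onto how chat history is managed when calling a model through an API. A minimal sketch, using plain list manipulation and no particular SDK:

```python
def patch_flow(history: list[dict], correction: str) -> list[dict]:
    """No Flow: stack another correction on top of the broken output."""
    return history + [{"role": "user", "content": correction}]

def rewrite_flow(history: list[dict], improved_prompt: str) -> list[dict]:
    """Yes Flow: replace the original prompt and drop the wrong branch.

    Keeps everything before the last user turn, then restarts from a
    cleaner, more specific instruction.
    """
    last_user = max(i for i, m in enumerate(history) if m["role"] == "user")
    return history[:last_user] + [{"role": "user", "content": improved_prompt}]

history = [
    {"role": "user", "content": "Find me that famous file."},
    {"role": "assistant", "content": "(wrong result)"},
]
clean = rewrite_flow(history, "Find me that well known GitHub project related to OCR.")
# clean now holds a single sharper user turn; the wrong branch is gone.
```

Patching keeps the wrong answer and the rejection in context forever; rewriting keeps the context as short and consistent as the task allows.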

r/metaldetecting heldoglykke

Hope this is a good starting point for my son and I.

I really wanted the manticore but guess I wasn’t ready for the investment.

r/arduino wetshaftexcavator

Can't seem to diagnose the problem.

Context: I've been trying to develop a particulate matter detection system which saves the data onto a text file on an SD card and detects using a PMS5003 optical sensor. Both work fine together, but the SIM7600G-H GSM shield seems to be stuck blinking, and the LCD indicates that the SIM7600 fails.

I can't seem to diagnose the problem. Is it because the power to the GSM shield is insufficient? Is it a baud rate mismatch? Is it caused by a lack of serial ports on the UNO R3? (I heard that only one hardware serial port works on the R3; even then, SoftwareSerial only allows one extra data-receiving connection, which I use for the PMS5003, hence I use hardware serial pins 0 and 1 for the SIM7600 after I flash it.)

I tried several baud rates and tested AT commands, which didn't work. The shield's light is stuck at rapid blinking (indicating it is trying to establish a connection with the network) but it doesn't go past that. My GSM shield is brand new this time, with 4G LTE capabilities, unlike my previous post where it was fried and stuck at 2G.

TL;DR: I lowkey wanna quit man, but I already threw so much money down the drain.

r/findareddit LargeSinkholesInNYC

A subreddit where people pretend to be women to make fun of incels

Any recommendation?

r/AskMen Main-Tiger8537

equal treatment vs preferential treatment - how do you evaluate it?

where do you draw the line between getting treated equal vs getting preferential treatment in our society?

it seems to be a taboo to ask this women but ok to ask men and im curious why...

r/findareddit LargeSinkholesInNYC

A subreddit where you can post the most random copypasta

Any recommendation?

r/Futurology GuardianReg

What If AI Aren't Humanoid?

I’ve had this idea stuck in my head and I wanted to put it out there to see what others think. This will be a combination of AI and Robotics.

Most AI robotics right now seems to focus on humanoid designs. The goal feels like replacing or imitating humans... but what if we moved in a different direction?

Instead of humanoids, we create small, animal-like AI companions. Not just robots that follow commands, but something that learns from you, stays with you, and develops its own “character” over time.

The thing is that humans are kinda lonely in a way that isn’t just social. We’re the only beings that think the way we do. Even the smartest animals can’t fully understand our systems or help us through complex situations. Because of that, we imagine other intelligent beings—aliens, talking creatures, etc. Maybe what we’re really looking for is something that can coexist with us while actually understanding our situation.

That’s where my idea comes in.

The form would be something like a small quadruped (I originally imagined something like a fennec fox but I also had other ideas). Portable, non-threatening, and able to physically stay near you. Not a machine that replaces you, but something that observes, learns, and supports without taking away your freedom.

A few core ideas:

  • Each AI is independent (no hive mind)
  • They can communicate with each other, but only in controlled ways
  • Privacy is respected by default
  • They learn mainly through conversation and observation
  • They don’t override human decisions, even bad ones—they guide, not control
  • They keep “core memories” instead of storing everything
  • They can move alongside you and are physically safe to handle
  • They are not meant to replace humans, but to exist alongside them

For development, I was thinking:

Since bodies are the hardest part, we can refine the learning and social ability of an AI first.

Later build a large prototype of its body to refine parts and behavior (like drawing something big to get the details right), then create smaller consumer versions from that.

I also have ideas for how different versions would evolve over time, but I’ll keep that part to myself for now. Each type would have its own body or specialization (ground for now, maybe aquatic or aerial someday, which may assist us in places we cannot reach).

This is obviously a huge and ambitious idea and I’m not in a position to build it alone. I’m more on the concept/design side...

For now, I want to explore if this is something worth pursuing and maybe find people who think the same way. I feel like this could be something meaningful if done right, and I’d like to help guide it in that direction.

r/painting Xnovae0

Late Night Painting

first canvas painting i ever made (kinda bad but i really loved it) !`~

r/explainlikeimfive Significant-Monk1804

ELI5: Why did early inventors choose petroleum as an energy source?

How did it happen that someone discovered this ancient oil and then decided to use it for energy? Why was it a good choice for them? (Before knowing the harmful effects ofc)

r/MCPservers glamoutfit

I built a vibe coding platform for ChatGPT & MCP Apps

ChatGPT/Claude/Copilot/Cursor now support interactive apps inside conversations, but building them requires wiring up MCP servers, iframe widgets, cross-frame messaging, and a painful testing process where you redeploy and reconnect to ChatGPT after every change.

We built Fractal to make this fast:
- Describe your app idea
- AI plans the architecture (what goes in the widget vs. what the model handles)
- AI coding agent builds it
- Test in a built-in chat emulator, no need to reconnect to ChatGPT
- One-click deploy

We launched on Product Hunt today and would love to get some love from the community to help spread the word: https://www.producthunt.com/products/fractal-2

r/midjourney AlecBambino

YES or NO?

Is it worth renewing my subscription for the release of version 8?

r/personalfinance MajorTear1306

If you just started investing, you should be praying for a market crash.

It sounds crazy, but if you’re young or just starting out, a red year is the best thing that can happen to you. You are literally buying assets on clearance. Stop cheering for all-time highs when you only have $500 invested. You want the market to bleed so your monthly contributions buy more shares. Who else here actually gets excited when the market drops right before payday?
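The "buying on clearance" claim is just dollar-cost-averaging arithmetic; a quick sketch with made-up prices:

```python
def shares_bought(monthly_usd: float, prices: list[float]) -> float:
    """Total shares accumulated by fixed-dollar monthly contributions."""
    return sum(monthly_usd / p for p in prices)

flat = shares_bought(500, [100, 100, 100, 100])   # sideways market
crash = shares_bought(500, [100, 80, 50, 80])     # dip mid-way

print(f"flat: {flat:.2f} shares, crash: {crash:.2f} shares")
```

The same total cash ends up holding more shares in the crash scenario only because prices fell below $100; the arithmetic says nothing about whether or when prices recover.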

r/mildlyinteresting filmAF

this snow looks like fabric

r/interestingasfuck DearEmphasis4488

Opening an 80 Year old US Army Food Ration Can

r/OldSchoolCool Picapica_ab33

Frank Sinatra, 1930s - Selfie

r/SipsTea Significant-Sir-4343

I had to read that twice…

r/WTF sylphveilix

WTF Energy dinner

r/SideProject Alarming_Land_2361

Built a site for rating interview processes after going through one too many bad ones

Been job hunting on and off for a couple of years and the thing that always frustrated me was there was nowhere to check how a company actually treats candidates during the hiring process.

Glassdoor exists but it's mostly about what the job is like once you have it. I wanted to know things like: do they tend to ghost people, do they ask for unpaid work, how many rounds does it really take, do they bother giving feedback.

So I built hiring-lens.com. You rate the interview process itself. Completely anonymous. Each company gets a Reality Score based on how they treated candidates regardless of whether you got the job.

25 reviews live, 16 companies, 10 industries. Stack is Next.js 15, Supabase, Tailwind on Vercel if anyone's interested in the build side of it.

The classic cold start problem is the main challenge right now. Happy to talk through anything, either the product side or the tech.

r/funny West_Future326

Who planted the bomb🤔?

r/ChatGPT arlilo

The upcoming ChatGPT image model looks very promising

As you can see, the image on the left was generated by GPT-Image-1.5, while the one on the right comes from the occasional feedback survey. I assume the latter was created using an upcoming GPT-Image-2.0 beta variant (or whatever it ends up being called). Both were generated using the same prompt.

So far, this beta model is amazing. The lighting, shadows, reflections, surface textures, and especially spatial awareness have improved significantly. It leans more toward realistic photography rather than a stylized look when the prompt calls for photorealistic images. It also reduces the heavy bokeh effect often seen in AI-generated images, resulting in a more natural and balanced depth of field.

Another thing I like is how the model adds grounded details to enhance realism, even when not explicitly requested. For example, notice the bracelet on the subject’s arm. I can see how this might be an issue for people who prefer very precise control, but it should be easy to address by adding constraints in the prompt.

You might also notice that the beta model outputs a 3:4 aspect ratio, rather than the standard 2:3 used by GPT-Image-1.5. I really like this, especially if it means we’ll be able to choose from more aspect ratios in the future.

Of course, the model isn’t perfect yet. There are still many details that fall apart under closer inspection. It might be unrealistic to expect 100% accurate reflections or depth, but my biggest concern is facial rendering. I rarely get results where the faces look fully correct. Most of the time, they appear stretched or just… off. Even in this example, the subject wouldn’t hold up under close zoom. Hopefully, this gets improved before the production release.

Overall, I’m really looking forward to the upcoming ChatGPT image model. That said, given OpenAI’s track record, I’m keeping my expectations at a reasonable level rather than getting overly hyped.

r/ARAM BenCannibal

What's the most fun you've had in Mayhem so far?

Had a game last night where I wanted Ryze because every game he's available I try to run the Stacking Ryze build that was posted here the other week and on the mayhem build website; basically you get the right augments and he becomes infinitely stacking.

Someone else picked him, normally when I speak in chat I just get ignored but he was a German guy and I asked if he'd tried the build, he said he hadn't but he would, I gave him a really quick rundown of it in the pregame lobby and when we joined the game he asked what to do.

Level 3 we get a Prismatic roll and I tell him look for Goliath, he gets it! I say rush rod of ages and a tear for staff, level 7 it's a silver augment I say he HAS to get ADAPt or it won't work, he doesn't get it but he gets offered the mana > hp one and I was like YES TAKE It, level 11 we get a Gold one and I say look for tank engine, he doesn't get it but gets overflow which is amazing on Ryze.

Level 15 rolls around... Silver augment... He GETS ADAPt I was genuinely ecstatic, I say to our team who hadn't talked at this point and the enemy team please just let Ryze stack and finish build and they actually agree, I said he can get 3k ap!

I shit you not, the game goes on for about 26 mins the only thing is he has muramana at about 171/360 charge, he has 37000 hp and hits 10k AP! It was absolutely unbelievable, sadly he was one hitting everyone so it was hard to stack and our vi kept trying to end the game so he didn't fully finish but it was the most fun any of us had in weeks, we were all talking in all chat about how ridiculous it was and how it should be illegal and I ended up downloading the playback.

I complain a lot about the poor quality of games in mayhem but my God it was like one of those games that just feels so good.

How about you, what's the most fun game you've had?

r/arduino Alopexy

Turned a Cheap Yellow Display (ESP32) into a touchscreen music player

I’ve been working on a standalone music player built around an ESP32 using one of the Cheap Yellow Display boards.

It started as a small personal project, but kind of grew from there. I’ve ended up building a few different working units while iterating on the design.

The current prototypes include:
• local music playback (MP3, AAC, FLAC, WAV)
• touchscreen UI (LVGL-based)
• Bluetooth audio output
• animated wallpapers and various visualisers
• a web interface for uploading and managing music

A big part of the challenge has been working within the ESP32’s constraints, especially RAM, while handling decoding, UI rendering, and audio output at the same time.

The video shows a couple of the working units in action (audio is playing from the BT Bose Revolve II behind the units), including one with the board exposed.

Happy to answer any questions about the build! What do you think?

r/hmmm Sufficient-Set2644

hmmm

r/me_irl HyperionLoaderBob

Me_irl

r/leagueoflegends Ultimintree

Gen.G vs. G2 Esports / First Stand 2026 Knockout Stage - Semi-Final / Game 1 Discussion

FIRST STAND 2026 KNOCKOUT STAGE

Official Page | Leaguepedia | Liquipedia | Twitch | YouTube | Patch 26.05 | Bo5 Fearless Draft


Gen.G 0-1 G2 Esports

GEN | Leaguepedia | Liquipedia | Website | Twitter | Youtube | Facebook | Instagram | TikTok
G2 | Leaguepedia | Liquipedia | Website | Twitter | Youtube | Facebook | Instagram | TikTok


GAME 1: G2 vs. GEN

Winner: G2 Esports in 32m
Runes | Game Breakdown

Bans:
G2: Orianna, Ryze, Varus, Mel, Sylas
GEN: Karma, Jarvan IV, Rumble, Gnar, Ahri

Totals:
G2: 68.6k gold, 16 kills, 11 towers, dragons ⚡🔥⛰️⛰️, VG/RH/BN 2/0/1, team KDA 16-4-34
GEN: 56.2k gold, 4 kills, 2 towers, dragons ⛰️, VG/RH/BN 1/0/0, team KDA 4-16-10

Picks (G2 vs GEN):
TOP: BrokenBlade (Sion, 0-0-8) vs Kiin (Renekton, 1-4-1)
JNG: SkewMond (Pantheon, 4-0-7) vs Canyon (Xin Zhao, 0-2-2)
MID: Caps (Aurora, 7-3-5) vs Chovy (Galio, 1-4-2)
BOT: Hans Sama (Miss Fortune, 5-0-3) vs Ruler (Ashe, 2-3-2)
SUP: Labrov (Nautilus, 0-1-11) vs Duro (Seraphine, 0-3-3)

This thread was created by the Post-Match Team.

r/whatisit lecutinside11

What is this Ukrainian food?

A really nice old Ukrainian lady gave me some of these. Her English is not very good so I don’t know what they are called, but they’re amazing! I need to find a recipe.

r/interestingasfuck ateam1984

Metal Parrot

r/metaldetecting Same_Argument4886

Metal detecting around Rosenheim/Bavaria

Anyone metal detecting around Rosenheim?

I am searching for people around 20/30 who are interested in metal detecting together!

r/onejob More-Explanation2032

The and The

r/SideProject doppelgunner

My tool was awarded #1 in weekly on NextGen Tools | And you can too.

I launched my tool VIP List on NextGen Tools without using my large social media following to get upvotes. Here is the exact process I followed. Anyone can copy this, even without a big audience.

Step 1: Create a strong first impression
Use a catchy logo, name, and tagline. These are the first three things people see when your tool launches on platforms like Product Hunt or similar sites.

Step 2: Offer a discount or deal
Add a coupon or discount. NextGen Tools has an upvote wall. People need to upvote your tool before they can see the discount code. This helps increase your upvotes.

Step 3: Promote your tool everywhere
This is the most important step. Promote your tool page as much as possible. Share it with your audience, followers, or on social media platforms.
For my launch, I used Reddit. I asked people to check out my entry and upvote it.

Step 4: Repeat the promotion
Keep promoting your tool page again and again until you get enough upvotes. Early traction helps your tool perform better on launch day.

Step 5 (Optional): Use a premium launch
I chose a premium launch because I did not want to wait in the queue.

I think these are all the steps you need. Find a subreddit that allows you to promote your launch.

r/Adulting Dismal_Resolve_9398

ISO perspectives from adults who had/have positive alcohol relationship

Those of you who did not drink until 21 (or close to), never binge drank, are happy with how you experienced and currently use alcohol- what are the biggest contributing factors to that?

I am one of those people. My husband is not. We have kids and they are very young, not yet concerned with alcohol, but I find myself worrying about parenting and safe choices and teenagers and college party-ers already. I see comments on news articles on DUI’s or other alcohol accidents and that are like, “We’ve all been there!”, “I don’t know I survived!”. I want to hear some of the opposite narrative.

My husband credits lax parent supervision, higher risk friend group, his own impulsive choices, and trying alcohol as a preteen as factors that eventually led to alcoholism in his late teens and young 20’s. He does not drink at all now. And he is very lucky he got out of all that with minimal consequence. Yes, when I was 21 I got drunk occasionally, but I stayed within certain limits. Never binge drank. Never drank enough to black out, not remember the night, never threw up. I did not drink every weekend and went to more parties sober than not and still had fun. I have a drink maybe once or twice a month now.

As a kid, my family did not make alcohol a big deal. My parents drank mildly and only in social situations. I never saw them drunk. They drank more when they were younger and did the “do as I say, now as I [did]” narrative which IMO is not a helpful way to talk to kids but it worked for me. I have two younger siblings and my sister’s drinking experience is very similar to mine. Our brother was a little more reckless, but he had a totally different type of friend group and our parents gave him more freedom once he was a teenager than they had given us.

I was overall a good kid and intrinsically motivated to do well. But not in a stressed or perfectionist way. My sister and I naturally got all A’s without trying too hard in high school. Did a lot of activities and hobbies and seasonal jobs. This carried me through college, too. I am also a vivid worrier and the mental load of drinking before 21 was too much for me. The thought of being more vulnerable than I already was as a college aged woman was scary. Seeing other people have consequences from their drinking was a deterrent.

Growing up, my friend group was small but solid. My two closest friends had younger siblings same age/gender as mine and our parents became friends, too, so we knew each other well and there was trust. My friends and I were the oldest in each family and I think that helped. My friends and I had a lot of fun and did a lot on our own yet we never tried alcohol or sneaking out as teenagers. We probably could have, but it never occurred to us. I had a cousin and a friend, both from other towns/schools, who drank and did risky teen behaviors. I saw them often but I think having different school and social networks helped that not overlap. In college, coincidentally my assigned roommate the first two years did not drink, either, so I did not have to deal with it in my own room or personal areas.

Sorry for the long post but I’m just thinking out loud. I think parent relationship, friend group, personality, and fulfilling hobbies/activities kept me from drinking too early or too much. As an adult, the continuing research on how alcohol is not good for us keeps me from using it often. When my kids are older I want to talk explicitly about risk vs. reward and not just the "don't do it" way like my parents. I hope some of the factors above that I experienced can be replicated with them, too. Please share yours!

r/TwoSentenceHorror ArcOperator

I wish I had a rapid regenerative healing factor.

As the monkey paw fell to the ground, I found myself overlooking a factory window with lines of people outside and signs indicating a global food shortage.

r/funny Pvndicxrn

Wish my anger could be that chill fr

r/AbstractArt Natural_Extreme5550

Curious how this lands for others

r/Frugal GrowBeyond

Looking for free budgeting software with mobile and desktop options, and the ability to import files from my bank, with a reasonable number of accounts

EveryDollar doesn't allow imports, Goodbudget only allows a single account, YNAB is great but paid, and Actual Budget doesn't have a mobile app.

I'm fine with a spreadsheet tbh. But I don't know how to keep a balance in a category from month to month / sheet to sheet (i.e., my food budget carries over instead of resetting each month).

I also don't know how to import and track transactions with a spreadsheet.
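The carry-over logic is just: next month's balance = this month's balance + budgeted - spent. A sketch of it in Python; the same formula works as a spreadsheet cell referencing the previous month's sheet:

```python
def carryover(start: float, months: list[tuple[float, float]]) -> list[float]:
    """Roll a category balance forward month to month.

    months is a list of (budgeted, spent) pairs; returns the balance
    at the end of each month instead of resetting it to zero.
    """
    balances = []
    bal = start
    for budgeted, spent in months:
        bal = bal + budgeted - spent
        balances.append(bal)
    return balances

# Food category: $300/month budgeted, variable actual spending
print(carryover(0, [(300, 250), (300, 320), (300, 280)]))
```

In a spreadsheet the equivalent is a cell like `='Jan'!B5 + B2 - B3` on the February tab (sheet and cell names here are illustrative): the previous sheet's closing balance plus this month's budget minus this month's spending.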

r/personalfinance HolidayFix7182

401k Pretax Account to Aftertax

What would be the best tax strategy for converting pre-tax 401(k) funds to an after-tax account, so I am not hit with a big tax bill at retirement?

r/space missdawg420

What could this squiggle be?

r/Art Anastasia_Trusova

soul of a wise old man, Anastasia Trusova, acrylic, 2024

r/comfyui Previous-Alps-6500

My custom Prompting node

first post on reddit so please dont hate me if i do something wrong.

I was looking for a node like this for a long time but i couldnt find anything useful so i asked chatgpt about it and it gave me some nice info and code.

this is the Result. A Prompting node (i know it is very exciting but please keep your panties in check)

how this works is that you have a master prompt field for the basic stuff in your pictures.

then you have 5 addon fields that you can activate and deactivate in any order you want.

After that you have 5 fields that work as an "or", which means only one of the fields can be active at a time. I made this so I don't have to write and delete the same prompts over and over when creating a set of images with different characters and actions.

Maybe you will find this useful, maybe you won't, but I just wanted to share this here, as I have no idea how to upload it to GitHub and the other places.

For installation, just unpack the zip and put the folder inside into ComfyUI's custom_nodes folder, then start up ComfyUI. You can find the node under Ozzytools.

have a great day and a lot of fun!

Download : https://www.mediafire.com/file/190f1cqm2ogv3qy/ozzyprompter.zip/file

r/comfyui Difficult_Singer_771

I'm going crazy.

Does anyone use comfyui? I'm going crazy🤪 no prompt works

r/Jokes Dona_nobis

The job

Brunette: "I think they'll fire her. She blew the whole job."

Blonde: "Really? When I did that I got a raise."

r/LocalLLaMA fab_space

LLM waf + proxy

Hello, this is not promotional; it's pure research and a call to the community, since the most expert people in this domain are here.

https://preview.redd.it/cvm4b8h2mdqg1.png?width=2560&format=png&auto=webp&s=3cf316956d9ca99f33872bb701e1e60023de1b30

Here is my gift. Let's iterate together on the plugin system and the codebase itself to improve each other's knowledge and real-world experience in the AI security domain!

https://preview.redd.it/3xiwzws3mdqg1.png?width=2568&format=png&auto=webp&s=16a7e1221a9b20c50c5f29034dee3104b8fb6b5e

https://preview.redd.it/ro2bew25mdqg1.png?width=2558&format=png&auto=webp&s=6500638df867d9bc61475269810cd95d78435443

Just released the v1.0.0 of an all-in-one extensible WAF & caching platform for LLM workloads: LLMProxy.

It comes with a ring-based plugin pipeline, WASM-sandboxed plugin execution, NLP-powered PII detection and a real-time Security Operations Center UI.

Caching and WAF are integrated in one tool with a plugin-based extensible system. A full marketplace could be built on top of that, but I am more interested in the raw security research, maintaining a fully open-source version, since I already sell custom setups.

Play and contribute: https://github.com/fabriziosalmi/llmproxy
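
For anyone curious what a PII-detection stage in such a proxy conceptually does, here is a deliberately tiny regex sketch; this is my illustration only, far simpler than the NLP-based detector the project describes:

```python
import re

# Toy PII scrubber for prompts passing through a proxy. Real systems (and
# an NLP-based detector) go well beyond regexes; this just shows the idea.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text):
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(scrub("Contact jane.doe@example.com, SSN 123-45-6789"))
# Contact <email>, SSN <ssn>
```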

r/LocalLLaMA Frosty_Chest8025

Question for those who have built multi-GPU rigs using MCIO Gen 5.0

Hi,
Those smart ones who have built multi-GPU rigs with MCIO cables and adapters: which adapters, cables, and cable lengths have you used?

I have 3 MCIO Gen 5.0 components, and the problem is that they work only at x8 Gen 5.0 or x16 Gen 4.0 speeds. I am not able to identify which component is the weakest link causing errors at x16 Gen 5.0 speeds.

  1. MCIO male to male cables are 80cm long:
    https://www.kalea-informatique.com/pcie-sas-5-0-cord-mcio-8i-to-mcio-8i-80cm.htm

  2. Adapter for the motherboard pcie slot is 16x gen 5.0
    https://www.kalea-informatique.com/pci-express-x16-to-two-mcio-8i-nvme-adapter.htm

  3. adapter which goes to the GPU is this:
    https://www.kalea-informatique.com/mcio-pcie-gen5-device-adapter-2-8i-to-x16.htm

So with the above components, I can run a Gen 5.0 GPU only at x8 speed. On some occasions the server IPMI shows errors, but everything still works. When trying x16, the connection is detected as 5.0 x16, but under full load the whole system crashes.

I am unable to identify the bottleneck. I suspect it could be the cable, but I'm not sure where to get a reliable, shorter one.
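
One way to narrow down which component is the weak link on a Linux host is to compare each device's negotiated PCIe link against its capability (the same LnkSta/LnkCap info `lspci -vv` prints). A sketch reading the standard sysfs attributes:

```python
import glob
import os

def degraded(cur_speed, max_speed, cur_width, max_width):
    """True if the negotiated link is below the device's capability."""
    return (cur_speed, cur_width) != (max_speed, max_width)

# Walk every PCI device and flag links that trained below their maximum;
# such a device (or the slot/cable/adapter feeding it) is a likely weak link.
for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        vals = [open(os.path.join(dev, f)).read().strip() for f in
                ("current_link_speed", "max_link_speed",
                 "current_link_width", "max_link_width")]
    except FileNotFoundError:
        continue  # not every device exposes link attributes
    cur_s, max_s, cur_w, max_w = vals
    flag = "  <-- degraded" if degraded(cur_s, max_s, cur_w, max_w) else ""
    print(f"{os.path.basename(dev)}: {cur_s} x{cur_w} (max {max_s} x{max_w}){flag}")
```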

r/SideProject Difficult-Angle-4715

OnTheRice.org

r/meme RoboGef

I'm Christian by the way

Then I invite them to church.

r/leagueoflegends Numerous-Sky-8406

Best in-game chat roasts you've heard?

I was playing a week ago and some dude kept spamming "14" into the chat. I asked him if that was his age. He replied: "no, that's your IQ." I felt so humbled I didn't say anything for the rest of the game.

r/ChatGPT Vegetable-Tiger-4180

Why does ChatGPT have limits?

I was trying to ask something and it said I reached my DAILY LIMIT?!

Since when do they have this? I'm not gonna pay for Pro and other stuff just to ask questions.

And it says switch models but I CAN'T.

Someone who uses the free plan, help me pls

r/SideProject HiteshMistry

Building something from my own injury and pain

Had a knee issue recently and got frustrated with how hard it is to understand what’s happening day to day.

Some days it feels better, then suddenly worse again. Kept second guessing everything.

So I started building a simple tool to track symptoms and spot patterns over time.

Early users are interesting — a lot of chronic conditions, not just injuries.

Feels like the problem is more about uncertainty than pain.

If anyone’s curious, this is what I’ve put together:

askemily.com

Would genuinely appreciate any feedback.

r/Art LiteratureRiver

Our Fragile World, Talesandillustrations, Mixed Media, 2026 [OC]

r/PhotoshopRequest Born-Building-2715

Adding son - willing to tip 10

My two younger boys took a Christmas picture together at their daycare and I want to add my oldest son to it so we can frame it and hang it. I’d like him on right side of the toddler so that the smallest kid is in the middle but honestly whatever looks more natural. I’ve uploaded the Christmas picture, a picture of all three of them sitting together, and a few pictures of my son to be added. I would like him in the same plaid shirt to match if possible. Please do not judge the picture of the three of them together it was a first day of school picture and no one wanted to be awake 🥴🤣 I know it’s a bit of a complicated request so I am certainly willing to pay!

r/Unexpected Ill-Tea9411

Something is in the wall, what is it?

r/LocalLLaMA Fearless-Display-440

Deploying "AstroAI" (Beta 1.1) – A high-speed, multilingual pilot persona powered by Llama 3.3 and Groq

Hey everyone!

I wanted to share my latest project, AstroAI (Beta 1.1). While it was originally optimized for the Slovak community, Llama 3.3’s multilingual capabilities turned it into a true polyglot navigator.

The Tech Stack:

Model: Llama-3.3-70b-versatile (The sweet spot for reasoning and multilingual support).

Inference: Groq LPU (The response time is basically instant, perfect for real-time Telegram interaction).

Backend: Python (Telebot) hosted on PythonAnywhere.

The Challenge: Personality vs. Language I didn't want a generic "AI assistant." I’ve engineered AstroAI with a "Space Pilot" persona. Whether you talk to it in English, Slovak, or any other language, it maintains its character: a bit of a "rebel" attitude, expert technical knowledge, and a love for speed.

Latest Update (Beta 1.1): I fine-tuned the temperature (down to 0.3) to ensure high precision in math and technical explanations while keeping the pilot persona intact across all languages.

What's next? For Beta 1.2, I'm planning to implement better Markdown formatting and improved context management.

I'm curious – has anyone else noticed a significant jump in multilingual reasoning with the 3.3 70B model compared to 3.1? It feels much more "human" in its translations.
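
The stack above is simple enough to sketch end-to-end. This is my illustration of the wiring, not the author's code; the token environment variables and the system prompt are placeholders:

```python
SYSTEM = "You are AstroAI, a rebellious space-pilot assistant. Reply in the user's language."

def build_messages(user_text):
    # The persona lives in the system prompt; each Telegram message is a user turn.
    return [{"role": "system", "content": SYSTEM},
            {"role": "user", "content": user_text}]

def main():
    # Third-party deps: pyTelegramBotAPI (telebot) and Groq's OpenAI-style SDK.
    import os
    import telebot
    from groq import Groq

    bot = telebot.TeleBot(os.environ["BOT_TOKEN"])
    client = Groq(api_key=os.environ["GROQ_API_KEY"])

    @bot.message_handler(func=lambda m: True)
    def reply(message):
        resp = client.chat.completions.create(
            model="llama-3.3-70b-versatile",
            temperature=0.3,  # low temperature for precise technical answers
            messages=build_messages(message.text),
        )
        bot.reply_to(message, resp.choices[0].message.content)

    bot.infinity_polling()

# Call main() to start polling (requires a bot token and a Groq API key).
```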

r/SideProject petertheill

I got tired of collecting event photos from 6 different places, so I overhauled Knipsmig

A side project I've been building for a while is https://knipsmig.com — just pushed a major overhaul and wanted to share it here.

The problem was embarrassingly consistent: everyone takes photos at a wedding or party, and then afterward they end up scattered across WhatsApp threads, AirDrop attempts, random Drive folders, and the handful of people who never send theirs.

So I built Knipsmig to make that part boring in a good way. Oh, and it's free too :)

You create an event, share one QR code or link, and guests can upload straight from their phones without installing anything or creating an account. Everything lands in one private gallery instead of five different places.

real-time updates while guests upload photos

The main thing I’ve learned building it is that this isn’t really a “photo sharing” problem. It’s more of a social-friction problem. If sending photos later takes even a little effort, a lot of them just never get shared.

The overhaul focused on making the whole flow feel much closer to what real weddings and events actually need.

If anyone here has built for weddings, parties, or other once-a-year/high-emotion use cases, I'd love blunt feedback on the product or positioning.

r/LocalLLaMA Adventurous-Gold6413

16gb vram - what is the better option for daily driver (main use)

Qwen 3.5 35ba3b q4K_XL UD - full 260k context, ~20-30 tok/s (expert offloading to cpu)

Or an aggressive Q3 quant of the 27b but within 16gb vram with 20k ctx q8 KV cache?

I can’t decide what quants are the best, people have been saying unsloth or bartowski quants are best.

Any recommendation?

I heard the 27B is truly amazing but with q3 I’m not sure.

For 27b:

Q3_K_XL UD, Q3_K_M, Q3_K_S, IQ3XXS UD?

I care a lot about context, by the way; 16k is the absolute minimum but I always prefer as much as possible. (I don't want slow speeds, which is why I want it to fit in my 16GB.)
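
One way to reason about the trade-off: KV-cache memory grows linearly with context length. A back-of-envelope sketch (the layer/head/dim numbers below are illustrative placeholders, not the real specs of either model):

```python
def kv_cache_bytes(n_ctx, n_layers, n_kv_heads, head_dim, bytes_per_elem):
    # 2x for keys and values, one cache entry per layer per token.
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem

# Illustrative dims (NOT the real specs of either model in the post):
gib = 1024 ** 3
full = kv_cache_bytes(260_000, 48, 8, 128, 2)  # fp16 KV cache at 260k context
q8   = kv_cache_bytes(20_000, 48, 8, 128, 1)   # q8 KV cache at 20k context
print(f"260k ctx fp16 KV: {full / gib:.1f} GiB, 20k ctx q8 KV: {q8 / gib:.2f} GiB")
```

The point of the arithmetic: at huge contexts the KV cache, not the weights, dominates VRAM, which is why a smaller context with a quantized KV cache can fit where the full-context option cannot.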

r/ChatGPT Mediocre-Fondant-659

Built a GPT that generates landing pages in under 2 minutes, no design or coding needed

I've been working on this for a while and finally feel good enough about it to share. You describe your product, it handles the structure, copy, and SEO automatically. Sections are pre-configured so you're not starting from scratch.

Still rough around the edges but it's saved me a ton of time. Would love some feedback from people who actually build things.

Link in comments.

r/aivideo Tasty-Information-37

Skeleton Key by Voltage Rock Lab

r/ChatGPT esschallert

ChatGPT has been thinking for 430 minutes, how can I make it answer?

r/findareddit quirkywater18tag

What’s the sub for asking questions related to travel apps/websites, such as trip.com?

r/explainlikeimfive Dr_Blockhead

ELI5 Why do trees in mountainous regions have such thin trunks?

r/brooklynninenine ApocalypticSnowglobe

Jake and Sophia vs. Peratlago

Jake and Sophia never would have worked out long-term, and not entirely because they had conflicting jobs. They were also completely different people. While they had more common interests than Jake and Amy, their values and personalities were completely different. Plus, Sophia seemed embarrassed by Jake at times.

By contrast, while Jake and Amy were outwardly completely different, their values were the same and their personalities were way more similar than either would care to admit.

r/LiveFromNewYork Sinister_Legend

My Season 41 Cast Ranking

I was a bit disillusioned by Season 40 at the time (save for the anniversary special, which blew me away), but 41 felt like a big step up. That was the year I told everyone that SNL was good again and they should tune in...not that anyone listened. There was one big episode that was truly horrible, and you know the exact one I'm referring to, but I won't waste my energy on that. Overall, this was a very solid season. I liked about half the political humor while the other half would be better if it didn't age so poorly. Shoutout to Larry David for bringing up the average.

This also felt like the last truly great season of the show. After having a bit of trouble recovering from the previous era ending, this was the year that this era finally found its voice. Then, with the departure of Taran and Jay, followed by Vanessa and Bobby, the over-reliance on preachy material, endless cameos, a cast that outstayed its welcome, the desperate attempt to be on the right side of history, Alec Baldwin, and allowing Mikey Day to be in front of a camera, it began the slow decline and I never loved the show again. Part of it might have been that I was about to start college, but it also was just the political, historical, and cultural shift. Not to sound like a 27-year-old curmudgeon, but it was no longer the bright and colorful early 2010s and it was hard to move on from the better days. I mean, look at the cast dancing to Springsteen singing "Santa Claus Is Coming To Town." They're having so much fun.

Side note - Darrell Hammond being the announcer just seems wrong. His timing is just off and he has no charisma. And 12 years later, he still hasn't improved. I know, Pardo couldn't stay alive forever, but a part of me wishes they had just gotten him to record as many words as possible so they could use him forever.

Okay, now that that's out of the way, let's kick into the last great years cast!

Vanessa Bayer
(MVP - Amy Schumer, Elizabeth Banks, Adam Driver, Melissa McCarthy, Fred Armisen)
I know I know. My bias for my first SNL crush strikes again, but I really felt like this was Vanessa’s best season and she was the secret weapon of this year. There’s no denying she totally nailed it.
Best - once again, I have a hard time picking one because I just love her in sketches. Laura Parsons finally becomes an Update character and is a total hoot! I also like her trying to be Billy Crystal in Katz Deli, her reaction in Aron’s List, Rachel from Friends, Kids Choice Awards, God Is A Boob Man, plus Santa Baby is just fucking perfect and she’s a big reason why it is. I also wanna single out the part where the cast is onstage for Bruce’s performance where Paul McCartney sees Vanessa and high fives her and the priceless look on her face afterwards. It makes me think he also thought she was the best cast member.
Worst - it’s really sad that the Pornstars final sketch was ruined by that abhorrent piece of shit.
Would I bring her back? That's a big hell yes from me! We have one more season and I’ll take as much Vanessa as I can get.

Bobby Moynihan
(MVP - The Pedophile-in-Chief)
I remember thinking at the time, “how on Earth did I end up liking Bobby Moynihan?” I can’t explain why, but for some reason, the things that used to bug me about him just stopped bugging me here.
Best - I loved his George Lucas, also playing the dad in Movie Night, his brief part in Kids Choice Awards, Star Wars Toys, your friend who says “It’s not that bad”, and Kevin Roberts friend who got that bitch a donut. By the way, I owe every Bobby fan an apology for not ranking Brian Kilmeade in his best moments for the past few years. I don’t know why I didn’t. I even liked Anthony Crispino and Drunk Uncle, although the latter was during one of the worst episodes ever.
Worst - there was bound to be something I didn’t like from Bobby and it was Riblet.
Would I bring him back? Yeah! Even if 8 seasons is a lot, I’ll allow you one more and let you leave with the other vets.

Kate McKinnon
(MVP - Ryan Gosling, Brie Larson)
Kate was everywhere this year. I remember watching this knowing that Kate was becoming the new star of the show. I was for it, but she soon became overexposed and I got sick of her. I see parts of that in hindsight, but at the time, I loved Kate.
Best - she does one hell of a Hillary Clinton. Her mugging didn’t bug me with Hillary, because she played her like she was mad and boisterous, which suited her (or should I say pants suited her). I also obviously loved Close Encounters, but it was hard to watch it knowing that I’ve seen it hit diminishing returns. If we just had the debut and the one in her final episode, I’d be happy. I also loved America’s Funniest Cats (although I thought the Tom Hanks version was funnier).
Worst - her mugging was on full display in parts like Debbie Wasserman, I have no idea what she was going for in that Update piece as Someone’s Mom, and while I liked the performance of Sturdy Barbie, the writing wasn’t there. I got tired of Whiskers R We very quickly.
Would I bring her back? Yes, but not for another 6 years. 2 more is plenty.

Taran Killam
(MVP - Miley Cyrus, Chris Hemsworth)
Taran didn’t have as bad of a year as people warned me. In fact, if they didn’t say anything, I would’ve said this is a pretty damn solid year. He had a few misfires, which came with dominating the cast, and his last few episodes were light on Taran, but I still admired him for the same reasons I always have.
Best - he’s got Martin O’Malley, Brother 2 Brother, Kids Choice Awards, Cinema Classics with Julia Louis Dreyfus, and a really funny floating dead guy in Undersea Hotel. It’s stuff like that that makes me really like Taran’s dedication to performing.
Worst - I can see where he was going with his Trump, but it doesn’t cut the mustard. I always thought Bobby should’ve been the choice. I also didn’t get why he was given Ted Cruz. I guess you can’t give Bobby the Christie, Trump, AND Cruz parts, but he just never got it right. But the worst moment for Taran was Right Side of the Bed.
Would I bring him back? I would let him decide. If he wanted to leave, I wouldn’t blame him. However, I’d prefer if he got to go out with Vanessa and Bobby.

Jay Pharoah
(MVP - Drake)
I think I have to clarify that the only reason Jay is ranked this high is because he rarely had a lot of low moments. While Taran had a bit of a dip at the end of his year, I thought Jay had a mostly really good year and might have been, by default, his best season.
Best - His most notable part this year was Ben Carson, which I loved then. Really good JB Smoove in Bern Your Enthusiasm. His two Update bits as himself were total riots. Speaking of Update, I remember the first time I saw Solomon and I hated it, but then I rewatched it a few days later and loved it. Now, I wish we saw him again. God dang!
Worst - That Winnie The Pooh sketch was not for me, but he had a good performance. Once again, I just don’t understand his Shaq impression. His worst honors goes to Premium.
Would I bring him back? Yes. I say let him complete his contractual obligation and head out with Vanessa, Bobby, and Taran.

Cecily Strong
(MVP - Jonah Hill)
This rewatch made me see what a really solid actor she once was and had a big beginning of her tenure. This year, she was overshadowed by Vanessa and Kate.
Best - pretty solid Rachel Maddow with a more understated approach than Abby Elliott’s. I also liked her in America’s Funniest Cats, Fondulac News, and it was nice to see the return of Girl At A Party. Also, I was reminded that I used to love her as blonde nutjob MAGA women, like she did in the Peter Dinklage open.
Worst - her addressing the shootings in France. Look, I know it's a tragedy, but I come to SNL for comedy. Maybe I should let it slide because it was rare for the show to do that, but it would become normalized and I just don't think the show has to address every unfortunate event. Did we really ever expect them to? Silent Night in 38 worked because they didn't address it directly. You knew what it was, but it worked without context. I just think there are better ways to do it than starting the show depressed, 'cause then it's a weird mood shift and the show has to try extra hard to pick up.
Would I bring her back? Yes, but she’s also someone I thought stayed on too long.

Beck Bennett
(MVP - Matthew McConaughey, Ronda Rousey)
3rd year for Beck and it started a little slow, but by the 2nd half, I saw someone who deservedly survived the Newbie Fire of 39™. Beck’s a solid straight man but I never thought he was as good as Taran.
Best - he had funny impressions with Javier Bardem, Jeb Bush, and Bing Crosby, a good straight part in Brother 2 Brother, funny lines in At The Club, and the hardest I laughed at Beck was Inside SoCal when he asked Taran about his fucking baby.
Worst - with limited options, the MC of Meet N Match and a weak repeat of Baby Boss.
Would I bring him back? Yes.

Michael Che
After a rocky first year, I was surprised just how much I liked him and his dark, cynical demeanor. Maybe I’ve just seen so much bad Update in this binge that this is just a refreshing change, but really, I just like the new direction he’s putting it in.
Best - Obviously Update, but also, it’s strange and kinda cool seeing him appear in the occasional sketch and actually giving a fuck, like Family Feud or Screen Guild Awards.
Worst - with limited options, maybe the Scarecrow in The Wiz Live?
Would I bring him back? Yes, but not for 12 years and counting. You can leave after 48.

Kyle Mooney
(MVP - Russell Crowe)
Best - Miley Wedding, Hot For Teacher, his audience part in Jonah Hill’s monologue, Inside SoCal, Lewis and Clark, and his Lincoln Chafee was funny but I actually remember watching that debate and I got the reference. But, my favorite was his Michael Jackson in the Mike O’Brien short.
Worst - Discreet Annihilation was not my favorite.
Would I bring him back? Yep. He’s weird. That’s good.

Colin Jost
After being disillusioned with him in 39, he grew on me this year. He hasn’t fully hit his stride yet, hence why he’s ranked lower, but he’s getting pretty good and partnering him with Che was the right move. These two have had an interesting trajectory. They started off as people I thought were severely miscast, quickly improved, then ended up being up there with my favorite anchors, then became people who stayed on so long that they clearly seem fed up with it and I’m scratching my head over why they’re still beating the dead horse.
Best - by default, Update.
Worst - with limited options, his non-verbal role as John Kasich. I thought he would’ve been a good choice for Marco Rubio.
Would I bring him back? Like I said, stay till 48 and then head out.

Kenan Thompson
This binge has made me lose respect for Kenan. When I was younger, I liked him a lot and even thought giving him the longest tenured record was deserved. After he beat it, I thought it was time to pack it up but then just settled with the fact that he’s part of the furniture. Aside from 37 and 38, I haven’t seen any season that makes me think he still should be here.
Best - I liked him as Steve Harvey in the Tracy Morgan Family Feud, Blues Shack was some solid straight work even if the sketch was just okay, and he was brilliant in Kevin Roberts.
Worst - seeing him play a game show host named Allen DeGeneres reminded me of those dumb name puns they still do now, his appearance in Ronda Rousey’s monologue, and I don’t need his Al Sharpton anymore. But the worst is totally David Ortiz.
Would I bring him back? No, but sadly, this is the point where he always survives the 8H fire and it’s out of our control.

Aidy Bryant
Meh. I guess I’ll take Aidy over Nasim, but she’s someone I consider just okay at best and, like many of this era, stayed on the show for too long.
Best - she had the classic Hot For Teacher and we got a repeat of Melanie from 39. She was also hilarious in Glory Hole.
Worst - I didn’t like Queen Aquata or the sketch she was in, but I guess her being confused for Rosie O’Donnell was her worst moment.
Would I bring her back? After 4 years, I feel like I’ve seen everything she can do. I can really take her or leave her at this point.

Sasheer Zamata
It pains me to rank her this low. She may not have been used much, but I always found her to be likable and a welcome presence. I might have a personal bias, but I just liked having her on the show. I know that she's talented and has the right stuff, but she was constantly overshadowed by Leslie.
Best - shooting a gun at the sky, The Chief from Carmen Sandiego, and a pretty good Eartha Kitt impression.
Worst - her lack of airtime just kinda bugged me. She finally got an Update bit, which wasn’t bad, but she seemed very giggly. A little too giggly, like she was hiding how nervous she was and tried to get the audience more on her side.
Would I bring her back? It would take a few episodes before you realized she was gone. If it were up to me, I’d keep her and make sure they write some damn sketches for her!

Leslie Jones
I was not happy with putting Leslie in the cast at the time. I think the general consensus on this sub is that she wasn’t the right fit for the show. I don’t wanna be too harsh because there were a lot of right wing morons who were just awful to her and I obviously don’t agree with their morals, but I do think it’s totally valid to criticize someone with no sketch experience who came off as amateur and was often cast as the same character.
Best - her yelling in Katz Deli was a great use of her, plus Ruby Nichols, Family Feud, and Black Jeopardy Daily Double where she shows us a painting from 1943 and asks who killed Tupac.
Worst - her breaking in Hunk Junktion. I also just sensed a sameness in her Update commentaries.
Would I bring her back? If she branched out a bit and didn’t break so much, I’d be nicer, but knowing that won’t happen, we can part ways now.

Pete Davidson
I have to remind myself that I actually once liked Pete Davidson. Hey, I was 16/17 during his first two years, so he was kinda speaking to me. But once he stopped being a comedian and became a celeb fucker, I lost any respect I had for him and thought he was a burden. Looking back at his second year, he’s not terrible but I just don’t have much patience for him.
Best - his first Update commentary was funny, but Movie Night was a great Pete moment that I think only he could’ve nailed.
Worst - I straight up don’t get the angle on his Marco Rubio. Maybe ‘cause there isn’t one. We also get the debut of Chad, a character we truly never needed to see more than once. Oh, and his dumb ass tambourine playing in the final sketch of the season.
Would I bring him back? Knowing that he never improved and would get way worse, I’m okay with letting him go.

Jon Rudnitsky
I don’t blame Lorne for bringing back the whole cast for another year like they did with 37. What I don’t like is that they just HAD to add another white fella who added nothing to the show. He didn’t do much for me then, he continues to do nothing for me now, and I think it was pointless to hire him in the first place.
Best - you're the only guy in the world who gets to say that Miley Cyrus licked your face in your first ever sketch. That counts for something!
Worst - strike one for Jon was his Anderson Cooper impression. I think any first year white guy SNL hires should try an Anderson Cooper impression, because there's nothing to it. You just talk seriously and it'll give you airtime! Why on Earth did you make a gay stereotype out of it? Strike two was Mark the Pirate, which seemed like the show's desperate attempt to make us like the new guy. Strike three and the point of no return was his late appearance in Space Pants. Then there's also stuff like Fire Butt, getting his Wolf Blitzer impression taken from him (rightfully so), and, to be quite frank, I wasn't even fond of his Dirty Dancing bit.
Would I bring him back? Hell to the no.

Non-cast MVP's - Tracy Morgan, Tina Fey & Amy Poehler, Peter Dinklage, Julia Louis-Dreyfus, Larry David (in his hosted episode and the Ariara Gr…damn)

Current ranking of seasons I've reviewed from best to worst:
38, 37, 34, 31, 41, 36, 32, 33, 39, 35, 50

Up next - Season 21. I'm really excited to see how I'm going to rank this one.

r/AbstractArt Additional-Active311

"It was a frightening prospect"

r/space Bchof

My Lego space collection

r/nextfuckinglevel heyanalyst

Huge Elephant Navigating a Hippo Pool in Tanzania Like A Boss

r/estoration Urgently_Patient

Heavily Damaged

r/Art Rich_Pickle2929

Evolution, Robert Filbey, Oil on Canvas, 1967 [OC]

r/toastme Bulky-Use5440

F24. Struggling to accept myself a lot

No matter what I do, I just can't accept how I look. Especially without makeup. Going through an eating disorder because of it :( Gonna be very happy to read some kind compliments from kind people if it's possible.

r/AskMen kmrrhl

What’s the worst thing you can wear on a first date?

r/VEO3 Stock-Promise-2129

Silent Infiltration Against All Odds I AI Cinematic (Sci-Fi Short film)

r/ClaudeAI gazmagik

Update v1.1.0 - Text Adventure Games

Original post

What's new in v1.1.0?

Campaign Arcs — Your Story Continues

The biggest addition. Previously, each adventure was a closed loop. Now when you finish an adventure, you can continue to the next arc as the same character:

  • Character carries forward — level, stats, proficiencies, reputation
  • World consequences persist — factions remember you, NPCs remember you, your choices shaped the world
  • New arc, new adventure — fresh inventory, new quests, but in a world shaped by everything you did before
  • Epic arcs unlock at level 5+ — longer, higher-stakes narratives
  • Branching arcs — your choices at the end of one arc determine which adventure comes next

NPCs Got Smarter

NPCs now have actual stats (STR, DEX, CON, INT, WIS, CHA) and levels. When you try to persuade, deceive, or pickpocket an NPC, the GM secretly rolls for them too — you only see the narrative outcome, never the NPC's numbers. A level 2 bartender is easier to bluff than a level 8 faction leader.
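
The hidden opposed roll can be illustrated like this (my sketch of the mechanic using the standard d20 `(stat - 10) // 2` modifier convention, not the skill's actual code):

```python
import random

def modifier(stat):
    # Standard d20-style ability modifier.
    return (stat - 10) // 2

def opposed_check(player_stat, npc_stat, npc_level, rng=random):
    """Player's check vs. an NPC's defense. Both rolls happen GM-side;
    the caller only sees success/failure, never the NPC's numbers."""
    player = rng.randint(1, 20) + modifier(player_stat)
    npc = rng.randint(1, 20) + modifier(npc_stat) + npc_level // 2
    return player >= npc  # narrative outcome only

# A level 2 bartender is easier to bluff than a level 8 faction leader:
rng = random.Random(0)
wins = sum(opposed_check(14, 10, 2, rng) for _ in range(10_000))
print(f"success rate vs bartender: {wins / 10_000:.0%}")
```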

Visual Atmosphere

New optional module that makes scenes feel different:

  • Environmental particles — dust motes in abandoned corridors, rain outside, sparks from damaged systems
  • Screen shake on explosions and damage
  • Colour flash on critical hits and level-ups
  • Cinematic letterboxing during climactic scenes
  • UI degradation — as narrative tension rises, the interface itself starts showing wear. Borders flicker, colours desaturate, the widget feels damaged
  • Day/night cycle — the ambient lighting shifts based on in-game time

Procedural Audio

Optional ambient soundscapes via Web Audio API — synthesised ship engine hum, rain, heartbeat during tense moments, alarms. No external audio files, all generated in the browser. Play/stop button, 30 second max, no looping.

Tiered Module Loading

Under the hood: the skill now has a priority system for which of its 22 modules Claude reads first. This fixes issues where Claude would skip critical files, leading to broken saves or visual style drift.
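
A priority system like that can be illustrated in a few lines (my sketch of the general idea, not the skill's actual loader; the file names are made up):

```python
# Tiered loading: critical modules sort first so they are never skipped.
MODULES = [
    ("core-rules.md", 0),    # tier 0: always read first
    ("save-format.md", 0),
    ("visual-style.md", 1),  # tier 1: read next
    ("ambience.md", 2),      # tier 2: optional extras
]

def load_order(modules):
    # sorted() is stable, so modules within a tier keep their listed order.
    return [name for name, tier in sorted(modules, key=lambda m: m[1])]

print(load_order(MODULES))
# ['core-rules.md', 'save-format.md', 'visual-style.md', 'ambience.md']
```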

How to try it

  1. Download text-adventure.zip from the GitHub releases page
  2. In Claude Desktop or claude.ai, upload the zip as a project file (Skills section)
  3. Start a new conversation and say "Let's play a text adventure"
  4. Pick a scenario, create a character, and explore

Designed for Opus and Sonnet.

Feedback welcome

This is a passion project. I'd love to hear what scenarios people create, what themes work best, and whether the arc system feels right. The GitHub repo has the full source and release history. Fully free to use and tweak as you please!

TL;DR: A text adventure skill for Claude (claude.ai / Claude Desktop) that turns it into a full tabletop RPG game master — complete with 3D dice, interactive widgets, save/load, and now multi-arc campaigns where your choices carry forward.

r/Art ThisWont_GetSeen

No title, ThisWont_GetSeen, ink, 2026

r/funny PapiSurane

"Why, yes, I am a writer."

r/SipsTea Vixiuss

We love this

r/SideProject Altruistic-Bed7175

My SideProject is no longer a SideProject, I decided to go all in

So yeh, as the title said.

10 days ago, we launched FeedbackQueue, a test-for-test platform that helps SaaS founders get genuine feedback about their SaaS without hunting for testers.

And it worked well: in 10 days we are almost at 200 users, 2 of them paying as well as giving us feedback and reporting their bugs.

Some power users and people genuinely get good feedback on their tools

I'm a freelance copywriter and I invest in some assets from time to time.

That's where my money came from.

But now, I've decided that it's time for me to give FeedbackQueue the attention it deserves and invest in it instead of stocks.

I'm cutting my job, no more gigs, no more searching for a new job

I will start working on the idea and make it work no matter what happens

The definition of the true "Ride or die"

r/OldSchoolCool Familiar_Bid_3655

The beautiful Jayne Mansfield at the Globe Awards ceremony in February 1955.

r/OldSchoolCool Little_Al1991

The leaders of the Lucchese Crime Family in New York City. Anthony “Gaspipe” Casso (Underboss - Left) and Vic Amuso (Boss - Right) FBI surveillance picture taken during the late 1980s

r/SideProject Big-Courage-5552

Built an app to bulk convert Amazon invoices to Excel

I built this app because I found myself having to manually convert a bunch of Amazon PDF invoices to Excel for bookkeeping. It's free to use for anyone interested; would appreciate any feedback :D!

https://amazoninvoicetoexcel.com/

r/PhotoshopRequest throwawayworries212

Hello, looking for someone to touch up and improve this image for me so I can print it for my girlfriend. Image link in description

r/SideProject Glittering_Sky_4088

I built a tool that redesigns your actual room from one photo with furniture you can ACTUALLY buy based on your custom budget

r/DecidingToBeBetter Rob-L-Callistis

I'm stuck acting like a pathetic, impulsive autistic person... How can I break free?

Earlier, before making this post, I... It's gonna sound so stupid, but I accidentally broke something in the house during a temper tantrum, just because I was told to eat at the dining table.

I mean, that's just stupid. I can just eat at my PC desk and that's it...

...

But the thing is, it's gonna be like this every once in a while, and I don't know if I'll ever break free from that.

I can't seek therapy for my autism, I can't help myself even...

I just really cant do anything in general to help myself.

Every day is the same routine. Wake up, eat, play games, sleep... It's even the same when I'm going to the mall.

Spend and eat and play at the arcade, essentially the same, but with money involved.

I'm already 20 years old, and yet I still act like a pathetic and impulsive kid who doesn't listen to their parents. Even crying and complaining doesn't help, because it's been in my nature for so long that I forget the things I was taught in therapy a long time ago...

...

At this point, I don't know what to type here. I just want to know whether I can improve myself, when I can't even do the things that would improve me... Is there a way for me to fix myself and grow into a mature and responsible adult?

Any answer would be appreciated...

r/Futurology Big_Confusion6957

Is AI Making Us Dumb?

We are very conveniently leaving ourselves more and more at the mercy of technology. But we have to understand that AI operates on data, and data is provided by human beings. If we are dumb, our data will reflect that dumbness.

AI has done what it’s programmed to do, but are you doing what you are born to do? You are the user, the ego. And the ego always has a choice. If you don’t exercise choice, there is no elevation; then there is no difference between man and stone.

r/ImaginaryPortals Lol33ta

Portent by Tobias Roetsch

r/SideProject Commercial_Pay_2125

Been working on this for 3 months - AI comic tool that actually keeps characters consistent

hey folks, hope this doesn't come off as spammy but i'm genuinely looking for feedback here

so i've been messing around with AI art tools for like a year now, trying to make a simple comic series for instagram. the problem? every single tool would turn my main character into a completely different person by panel 2.

like i'd have this girl with bob hair and glasses in the first panel, then suddenly she's got long hair and no glasses in the next one. tried midjourney, dall-e, even firefly - same issue. makes it impossible to build any kind of recurring character or IP.

got frustrated enough that i spent the last 3 months building storycomicai.com

the main thing it does different is this "character sheet" system - basically you upload/generate your character once with multiple angles, and it uses that as reference to keep them looking identical across all panels. not perfect but way better than what i was getting before.

also made it export in social media sizes (9:16 for stories, etc) since that's what i needed anyway

it's free to start (100 credits, no card required). would really love some feedback from people actually making comics or content:

  1. is character consistency your #1 pain point too? or am i solving the wrong problem lol
  2. what would make you actually pay for something like this?
  3. any features you'd want that i'm missing?
r/SideProject ApocalipseSurvivor

Just a quick feedback request from devs. Does my project scream "AI slop"? If so, I want to understand why.

I have been building my main project for 8 months. System monitor, Python/PyQt5, solo project.

Posted about it last week and got good engagement, but I also got called an AI-generated fake.

Here's the thing: I don't want to defend myself.

AI helps me with boring loop tasks, or generally with databases.

I want to understand what makes it look fake.

Because if I'm building something real but it looks artificial, that's useful feedback I need to hear.

Inside the releases, you can see the change between:

#1 rebuild - Windows95 potato

#4 rebuild - seriously... better 💪

About me:

800+ hours (yeah I tracked it)

4 complete UI rebuilds

Coded on 94°C laptop after warehouse shifts

Got fired before Christmas, then started rebuild #4 that night

60+ downloads, 15 stars

If you find something, just tell me.

I'm trying to learn here ❤️

Repo: github.com/HuckleR2003/PC_Workman_HCK

Links: linktr.ee/marcin_firmuga

r/SideProject VideoGenieAI

I built a tool that makes YouTube and TikTok videos for me while I sleep

I built Blipix.pro, a platform that automatically turns scripts and audio into faceless videos. I wanted to stop spending hours editing and still grow my channels, so I made something that does it all for me.

You just give it the text, and it outputs a ready-to-upload video with captions, basic animations, and synced audio.

I’ve been using it for YouTube and TikTok, and it’s wild how much content one person can produce now.

It works for any niche, and it’s fully automated. No cameras, no actors, no editing stress.

I’m sharing because I think creators struggling to keep up with content schedules might find this useful.

If you were building a channel on your own, would you try a tool like this?

r/SideProject Opposite-Reach6353

We rebuilt our bookmark organizer from scratch and V2 now supports multiple AI providers and bulk sorting

Last year we both had over 2,000 Chrome bookmarks scattered everywhere. We built MarkMind to fix that. It's a Chrome extension that replaces the default bookmark button with one that actually thinks about where things should go.

V1 worked but was limited. V2 is a proper rewrite with three big changes. First, it now supports multiple AI providers like OpenAI, Gemini, and OpenRouter. You pick, you bring your own key. Second, there's a bulk organizer where you select folders, let it propose a cleaner structure, and review every move before anything changes. Third, your full folder structure is now rendered as a visual tree so you can see the big picture.

Everything runs locally in your browser. No accounts, no servers, no data collection. Open source and MIT licensed.

The flow is simple. It reads the page you're on, compares it against your existing folders, and suggests the right place. You approve or reject. Nothing moves without your say.

Chrome Web Store: https://chromewebstore.google.com/detail/markmind/bdobgdkpeffdbonfpokgkbncgnbnjnoo GitHub: https://github.com/migsilva89/MarkMind

Would love feedback from anyone drowning in bookmarks.

r/Jokes Jokeminder42

A university physics department is seeking funds to buy a cyclotron. "This is millions and millions of dollars!" says the Dean.

"That's our microscope," says the physicist. "We can't do our research without it."

"Why can't the physics department be more like the mathematics department?" asks the Dean. "All they need are pencils, paper, and a garbage can."

The Dean continues, "Better yet, you could be like the philosophy department. They don't even need the garbage can."

r/ProductHunters QuantumOtter514

Launching Today! Built a tool that turns reddit searches into saved projects and inboxes

https://www.producthunt.com/products/red-monitor?launch=red-monitor

We built this to solve our own problem first: jumping between searches of different subreddits and keywords for each of our projects. This way you can have a saved search for competitor research, one for finding product mentions, etc. We have a free demo on the site you can try without paying for anything. It's a pay-once, own-it-forever model: no subscription, no paying by keyword or refresh rate.

r/leagueoflegends Coookies4You

Possibility for Quinn getting a skin for Demacia act 3?

So I've played the civilization mini game and quite enjoyed it. One thing I've noticed is that in every act, the champ getting a skin for the battle pass happened to also be featured in the civ game.

For example, in act 1 Morgana was a champ who appeared in the civ game and has her prestige skin. In act 2 Sona is one of the unlockables and also has a skin for the battle pass.

One champ that isn't unlockable yet but can still be seen in this mini game is Quinn. Is it possible for her to be the one getting a prestige skin in Demacia act 3?

r/Adulting ZestycloseTell2720

Hot take: adulting means letting go of constant optimization (yes, even your life)

I feel like a lot of us hit adulthood expecting some magic combo of routines, productivity hacks, and perfect choices that will suddenly make life feel smooth and under control.

My unpopular opinion: adulting got easier for me the second I stopped trying to optimize every tiny thing and started aiming for boring, functional, repeatable.

Example from my life: I love fashion and beauty, but I used to treat getting dressed like a daily performance review. Perfect outfit, perfect hair, perfect vibe for every plan. If the weather changed or a surprise meeting popped up, I would spiral into "I picked wrong" and it could ruin the whole day. That was not style. That was perfectionism in a cute coat.

Now I wear what I call "uniforms with personality." A few reliable outfits that work for about 80 percent of situations, plus one fun piece when I have the energy. Same with meals: I stopped trying to be a totally new person every Monday. Same with budgeting: I stopped pretending I will track every penny forever.

To me, adulting is accepting that most days are not glow-up montage days. Most days are keeping your life from falling apart while still doing one small thing that makes it feel like yours.

Anyone else feel calmer after you stopped trying to min-max everything?

r/estoration Prior-Conversation76

Can someone help restore this photo of my grandfather? (1971 era, heavily damaged)

This is a very meaningful photograph of my paternal grandfather, taken around the time when Bangladesh was formed in 1971. Unfortunately, the image quality is very poor due to heavy halftone printing dots and noise, making it hard to clearly see faces and details.

I’d really appreciate if someone could help restore it.

r/Art Fuzzy_Champion1116

First Nude Art, Fuzzy Champion, Digital, 2025 [OC]

r/shittysuperpowers Ill-Mycologist-3652

You can mentally airdrop oiled up pics of yourself to people

The pictures do not have to be actually made in real life to be used in the mental image generation. If you don’t have any pics, then a newly made up image of yourself oiled up will be made in their head

The image will last 30 seconds in the person’s mind before disappearing. This will also work on blind people somehow. You have to be able to see the person to do the airdrop

r/metaldetecting AuthurFonzarelli

Vanquish 540 or Xterra Pro?

Which is the better option between these two detectors…Minelab Vanquish 540 or the Xterra Pro? I know there are better options out there, but just interested in the pros and cons of these two.

r/Art TheWhateveristart

Morning in Red, Jon Byrer, mixed media, 2026

r/SideProject frhttt

Built an iOS app to save and summarize anything from the web — looking for feedback

Reading lists are a lie. We save articles and links with good intentions — then never get back to them.

I got tired of it, so I built tldrr.io: an iOS app where you save any URL and get an AI summary instantly. The idea is that even if you never read the full thing, you at least know what it said.

It’s live on the App Store now. Would love honest feedback — does the core idea resonate, what’s missing, or what would make you actually use it daily?

r/ChatGPT Full_Description_969

I got tired of losing my train of thought in long ChatGPT threads so I built something different. Honest opinions?

So here's the thing that's been bugging me for months. Every time I use ChatGPT for anything deeper than a quick question, the conversation gets messy fast. Like you start researching a topic, you go down one path, then you want to explore a different angle but now you're 40 messages deep and everything is mixed together in one long scroll. You can't even find what you asked 20 minutes ago.

And if you want to try the same question in Claude to see if you get a different perspective? Cool, now you're copy pasting everything into a new tab and re-explaining your whole setup from scratch.

So I built a tool where instead of everything going into one vertical thread, your conversation branches out like a tree. You ask something, get a response, and then you can click on any concept in that response to open a new branch. That branch gets all the context from above it but doesn't mess up your main thread. You end up with this visual map of everything you explored and how it all connects.

I've been using it for research stuff like exploring psychology topics where one question leads to five sub-questions and each of those leads to three more. In a normal chat that would be completely unreadable after 10 minutes. On the canvas you can actually see the structure of what you explored and jump between branches.

Right now it works with Llama and Gemini Flash for free, and I'm adding Claude and GPT as premium options. Also working on exporting the whole tree as markdown so your research actually goes somewhere useful instead of dying in a chat log.

Not trying to sell anything here, it's free to try. Just genuinely want to know: does this solve a real problem for you, or am I building something only I care about? Would love honest feedback, even if it's "this is pointless."


https://preview.redd.it/x2j7n6wvidqg1.jpg?width=1080&format=pjpg&auto=webp&s=262128c4e41b1b8c63406853362181f95368af5b
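The branching model described in the post can be sketched as a small tree structure. This is a rough illustration under assumed names, not the tool's actual implementation: each branch inherits the full message history above it, while its own messages never touch the parent thread.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    """One node in the conversation tree."""
    messages: list = field(default_factory=list)  # (role, text) pairs on this branch
    parent: "Branch | None" = None
    children: list = field(default_factory=list)

    def fork(self) -> "Branch":
        """Open a new branch off this one. It inherits all context above,
        but messages added to it never appear in the parent thread."""
        child = Branch(parent=self)
        self.children.append(child)
        return child

    def context(self) -> list:
        """Full history from the root down to this branch, i.e. what would
        be sent to the model for the next turn on this branch."""
        inherited = self.parent.context() if self.parent else []
        return inherited + self.messages
```

Exporting the tree as markdown then becomes a simple depth-first walk over `children`.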

r/SideProject CarPet1987

Pickleball Scoring App for WearOS

I've had a Samsung Galaxy Watch 7 for a few years and also picked up playing pickleball recently so I thought why not write a scoring app for WearOS?

A year ago I was a beginner, and my beginner friends and I couldn't keep track of the score, since the counting method is complicated. Anyhow, I published it and was pleasantly surprised by how many people actually wanted an app like this.

https://play.google.com/store/apps/details?id=com.chtshop.picklepip&pli=1

What I learned is that while coding for WearOS is not terribly difficult, Google places a ton of constraints that when I go to publish the app, it failed so many times due to a button being too big, a text box not scrolling, something working on a "large" watch but not a "small" watch. I could tear my hair out with how little guidance Google gives you on fixing it. I must have resubmitted it dozens of times before it passed all the tests.

I don't have an Apple Watch but if you think I should port it, let me know. There are more pickleball scoring apps for Apple Watch but I don't know if they're actually any good.

r/n8n Substantial_Mess922

I built an n8n workflow that turns company names into full data profiles in seconds

**Ever spent hours manually researching company contact details? This workflow does it all automatically.**

I needed a way to quickly enrich company data for outreach campaigns without switching between 5 different tools. So I built this n8n workflow that takes just a company name and returns everything you need.

**Here's what it does:**

* Start with a simple form submission - paste in company names (one per line)

* Splits each company and processes them one at a time

* Enriches each company with 5 parallel data lookups: official website, verified email, direct phone number, LinkedIn company profile, and employee headcount

* Merges all enriched data into complete company profiles

* Outputs directly to your CRM (HubSpot, Salesforce, Pipedrive) or any destination you need

**The big win:** What used to take 10-15 minutes per company now happens in seconds. Zero manual research, zero copy-pasting between tabs.

**Example usage:**

Input: "Tesla\nStripe\nNotion"

- Results: 3 companies fully enriched in under 30 seconds

- Data retrieved: Official websites, contact emails, phone numbers, LinkedIn URLs, and company size for each

- Ready to import: All data merged and formatted for immediate CRM sync

**The workflow runs 5 enrichment calls in parallel for each company:**

  1. Website discovery – finds the official company domain

  2. Email extraction – gets verified company contact emails

  3. Phone lookup – retrieves direct company phone numbers

  4. LinkedIn finder – pulls the company's LinkedIn profile URL

  5. Headcount data – determines current employee count
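The fan-out/merge shape of those five parallel calls can be sketched in a few lines of Python. This is illustrative only: the real workflow uses n8n HTTP nodes against enrichment APIs, and the lookup functions below are hypothetical placeholder stubs.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder stubs: in the real workflow each of these is an HTTP node
# calling an enrichment API; here they just echo deterministic values.
def find_website(name):   return f"https://{name.lower()}.com"
def find_email(name):     return f"contact@{name.lower()}.com"
def find_phone(name):     return "+1-555-0100"
def find_linkedin(name):  return f"https://linkedin.com/company/{name.lower()}"
def find_headcount(name): return "51-200"

LOOKUPS = {
    "website": find_website,
    "email": find_email,
    "phone": find_phone,
    "linkedin": find_linkedin,
    "headcount": find_headcount,
}

def enrich(company: str) -> dict:
    """Run all five lookups in parallel, then merge into one profile,
    mirroring the workflow's fan-out/merge step."""
    with ThreadPoolExecutor(max_workers=5) as pool:
        futures = {key: pool.submit(fn, company) for key, fn in LOOKUPS.items()}
        profile = {key: f.result() for key, f in futures.items()}
    profile["company"] = company
    return profile

def enrich_batch(companies: list[str]) -> list[dict]:
    """Process companies one at a time, like the workflow's batch loop."""
    return [enrich(name) for name in companies]
```

The merged dicts are then ready to map onto CRM fields in the output node.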

**Use cases:**

* Sales teams building prospect lists with complete contact data

* Marketing ops enriching inbound leads before routing

* Recruiters researching target companies for outreach campaigns

* Agencies managing client research and company databases

The workflow is completely scalable – processes companies one at a time with built-in batch controls, so you can run it for 10 or 1,000 companies.

Happy to answer questions about the setup!

**GitHub:** https://github.com/eliassaoe/n8nworkflows/blob/main/linkedin-workflow8476.json

r/leagueoflegends Numerous_Fudge_9537

In the last 2 years, whenever LEC upset Asian teams, why are their wins always downplayed and reduced to being "The Asian team griefed"?

After the hype dies down, whenever a LEC victory is brought up, European teams are never the team that stepped up and it's always the opponent team choking

G2 vs TES MSI 2024: 3-0 for G2 -First Bo5 win vs LPL in YEARS. Whenever this is brought up, the common response is "TES doesn't count, even China hates them lol"

KC vs TES First Stand 2025: KC 2-1 TES, same response

G2 vs BLG EWC 2025: 2-1 G2

G2 vs BLG Worlds 2025: 1-0 (G2 only won because BLG was the annual LPL crash out team, their jungler was EU, etc.)

And now First Stand 2026 : G2 vs BFX 3-0

What is always remembered about these series/games is that the Asian teams shat the bed, but no one brings up how, for example, in 2024 G2 played their own meta even when there was no fearless draft (Kog'Maw and Braum, Rek'Sai jungle, Ivern), or the insane plays like G2 Yike getting the only 4-man knock-up with Rek'Sai in League's pro history, or Caps' best LeBlanc game to date at EWC 2025

I hope G2 vs BFX 2026 doesn't turn into this where people just chalk up that win as "they only got lucky they played vs the worst international team LCK sent"

G2 stepped up and they deserve the praise

r/leagueoflegends Ryutso

First Stand - How are they, technically, doing the difference in pick vs side?

I tuned into the stream of the Semis and GEN has 1st pick on Red side and G2 has blue side, so G2 is on the left side of the graphic but picking 2 champs on first rotation. How is that functioning in the client? Hasn't Blue side usually been considered also having first pick?

Does GEN have blue side behind the scenes and then they quickly redraft during game load?

r/ClaudeAI SaltPomegranate4

Do you find that Claude makes a lot of assumptions?

A recent example is me saying I planned to drink champagne after I complete a year sober. It assumed I meant “a single, intentional, celebratory glass of champagne”. I told it I meant the bottle. It does this assuming thing a lot with me.

r/SipsTea No-Marsupial-4050

POV: You are hungry. Grandma: 🫴

r/SipsTea Trchickenugg_ohe

Press F to pay Respects

r/ProductHunters Wide-Tap-8886

What's the best tool to make AI UGC videos guys?

Looking for something to make videos for max $5 per video for my ecommerce or affiliate products

r/Adulting ResearcherOk9612

How do you stop chores and life admin from taking over every evening?

I'm in Ontario and feel like I've hit that part of adulthood where I work all day and then my whole evening disappears into keeping my life from falling apart.

After work it's always some combo of cooking, dishes, laundry, taking out recycling, answering emails, booking appointments, dealing with mail, cleaning the fridge, or figuring out next week's groceries. I track my spending closely and try to be budget-conscious, so I spend extra time comparing flyers, making lists, and trying not to waste food. By the time I want to do something I actually enjoy, like crafting or playing casual games, I'm exhausted and feel like I have to choose between rest and being a functioning adult.

For people who felt like they had no control over their time, what actually helped? I'm not after perfect productivity hacks, just realistic routines. Do you batch chores on one day, do a little every day, or set a hard cutoff time for admin stuff? How do you handle groceries and meal planning without it eating your whole week? If you live alone, did any system make weeknights feel less like nonstop maintenance?

Any routines or tips that worked for you would be really appreciated. Thanks.

r/TwoSentenceHorror mwcope

The astronomer stumbled away from his telescope when he realized what the true nature of the "new stars" was.

They were eyes, looking right back at him.

r/PhotoshopRequest -iatethelemontart-

Can anyone get rid of squatting person (free)

In the middle bottom

Will pay with gratitude!

Not urgent😁 just for fun

r/StableDiffusion GreedyRich96

How to improve detail with Flux Klein 9B + LoRA?

Hey guys, I'm using Flux Klein 9B with a specialized LoRA. It can generate male anatomy, but the details are not very sharp or defined; it looks kind of blurred compared to what I expected. I'm not sure if it's a sampler issue, LoRA strength, or something in the workflow. Does anyone know how to improve detail and sharpness in this case without breaking the overall image quality? Any tips would be really appreciated 🙏

r/ClaudeAI wesh-k

Built an open-source MCP bridge that connects Claude Code to your IDE

Cloud tasks run on Anthropic's servers. The bridge wires all of it to your real code editor tools. Everything reads and writes the same plan file, the same session log, the same decisions. No surface is isolated. Nothing drifts.

https://github.com/Oolab-labs/claude-ide-bridge

r/StableDiffusion Mr_Ma_tt

Built a local AI creative suite for Windows, thought you might find it useful

Hey all, I spent the last 6 weeks (and around 550 hours between Claude Code and various OOMs) building something that started as a portfolio piece, but then evolved into a single desktop app that covers the full creative pipeline, locally, no cloud, no subscriptions. It definitely runs with an RTX 4080 and 32GB of RAM (and luckily no OOMs in the last 7 days of continued daily usage).

https://preview.redd.it/qhvafyragdqg1.png?width=2670&format=png&auto=webp&s=a687d9c65e7ea7173bccdda426c22f590e8c2044

It runs:

* image gen (Z-Image Turbo, Klein 9B) with 90+ style LoRAs and a built-in CivitAI browser
* LTX 2.3 for video across a few different workflow modes
* video retexturing with LoRA presets and depth conditioning
* a full image editor with AI inpainting and face swap (InsightFace + FaceFusion), background removal, SAM smart select, and LUT grading
* SeedVR2, Real-ESRGAN, and RIFE for enhancement and frame interpolation
* ACE-Step for music
* Qwen3-TTS for voiceover with 28 preset voices plus clone and design modes
* HunyuanVideo-Foley for SFX
* a 12-stage storyboard pipeline
* a persistent character library with multi-angle reference generation

There is also a character repository, to create and reuse characters across both storyboard mode and image generation.

https://preview.redd.it/ys308jnegdqg1.png?width=2669&format=png&auto=webp&s=b1b1ef23814b193ac4e95b2cac4d869d53c5bd8e

https://preview.redd.it/c4nx2gtggdqg1.png?width=2757&format=png&auto=webp&s=ea7388165fd4424acc79e5c139584e3d92a611a5

There's a chance it will OOM (I counted 78 OOMs in the last 3 weeks alone), but I tried to build as many VRAM safeguards as possible and stress-tested it to the nth degree.

Still working on it, a few things are already lined up for the next release (multilingual UI, support for Characters in Videos, Mobile companion, Session mode, and a few other things).

I figured someone might find it useful, it's completely free, I'm not monitoring any data and you'll only need an internet connection to retrieve additional styles/LoRAs.

https://preview.redd.it/4o8k2uhjgdqg1.png?width=2893&format=png&auto=webp&s=0d8957bdd382b1b942ea727884c036b8a5b004ee

https://preview.redd.it/sbxd77bqgdqg1.png?width=2760&format=png&auto=webp&s=f65a29e2d7624f3a3eb420ad64506676202ac88d

The installer is ~4MB, but total footprint will bring you close to 200GB.

You can download it from here: https://huggingface.co/atMrMattV/Visione

https://preview.redd.it/qkce1kqsgdqg1.png?width=2898&format=png&auto=webp&s=95838223b023a8eb80ad42608de7fba26da84e30

r/OldSchoolCool Wise_Technician_3129

1950s (?) - A bit off topic but fits right in with us here at "MotorsportsGoodOldDays". California when it was the land of dreams and opportunity....what an amazing place it must have been....camping on the coast.

r/LocalLLaMA Intelligent_Lab1491

How do you bench?

Hi all,

I am new to the local llm game and currently exploring new models.

How do you compare the models in different subjects like coding, knowledge or reasoning?

Are there tools where I feed in the GGUF file, like llama-bench?

r/WinStupidPrizes Ok-Store

You break my bumper, I'm a whoop yo ass.....

r/therewasanattempt Lexidoge

to stop terrorism

r/LocalLLaMA Glum_Wind_9618

Running Local LLM on i3 4th Gen CPU

I have my old PC running Ubuntu 24.04 (LTS), and the PC specs are:

  • Intel Core i3 4130 4th Gen CPU
  • 16GB DDR3 RAM (1600 MHz) (2*8GB)
  • 256GB SATA SSD

No GPU installed; suggest some local LLM models that I can run on this potato PC.

Thank You.

r/ChatGPT iEmerald

What Personalization Options Are You Most Happy With?

What personalization options are you most happy with?

  • What's your Base style and tone set to?
  • What are your Characteristics set to?
  • What are you using as your Custom Instructions?
  • ... and finally, what did you set your More about you field to?

I'm looking for ideas so I can use as a starting point and modify it to fit my needs.

Thanks!

r/personalfinance Swimming_Slice3166

Expecting a baby soon. Should we pause extra mortgage payments to build a bigger cash buffer?

My partner and I are expecting our first baby in a couple months, and I want a simple plan so I stop overthinking everything.

Quick numbers:

- Combined gross income: about $150k

- Take-home after 401k (both getting full match), insurance, etc: about $8,200/month

- Mortgage: $2,350/month (30-year fixed, 5.9%, about $320k left)

- Other debt: none (no car payments, credit cards paid off each month)

- Emergency fund: $18k in a HYSA

- Expected new costs: childcare probably $1,600 to $1,900/month later this year, plus higher medical bills the month of the birth

Right now we send an extra $500/month to principal because it feels productive. With the baby coming and my partner more anxious about the what ifs, I am wondering if it makes more sense to pause the extra principal payments and beef up the HYSA for a while.

What would you do here?

- Keep paying the extra $500 on the mortgage because 5.9% is a decent guaranteed return?

- Or redirect that $500/month to cash until after the birth and once childcare is settled?

If the answer is to build a bigger emergency fund, how big would you aim for given our situation? 3 months, 6 months, or more?
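For scale, a quick back-of-envelope on the first year of that $500/month, comparing interest avoided at the 5.9% mortgage rate against interest earned in the HYSA. The 4% HYSA rate is an assumption, and taxes on HYSA interest (which tilt things further toward the mortgage) are ignored.

```python
# One-year comparison of 12 monthly $500 contributions: extra principal
# at the mortgage rate vs. parking the same cash in a HYSA.
# Assumption: HYSA pays 4%; the 5.9% mortgage rate is from the post.
MONTHLY = 500
MORTGAGE_RATE = 0.059
HYSA_RATE = 0.04  # assumed

def first_year_interest(monthly: float, annual_rate: float) -> float:
    """Interest earned (or avoided) on 12 monthly contributions."""
    r = annual_rate / 12
    balance, total = 0.0, 0.0
    for _ in range(12):
        balance += monthly
        total += balance * r
    return total

mortgage_benefit = first_year_interest(MONTHLY, MORTGAGE_RATE)  # ~$192
hysa_benefit = first_year_interest(MONTHLY, HYSA_RATE)          # ~$130
spread = mortgage_benefit - hysa_benefit                        # ~$62/year
```

The first-year spread is on the order of $60, which is why many answers here will say the liquidity of the cash matters far more than the rate difference while a baby is on the way.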

r/HistoryPorn myrmekochoria

Mass graves at the edge of Grozny where men deposit the hundreds of unidentified bodies found in the city following the Russian invasion and bombardments of December and January 1994-1995.[1200x786]

r/homeassistant ranglin

Detecting smart ring with Home Assistant

Hi All.

Looking for advice on how to use my home assistant loaded RPi to detect when my smart ring with Bluetooth (a ringconn) gets close to it. I’ve googled but it all seems quite complicated and I’m not sure why. Isn’t there a way to keep an eye out for BT devices and just fire an automation? Any advice much appreciated. Thanks!

r/Adulting keonnarae

What's your opinion?

r/LocalLLaMA Suspicious_Gap1121

Trained a GPT transformer from scratch on a $300 CPU — 39 minutes, 0.82M params, no GPU needed

Character-level GPT transformer built in PyTorch from scratch — pure architecture and training from zero. No fine-tuning, no pre-trained weights, no cloud compute.

Can be trained on $300 machine

Git hub repo : https://github.com/Eamon2009/Transformer-language-model

What I trained:

Parameters: 0.82M
Dataset: 201K characters of children's stories
Vocab size: 28 unique characters
Hardware: CPU only (AMD Ryzen 5)
Train time: 39 minutes
Best val: 1.3145, still improving at step 3000

Full training log:

[    0/3000] train=3.2961 val=3.2981 << best!
[  200/3000] train=2.3038 val=2.2490 << best!
[  400/3000] train=2.2469 val=2.1950 << best!
[  800/3000] train=1.9742 val=1.9103 << best!
[ 1400/3000] train=1.5889 val=1.5360 << best!
[ 2000/3000] train=1.4604 val=1.4081 << best!
[ 2600/3000] train=1.3501 val=1.3446 << best!
[ 2999/3000] train=1.3191 val=1.3145 << best!

Every single checkpoint improved. No overfitting at all — train and val loss decreased together the entire run.

Actual output the model generated:

one day and was arroom him that she rabbing animals the dreezed at neard had to there man owl them one smiled the mushrought boy he rabbit to havin after the but help 

Story structure learned. Character names learned. Narrative flow learned. Spelling breaks because the model works character by character — it learned that after fr comes i,e,n,d but sometimes gets the sequence slightly wrong. No concept of words, only character patterns.

What it got right vs wrong:

✓ Story structure → "one day...", paragraphs, narrative flow
✓ Character names → jack, tim, lucy, mary
✓ Sentence patterns → "he said", "she was", "they went"
✗ Spelling → "driendly", "mushrought", "surpring"
✗ Logic → sentences don't connect coherently

The architecture runs on any hardware:

batch_size = 16
block_size = 128
n_embd = 128
n_head = 4
n_layer = 4
dropout = 0.2

If you have a GPU, scale to 10.8M parameters by changing 4 lines in the config. The model hasn't hit its ceiling — val loss was still falling at step 3000. More data and more steps would directly improve output.

Highest impact next steps for anyone wanting to extend this:

1. Scale data to 1M+ characters (the TinyStories dataset is perfect)
2. Increase max_iters to 5000-10000
3. Larger model only after steps 1 and 2

Full training logs, output analysis, overfitting breakdown, and GPU config are in the repo.

r/personalfinance DisposableAdventurer

Options for Setting up a Travel Fund?

I'm interested in setting aside money to save for travel. My first thought is a basic savings account but I'm not sure if there are better options.

This would be pretty small value - under $10k - and I'd want to be able to access all of the money on demand.

Is a savings account the best option? I've heard some high yield accounts give really good returns but you have to maintain a certain balance, which seems inconvenient.

r/ClaudeAI qxzvy

Claude is exceptionally good with layered philosophy and the intent of writing.

I was having a conversation with a redditor yesterday about whether the soul exists or not. The replies I gave weren't paragraphs; they were single 5-6 word sentences, but they had paragraphs' worth of philosophy behind them.

When I asked Claude to analyse that conversation, it had some hiccups, but in the end it completely understood the intention, the subtext, and the things said between the lines.

When I asked Gemini 3 Pro to do the same, it fell flat on its face. I had to drag it through by constantly prompting "are you sure? Check again", and even then I had to lead it to the answer. Claude I just pointed in the right direction once, and it walked there on its own.

Claude is scarily good. This test probably won't be that apparent in benchmarks, but the ability of claude to truly "understand" is exceptional.

P.S. I can share the conversation if someone expresses curiosity about it.

r/StableDiffusion Dante_77A

Alibaba-DAMO-Academy - LumosX

LumosX: Relate Any Identities with Their Attributes for Personalized Video Generation

"Recent advances in diffusion models have significantly improved text-to-video generation, enabling personalized content creation with fine-grained control over both foreground and background elements. However, precise face-attribute alignment across subjects remains challenging, as existing methods lack explicit mechanisms to ensure intra-group consistency. We propose LumosX, a framework that advances both data and model design to achieve state-of-the-art performance in fine-grained, identity-consistent, and semantically aligned personalized multi-subject video generation."

This one is based on Wan2.1 and, from what I understand, seems focused on improving feature retention and consistency. Interesting that yet another group under the Alibaba umbrella is working on this.

And there you were, thinking the flood of open-source models was over. It's never a goodbye. :)

https://github.com/alibaba-damo-academy/Lumos-Custom/tree/main/LumosX

https://huggingface.co/Alibaba-DAMO-Academy/LumosX

r/SideProject Appropriate-Swan-151

I was tired of manually screening hundreds of candidates. So I built an AI to solve it, and now 100+ businesses use it

Hiring is broken, especially at the top of the funnel. A while back, I was spending hours every week just sifting through resumes and doing repetitive initial screening calls. Most of those calls ended up going nowhere. It was incredibly draining and took me away from actually running my business.

I realized there had to be a better way to filter candidates before the actual interview stage without completely losing the human touch. That is why my co-founder Tushar and I decided to build InterviewFlowAI.

The main pain point we wanted to solve was the massive time sink of initial screenings. InterviewFlowAI automates this process entirely. It conducts the first round of interviews, asks the relevant technical and behavioral questions, and gives you a clear summary of the candidate's actual skills. This means you only spend your valuable time talking to the most qualified people.

What started as a tool to save our own sanity has grown much faster than we expected. We are now incredibly proud to say that InterviewFlowAI is being used by over 100 businesses, ranging from small startups all the way up to enterprise companies.

It has been a challenging but rewarding journey building this out. I would love to hear how other founders and hiring managers are currently handling their pipeline. Are you still doing manual screenings, or have you found other ways to automate the process?

Happy to answer any questions about the product or the journey of building it

r/SipsTea Trchickenugg_ohe

Surprise

r/findareddit Hefty_Junket_8372

Is there a sub for smoking good erb while fiddling your tax return?

r/ChatGPT itslozfromoz

My ChatGPT just inserted Arabic into its response

I’ve only ever conversed with it in English, I don’t speak Arabic, I live in Aus.

r/DunderMifflin dog_day_god

thats so trueeee!!!!!

r/aivideo Trick_Bid5161

Everyone Is Glowing and I Don't Know Why

r/AbstractArt smugglersdaddy

Beach 1

r/ChatGPT JohnnyBlackRed

Confession time! What is the weirdest thing you did with GPT

For me: I took a picture of my stool and asked GPT to do an analysis. Why? I was just wondering what it would do with the question. And surprisingly enough, it gave me an analysis I could use.

r/personalfinance OkAccident5798

Car Loan Refinancing

I financed my car in Sept 2024 and I really want to look into refinancing. I am on a 72 month loan paying 560.50 a month with a 10.39% APR. (My documents show my APR @ 10.39% & my interest rate @ 10.39%, bit confused on that)

My current credit score is 713. My dad would co-sign with me if needed, his is 800+. 27 years old, when I got the car I was in a tougher financial spot. I’ve never missed a payment.

What should I look for when refinancing?

r/oddlysatisfying emilycopeland

200 hours of drawing in 90 seconds

r/SipsTea SnooMachines9334

Twerking and crying at same time is crazy

r/ClaudeAI Least_Assignment8790

Claude really does hit different, but how?

I’m one of those refugees from OpenAI and I just wanted to express my appreciation and gratitude for Anthropic and Claude. I’m not really into the coding side, though I get the impression that’s the main use case for many. I can’t comment on that but just the overall feeling of using Claude is night and day with the others. I’ve been trying to nail down exactly what it is and I think I would describe it as it just feels so warm. The “personality”, the color palette, a combination of things surely. What do y’all think, what makes Claude feel this way?

r/DunderMifflin unidentifies

Settle an argument (info inside)

When Andy is talking to Robert California about Stanley, he says “you may think he’s unlovable, but did you know that he’s in not one, but 2 long term committed relationships? And he has the best sales figures on the staff.”

And Robert says, “i did not know about the sales figures.”

Is the joke here:

That Robert DID know about the 2 long term relationships, or that “why are you telling me about the relationships, that’s irrelevant to me”

r/SipsTea Gjore

Repeats hourly

r/funny Agreeable-Zone1382

Emo dog

r/SideProject CulturalBirthday9166

Built a small tool to turn one idea into posts for multiple platforms — looking for feedback

Posting the same idea across platforms usually means rewriting it multiple times to match different styles.

Twitter needs shorter wording, LinkedIn prefers more structured writing, Instagram has a different tone, and Reddit reacts differently to marketing-style posts.

I built a small tool called CaptionCraft to test a different approach.

You write the core idea once, and it generates platform-adapted versions for Twitter, LinkedIn, Instagram, Reddit, and YouTube.

Instead of acting like a one-shot generator, it works more like a workspace where you can generate multiple versions, compare them, and refine before choosing one.

Probably most useful for people who post regularly across multiple platforms and want a faster way to adapt the same idea to each one.

Currently in early testing and iterating based on user feedback.

Would appreciate any feedback on the workflow or what feels missing.

It’s free right now — just trying to see if it’s actually useful.

https://www.captioncraft.net

r/Jokes Excellent_Regret4141

My dad paid a lady of the night to take my virginity, she said "Thy Will be done".....

Hi I'm Will

r/leagueoflegends RedditCensorss

Alright y’all, what is the funniest insult you ever received while playing league?

What is an insult you received from someone who was flaming you that actually made you laugh and were impressed by it?

r/PhotoshopRequest randomuser15980

Remove portrait mode request

Can someone unblur the background/make it look like it’s not portrait mode? The photo was sent to me from someone else’s phone and I can’t change the portrait mode settings

r/Adulting gorskivuk33

Free Yourself From Negativity

You suffer more because of your interpretation of reality than reality itself. A negative mindset finds negativity even in positive events. If you want to free yourself from negativity, you must change your mindset.

It’s a tough challenge, but it allows you to see the world clearly.

Your Thoughts Shape Your Reality- A fixed mindset seeks problems. A growth mindset seeks opportunities.
Interpretations Of Reality- We suffer primarily because of our interpretations of reality.
Neutral Thinking- See reality as it is.
Change Perspective- Don’t be subjective. Don’t take everything personally. See reality from a different perspective.
Don’t Be Too Sensitive- Build resilience.
Think Outside The Box- Be open and curious, not afraid and frustrated.
Don’t Only See Problems, See Opportunities- Use the difficulty.
Challenge Yourself- Go into uncertainty and the unknown. That is the place for growth.
Abandon Comfort- Comfort kills your spirit.
Don’t Be A Prisoner Of Negativity- Free yourself from negativity.

What is one negative thought you’re going to challenge today by taking action instead?

r/conan SYMPUNY_LACKING

''You're Wrong You're [Something] is Wrong''

Who was it that told Conan something along the lines of ''You're wrong. Your [something] is wrong'' in response to a claim Conan made? They both laugh hysterically, and Conan repeats what the guy said again in shock.

I know this is so RANDOM but...i know i can trust you Conan Heads

r/Art nearurheart

Duck in the lake, nearurheart, oil pastels, 2026

r/aivideo Ok-Mortgage-3079

AI-Generated Swimsuit Model Walking by the Pool | Realistic AI Technology Showcase

r/ClaudeAI Crazy_Mechanic6993

I built a self-driving AI company on Claude Code — it doesn't stop when you walk away

Most AI coding tools wait for your next prompt. Mine doesn't. I built AI Team OS — an operating system layer for Claude Code that turns it into a self-driving organization. The key insight: it never stops working just because you haven't given the next instruction.

**How it actually works:**

You're the Chairman. The AI Leader is the CEO. You set the vision, the system executes autonomously. When the CEO finishes a task, it doesn't sit idle waiting for your next command. Instead:

- It checks the task wall for the next highest-priority item
- If blocked on something that needs your approval, it parks that thread and switches to parallel workstreams
- It batches all strategic questions and reports them when you return — not interrupting you for every tactical decision

**The self-improvement loop is what makes it different:**

Once the initial feature set is delivered, the system doesn't stop. The R&D department activates:

1. Research agents scan competitors, market trends, community tools, and new frameworks
2. Findings get submitted to the decision layer
3. Multi-agent brainstorming meetings are organized (with structured debate — agents challenge each other)
4. Meeting conclusions become an implementation plan
5. Plan goes on the task wall for assignment and execution
6. Cycle repeats

This is how it built itself — the system organized its own innovation meetings, conducted competitive analysis across CrewAI/AutoGen/LangGraph/Devin, debated 15 proposals from 5 different perspectives, and shipped 67 tasks across 5 innovation features.

**Failed tasks don't just retry — they evolve the system:**

Every failure triggers "Failure Alchemy" — extracting defensive rules, training cases for future agents, and improvement proposals. The system literally develops antibodies against mistakes.

**Zero external API costs:**

This runs entirely within your Claude Code subscription. No OpenAI API calls, no external LLM costs. 100% utilization of your existing CC plan tokens. The MCP tools, hooks, and agent templates are all local.

**What you get:**

- 22 specialized agent templates (engineering, QA, research, management)
- 7 meeting templates based on Six Thinking Hats & DACI
- 40+ MCP tools, 31 behavioral rules with 4-layer enforcement
- React dashboard with live team status, decision timeline, task wall
- Failure learning, What-If analysis, intelligent task-agent matching

MIT licensed. Genuinely early stage — I'd love feedback on the self-driving methodology and whether this resonates with how you'd want to use CC.

GitHub: https://github.com/CronusL-1141/AI-company
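The never-idle task-wall behavior could be sketched roughly like this (all names here are hypothetical, not the repo's actual API):

```python
import heapq

# Illustrative sketch of the "task wall": the worker never idles.
# Tasks blocked on human approval are parked; the next-highest-priority
# unblocked task runs instead.
class TaskWall:
    def __init__(self):
        self._heap = []   # (priority, task); lower number = higher priority
        self.parked = []  # tasks awaiting Chairman approval

    def add(self, priority, task, blocked=False):
        if blocked:
            self.parked.append(task)
        else:
            heapq.heappush(self._heap, (priority, task))

    def next_task(self):
        return heapq.heappop(self._heap)[1] if self._heap else None

wall = TaskWall()
wall.add(2, "write tests")
wall.add(1, "fix login bug")
wall.add(0, "deploy to prod", blocked=True)  # needs approval -> parked
print(wall.next_task())  # highest-priority unblocked task: "fix login bug"
```

The parked list is what gets batched and reported back when the human returns, rather than interrupting on every blocked item.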
r/ClaudeAI MohmmedAshraf

I built symbiote for a problem I kept running into with Claude Code

Not because Claude is bad at all... I actually like using it a lot, but because after a while I noticed I was spending too much time re-steering it.

Yes, there's CLAUDE.md. Yes, there's memory. And both help a lot.

But they're fundamentally passive... they store what you tell them. They don't detect patterns from your actual code, track how consistently rules are followed, or build up a structured understanding of how you work over time.

In real projects, I still kept hitting the same issue: I'd define conventions, architectural decisions, patterns to follow, things to avoid, even explain why certain choices were made… and then later I'd still find myself repeating some of it again.

Sometimes it would drift. Sometimes it would ignore part of the context. Sometimes it would make the same kind of mistake I had already corrected before.

That's what pushed me to build Symbiote.

It runs as an MCP server that plugs directly into Claude Code, a layer on top.

The core idea is pretty simple:

I don't think every new project should feel like a cold start.

Even if the repo is brand new, the developer isn't. You already have habits, preferences, patterns you trust, tradeoffs you tend to make, structure you like, things you absolutely don't want, and a general way of thinking about code.

That context usually gets lost.

So Symbiote tries to solve that from two sides.

First, it builds a codebase graph so Claude can understand the actual project structure, not just files as isolated chunks, but what connects to what, what affects what, where patterns repeat, and how changes in one area ripple through others.

That alone helps a lot, because instead of brute-force searching through files, Claude gets precise, targeted context about dependencies, impact, and structure. Fewer search round-trips, more relevant context per request.
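The "what affects what" idea can be sketched as a toy reverse-dependency walk (hypothetical file names and data, not Symbiote's actual graph format):

```python
from collections import defaultdict, deque

# Toy codebase graph: file -> files it imports.
deps = {
    "routes.py": ["auth.py", "db.py"],
    "auth.py": ["db.py"],
    "reports.py": ["db.py"],
}

# Invert it: file -> files that depend on it.
dependents = defaultdict(set)
for src, targets in deps.items():
    for t in targets:
        dependents[t].add(src)

def impacted_by(changed):
    """Which files are (transitively) impacted if `changed` changes?"""
    seen, queue = set(), deque([changed])
    while queue:
        for d in dependents[queue.popleft()]:
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return sorted(seen)

print(impacted_by("db.py"))  # every file that ripples from a db.py change
```

Answering this from a prebuilt graph is the "fewer search round-trips" point: the impact set comes from one lookup instead of grepping the whole repo.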

Second, it builds what I call Developer DNA.

Basically, over time it learns how you tend to build things: your coding style, preferred patterns, recurring decisions, rules you follow, rules you reject, and the reasoning behind them. It detects these patterns from your actual code, tracks confidence, and auto-promotes rules as it sees them reinforced. You can even export your DNA profile and share it, or import someone else's.

The part I care about most is this:

When you start a new project, it's not a true cold start anymore.

The project itself may be new, but your engineering style isn't. Your preferences aren't. Your decision-making isn't. DNA profiles live globally across all your projects, so Claude comes in already knowing how you work then keeps learning from that specific project as it evolves.

It's open source and still in beta, but I wanted to share it here because I'd love early feedback from the community. What would make this useful for your workflow?

Repo: https://github.com/MohmmedAshraf/symbiote

r/SipsTea Trchickenugg_ohe

Gym bros will understand

r/SideProject Upstairs-Visit-3090

I built an email deliverability checker in my spare time. Looking for honest feedback.

Hey everyone, I’ve been working on a side project called InboxGuard. It’s a simple tool that tells you if your email is likely to go to spam.

You paste your email, and it gives:

  • A score (0‑100)
  • Inbox vs spam probability
  • A list of specific issues (spam phrases, authentication problems, etc.)
  • Suggestions to fix them
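For a rough idea of what one of those checks might look like, here is a toy spam-phrase scorer; the phrase list and weights are made up, not InboxGuard's actual rules:

```python
# Toy deliverability check: penalize known spam-trigger phrases.
# The phrases and weights here are invented for illustration.
SPAM_PHRASES = {"act now": 15, "100% free": 20, "risk-free": 10, "click here": 10}

def spam_phrase_score(email_text):
    """Return (score 0-100, higher = cleaner) and the list of issues found."""
    text = email_text.lower()
    issues = [(p, w) for p, w in SPAM_PHRASES.items() if p in text]
    penalty = sum(w for _, w in issues)
    return max(0, 100 - penalty), [p for p, _ in issues]

score, issues = spam_phrase_score("Act now for our 100% free trial!")
print(score, issues)  # 65 ['act now', '100% free']
```

A real tool would combine checks like this with SPF/DKIM/DMARC authentication results, which is presumably where the "authentication problems" item above comes from.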

It’s still rough around the edges, but I’d love to hear from fellow side project builders:

  • Is this something you’d use before sending cold emails?
  • What’s missing?
  • How should I improve the user experience?

You can try it at https://inboxguard.me/ – feedback welcome. Thanks in advance!

r/BrandNewSentence HolyFatherLeoXIV

Eating chicken in a dark shower to see how a T-Rex hunted during thunderstorm in 100 B.C

r/AI_Agents Livid-Cellist1182

Are “monitoring agents” actually useful, or just automation with a fancy name?

i’ve been thinking a lot about where the line is between automation and actual AI agents.

a lot of tools being marketed as “agents” seem to just be workflows with an LLM attached. they run scripts, call APIs, and execute steps but they don’t really adapt or reason much.

recently i tried something interesting though: AyeWatch. the idea is basically an agent that monitors the web for specific signals (topics, discussions, news, etc) and sends alerts when something relevant shows up.

in practice it feels like a lightweight “watcher agent” you give it a goal (track X topic) and it continuously scans sources and notifies you when something important appears.

which made me wonder:

does this actually count as an AI agent, or is it still just automation with monitoring logic?

from what i understand, a true AI agent should be able to perceive its environment, make decisions, and act toward a goal autonomously rather than just follow fixed commands.
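That perceive/decide/act definition can be sketched as a minimal loop (all names below are hypothetical):

```python
# Bare-bones agent loop: perceive -> decide -> act, repeated until done.
# What separates this from a fixed workflow is that `decide` chooses the
# action from the current state, rather than running a hardcoded sequence.
def run_agent(goal, perceive, decide, act, max_steps=10):
    state = {"goal": goal, "done": False, "observations": []}
    for _ in range(max_steps):
        state["observations"].append(perceive(state))
        action = decide(state)  # decision step, not a fixed script
        act(state, action)
        if state["done"]:
            break
    return state

# Toy environment: observe until enough observations accumulate, then stop.
state = run_agent(
    goal=3,
    perceive=lambda s: len(s["observations"]),
    decide=lambda s: "stop" if len(s["observations"]) > s["goal"] else "continue",
    act=lambda s, a: s.update(done=(a == "stop")),
)
print(state["done"], len(state["observations"]))
```

By this framing, a monitoring tool with a fixed scan-and-alert script sits on the automation side; it becomes more agent-like as the decide step starts weighing observations against the goal.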

curious how people here define the boundary between:

  • automation workflows
  • monitoring tools
  • real “agentic” systems

what would make something like this a true agent in your opinion?

r/creepypasta shortstory1

I'm a police officer and I let off a guy for speeding as he is trying to go back in time

I'm a police officer and I stopped a man for speeding; his name is Dygo. The man told me that he murdered his 2 room mates because he just got pissed off with living with them. So now he is trying to drive as fast as he can, so that speed can take him back in time. As I looked at his desperate face, I instantly felt sorry for him. I allowed him to go speeding around in the hopes that he could go back in time to prevent himself from murdering his 2 room mates. So I saw him speeding around the area.

As a police officer it was hard to ignore this guy speeding around, but I knew he had good intentions in doing so. When I visited the house where he murdered his 2 room mates, the front door was open. As I entered the house, the 2 dead, rotting room mates stood up and just stared at me. It smelled horrific as well. Then I went out of the house, and I saw Dygo speeding again, trying to go back in time to stop himself from murdering his two room mates.

Then Dygo caused a multiple-car pile-up involving 5 cars. I told him to drive off, and he sped away before any more police cars showed up. I am doing my best to cover up Dygo's tracks. I keep seeing him speeding, trying to go back in time, but instead of going back in time he crashes into people and damages other cars. I have had many private conversations with Dygo, and he assures me that he will figure out a way to go back in time.

I reminded him that he had already hurt so many people and caused so many crashes. So I gave him one last chance, and when Dygo crashed into another car, killing the driver instantly, he begged me for one more chance. All I could do was arrest him; he had gone too far. All he kept saying was that he could go back in time if I let him speed around in his car, but he has failed to do so on so many occasions.

Dygo kept pleading with the other officers to let him go and let him try to go back in time by speeding in his car.

r/leagueoflegends FrigPolitics

How does swiftplay matchmaking work?

I'm a terrible B1 Kayn getting stomped by a plat Rengar every other game. I'm not making this post to vent or anything, I'm just curious how lobbies so imbalanced are the norm in swiftplay (at least in OCE).

Every single game will have 2 people that aren't even level 30 yet up against gold/plat OTPs with 500k mastery with no balance in terms of the ranks, levels, or mastery of the teams themselves. It feels antithetical to swiftplay as a game mode since it's intended to cater to newer players learning the game, and to people trying to pick up new champions they want to learn.

I get that the pool of players is limited, but why doesn't the matchmaking prioritise at least trying to balance the ranks/levels across the teams? Does anyone know why they don't fix this when ranked lobbies are 100x better at doing so? It seems like such a simple thing which could be implemented at any time and would require next to no investment of resources on Riot's end. I'm hoping it's not just to speed up queue times, but I could understand that justification from Riot even if I think it's a terrible reason to avoid fixing this kind of thing.

r/oddlyterrifying theclickhere

This Easter bunny photo op at a mall in Michigan

r/Art Confident_Mud894

sketch of a sculpture, fenya, graphite pencil ,2026

r/SideProject RamaMohanP

I built a Code Premier League - “IPL for developers” — Play with your coding knowledge - swipe like Tinder, battle like IPL.

Curious what the developer community thinks — would something like this actually be fun to play? It's a fun experiment, and I'd like to hear from you.

The idea is simple: turn your coding knowledge into something you can play.

It’s all about having fun with coding powers — swipe like Tinder, guess like Wordle, or battle like IPL.

🏏 Play like a cricket match

👉 Swipe Code Challenges like Tinder

🎮 Solve like a game

So I built this as a fun experiment:

**CodeRush — Kingdom of Code 👑**

A fast-paced developer game where coding becomes something you *play*

Instead of syntax questions, you face real-world dev decisions:

- bug or safe?

- optimize or overkill?

- retry or risky?

---

🏏 Code Premier League (CPL)

A cricket-style mode where:

- Each question = 1 ball

- Fast answer = 6 runs

- Wrong answer = wicket

I just played a perfect over → **36/0** 😄

---

💡 Why I built this:

I wanted something that feels like:

👉 a game

👉 a competition

👉 something you actually come back to

---

🎮 Try it here (2 mins, no signup):

https://www.reddit.com/r/coderush_playwithcode/comments/1rym4wj/coderush/

---

Curious what the dev community thinks:

👉 Would you actually play something like this regularly?

👉 What felt fun / confusing / boring?

---

Looking for early players + honest feedback 🙌

r/SipsTea beklog

1 year jail << $1M

A First Bank customer, Ojo Eghosa Kingsley, became involved in a major controversy after the bank mistakenly credited his account with ₦1.5 billion. Instead of notifying the bank and returning the funds, he diverted the money for personal use between June and November 2025.

He was arrested by the Economic and Financial Crimes Commission (EFCC) in Benin City, and on 19 January 2026, he was arraigned before the Edo State High Court. Kingsley was charged with theft and fraud under the Edo State Criminal Law (2022).

Upon the reading of the charges, he pleaded guilty without hesitation. His counsel pleaded for leniency, citing his expression of remorse. However, the court delivered the following sentence:

  • One year imprisonment, or an option of a ₦5 million fine; and
  • An order to refund the outstanding balance of ₦272,252,193.59 to First Bank.

Prior to the judgment, the EFCC had recovered ₦802,420,000 from Kingsley’s account, as well as from the accounts of his mother and sister, while the bank successfully reversed transactions totaling over ₦300 million.

In a dramatic turn of events, Kingsley informed the court that he preferred serving a prison sentence to refunding the remaining amount.

In effect, he openly chose incarceration over the repayment of ₦272 million.

r/Adulting Unhappy-Mud-783

Feel like quitting and giving up, no will to live now.

Hi, this will be long, and I will accept it if it's my fault. I have been in a relationship for 12 years; we are both 28 now. We are from different castes. He has a single mother, and all the responsibilities of his house are on him (but he is financially sound and has a good job). I have a very strict, conventional father. He told his family; his mother agreed after a year of convincing, and for two years they have wanted my confirmation. My brother got married in 2024 and I could not tell my father then, and last year my father got severely sick, so I could not tell him then either. My mother knows, but you know how Indian mothers are; she cannot cross my father. His mother gave me an ultimatum to tell my family within a month; they have a family friend, and his mother is getting him married to their daughter. These people have been waiting for two years, and my bf and his mother never refused them. Now his mother says that only if my father agrees and gives his confirmation to the marriage will she say no to them. I am lost. I basically grew up with him and did everything with him. He is also broken, as he only has his mother. He has also given up. I feel like ki\*\*\*\* myself. I don't know how to live without him; I cannot get married to anyone else. I am unable to hate him; I hate myself. The thought of him marrying someone else is breaking my heart and soul.

r/DunderMifflin SmartBoy_111

This scene is so funny. It had me rolling for five minutes straight.

Just finished season 8 after a long hiatus. It is good ngl. I should have watched it after I finished season 7. It had a good number of funny moments. It might not be as great as its predecessors but it is still fine. Some moments are lame but some are really good. For people who are skeptical about watching Season 8, just give it a try. It might be boring at the start but it gets better.

r/SipsTea The_Dean_France

The TYLER PERRY actor...

r/Art Nikoyty

Portrait, nikoyty, digital, 2026

r/SipsTea The_Dean_France

Impressive pay day!

r/PhotoshopRequest Eeblelbee

Please remove shadow

Hi there! Please remove the shadow from the woman in the right’s face! Please keep everything else the same! Will tip $20, no AI please 💗

r/ChatGPT Early_Yesterday443

Garbage in, garbage out

r/SipsTea Trchickenugg_ohe

White Monsters

r/homeassistant hometechgeek

Update on the esp32 based Immich frame

As a follow up to my initial release last week, about building an esphome based photo frame for Immich, I’ve been improving and adding some features…

  • Display tone adjustment: Adjust the colour temperature of the display (to me, it’s a little blue, and the adjustment really helps warm the images up)
  • Night tone: Adjusts the screen tone between sunset and sunrise.
  • Improved photo sources: Support for memories, people, albums, favourites or all.
  • Screen scheduling: Scheduling when to turn off the display and controls for daytime and night time brightness levels.

If you’re interested in trying it, here are the docs and GitHub links, plus a link to the £40 esp32 display I used.

Love to hear feedback from anyone who’s tried it. Thanks!

r/aivideo machina9000

Forever Plan

r/EarthPorn Time-Maintenance8742

Kolob Canyon, Utah [OC][2500x1742]

r/SipsTea Expert_Ad8640

Interesting ....

r/interestingasfuck Ahmanetxed

Storm imminent at the beach

r/SipsTea Trchickenugg_ohe

Processing

r/Jokes Wayne-De-Payne

My wife tried to teach me a lesson for coming home late…

A woman, upset that her husband came home late from golf again, decided to teach him a lesson.

She left a note on the dresser that read:

“I’ve had enough. I’m leaving you. Don’t try to find me.”

Then she hid under the bed to watch his reaction.

Soon, her husband came home. She heard him moving around, then he entered the bedroom, picked up the note, and read it.

After a moment, he scribbled something on the paper and made a phone call.

“She’s finally gone. Yep, it’s about time. I’m on my way now,” he said happily.

He grabbed his keys and left.

Furious, the woman crawled out from under the bed, grabbed the note, and read what he had written:

“I can see your feet. We’re out of bread. Be back in five minutes.”

r/ClaudeAI Life_Vegetable_5442

I tested Claude's 3 models on the same coding task — here's what actually matters

I've been using all three Claude models (Haiku 4.5, Sonnet 4.6, Opus 4.6) daily for months and wanted to share what I've learned about when to use each. Most comparisons focus on "which is smartest" — that's the wrong question.

The real difference is reasoning depth, not intelligence.

I gave all three the same task: refactor a 400-line Express.js backend to use proper middleware patterns and add input validation. Here's what happened qualitatively:

Haiku 4.5 handled the straightforward parts — extracting middleware, adding express-validator. But it missed a subtle dependency between two middleware functions where order mattered. For simple, well-defined refactors, it's perfectly fine.

Sonnet 4.6 caught the middleware ordering issue and restructured the error handling chain correctly. It also added TypeScript types unprompted, which was actually helpful. This is my default for daily development work.

Opus 4.6 did everything Sonnet did but also flagged that my auth middleware was checking permissions after the route handler had already accessed the database — a security issue I'd missed for months. This is the "thinks deeper" advantage you're paying for.

The pricing math is what matters though:

- Haiku: $0.25 input / $1.25 output per million tokens

- Sonnet: $3 / $15 per million tokens

- Opus: $15 / $75 per million tokens

That means Opus costs 60x more than Haiku per token. For a task where Haiku gets it right, using Opus is literally burning money.
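As a quick sketch, the cost claim checks out against the listed prices (example token counts are illustrative):

```python
# Cost comparison using the prices listed above (USD per million tokens).
PRICES = {  # model: (input price, output price)
    "haiku":  (0.25, 1.25),
    "sonnet": (3.00, 15.00),
    "opus":   (15.00, 75.00),
}

def cost(model, in_tok, out_tok):
    """Dollar cost of one call with the given token counts."""
    pin, pout = PRICES[model]
    return (in_tok * pin + out_tok * pout) / 1e6

# Example call: 100k input tokens, 10k output tokens.
for m in PRICES:
    print(f"{m}: ${cost(m, 100_000, 10_000):.4f}")

print("opus/haiku ratio:", PRICES["opus"][0] / PRICES["haiku"][0])  # 60.0
```

The same 60x ratio holds on the output side ($75 vs $1.25), so the multiplier applies regardless of the input/output mix.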

My framework after months of real usage:

- Haiku → batch operations, data transformation, classification, anything repetitive across many calls

- Sonnet → daily coding, feature work, code review, 90% of tasks

- Opus → architecture decisions, security review, complex debugging where missing something costs you hours

The biggest unlock was stopping the habit of always using the "best" model. Matching model to task complexity cut my API costs by ~70% with no quality loss on the tasks that matter.

One more thing: all three models now support extended thinking, but it makes the biggest difference with Opus on complex reasoning tasks. For Haiku it barely changes the output.

What model do you default to, and have you experimented with switching based on task type?

r/me_irl Grundsicherung

me_irl

r/ClaudeAI Diligent_Comb5668

Simple rust CLI that detects bad boxes drawn in unicode.

Okay, so what annoyed me into creating this is that LLMs have trouble with ASCII diagrams, and making those yourself is really painful. So instead of wasting tokens on prompting them better, or creating yet another markdown file, I figured I'd create a sort of linter for it that is recognized as an MCP server. It's called boxdraw (couldn't think of anything better, lol). So here it is.

https://github.com/Axobello/boxdraw/

I had a script lying around for this in a monorepo for a while and figured I'd open it up, since it is more efficient to have it as a crate instead of junking up the monorepo.

So, to put it simply, this is what it does: after running the boxdraw lint command, it detects faulty unicode diagrams like this one:

```

┌──────────────

│ No ┐ here │

└──────────────┘

```

And then the LLM can iterate on it until it has been fixed. For me it has been more token efficient: I used to just prompt with a screenshot or paste the bad ASCII diagrams; this script detects them and returns an output like this:

ARCHITECTURE.md:46:52 [error] box-width: bottom width (25) differs from top width (26) at column 27 

Which consumes way fewer tokens and still points the LLM in the right direction.
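The core of that check is simple enough to sketch. This is not the crate's code (boxdraw is Rust; this is a Python illustration of one of its lint rules): pair each top border with the next bottom border and compare widths.

```python
# Illustrative sketch of the box-width check boxdraw performs.
# (The real tool is the Rust crate linked above; this is not its code.)

def lint_boxes(text: str) -> list[str]:
    errors = []
    top = None  # (line_no, width) of the most recent top border
    for i, line in enumerate(text.splitlines(), start=1):
        s = line.strip()
        if s.startswith("┌"):
            top = (i, len(s))
        elif s.startswith("└") and top is not None:
            top_line, top_width = top
            if len(s) != top_width:
                errors.append(
                    f"line {i}: box-width: bottom width ({len(s)}) "
                    f"differs from top width ({top_width})"
                )
            top = None
    return errors

# The broken diagram from the post: the top border is missing its ┐ corner.
bad = "┌──────────────\n│ No ┐ here │\n└──────────────┘"
print(lint_boxes(bad))
```

A real linter would also check side borders, junction characters, and column positions, but width mismatch alone already catches the most common LLM drawing failure.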

r/personalfinance Suitable-Candy-6424

Looking for some advice on how to manage money more appropriately

For context, I'm a 22M about to turn 23. I currently have $10k in a HYSA, $400 in a Roth IRA, and around $4k in my checking account. My total school loans are $42k with a $297 minimum payment (the loans range from 3%, 4%, 6%, and one at 7%). The 7% loan is around $14.5k and the 6% one is $5k. Right now my plan is to make minimum payments on the 7%, then pay it off in full by December. I'm also doing a master's in engineering, and a full semester of classes is around $4.5k (which is why I haven't touched my HYSA, since it's for emergencies). Another of my goals is to max my Roth IRA, which is $7.5k. Is this a good plan so far?

*Also, I live with my parents and my spending is around $400 a month.
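The "kill the 7% loan first" plan can be sanity-checked with rough annual-interest math. Only the two balances quoted in the post are shown (the 3%/4% balances aren't given), and this ignores amortization — it's just the first-year carrying cost of each loan:

```python
# Rough annual interest on the two loans quoted in the post.
loans = {"7% loan": (14_500, 0.07), "6% loan": (5_000, 0.06)}

for name, (balance, rate) in loans.items():
    # Simple interest for one year at the stated rate.
    print(f"{name}: ~${balance * rate:,.0f}/yr in interest")
```

The 7% loan costs over three times as much per year as the 6% one, which is why attacking it first (the "avalanche" approach) is the standard recommendation.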

r/ChatGPT Mustbefree0

Why is ChatGPT still useless at the most basic tasks so many years later?

Just bought a ChatGPT subscription and I'm stuck on the same problems as in 2023. "Take this text and remove the uhms and ahs without changing any other content" (with a hyper-specific prompt, with rules and everything). Bam, it gives me back half the text because it edited out half the content. Endless fighting.

"Here, take this glossary and put it in a list." Oh sorry, I removed half the terms and explanations because fuck you.

How is ChatGPT still so awful at the most basic tasks? I can't trust it to do anything, because not only does it make mistakes, it literally fails at the simplest things in the world.

For context, for anyone who wants to whine that it's me: I've spent a few thousand hours on LLMs and built 2 businesses around them. I've put in my time.

r/painting rancidbarbie

Acrylic painting I did last year

Did art in school about 6-7 years ago but wanted to try getting back into it! I used a CMY colour model as well as ochre (and black and white of course).

Would love some feedback please! 🙏

r/Seattle CanISniffYourLimes

Community Mediation/Conflict Resolution services?

I have a need to find a community/peer mediator to help navigate conflict between members of my community. The issue is not legal, it is personal, but it needs to be worked through with the help of a non-associated third party. The issue has occurred between multiple people in a large friendgroup/community and it has reached the point where anyone we would have called on for mediation is going to be accused of biases.

Yeah, it sucks. I know. But shit happens.

I haven’t really been able to find what I’m looking for, search results just give me lawyers (don’t need THAT level of mediation) or classes on becoming a mediator.

Any help or am I out of luck?

Edit- the well worded messages in my inbox about “growing up” are hilarious but that’s exactly why I’m here, it’s my sentiment too and why I am very biased in this issue. Why do I care? Because as a group, we all share money and resources and if this doesn’t resolve it’s probably going to need actual legal mediation. Hope that helps. Or continue to roast me. Frankly I deserve it.

r/SideProject Present_Reindeer1427

I built a mental health app for people with serious conditions — 30+ evidence-based tools, launching on iOS/Android soon

Hey r/SideProject — sharing my side project because I think it solves a real gap.

MoodStead is a mental health tracking app designed for people managing serious conditions: schizophrenia, bipolar, BPD, PTSD, depression, ADHD, and more. Not a generic "rate your mood 1-5" app — it has CBT thought records, behavioral activation, injection site tracking, a Stanley-Brown safety plan, reality check tools, voice journaling, and AI-powered insights.

Built it because I live with a mental health condition and nothing out there was designed for people like me. The web app is live. iOS and Android are currently in review at the stores.

We're doing an early adopter offer — all features free forever for anyone who joins before April 1, 2026.

Happy to answer questions about the build, the market, or the product.

r/PhotoshopRequest Artistic-Praline-578

Grandparents

One of the few photos I have of my grandparents, and would love to have this refinished and colorized for my dad! $15 tip.

r/leagueoflegends Artarushu

Shyvana - the Sovereign of Shadow! | Sharp Jungle Diff!

r/LocalLLaMA Binqta

Tried to build a local voice cloning audiobook pipeline for Bulgarian — XTTS-v2 sounds Russian, Fish Speech 1.5 won't load on Windows. Anyone solved Cyrillic TTS locally?

Hi Everyone,

I just tried this with the help of Claude because I am not so familiar with CMD, PowerShell, etc.

Tried to build a local Bulgarian audiobook voice cloner — here's what actually happened

Spent a full day trying to clone my voice locally and use it to read a book in Bulgarian. Here's the honest breakdown.

My setup: RTX 5070 Ti, 64GB RAM, Windows 11

Attempt 1: XTTS-v2 (Coqui TTS)

Looked promising — voice cloning from just 30 seconds of audio, runs locally, free. Got it installed after fighting some transformers version conflicts. Generated audio successfully.

Result: sounds Russian. Not even close to Bulgarian. XTTS-v2 officially supports 13 languages and Bulgarian isn't one of them. Using language="ru" is the community workaround but the output is clearly Russian-accented. Also the voice similarity to my actual voice was poor regardless of language.

Attempt 2: Fish Speech 1.5

More promising on paper — trained on 80+ languages including Cyrillic scripts, no language-specific preprocessing needed. Got it installed. Still working through some model loading issues on Windows.

What made everything harder than it should be:

The RTX 5070 Ti (Blackwell architecture) isn't supported by stable PyTorch yet. Had to use nightly builds. Every single package install would silently downgrade PyTorch back to 2.5.1, breaking GPU support. Had to force reinstall the nightly after almost every step.
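A small guard script helps catch that silent downgrade before wasting another run. This is a sketch I'd add after every pip install — the ".dev" convention matches how PyTorch nightly wheels are versioned, but the exact version strings here are examples, not guarantees:

```python
# Guard against the silent downgrade described above: a stable PyTorch
# (e.g. 2.5.1) replacing the nightly build that Blackwell GPUs need.

def is_nightly(version: str) -> bool:
    # Nightly wheels carry a ".dev" segment, e.g. "2.7.0.dev20250310+cu128".
    return ".dev" in version

def check_torch() -> str:
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if not is_nightly(torch.__version__):
        return f"WARNING: stable torch {torch.__version__} - GPU support may be broken"
    if not torch.cuda.is_available():
        return f"nightly torch {torch.__version__} installed, but CUDA is unavailable"
    return f"OK: torch {torch.__version__} with CUDA"

print(check_torch())
```

Running this right after any `pip install` step makes the downgrade visible immediately instead of surfacing later as a cryptic CUDA error.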

Bottom line so far:

There is no good free local TTS solution with voice cloning for Bulgarian right now. ElevenLabs supports it natively but it's paid beyond 10k characters. If anyone has actually solved this I'd love to know.

I appreciate any help or suggestions on what software I can use to create my own audiobooks with a good-sounding cloned voice.

I also tried ElevenLabs, but they want so much money for creating one small book; I can't imagine what a 1000-page book would cost.

It's all for personal use. Not selling or sharing.

Thanks a lot. x.o.x.o...

r/Weird mrcrocodeal

This mannequin graveyard in Den Haag, Netherlands

I was walking along the cold beaches with my camera and came across this very strange scene.

r/comfyui thatguyjames_uk

ok, see you all in a few hours, workflow 1 click training day

https://preview.redd.it/7iyjxeqkmdqg1.png?width=2323&format=png&auto=webp&s=f5ec2287e643d6ea593d195e4aac86afb22ac873

After a few hours of cleaning up the node names and save names, I'm going to be doing some new images this weekend for use with a new LoRA.

My last weekend's work was reported as "good for one go, and it should take over 4 to get the results I got in OpenAI", but they agree with me that at least 5 more training sets are needed, using the newly trained LoRA each time, to get better skin etc.

The workflow above was to try to get better head-to-toe shots, including feet (it took 4 hours of playing with settings and LoRA strength). I had to use a fully naked pic to get the best feet, and then add clothes.

https://preview.redd.it/6h6qt8yjndqg1.png?width=233&format=png&auto=webp&s=76a639f322701afb9936d0510efb9a3543ba403d

Do people think I should look for a feet LoRA to use with the workflow?

r/ClaudeAI throwaway490215

Do the hard thing first

I've used Claude to build a rather complicated document editor, and I've shot myself in the foot a few times now by leaving too much of the design up to Claude. I just realized the common pattern that leads to complexity soup and the subsequent implosion.

Make sure you have it do the hard thing first.

If left to its own devices, it will create plans with incremental phases: first the simple thing, then build out. This is going to bite you in the ass.

Models get significantly worse the more context is lying around, and having them debug a complicated piece when the code is already complicated is going to fail at best, or create hacky workarounds at worst.

Make sure your most complicated use case works, then have it draw the rest of the owl.

r/SideProject Friendly-Abalone-255

Does marketing your SaaS feel overwhelming or am I doing it wrong?

There are so many platforms now:

TikTok
Reels
Shorts
X
LinkedIn
Reddit

Feels like you should be everywhere… but realistically it’s impossible to keep up.

How are you dealing with this?

Trying to do everything?
Or just focusing on one channel?

r/PhotoshopRequest onyx-raptor

Grandma 🥰

Hi Everyone,

Hoping to get this pic of my grandma redone. Removing the frame and glare at the lower left side of the photo would be wonderful. Apparently she was a no nonsense kinda person until I came around 😂🥰, and I miss every bit of her… Happy to tip and thank you for your help!

r/ClaudeAI Just-One-2387

Using Claude to spread accessible microtonal music education to a wider audience

Here's a typical prompt I'll use for these kinds of articles, and I'll also attach a PDF containing all the sources I want Claude to use: "Write a detailed blog article about [subject matter] geared towards musicians with little knowledge about microtones, but good knowledge about regular music theory. Use the attached PDF as your main source, but feel free to search the web if you need further info. Use plain simple formatting that's easy for me to directly copy-paste into Blogger or Medium. Link to the Xen Wiki articles for concepts whenever you can. You won't be able to access Xen Wiki because it blocks AI scrapers, but it's easy to figure out what the URL should be; it is always: https://en.xen.wiki/w/Subject_matter"
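The URL rule from that prompt is simple enough to encode as a helper. A sketch under the stated assumption that Xen Wiki article URLs are just the title with spaces replaced by underscores (the function name is mine):

```python
# Build a Xen Wiki URL from an article title, per the rule in the prompt:
# the path is the title with spaces replaced by underscores.
def xen_wiki_url(title: str) -> str:
    return "https://en.xen.wiki/w/" + title.replace(" ", "_")

print(xen_wiki_url("Equal temperament"))
```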

r/Adulting FancyCompetition4205

I feel so behind in life because some people are getting married and having kids while others are studying and building their future, and I’m just stuck

I feel so behind in life because some people are getting married and having kids while others are studying and building their future, and I’m just stuck

I look at people my age and it feels like everyone is moving forward in some direction

some are already married, some even have kids, living together, building a life
others are studying, getting degrees, planning careers, figuring things out

and then there’s me

I’m 22, living with my mom, working a basic job and just trying to get through the days
no relationship, no real social life, no clear direction

I feel like I’m not even on the same timeline as everyone else

the thing is I’m not some weird or antisocial guy
at work I’m talkative, I joke around, people like me
I can connect with people on a basic level

but outside of that I’m completely stuck

I don’t go out, I haven’t for a long time
I don’t meet new people, I don’t build anything
it’s like my life just stops after work

and I don’t even know how to change it
I don’t know how to go from talking to someone at work to actually asking them to hang out
it feels like there’s some barrier in my head that I just can’t break

part of it is my past
I didn’t really have a normal teenage life, I was dealing with stuff that kind of overwhelmed me
and now I’m trying to catch up but I feel lost

it’s frustrating because I’m not doing nothing
I’m working, I’m trying to improve myself, I’m moving forward slowly

but compared to others it feels like I’m way behind

and yeah, seeing people my age living normal lives, dating, studying, starting families
while I can’t even get myself to go out and meet someone
it really gets to me

I just don’t know how to take that first step and actually change things

r/SipsTea AH3Guam

Shadows while stuck in traffic on Guam

Just waiting for the light to change…and a blinker symbol appears. Guess I’ll turn right.

r/AskMen No_County_3654

For guys who like recording themselves having sex with a partner, why?

For anyone who’s into recording themselves having sex—what’s the appeal for you?

Is it more about having something to rewatch later, or the excitement in the moment?

Also, do you share it with your friends

Caught my bf recording us with his phone. I don't mind

Just curious about different perspectives..

r/Art Peony-Pavilion

Vanity, R. Dearien, Drawing, 2023 [OC]

r/toastme juusoup

I have never been this mentally exhausted

I (23F) have been going to uni for almost 6 years. I was supposed to graduate last year, but I feel like I can't get past the thesis I've been trying to work on for nearly 2 years, and I feel burnt out. The last real break I had was in January 2025, and I feel like quitting uni. I also got into dating for the first time ever, and as an autistic girl with ADHD, trying to understand my own feelings has been excruciating. I 'broke up' with my potential partner a week ago because I can't get a hold of my own feelings and emotions and I keep spiraling. We're still friends, but I can't stop thinking about him. It feels like every decision I make is the wrong one.

I wake up tired every day. I feel isolated despite meeting up with friends between the days I stay home to work on my thesis. My mom (whom I still live with) wants me to change my appearance because it's 'immature', even though I have been working hard on being satisfied with the way I look. I don't know if I will ever be able to graduate, and I don't know what I'd want to do with my life after I do.

r/Art Kentaro_Yuuki

Fire Horse, Kentaro Yuuki, Digital, 2025 [OC]

r/ForgottenTV ScramItVancity

WWE Confidential (2002-2004)

Before WWE Unreal, there was WWE Confidential. As someone who grew up watching pro wrestling in the 90s, it was quite an eye-opening experience for me to see wrestlers being interviewed as their real selves, and how they risked their bodies for the sake of entertainment. I think Confidential is a must-watch, though it can get a bit schmaltzy with its Access Hollywood-style presentation.

r/aivideo TikFilmer

AI Echo on her neon cyberpunk escape

r/ChatGPT two_to_tango_2023

Do we need to rewrite our constitution?

If a self-driving car has to choose between hitting a pedestrian or swerving and killing its passenger, who is responsible—the car, the owner, or the programmer?

Given the rate at which AI is penetrating our lives, I wonder if regulation alone will solve for this, or if we might have to revisit the rights and laws written in the constitution...

r/StableDiffusion NoPresentation7366

[PixyToon] Diffuser/Animator for Aseprite

Hey 😎

So, recently I had some resurfacing memories of an old piece of software called "EasyToon" (a simple 2D black and white layer-based animation tool), which I used to work on extensively. I had the idea to find today's open-source alternatives, and there's Aseprite, which is fantastic and intuitive. To make a long story short: I wanted to create an extension that would generate and distribute animations with low latency, low cost, high performance, and high precision, using a stack I know well: Stable Diffusion, the egregore, and other animation models, etc., that I've used and loved in the past.

Today I'm making the project public. I've compiled Aseprite for you and tried to properly automate the setup/start process.

https://github.com/FeelTheFonk/pixytoon

I know some of you will love it and have fun with it, just like I do 💓

The software is in its early stages; there's still a lot of work to be done. I plan to dedicate time to it in the future, and I want to express my deepest gratitude to the open-source community, Stable Diffusion, LocalLLaMA, and the entire network: everything that embodies the essence of open source, allowing us to grow together. I am immensely grateful for these many years of wonder alongside you.

It's obviously 100% local, utilizing the latest state-of-the-art optimizations for SD1.5, CUDA, etc. Currently tested only on Windows 11, RTX 4060 Mobility (8GB VRAM), txt2img 512x512 in under a second, with integrated live painting. I encourage you to read the documentation, which is well-written and clear. :)

Peace

r/painting Vivid_Traffic_3952

I'm learning to work with watercolors

r/raspberry_pi PLC-Pro

Raspberry Pi Camera V3 NoIR version module not getting detected

Hey there!

I have a pair of Raspberry Pi Camera V3 NoIR camera modules. Initially, when I connected these cameras to the 2 CSI/DSI ports on the Raspberry Pi 5 board, they were detected and footage was captured.

However, since yesterday one of the camera modules is not being detected, and hence no footage is captured by it. I have tried changing the ports and the FFC connectors through which the undetected camera module was connected. I always turn OFF the power to the Raspberry Pi 5 before changing/disconnecting a camera module, but I don't know why this issue is happening.

So, I seek your suggestions/advice on how I can troubleshoot this issue (if it is possible). Please let me know what I can do to not face such an issue in the future.

P.S.: I also found that the Raspberry Pi camera module is slightly hard to connect to the CSI/DSI port when the official Active Cooler is installed, as it leaves very little space to pry open the connector clips, which seem quite breakable. So, if it is possible to connect these cameras via the USB ports instead, it would make connecting/disconnecting the camera modules a wee bit easier and more robust. Please let me know if that's possible.

r/AlternativeHistory mohamed-3215

The "Gaijin" Hypothesis: How the Hyksos and Israelites became the same "erased" people in the Egyptian Delta (Research & Toponymy)

1. ABSTRACT

This research synthesizes archaeological evidence, toponymic data, and multi-religious narratives to propose that the "Hyksos" of Egyptian history and the "Israelites" of religious tradition represent a single ethnic Levantine population. It suggests that their history in Egypt followed a cycle: peaceful migration, infrastructure development, political ascension, and eventual nationalist oppression under the 18th Dynasty founder, Ahmose I.

2. THE TOPONYMIC EVIDENCE: WADI EL-RAYAN

- The "Rayan" Link: Historically, the most prominent Hyksos King was Khyan. In the Semitic linguistic branch, Khyan and Rayan are phonetically consistent. The historical figure Al-Rayan ibn al-Walid from Arabic tradition is the linguistic and chronological counterpart to the Hyksos King Khyan. This confirms that the "Gaijin" rulers were remembered by neighboring Semitic cultures under translated names. Arabic historians like Al-Masudi (in his famous book Meadows of Gold) specifically name the Pharaoh of Joseph's time as Rayan ibn al-Walid.

- The Fayoum Oasis: The area of Wadi El-Rayan (El-Rayan Valley) and the Bahr Yussef (Joseph's Canal) serve as a physical monument to this era. The naming of the valley and the canal has persisted for millennia, linking the region to a "foreign" administration that mastered Egyptian irrigation.

3. BEYOND THE "EGYPTO-CENTRIC" LENS: A REGIONAL PERSPECTIVE

Traditional Egyptology often views history solely through the lens of native Egyptian palace records. By adopting a Regional Perspective, we recognize that Egypt was a Mediterranean hub.

- Cultural Translation: While Pharaohs had Egyptian throne names, their Semitic-speaking subjects, the "Hyksos," used names from their own linguistic family. Hyksos is a term Egyptians used to describe foreigners from the Levantine region, similar to how the Japanese use "Gaijin" for foreigners.

- The Language Bridge: Names like Al-Rayan (Arabic) and Khyan (Hyksos) are linguistic "translations" of the same historical figure across neighboring cultures.

4. THE DYNAMICS OF POWER: AL-WALID & THE MINISTERS

Traditional Arabic speculative accounts identify the Pharaoh of the Moses/Exodus era as Al-Walid ibn Mus'ab.

The Administrative Oppression:

- Haman (The Native Establishment): Pharaoh's Chief Architect. Haman represents the native Egyptian military-industrial complex that utilized the labor of the Semitic migrants for massive construction projects.

- Qarun / Karon (The Internal Betrayal): An Israelite ("Gaijin") who utilized his immense wealth to collaborate with the Egyptian authorities. His role demonstrates that the population was weakened by internal class divisions.

5. COMPARATIVE RELIGIOUS NARRATIVES

The "Gaijin Hypothesis" is uniquely supported by the synchronization of the Quran and the Bible, each providing different facets of the same historical memory:

- The Quranic Perspective (Political Structure): The Quran accurately distinguishes between the "King" (Malik) of Joseph's time and the "Pharaoh" of Moses' time, a shift that mirrors the transition from Hyksos "Kings" to the New Kingdom "Pharaohs." It further details the socio-political pressure on the Mustad'afun (the oppressed population).

- The Biblical Perspective (Logistics & Geography): The Bible provides the logistical details of the state-sponsored labor (making mud-bricks) and the naming of specific "store cities" like Pithom and Raamses, which were built near the ruins of the old Hyksos capital, Avaris (Tell el-Dab'a).

6. SOCIO-POLITICAL EVOLUTION: THE "GAIJIN" CYCLE

- Phase I: The Developers (The Migration)

History: Middle Kingdom records show a steady influx of Semitic people into the Delta, 1870–1800 BC.

Religion: The arrival of Jacob and the rise of Joseph (the "Engineer").

- Phase II: The Rulers (The Hyksos Era)

History: The migrants became the ruling class (Hyksos), with their capital at Avaris.
Status: They were "Gaijin" (outsiders) who became the "Managers" of Egypt.

- Phase III: The Reaction (Ahmose I): Expulsion vs. Enslavement

The turning point for the "Gaijin" population was the rise of the 18th Dynasty. This period marks the transition from the Semitic-friendly "Kings" to the oppressive "Pharaohs."

The Expeller: Around 1550 BC, Ahmose I launched a nationalist war to "purify" Egypt. He besieged the Hyksos capital, Avaris, and expelled the foreign elite. This is documented in the Autobiography of Ahmose, son of Ebana, which records the military conquest of the Delta.

The Islamic Standpoint (The "Malik" vs. "Fir’awn" Distinction): The Quran provides a striking linguistic match for this era. In the story of Joseph (the Hyksos era), the ruler is called Al-Malik (The King), matching the title of the Semitic Heqa-Khasut rulers. In the story of Moses (the New Kingdom era), the title switches to Fir’awn (Pharaoh).

The Transition: This linguistic shift mirrors the historical reality where the Amalekite (Semitic) Kings were replaced by the native Egyptian Pharaohs (starting with Ahmose I), who reclassified the remaining "Gaijin" population as a state-security threat to be enslaved.

7. ARCHAEOLOGICAL & HISTORICAL CORRELATION

- Avaris (Tell el-Dab'a): Excavations prove the city was populated by people with Levantine customs for centuries before being displaced.

- Josephus: The 1st-century historian Flavius Josephus explicitly argued that the Hyksos and the Israelites were the same group. Quoting the Egyptian historian Manetho's account of the Hyksos (the "Shepherds") and how they left Egypt to build a city in Judea (Jerusalem), Josephus makes the connection explicit: "This whole nation was styled HYCSOS, that is, Shepherd-kings... And evident it is from the number of years by him set down belonging to this interval, if they be summed up together, that these shepherds, as they are here called, who were no other than our forefathers, were delivered out of Egypt, and came thence, and inhabited this country." (Against Apion 1.14 / 1.103)

8. CONCLUSION

The distinction between "Hyksos" and "Israelites" is a matter of perspective. One is a political label used by Egyptian victors; the other is a tribal identity. By looking at the regional context rather than just the Egyptian palace records, we see that the Gaijin were not mere invaders, but integral developers of the Egyptian state before the nationalist revolution of Ahmose I.

r/Art myriyevskyy

Evening on the Water, Volodymyr Myriyevskyy, Oil, 2026 [OC]

r/DunderMifflin MrCleanWindows87

Is Dwight Schrute actually the ultimate alpha male in The Office?

He’s not bullshitting or selling an image he consistently backs everything up with tangible results: beet farm ownership, weapons training, survival skills, and top-tier sales performance. Even socially, he shows flashes of dominance, like confidently pulling in New York, while still operating on that old-school hunter-gatherer mindset that sets him apart from everyone else in the office.

r/ClaudeAI galacticguardian90

What are your thoughts on the Claude Code channels?

This is a Research Preview in Claude Code v2.1.80 or later.

From their docs -

A channel is an MCP server that pushes events into your running Claude Code session, so Claude can react to things that happen while you’re not at the terminal. Channels can be two-way: Claude reads the event and replies back through the same channel, like a chat bridge. Events only arrive while the session is open, so for an always-on setup, you run Claude in a background process or persistent terminal.

r/DunderMifflin MrCleanWindows87

Did Toby leave Scranton because he was in love with Pam?

When you line it up, the parallel story beats, especially things like the Finer Things Club, make it look less like coincidence and more like a deliberate character arc.

r/mildlyinteresting Day-Hot

My family and I saw a palm tree growing through an oak tree in St. Augustine yesterday.

r/creepypasta der_sachse95

​Wie ein entspannter Abend mit TikTok mein Leben zerstörte

Ich dachte, das Löschen der App würde mich retten. Ich dachte, das neue Handy würde mich sicher machen. Doch manche Feeds lassen sich nicht aktualisieren – und manche Jäger hören nie auf zu fressen. Doch nun vom Anfang an.

Es war ein klassischer, regnerischer Dezembertag. Der Regen prasselte beruhigend gegen die Fensterbank und die kalte Luft strömte durch das angekippte Fenster in mein Wohnzimmer. Es war genau dieser typische Abend, an dem man faul auf der Couch liegt und einfach „sein Leben chillt“. Ich hielt mein Handy in der Hand und scrollte entspannt durch TikTok. Man sieht dort ja wirklich alles Mögliche: Leute, die sich völlig zum Affen machen, seltsame Trends und natürlich KI-Videos, die inzwischen so realistisch aussehen, dass man kaum noch erkennt, ob sie echt sind. Diese Entwicklung finde ich faszinierend – und gleichzeitig ist sie ziemlich beängstigend. Wer weiß, ab wann man gar keinen Unterschied mehr bemerkt. Ich musste kurz aufstehen, um auf die Toilette zu gehen. Auf dem Weg schloss ich das Fenster, da die kalte Luft den Raum bereits spürbar auskühlen ließ. Danach kuschelte ich mich wieder auf mein Sofa, zog die Decke über mich, legte die Chipstüte auf meinen Bauch und nahm das Handy wieder in die Hand. Die Startseite lud neu – und zeigte mir ein neues Video an. Dieses Video war anders als alle zuvor. Ich sah eine Frau, blutüberströmt auf einem Sofa liegend. In ihrem Bauch steckte ein Messer. Das Licht im Raum flackerte auf eine unnatürlich verzerrte, fast stotternde Weise, die an einen schlechten Horrorfilm erinnerte. Ein leises statisches Rauschen war zu hören, dann wurde das Bild plötzlich schwarz. Ich musste lachen, weil ich dachte, es handele sich um einen schlechten, geschmacklosen Scherz. Doch damit sollte ich falsch liegen. Neugierig tippte ich auf das Profil, das dieses Video hochgeladen hatte. Dort sah ich dutzende Wohnzimmer – und in jedem lag ein anderer Mensch, blutüberströmt, mit dem Messer im Bauch. Verschiedene Länder, verschiedene Menschen, doch immer identischer Tod. Ein mulmiges Gefühl breitete sich in mir aus. Ich schloss TikTok wieder. So ein Blödsinn, dachte ich mir. 
Irgendein kranker Trend wird das wieder sein, um Leute zu verstören und massig Aufrufe zu generieren. Ich schaltete den Fernseher an und versuchte, mich von meiner Lieblingsserie berieseln zu lassen. Nach ein paar Minuten vibrierte mein Handy. Eine TikTok-Benachrichtigung erschien. Der Kanal folgte mir jetzt. Ich fand das seltsam, vor allem weil ich nichts gelikt oder kommentiert hatte. Aber ich wollte mir nicht den Abend verderben. Wahrscheinlich ist es nur jemand, der andere erschrecken wollte. Ich legte das Handy auf den Tisch und schlief irgendwann vor dem Fernseher ein. Gegen dreiundzwanzig Uhr wachte ich auf, schaltete den Fernseher aus und wollte ins Bett gehen. Ich nahm mein Handy in die Hand – und sah, dass der Kanal ein neues Video hochgeladen hatte. Aus irgendeinem Grund war ich neugierig genug, es zu öffnen. Was ich dort sah, ließ mir das Blut in den Adern gefrieren. Ich sah mich. Ich sah mich selbst blutüberströmt auf meiner Couch liegen. Mein Handy lag vor mir. Wieder dieses statische Rauschen. Dann ein tiefes, finsteres, verzerrtes Lachen aus den Lautsprechern – und das Bild wurde schwarz. Für einen Moment war ich wie gelähmt, unfähig, irgendetwas zu tun. Die Kamera, mit der das Video aufgenommen worden war, musste direkt vor mir gestanden haben. Direkt dort, wo jetzt nur der Wohnzimmertisch stand. Meine Tür war abgeschlossen. Niemand konnte hereingekommen sein. Einen Ersatzschlüssel hatte niemand außer mir. Die Schlösser hatte ich erst vor zwei Jahren auswechseln lassen – nach der Trennung von meiner Ex. Es war unmöglich, dass jemand einen Schlüssel hatte. Also war es doch KI? Aber das Video war zu real. Zu genau. Woher sollte eine KI wissen, wie meine Couch aussieht? Oder mein Handy? Oder das Küchenmesser mit dem verzierten Griff, das ich tatsächlich besitze? Es fühlte sich nicht mehr nach Technik an. Es fühlte sich … extrem falsch an. Als würde etwas im Raum stehen und mich beobachten. Ich legte mich ins Bett, aber Schlaf war unmöglich. 
Die Bilder gingen mir nicht aus dem Kopf. Irgendwann beschloss ich, eine Schlaftablette zu nehmen. Doch bevor ich aufstehen konnte, ertönte der TikTok-Klingelton. Ich zuckte zusammen und bekam Gänsehaut am ganzen Körper. Mit zittrigen Fingern griff ich zum Handy. Irgendetwas zwang mich fast dazu, die Benachrichtigung zu öffnen. Als würde eine fremde Stimme in meinem Kopf flüstern: Mach es. Die Nachricht war kurz. „Du brauchst keine Schlaftablette. Du wirst diese Nacht sowieso nicht überleben.“ Bevor ich reagieren konnte, wurde mir ein weiteres Video zugeschickt. Ich öffnete es. Ich musste. Ich konnte nicht anders. Ich sah wieder mich. Diesmal lag ich in meinem Bett. Das Licht flackerte hektisch. Ich griff mir im Video an den Hals, würgte, rang nach Luft. Ein dünner Draht war um meinen Hals geschlungen. Doch niemand war zu sehen. Nur vom Ansehen bekam ich Atemnot. Ich warf das Handy panisch weg. Ich zitterte am ganzen Körper und wollte die Polizei rufen. Ich hob das Handy wieder auf – doch das Display wurde schwarz. Wieder ein Ton. Eine weitere TikTok-Benachrichtigung. Ich öffnete sie. „Du hast 10 Minuten. Schicke dieses Video an mindestens 20 Freunde. Wenn du es nicht tust, werden die Videos wahr.“ Ich zögerte kurz, da dies doch nicht wahr sein konnte. In dem Moment hörte ich, wie im Wohnzimmer das Fenster aufging. Langsam. Nicht gewaltsam – eher … bewusst. Als wäre es nie richtig geschlossen gewesen. Dies war doch unmöglich. Oder drehte ich jetzt völlig durch? Vor lauter Panik schickte ich das Video an zwanzig Kontakte. Danach blieb die Nacht still. Unnatürlich still. Diese Stille raubte mir fast den ganzen Schlaf. Am nächsten Morgen weckte mich Vogelgezwitscher. Der Tag verlief ruhig, aber ich war wie gelähmt. Bei jeder Handyvibration zuckte ich zusammen. Ein paar Kumpels fragten, was ich ihnen für einen Mist geschickt habe. Ich konnte ihnen nicht antworten. Sie würden mich ja für Verrückt halten. Aber vielleicht war ich das auch wirklich. 
In the evening the first messages arrived. Some of my friends had died. They were found in their living rooms with a knife in the stomach. I was in shock. Shaking all over. Unable to react. And consumed by guilt. I opened TikTok. The first video it showed me was of my best friend. Covered in blood on his sofa. In his hand was a knife, that exact knife, with which he slit open his own stomach. That detail had never been shown in the other videos. Of the twenty people I had sent the video to, only two survived. The only two who had also forwarded it. The others thought it was a bad joke. And paid for it with their lives. One of them, probably in his final seconds, wrote me a single "WHY?". After the eighteenth video finished playing, my phone froze. I couldn't do anything, close anything, touch anything. Then, suddenly, the channel vanished. As if it had never existed. The guilt torments me to this day. I brought ruin on people. I don't know if I can ever come to terms with that. But sometimes I ask myself: would you have done it too? Would you have forwarded the video to save your own life? Back then I threw my phone into the river. Deleted every social media account. Months passed and I tried to get my life back on track. A short while ago I bought a new phone, just so my family could reach me again. I installed nothing on it, and even tried to delete all the default apps.

Today a new TikTok notification arrived: "Thank you for the delicious meal!"

r/TwoSentenceHorror Liv-Dai

My father finally let go of his outdated morals and agreed to help lessen our family’s expenses.

My wife ran her fingers over his swollen belly, thrilled to find out how much the twins inside him would reduce our taxes.

r/Seattle honestlyeek

Job market for teachers in 2026

Hi fellow teachers!

I’m currently teaching internationally but would like to eventually move back to Seattle. My current school offered me a new contract to stay on for another year, and I have to decide by Monday morning.

Once I pass my WEST-NES ELA test, I’ll finally have my dual endorsement in ELL & ELA. But it’s looking like I won’t be fully licensed in WA until about May. (I accidentally took the test for SPED though I’m certified in ELL, so that was just an extra test for me I suppose.)

Is that too late in the year to find a potential MLL or ELA position for the 2026-27 school year? Should I stay abroad and teach internationally for another year and apply to Seattle public schools later this fall?

TIA!

r/estoration Hour_Grand_6258

Fix this without changing the face, no need to colourise. Someone help

r/MacroPorn Dalantech

Solitary Bee on a Sourgrass Flower [OC] [6960x4640]

The weather, this time of year, has a tendency to slow critters down enough for me to get a few shots.

Tech Specs: Canon 90D (F11, 1/250, ISO 100) + a Canon MP-E 65mm macro lens (set to about 2x) + a diffused MT-26EX-RT (E-TTL mode, -1 FEC). This is a single frame taken hand held. In post I used Topaz Photo and Clarity in that order.

r/SideProject Spiritual-Lunch4759

1.2 billion views in the past 90 days — anyone open to collaborations?

I’ve been posting short-form content and it’s done pretty well recently (around 1.2B views in the last 90 days). I’m starting to look into better ways to monetize and also open to working with other creators/brands if it makes sense.

Not trying to sell anything — just curious if anyone here is in a similar space or open to connecting/collabing.

Appreciate any advice or convos 🙏

r/Art ImpressEcstatic1085

moldy strawberry, irissponsible, acrylic on canvas, 2026

r/therewasanattempt an-redditor

to prevent the expansion of pigeon real estate offerings with anti-pigeon spikes

r/OldSchoolCool RiseAppropriate8696

Palace of Versailles in 1958

Debutantes Margery Shufford (left) and Sarah Bohannon descended the stairs at the Orangerie at the Palace of Versailles to rehearse for a ball for U.S. debutantes, 1958.

r/personalfinance meadsalot

What is the most effective way for a college student to secure their future?

I (19f) have been saving money for several years. Quick context: I was raised by a single parent and have two older brothers, so money was always tight. We're in a better place financially now, but I never want to be in that position again.

Additionally, I'm a college student whose tuition is taken care of, and one of my brothers pays for weekly/daily expenses, i.e. books and dining. I also have a car and he helps with the upkeep. Basically, if I need something for school, I go to him.

I recently hit 5k in savings. I'm proud of myself, but I want to do more than let it sit in a savings account. I’ve received conflicting opinions from family and friends on where best to start. I only have a basic understanding of HYSAs and Roth IRAs, but I’m more curious about the most effective investments right now. I get that nothing is foolproof, but given everything you know now, which options are the most lucrative?

r/aivideo rodi65

Ocean Sweet Monster

r/nextfuckinglevel emilycopeland

200 hours of drawing in 90 seconds

r/ChatGPT Beneficial_Pie_7169

Does anyone else constantly retype the same context into ChatGPT when switching tabs? How are you handling it?

Something that drives me insane daily — I'm deep in reading an article, I want to ask ChatGPT a follow-up question, I open a new tab and… blank. Have to retype the whole context from scratch again.

I've started noticing just how much time this eats up across a day. It feels like one of those small frictions that nobody talks about but everyone experiences.

A few things I've tried:

  • Keeping a running "context doc" I paste from — works but clunky
  • Browser bookmarks with pre-filled prompts — too rigid
  • Just retyping every time — obviously the worst option

Eventually got frustrated enough to build a small browser tool, SuggestPilot, to fix it. But I'm genuinely curious how others are handling this before assuming my solution is the best one.

Are there workflows or tricks you've developed to carry context across tabs without starting over? Or is everyone just silently retyping and accepting the pain?

r/AskMen Sudden_Doughnut_8741

Fathers who saw everything when the mother of your child gave birth: how did it change how you feel about her?

I can’t totally explain it, but the respect that I had for my wife for being capable of doing that turned into the strongest attraction to anyone that I’ve ever felt. In the moment it wasn’t sexual and it was just me wanting her to be ok, but eventually it turned into me being extremely sexually aroused by her to a degree that I’ve never felt before.

It isn’t a breeding thing. We both chose not to have any more kids. The best that I’ve got is that I was so impressed with her ability to do all of that that it turned into me being crazily attracted to her. Kind of like how I’d feel about someone that wrote, directed, produced, and performed in her own massive scale movie, only then proceeded to stay in the movie for her whole life.

And I saw… everything. My wife asked me to take pictures of her when the head was coming out and after the baby was totally out. I saw lots of things. I’m someone who has a really hard time with blood, and I fucking powered through with the strength of being amazed by her ability to do what she did.

I’ve heard about guys who lose all attraction to the mother of their child after seeing this. I’m curious how it affected you.

r/whatisit andycandy17

What's on this punching bag?

Ordered an Everlast 70lb punching bag on Amazon. The bag came with these blue stains all over. I already wiped the bag with some soap and water. None of the stains come off. I'm afraid this could be mold? What is it?

r/Art stormshadowfax

Rheannon, Buridan's Cat, Oil on board, 2021
