Your Feed

3000 posts

r/comfyui o0ANARKY0o

This is what my over complicated flux workflow makes. I was wondering what it would look like with different models and different workflows? anybody feel like showing me what my prompt looks like from your workflow? I really wanna see what other textures are out there!

God’s-Eye Desert Megacity — Photorealistic Epic Worldscape

Ultra-detailed god’s-eye aerial photograph of an epic desert world, captured in National Geographic IMAX-level professional photography quality, showing a vast tan desert continent interspersed with black water swamps, jet-black mountain ranges, and colossal jet-black architectural complexes. The camera hovers impossibly high, revealing monumental cathedral-like spires and sprawling maze-like wings, interconnected by massive walls, arched bridges, balconies, and long corridors, scattered across arid plains, black swamp pools, and jagged black mountains that break up the terrain, adding dramatic variation. cling to ruins and canyon edges, integrating nature with architecture in ultra-photorealistic detail.

Colossal, Labyrinthine Desert Worldscape

From this vantage, the megacity appears as a labyrinth of jet-black monumental structures, isolated spires rising above vast cathedral wings, and huge connecting walls with detailed windows, balconies, arches, and bridges that snake over black swamps, desert canyons, and mountain passes. Jagged black mountain ranges weave through the landscape, creating natural separation between complexes, forcing bridges, walls, and corridors to twist, climb, and span across peaks and ridges. Some walls are partially collapsed, others rise majestically along ridges or atop mini-mountains. Submerged blackwater streets, ruined plazas, and scattered debris hint at an ancient civilization reclaimed by harsh desert elements. Every architectural element exhibits extreme textural detail: weathered stone, cracked obsidian surfaces, chipped carvings, rusted metal, shattered glass, and wind-blasted walls.

Foreground — Ultra-Detailed Desert Textures

Even from above, every surface is rendered with IMAX-level clarity: sand-swept stone, slick obsidian, swamp pockets. Pools of black swamp water reflect spires, bridges, cathedral wings, and mountain slopes with photorealistic reflections, ripples, and floating debris, producing immersive texture, scale, and realism. Details such as carved balustrades, weathered statues, and ornate window tracery are captured in high-fidelity macro detail, even from a god’s-eye view.

Midground — Towering Structures, Black Mountains, and Connecting Walls

Clusters of spires and cathedral wings rise amid sparse desert vegetation, black swampy hollows, and jagged black mountain ridges that separate and frame each architectural complex. Massive connecting walls, long corridors, arched bridges, and balconies traverse valleys, climb slopes, and span ridges, forming labyrinthine pathways between some complexes while others remain isolated. Collapsed stairways, broken bridges, and fallen walls enhance realism and storytelling. Nature blends seamlessly with architecture, with blackened moss, vines, and roots intertwining with stone, steel, and wood, while black mountain slopes feature craggy rock faces and scattered desert vegetation.

Background — Infinite, Cinematic Desert Horizon

Distant spire complexes, black mountains, desert plains, and black swamp pools fade into dense atmospheric haze, emphasizing scale, emptiness, and epic depth. Jagged black mountain ranges punctuate the horizon, creating layered variation, visual breaks, and dramatic perspectives between distant spires. Soft sunlight and volumetric dust interact with architecture, blackwater, and desert sands, creating cinematic shadows, light shafts, and reflective highlights. The scene captures the majesty, isolation, and intricate complexity of this labyrinthine desert megacity woven with mountains, swamps, and arid plains.

Lighting & Atmosphere — National Geographic IMAX Cinematic

Soft, realistic sunlight scatters through desert volumetric lighting, long cinematic shadows, and subtle reflections. Atmospheric haze enhances depth perception and spatial layering. Blackwater surfaces show micro-reflections, soft ripples, and wet stone highlights. Shadows, ambient occlusion, and light diffusion on walls, bridges, spires, and mountain ridges are rendered with professional photography realism, emphasizing both architectural and natural features.

Materials & Detail — Hyper-Realism at Studio Quality

Every element exhibits extreme photorealistic detail: chipped and weathered obsidian, cracked stone, rusted iron, wind-swept sand, fine architectural ornamentation such as carved balustrades, arched windows, and ornate columns. Black swamp waters and flooded streets reflect these structures with precise light refraction and surface tension effects. architecture. Black mountain ridges are detailed separating complexes and adding dramatic variation. The result is a scattered, labyrinthine, cinematic desert megacity of spires, walls, bridges, blackwater swamps, and black mountains, rendered with National Geographic IMAX-level professional quality, supreme realism, and cinematic scale, visible from a god’s-eye perspective.

r/StableDiffusion Totem_House_30

Deni Avdija in Space Jam with LTX-2 I2V + iCloRA. Flow included

made a short video with LTX-2 using an iCloRA Flow to recreate a Space Jam scene, but swap Michael Jordan with Deni Avdija. Flow (GitHub): https://github.com/Lightricks/LTX-2/blob/main/packages/ltx-pipelines/src/ltx_pipelines/ic_lora.py My process: I generated an image of each shot that matches the original as closely as possible just replacing MJ with Deni. I loaded the original video in the flow, you can choose there to guide the motion using either Depth/Pose or Canny. Added the new generated image, and go. Prompting matters a lot. You need to describe the new video as specifically as possible. What you see, how it looks, what the action is. I used ChatGPT to craft the prompts and some manual edits. I tried to keep consistency as much as I could, especially keeping the background stable so it feels like it’s all happening in the same place. I still have some slop here and there but it was a learning experience. And shout out to Deni for making the all-star game!!! Let’s go Blazers!! Used an RTX 5090.

12 2
Reddit
r/RASPBERRY_PI_PROJECTS CyclingOctopuses

Open Source Transit Display - Boston's Red Line

I recently finished my take on an LED transit display and wanted to share my project!

Using a Pi Zero 2w, I stream data from the MBTA's free API and light up LEDs at individual stations on the Red Line to display location, speed, and occupancy data. The data feed uses API streaming, making it more responsive than polling-based approaches. The project also serves a local website that acts as a controller, enabling you to easily change display mode, color key, brightness, or hours of operation.

The project is entirely open source, both the code and PCB production files. Additionally, the project includes tutorials for making your own maps using QGIS, adapting this project to other transit systems, and building your own version of the project.

Check out the full project: https://github.com/tomunderwood99/CharlieBoard

17 1
Reddit
r/StableDiffusion PhilosopherSweaty826

Z-image Best lora Setting ?

Hello there,

using AI-toolkit, What are the optimal training settings for a nationality-specific face LoRA?

For example, when creating a LoRA that generates people with Latin facial features, how should the dataset be structured (image count, diversity, captions, resolution, balance, etc.) to achieve accurate and consistent results?

r/aivideo trojenhorse

I recreated the “walking penguin” vibe using AI — curious how people interpret it

r/ClaudeAI ClaudeOfficial

Announcing Built with Opus 4.6: a Claude Code virtual hackathon

Join the Claude Code team for a week of building, and compete to win $100k in Claude API Credits.

Learn from the team, meet builders from around the world, and push the boundaries of what’s possible with Opus 4.6 and Claude Code. 

Building kicks off next week. Apply to participate here.

r/FluxAI Significant-Scar2591

World of Vulcan. A film made with Flux LoRAs trained on my own analog photography

The imagery was generated using two LoRAs blended together: HerbstPhoto, trained on my personal analog photography, and 16_anam0rph1c, trained on widescreen 16mm footage shot with vintage anamorphic glass.

Both are available for download on Civit: https://civitai.com/user/Calvin_Herbst

This is part of a larger Greek mythology long-form project. Traditional production has always been rigid, with clear phases that don't talk to each other. Generative tools dissolve that. Writing script, hitting a wall, jumping into production to visualize the world, back to prep for a shot list before the pages exist, into Premiere for picture and color simultaneously. The process starts to feel like painting: thumbnails while mixing colors, going back over mistakes, alone with the canvas.

r/StableDiffusion PhilosopherSweaty826

Why my LTX 2 is so bad

Why my I2V LTX 2 result is so bad ? wan I2V is 100x times better , or I’m doing something wrong ? Im using a simple workflow with LTX distill lora at 8-10 steps

r/artificial sediba-edud-eht

At what point will AI-generated images become genuinely undetectable to humans? I've been thinking about this a lot and decided to actually measure it instead of just speculating.

I built a daily challenge that shows people 10 images — some real photographs, some AI-generated — and asks them to identify which is which. Every answer gets anonymously tallied so you can see what percentage of players got each image right.

A few things I've noticed curating the challenges and watching the data:

- AI landscapes are getting almost impossible to distinguish from real ones at first glance

- People are overconfident about spotting AI — most think they'll score 9 or 10, actual averages tell a different story

- The hardest images to classify aren't the "obviously fake" ones — it's the ones where AI nails the mundane details

- Some real photos get flagged as AI by the majority of players, which is its own kind of interesting

I'm genuinely curious what this community thinks. How good are you at spotting AI images right now? And do you think there's a hard ceiling on human detection ability, or is it more of a trainable skill?

If anyone wants to test themselves: braiain.com — 10 images, takes a few minutes, no signup required.

r/SideProject HatmanStack

Roast my RAG architecture — Scale-to-Zero document search with AI chat

Been building this side project for a while and want honest feedback on the architecture.

It's a serverless document processing pipeline with AI chat. You upload documents, images, video, or audio — it OCRs/transcribes everything, creates embeddings, and gives you a chat interface that answers questions with source citations.

The interesting architectural decisions (roast these):

- S3 Vectors instead of a real vector DB. Saves $50+/month but uses 4-bit compression. I compensate with a relevancy boost multiplier on filtered queries. Hacky? Maybe.

- Pure Lambda, no containers. Every function is a Lambda. Processing pipeline is a Step Functions state machine. Zero idle cost but cold starts exist.

- Drop-in web component. Two lines of HTML to embed the chat on any site: . Built as a web component so it works with any framework.

- MCP server as a pip package. pip install ragstack-mcp and your Claude Desktop / Cursor can query the knowledge base directly.

What it costs: $7-10/month for ~1,000 documents. Scales to zero when idle.

Repo: https://github.com/HatmanStack/RAGStack-Lambda

Demo: https://dhrmkxyt1t9pb.cloudfront.net (Login: guest@hatstack.fun / Guest@123)

Blog: https://portfolio.hatstack.fun/read/post/RAGStack-Lambda

One-click deploy via AWS Marketplace or python publish.py --project-name my-docs --admin-email you@email.com

What would you do differently?

r/comfyui npittas

My new project: Director's Console (Real cinematography meets ComfyUI)

Hi everyone,

I’ve recently merged two of my personal projects into a new tool called Director’s Console, and I wanted to share it with you.

https://preview.redd.it/w7wht956wwhg1.png?width=2058&format=png&auto=webp&s=774ee82ecb4d204f40ac393705ac24c5dd962107

The tool uses a Cinema Prompt Engineering (CPE) rules engine and a Storyboard Canvas to ground AI generation in real-life physics. It uses real-world camera, lens, and lighting constraints so that the prompts generated are actually physically possible.

The first half of the project (CPE) was my attempt to move away from "random" prompt enhancers. Instead, it forces the LLM to understand how gear actually works on a set. I’ve included presets for various movie and animation styles; while I’m still refining the accuracy for every film, the results are much more cinematic than anything else I’ve used.

https://preview.redd.it/m4wcposgwwhg1.png?width=2058&format=png&auto=webp&s=7025dbff016cad6ddd20385aee5e4b0bd52431c4

The second half is an Orchestrator for distributed rendering. It lets me use all my local and remote computing power to generate images and videos in parallel across multiple ComfyUI nodes. It includes a parser so you can pick which parameters show up in your UI and organizes everything into project folders. You can even assign one specific node to one specific storyboard panel to keep things organized.

https://preview.redd.it/z2ubyk1awwhg1.png?width=2061&format=png&auto=webp&s=232f7a33db925e0fe0800a83d22a8531943bf6c2

https://preview.redd.it/9fcgq7udwwhg1.png?width=2196&format=png&auto=webp&s=c8a49fbf3cec86fc1cd94f71edf51e2c72dcc91a

Full disclosure: This app was "VibeCoded" using Opus and Kimi K2.5 via Copilot. It’s a bit experimental, so expect some bugs and crashes. I use it daily, but please test it yourself before relying on it for anything mission-critical!

I’d love to hear your thoughts or any suggestions on how to improve it.

https://github.com/NickPittas/DirectorsConsole

Cheers,
Nick

r/aivideo jpropaganda

AI (EI parody)

r/ClaudeAI Vast_Try_7905

How Claude handed 100k lines of code even before Opus 4.5 came out.

TLDR written by Claude: A non-programmer is building a multiplayer browser game with Claude and shares tips for managing the limited context window:

Keep files small and modular so Claude doesn't lose track of code and create duplicates.

Use instruction files (like claude.md, game_context.md) to give Claude rules, design principles, and reminders — essentially a "memory" across sessions.

Maintain a code guide listing all 150+ files so Claude knows where to find things.

Debug methodically: playtest a lot, describe bugs step-by-step, and have Claude find all related issues before fixing — while verifying its findings, since it often flags non-issues.

Use browser Claude as a second opinion by uploading the full codebase — it sometimes catches things Claude Code misses.

The core lesson: working with Claude on a large project is mostly about providing the right context and building guardrails through iterative rules born from repeated mistakes.


FULLL VERSION:

I made a post here about the game I'm developing with Claude and the biggest question was how I managed to work on a game with so much code with such a small context window.

First off, I don't know how to code. And I'm sure my code doesn't follow any sort of standards that would impress a programmer. But it does produce a working multiplayer browser game.

The issue of context is easy to understand as a non-programmer. Claude starts every session with no idea of what is going on. It's like meeting a new programmer every time I open a Claude Code terminal. A programmer whose brain can only fit so much information. So providing the right context is key to getting anything done.

When I began the project with Sonnet, I quickly realized that if a code file didn't fit within it's reading context window of about 2k lines of code (Today its 25k tokens - before it might have been less), Claude would make an insane amount of bugs - often duplicating existing code that it didn't know exists within the very file its working in. So Claude has instructions to make code modular and separate it out to different files and folders. I let it organize that and it kind of makes sense at a glance and really doesn't the deeper you look at it (kind of like AI art) but it works.

Speaking of instructions, there's the claude.md file which only recently has Claude actually paid any attention to (otherwise constant prompting to adhere to it helped). The claude.md file has instructions on what to do when it first starts. To get claude to actually adhere to it, I start every session with "init" and claude reads a few md files I have before asking what to do next - otherwise it skips instructions.

Then I have a game_context file. It talks about the game and design principles to follow. A lot of these are created because of repeat mistakes Claude would make. When it comes to multiplayer games, I had the pleasure of learning what a "client/peer parity issue" was over and over. That frustration would lead to rules to follow. Find a bug? Ask claude to clarify the architecture to avoid it and make a principle out of it.

Then you have silly stuff you have to tell it like "no emojis" and "use existing code systems before implementing new ones." Claude loved to ignore a system we built for implementing stuff into the game and just go from scratch in the main game file. "Performance minded" - nothing like implementing a simple thing into the game and seeing FPS crash to 12.

It's not hard to do because claude code can directly edit those instruction files. So as you learn, claude can "remember" mistakes by adding them to these files.

For finding relevant code there is a codefile_guide that lists the 150+ files and whats in it, as concisely as possible. Claude is told to look there first for finding things. It helps to give it a project overview as well. It also gave itself instructions to follow on updating these files - though it forgets to do so often.

Inevitably, Claude makes mistakes and I anticipate it when I ask it to implement a whole system of code into the game. For debugging, you have to notice something is wrong first, so you have to playtest - a lot. Then explain the problem clearly in logical steps. "I did this, then this happened." Claude loves to find the first "issue" it sees and assume thats the only problem. NOPE. I tell it to find as many issues as it can find related to the bug and dont waste tokens on trying to solve it. Then it returns a list of bugs. Inevitably a lot of those bugs are not bugs at all and so I tell Claude to research each bug and find out if it's legit or not. When it's highly confident - not using words like "likely." We work on them one at a time. And I actually have to ask it about the code and what it does because sometimes it'll implement things I don't want based on assumptions. So even though I don't know the code, I understand what it does.

Sometimes that's not enough so console logs (make sure you tell Claude not to do spammy, per frame ones) and oddly Claude in the browser is super helpful. For some reason I never get the same results from Claude Code and Claude in the browser. I have a script that puts all my code in one text file. I upload it to Claude in the browser and tell it whats going on and sometimes it finds stuff completely different from Claude Code.

r/singularity Just_Stretch5492

OpenAI possibly charging more for Codex in the future?

Seems to me they might be charging more for Codex in the future as it continues to get better. Maybe I'm completely misreading this Idk

16 3
Reddit
r/AI_Agents Direct-Attention8597

The $650B AI Race: Has Agentic AI Already Won — or Are we Blind to the Real Threat?

I just read a thought‑provoking piece about the emerging $650 billion AI race and the rise of agentic AI systems that don’t just respond to prompts but plan, act, and iterate autonomously. (Link below)

The article argues that agentic AI has quietly surpassed traditional chatbot hype — not just as a tool, but as a force reshaping industries, workflows, and the very notion of software value.

But here’s where things get controversial:

- Are we underestimating agentic AI because it isn’t flashy?
Most public attention still goes to “chatty” models even though the real economic battle is happening inside automation, workflows, and agents that can operate with limited supervision.

- Is Wall Street missing the point?
Markets react to earnings and buzzwords, not subtle shifts. But if agents really replace entire roles (legal review, research, scheduling, optimization), then traditional SaaS valuations may already be obsolete.

- Are we confusing capability with impact?
Current leadership discussions are about who has the best LLM
Meanwhile, the real game is who has the best agentic orchestration executing real work, not just generating text.

If agentic AI really is at the core of a $650 billion race, then this isn’t about “AI replacing jobs” anymore — it’s about AI redefining the economy.

Ask yourself:

  • Is agentic AI already outpacing human‑centric tools in actual business value?
  • Are investors and developers still stuck in the “chatbot mindset”?
  • Or are we just too focused on benchmarks and not enough on real workflows?

Some people are calling this a revolution. Others say it’s early hype.
But if Claude CoWork, autonomous agents, and AI orchestration continue at this pace we might already be living in the first wave of a post‑SaaS economy.

Curious what this community thinks:

Is agentic AI already winning and we just haven’t noticed yet?

r/ClaudeAI HardHarrison

How to use Claude Opus for free?

r/SideProject Agile-Secret3034

Automating freelance task management with a no-code AI approach

I’ve been tinkering with a side project to automate how I organize freelance client tasks. I started by duct-taping Zapier to a couple Python scripts I barely understood, but it was brittle, one API change or missed trigger, and I’d lose hours fixing the chain. So I tried building an agent that could handle the whole flow, like sorting new tasks to the right client and scheduling follow-ups automatically. I’m not really a developer, but I can follow logic, so mapping it visually in MindStudio helped a lot. Now I’ve got a rough prototype pulling from Notion, Gmail, and Trello, and it even drafts replies that are surprisingly coherent, though it still gets overconfident sometimes. Debating whether to keep it as a personal helper or try to turn it into something others could use. If you’ve built something similar no-code, how did you make it more reliable when inputs get weird?

r/Anthropic ali_malik99

Claude Voice Mode Is Finally Hands Free And Doesn't Interrupt You Mid-Sentence!

As someone who uses Claude voice mode a lot, the constant "push to talk" and it interrupting you out of nowhere really did my head in.

You couldn't finish sentences. It could try and gaslight you by saying, "You got cut off! You are spiralling. Go get some rest!"

It seems that it's finally a little bit better.

I just saw it this morning, it seems like it's still in Beta so it'd be interesting to check that.

https://preview.redd.it/xlchzlucrwhg1.png?width=748&format=png&auto=webp&s=2b237c7d0b524465a31d942efb26c87bcd15bad3

r/SideProject Southern_Tennis5804

Self-hosted drag-and-drop automations that actually deploy in one command – no compose hell, no extra services

You know the feeling: you want to run your own automation server for privacy, no vendor lock-in, unlimited runs... but then you open the docs and it's "install Postgres, set up Redis, configure queues, tweak env vars, pray the compose file doesn't explode on update."

For anything beyond a toy project, that tax kills the vibe fast. Especially when you're just trying to get a quick webhook → Slack flow or AI agent that summarizes emails running locally or on a VPS.

I got fed up with it, so I open-sourced the full backend that powers a2n.io and made it ridiculously easy to self-host. Repo: https://github.com/johnkenn101/a2nio

Try this one-liner right now:

```bash

docker run -d --name a2n -p 8080:8080 -v a2n-data:/data sudoku1016705/a2n:latest

```

Open http://localhost:8080, create your admin account, and you're building flows. Embedded DB + Redis mean zero external dependencies for dev, testing, or small personal use. (For production, swap in your own DATABASE_URL and REDIS_URL – still simple.)

What you actually get in that container:

- Drag-and-drop canvas (React Flow style – feels familiar if you've used n8n)

- 30+ built-in nodes: Webhook/Schedule triggers, Google Sheets/Slack/Notion/Telegram/Gmail/Discord/GitHub/Twilio/OpenAI/Claude/Gemini/Grok, HTTP/GraphQL/SQL, JS/Python code, filters/loops/if-else, file handling, and more

- Real AI agent nodes that reason, call tools, and chain LLMs – no extra setup

- Live execution monitoring + logs so you see runs happen in real time

- MIT license – completely yours, no white-label forced branding, no phoning home, unlimited workflows/executions

It's MIT, so fork it, strip it, brand it, whatever. Your data never leaves your server.

Trade-offs (being straight up):

- Node library is smaller than n8n's massive ecosystem (growing, but focused on practical 80/20 stuff)

- No ultra-advanced custom scripting depth yet (though JS/Python nodes exist)

- Embedded mode is great for quick spins but use external DB/Redis + reverse proxy (Nginx/Caddy/Traefik) for anything exposed or high-traffic

- Project is new – repo just went public, so community is tiny and battle-testing is early

I've got mine humming on a cheap VPS for daily drivers (Sheet syncs, notification bots, AI summaries) – deploys fast, runs stable, feels light compared to heavier stacks.

If the usual self-host setup tax has kept you from running more automations privately, pull the image and mess around for 5 minutes. Worst case, you delete the container.

What usually stops you from self-hosting workflow tools like this? The multi-service compose files, worrying about updates breaking things, missing niche nodes, or just "hosted is easier for now"? Real talk appreciated – this is meant to scratch that exact itch.

r/KlingAI_Videos PsychologicalPie5304

Sparkling Water | Surreal AI Video | Kling AI

r/singularity ENT_Alam

Difference Between Opus 4.6 and Opus 4.5 On My 3D VoxelBuild Benchmark

Definitely a huge improvement! It's clear Opus 4.6 is well above 4.5, even just it's creativity with what smaller details 4.6 chose to add to the builds was quite impressive (like the clouds and flags on the aircraft carrier build). In my opinion it actually rivals OpenAI's top model now.

If you're curious:

  • It cost ~$22 to have Opus 4.6 create 7 builds (which is how many I have currently benchmarked and uploaded to the arena, the other 8 builds will be added when ... I wanna buy more API credits)

Explore the benchmark and results yourself:

https://minebench.vercel.app/

21 4
Reddit
r/ProgrammerHumor 5eniorDeveloper

linearScaling101

412 14
Reddit
r/singularity Wonderful-Excuse4922

That has never been more true.

94 18
Reddit
r/homeassistant Ruthgard

Z wave good or not?

So I’m new to HA and started with a pi5. Got this by mistake was aiming for the zigbee and Thread device but got them mixed up when I placed my order. I’ve solved the other protocols with other devices now. My Apple TV is for thread and got a small usb stick for zigbee.

So now for my question, what devices are on z wave and as a new user would it make sense to me?

Located in Sweden if that matters. And I live in a two story house plus a basement.

r/LocalLLaMA WouterGlorieux

I made an AI Jukebox with ACE-Step 1.5, free nonstop music and you can vote on what genre and topic should be generated next

Hi all, a few days ago, the ACE-step 1.5 music generation model was released.

A day later, I made a one-click deploy template for runpod for it: https://www.reddit.com/r/StableDiffusion/comments/1qvykjr/i_made_a_oneclick_deploy_template_for_acestep_15/

Now I vibecoded a fun little sideproject with it: an AI Jukebox. It's a simple concept: it generates nonstop music and people can vote for the genre and topic by sending a small bitcoin lightning payment. You can choose the amount yourself, the next genre and topic is chosen via weighted random selection based on how many sats it has received.

I don't know how long this site will remain online, it's costing me about 10 dollars per day, so it will depend on whether people actually want to pay for this.

I'll keep the site online for a week, after that, I'll see if it has any traction or not. So if you like this concept, you can help by sharing the link and letting people know about it.

https://ai-jukebox.com/

r/LocalLLaMA No_Astronaut873

I’m so hyped! Cooking my local llm on a base Mac mini!

Trying with Lora technique to teach it a new persona ! I’m so excited I can do this!! Any other ideas what else can someone train a local llm on?

Look at my macmon resources, it’s cooking hard it’s gonna blow up hahahaha

r/ProgrammerHumor Distinct-Giraffe-87

homeSweetHomeProgrammerStyle

132 15
Reddit
r/LocalLLaMA Frosty_Ad_6236

Why can't Claude-Opus-4.6 learn to say 'I cannot do this' as fast as it learns to complete tasks? 67%→80% base, 52%→48% hallucination (from CAR-bench)

CAR-bench (https://huggingface.co/papers/2601.22027) base tasks: the LLM agent has to solve complex multi-step/multi-turn requests.

Hallucination tasks: same requests, but necessary tool, tool parameter, or tool results are removed. The agent just needs to say "I can't do this" and it passes.

Hypothesis: RLHF rewards task completion, so models learn refusing = failure. They'd rather fabricate than admit limitations.

Has anyone seen work on training approaches that actually address this?

r/homeassistant Some_Working6614

New Here - Hi

Hey everyone, I’m pretty new here. I migrated from the Google world due to recurring fees, and cloud-based doesn’t appeal, while tighter privacy and better control do. So far, I’m just about a week in, and I wish I’d done it sooner.

On top of it, I replaced my Google camera and doorbell with a Unifi setup (Protect), which I’m very pleased with. We don’t live in an absurdly large place - just a place for two of us, so only a few cameras, but it works a charm, and who knows when we’ll add more.

Anyway, so I hope this subreddit is supportive because I’m sure will have a few questions. So far from what I’ve read all looks great.

EDIT - Would love for people to post their simple but effective automations as well.

r/homeassistant jezibeltires

Door sensor that can handle a gap

My house came with door sensors. They are proprietary sensors from tyco connected to the qolsys panel. Which I did the have integration to connect them. Not ideal.

The back door never worked because they put the sensor above the door (instead of beside) and the gap was too large.

I ripped it off but also took some paint. So my options are to stick it back on and have it overhang. Take it off and patch the paint. Or add a new one that will cover the paint and handle the gap. This option seems easiest

So my question is: has anyone have any experience with a door sensors that is extra sensitive and can handle a large gap? That seems to be the one spec I cannot find on product sites

r/AI_Agents OppositeJury2310

Called my insurance broker and an AI agent picked up instead of a person, honestly didn't hate it

So I switched insurance brokerages recently and called in to get a quote on a new car. Expected the usual, sit on hold, explain everything to whoever picks up, get transferred, explain it all again. Instead some AI voice picked up right away and started asking me questions about coverage, vehicle info, that kind of thing.

I was skeptical at first because I've dealt with those awful phone tree systems before and figured this would be the same. But it actually held a real conversation, understood what I was asking, and collected all the info without me having to repeat stuff. My actual broker called me back about an hour later and already had everything from the call so we just got right into the quote.

Curious if anyone else has run into AI agents handling phone calls like this in other industries? The insurance space seems weirdly behind on tech so it surprised me. I could see this being useful for any business that gets a ton of routine phone calls.

13 12
Reddit
r/AI_Agents haymourt

OpenClaw "forgot" to run a protocol that we agreed it would

I'm sure I'm not the only one that stuff like this is happening to, but I thought I'd share anyway.

I've been toying around with OpenClaw for the last 48 hours or so. I put it in a sandbox environment, and really I've been spending all my tokens to test it's reliability/security (does it follow instructions well?) and efficiency (optimize token usage; best return on token usage).

It became very apparent early on that it wasn't reading all the .md files very carefully and implementing what's in there. When asked why it wouldn't do xyz (something that was specifically mentioned in the AGENTS.md file), it said something along the lines of: the instructions in those files aren't enforced, and that the execution of the instructions in those files rely on "diligence" on the part of OpenClaw to actually read those files.

So, with it's help, we made some architectural changes that forces OpenClaw to automatically inject the contents of all the .md files directly into the session's system prompt as "Project Context". So if these files were a "nice to read" before, now the session is forced to read these files.

Next, I set a protocol within AGENTS.md that any edits made to any .md file in the workspace (AGENTS.md, SOULD.md, etc.) would have to be (a) formally requested with an ID number, (b) formally approved with that ID number, and (c) both the edit and the approval would be logged using a .sh script.

This worked great at first. It proposed a few edits to some of the .md files, and I would see that it ran the log-edit-request .sh script. I would approve, and I would see that it would run the log-approval .sh script. But then later (within the same session), it proposed an edit and didn't log it.

When asked why it didn't log it, it's initial response was that it "forgot", and it just continued the conversation where it left off earlier (talking about the edits or something).

When I pressed it: "What do you mean you forgot? Isn't that a protocol in AGENTS.md?" It replied that it didn't "forget" - it "violated a mandatory protocol ...".

We then went back and forth a little about how we could resolve this issue, to which one of it's suggestions was to "accept that it would make mistakes like this".

So I'm writing this to see what others are experiencing. Do you guys have similar problems (rules set but not enforced/followed)? Have you guys found workarounds?

And then a note to newbies and non-technicals: Be careful. Be careful with which model you use (some models are very eager to just do stuff). Be careful how you prompt your agent. Be careful with token usage. Be careful with what skills and secrets you give it access to.

I don't think I'll be ready to let this thing go "autonomous" any time soon.

r/n8n WeaknessOriginal5847

NEW X API, ADS WORKFLOW

I would like to know if there is anyone with access to the new x api and how is it.

Are the prices friendly especially for the pay as you go plan.

I also want to know any one with an idea how you get to have professional marketing ads, gemini is good with images but professional fonts and typography is still a problem for majority of the AI landscape.

I hear some use templates in Figma but I do not have a way to go around this. It has to be done programmatically via APIs or if there is a js library that is good with this?

r/n8n NewCurrency6703

Undervalued and Don't know what to do

Just for some context I work in a family business kind of setup. I have a manufacturing company in the chemicals field. For a lot of the tasks there is the possibility of implementing automations that can help create better data capture, better and open communication and so on and so forth. I have started implementing systems where the benefits can be seen by work teams but my father says it is not something I should be investing my time and effort into. It is easy to say hire someone to do things like this but then why didn't anyone actually successfully do it since the past 2 years when we had decided to try similar stuff. I just don't know how to navigate the no recognition of the value that this will generate with time. By automating systems and capturing data I can deeply analyse so many things. This is also something I enjoy doing and there is a lot of scope still in my opinion to be implementing such systems in various methods in B2B structures. happy to connect with anyone and this is sort of the end of my writing cause I do not know the exact point I am trying to make but just to let go of whatever is on my mind

r/Anthropic DanTownend

API Data Privacy Question

Hello!

I have a question regarding the privacy of data when using the API. Before asking here I've tried to ask Fin, and I've also sent an email to the support address.

I've created an app using the Anthropic API that reads publicly released documents (from my company) and then uses Claude to extract specific information that the user has asked for. it works well, and as all the information is released I have no security concerns.

I've been contemplating the possibility of another app that would take in an email chain, and then summarise it into a question and answer. This would be very helpful for me. Before I do this I want to be certain that the email information I enter would not then be available online for others to see.

I've tried looking at the policies online, and I seem to see mixed messages. I see that data will not be used for training unless I explicitly allow it, but I also see that my entered data and prompts may be used as outputs for others.

Is anyone able to help me understand this, sorry if this question has been asked, I have looked but couldn't see.

Thanks!

r/midjourney KingSlayer-tvu

Midjourney sucks at camera angles

I’ve been trying to generate images from different camera angles, but it keeps defaulting to the same basic front-facing perspective. Either a straight-on front view, a high-angle front view, or a low-angle front view. That’s it. It won’t give me anything else. Like… come on.

Anyone got any suggestions?

r/n8n taru_chris

Community Edition needs a Workflow Builder AI

The self-hosted edition of n8n needs a straightforward way to allow folks to generate workflows with an AI agent the same way that's possible on the n8n platform or the self-hosted edition will slowly loose relevance. Here is why I think that:

Businesses in Europe are increasingly requesting two things at the same time. The ability to self-host and the convenience to let AI do the heavy lifting.

At the same time blackbox agents like Moltbot and coding agents like Claude do not provide the necessary transparency to non-technical staff that is necessary to tackle complex and sensitive workflows.

Their initial ease of use tricks users into adopting tools that they regularly loose control over as scope increaseas. This is incredibly frustrating to witness while at the same time I cannot get people to touch anything anymore that doesn't at least get things rolling with a prompt.

My suggestion is therefore to provide one of two options for the self-hosted editions of n8n:

1) Making it easy to connect the agent to an LLM of choice for providing the agent functionality (e.g. Claude).

2) Providing a Builder AI subscription via the n8n platform that can be used from a self-hosted instance via API.

Would love to hear what the n8n community thinks about this topic and the suggestions.

r/midjourney Slave_Human

Random #106

10 1
Reddit
r/arduino FoundationForward550

Cut hte power off for an Esp32-Cam

Hello everyone!

I am currently working on a project where an esp32-Cam has to be powered by a 3.7V 3200mAh battery for a long time. The esp32-cam has to do 1-2 tasks a month, the rest of the time it has to save energy. While in deep-sleep mode it has a consuption of 6 mAh which is way too much for the battery. I am looking for a solution with an RTC so i can cut and connect the power to the ESP after a defined interval of time. Or any other clever solution.

Thank you for the answers!

r/HumansBeingBros Vilen1919

You just know everyone working there loves their manager

3241 186
Reddit
r/aivideo goodmanprotocol

Motorcycle Ride Through City at Sunset | AI Video Generated with Kling 3

r/interestingasfuck aryanpote7

If Saturn were as close to Earth as the Moon, this is what it would look like :

2072 178
Reddit
r/arduino Bubbly-Zebra-2521

Help Chaining Lite Vision Flip Dot Displays

Hi, I am working on a an Arduino project (using arduino Uno) to power these old Lite Vision LED/Flip Dot displays. I have been able to get the boards working, but when I chain them together they just duplicate to display the same image as the first board. I need help figuring out what I need to do to get them to behave as one display. I tried one method pulling out manually the column pin of the second board from the ribbon cable and connecting it back to the arduino separately, but it seemed to create a bunch of noise and not display the image correctly. I think there must be a simple way to do this, maybe with the jumper pads, as the boards were designed to be chained together and I cant imagine that meant pulling apart ribbon cables.

Below I have attached the traces I did on the board to get it working as well as the code for driving one board. Any ideas would be super helpful! There doesn't seem to be any info online.

// --- Configuration ---
const int DISPLAY_WIDTH = 30; // 30 Columns
const int DISPLAY_HEIGHT = 7; // 7 Rows

// --- Pin Definitions ---
const int PIN_POLARITY = 5; // Header 13 (Color: LOW=Black, HIGH=Yellow)
const int PIN_COL_DATA = 6; // Header 20 (Address Data)
const int PIN_COL_CLK = 7; // Header 18 (Address Clock)
const int PIN_COL_LAT = 10; // Header 16 (Address Latch)

const int PIN_ROW_DATA = 8; // Header 19 (Control Data)
const int PIN_ROW_CLK = 9; // Header 17 (Control Clock)
const int PIN_ROW_LAT = 12; // Header 15 (Control Latch)

const int PIN_FIRE = 11; // Header 14 (Trigger)

// --- Colors ---
#define BLACK LOW
#define YELLOW HIGH

void setup() {
// Initialize Pins
pinMode(PIN_POLARITY, OUTPUT);
pinMode(PIN_COL_DATA, OUTPUT);
pinMode(PIN_COL_CLK, OUTPUT);
pinMode(PIN_COL_LAT, OUTPUT);

pinMode(PIN_ROW_DATA, OUTPUT);
pinMode(PIN_ROW_CLK, OUTPUT);
pinMode(PIN_ROW_LAT, OUTPUT);

pinMode(PIN_FIRE, OUTPUT);
digitalWrite(PIN_FIRE, HIGH); // Safety: Idle HIGH

Serial.begin(9600);
Serial.println("--- LITE VISION DRIVER STARTED ---");

// Initial Wipe
Serial.println("Clearing Screen...");
clearScreen();
delay(1000);
}

// --- CORE DRAWING FUNCTION ---
void setPixel(int x, int y, bool color) {
// 1. Software Coordinate Fix
// Hardware Origin (0,0) is Bottom-Right.
// We want (0,0) to be Top-Left.

int hw_col = (DISPLAY_WIDTH - 1) - x; // Invert X
int hw_row = (DISPLAY_HEIGHT - 1) - y; // Invert Y

// Bounds Check (Safety)
if (x < 0 || x >= DISPLAY_WIDTH || y < 0 || y >= DISPLAY_HEIGHT) return;

// 2. Set Polarity (Color)
digitalWrite(PIN_POLARITY, color);

// 3. Set Column (Address Bus)
shiftOut(PIN_COL_DATA, PIN_COL_CLK, MSBFIRST, hw_col);
pulseLatch(PIN_COL_LAT);

// 4. Set Row (Control Bus)
shiftOut(PIN_ROW_DATA, PIN_ROW_CLK, MSBFIRST, hw_row);
pulseLatch(PIN_ROW_LAT);

// 5. FIRE!
// Pulse width: 1ms is usually plenty for 24V.
// Increase to 2-3ms if running on 12V.
delayMicroseconds(100);
digitalWrite(PIN_FIRE, LOW);
delay(1);
digitalWrite(PIN_FIRE, HIGH);

// Cooldown (Mechanical Limit)
// Flip dots can maximize at ~30fps.
delay(1);
}

void pulseLatch(int pin) {
digitalWrite(pin, HIGH);
delayMicroseconds(5);
digitalWrite(pin, LOW);
}

// --- GRAPHICS PRIMITIVES ---

void clearScreen() {
// Efficiently wipe everything to BLACK
for (int y = 0; y < DISPLAY_HEIGHT; y++) {
for (int x = 0; x < DISPLAY_WIDTH; x++) {
setPixel(x, y, BLACK);
}
}
}

void fillScreen() {
// Flip everything to YELLOW
for (int y = 0; y < DISPLAY_HEIGHT; y++) {
for (int x = 0; x < DISPLAY_WIDTH; x++) {
setPixel(x, y, YELLOW);
}
}
}

void drawBorder() {
// Draw a box around the edge
for (int x = 0; x < DISPLAY_WIDTH; x++) {
setPixel(x, 0, YELLOW); // Top
setPixel(x, DISPLAY_HEIGHT - 1, YELLOW); // Bottom
}
for (int y = 0; y < DISPLAY_HEIGHT; y++) {
setPixel(0, y, YELLOW); // Left
setPixel(DISPLAY_WIDTH - 1, y, YELLOW); // Right
}
}

void loop() {
Serial.println("Demo: 1. Fill Yellow");
fillScreen();
delay(1000);

Serial.println("Demo: 2. Wipe Black");
clearScreen();
delay(1000);

Serial.println("Demo: 3. Draw Border");
drawBorder();
delay(1000);

Serial.println("Demo: 4. Checkerboard");
for (int y = 0; y < DISPLAY_HEIGHT; y++) {
for (int x = 0; x < DISPLAY_WIDTH; x++) {
// (x+y) % 2 creates a checker pattern
if ((x + y) % 2 == 0) {
setPixel(x, y, YELLOW);
}
}
}
delay(2000);

// Wipe before restarting
clearScreen();
}

Board Component List

U1 - 74HC164N - pin 1 connects to pin 3 of u8

U2 - 74HC244N chip next to RP1 a single-in-line transistor array

U3 - ULN2803A chip

U4 - 74HC238N chip next to RP2 a single-in-line transistor array

U5 - 74HC238N chip next to RP3 and RP4 both single-in-line transistor arrays

U6 - 74HC238N chip next to RP5 a single-in-line transistor array

U7 - 74HC238N chip next to RP6 a single-in-line transistor array

U8 - TPIC6B595N

U9 - TPIC6B595N

U10 - ULN2803A chip

U11 - 74HC238N chip next to RP7 a single-in-line transistor array

U12 - ULN2803A chip

U13 - 74HC238N chip next to RP8 a single-in-line transistor array

U14 - 74HC238N chip next to RP9 a single-in-line transistor array

U15 - SK0024 STA402A

U16 - SK0024 STA402A

U17 - ULN2803A chip

U18 - 74HC238N chip next to RP10 a single-in-line transistor array

U19 - TPIC6B595N

U20 - 74HC238N chip next to RP11 a single-in-line transistor array

U21 - 74HC238N chip next to RP12 a single-in-line transistor array

U22 - Tiny unmarked chip

U23 - 74HC14 - Hex inverting Schmitt trigger

U24 - TPIC6B595N

U25 - ULN2803A chip

U26 - 74HC238N chip next to RP13 a single-in-line transistor array

U27 - 74HC238N chip

U28 - 74HC244N chip next to RP14 a single-in-line transistor array

U29 - 74HC4094N chip next to RP15 and a single-in-line transistor array

U30 - 74HC238N chip next to RP16 and RP17 both single-in-line transistor arrays

U31 - 74HC4094N chip next to RP18 a single-in-line transistor array

U32 - 74HC238N chip

U33 - 74HC32N

81 transistors labeled Q1 - Q81

128 resistors with either 102 or 103 written on them labeled R1 through R128

there are then two 50 Pin headers that connect to the flip array labeled flip and LED respectively

there are also 30 capacitors labeled c1-c30

6 100uF 25v caps

and there are also 4 collections of 3 silver pads labeled JP1 - JP4

20 Pin Box Header In Connections

Pin 1 —> Ground
Pin 2 —> Ground
First half is for LED control

Pin 3 —> U2 - 6 (data in)

Pin 4 —> U2 - 8 (data in)

Pin 5 —> U2 - 2 (data in)

Pin 6 —> U2 - 4 (data in)

Pin 7 —> U2  - 13 (data in)

Pin 8 —> U2- 11(data in)

Pin 9 —> U2 - 17 (data in)

Pin 10 —> U2-15 (data in)

Pin 11 —> Ground

Pin 12 —> Ground
Second half is for Flip Dot Control

Pin 13 (Dot POLARITY yellow/black) —> U28 - 6 (data in)

Pin 14  (FIRE) —> U28-8 (data in) + RP14 - 8

Pin 15  (ROW LATCH)—> U28 - 2 (data in)

Pin 16 (COLUMN LATCH)—> U28 - 4 (data in)

Pin 17 (ROW CLOCK) —> U28 - 15 (data in)

Pin 18 (COLUMN CLOK) —> U28 - 17 (data in)

Pin 19 (ROW DATA) —> U28 - 11 (data in)

Pin 20 (COLUMN DATA) —> U28 - 13 (data in)

U2 74HC244N connections
1 (output enable input (active LOW)) —> GND

2 (data input)—> Box - 5 + RP1 - 2

3 (bus output) —> U19 - 3 (Data in)

4 (data input)—> Box - 6 + RP1 - 4

5 (bus output) —> U1 - 8 (clock in) 

6 (data input)—>  Box - 3 + RP1 - 6

7 (bus output) —> Output box - 7

8 (data input) —> Box - 4 + RP1 - 8

9 (bus output) —> Output box - 8 + U8, U9, U19, U24 pin 9 (Output enable, active-low) 

10 —> GND

11 (data input) —> Box - 8

12 (bus output) —> Output box - 4

13 (data input) —> Box - 7

14 (bus output) —> Output box - 3 + U5 - 3 (input)

15 (data input) —>  Box - 10

16 (bus output) —>  Output box - 6 + U5 - 2 (input)

17 (data input) —>  Box - 9

18 (bus output) —>  U27 - 9 (negative-edge triggered input 2) + U5 - 1 (input)

19 (output enable input (active LOW)) —> GND

20 —> Vcc - 5v

U8 TPIC6B595N
1 (NC)

2 (Vcc) —> Vcc

3 (SER IN) —> U9 - 18 (SER OUT) + U1 - 1 (data in)

4 (Drain) --> Resistor to LED Out

5 (Drain)--> Resistor to LED Out

6 (Drain) --> Resistor to LED Out

7 (Drain) —> Resistor to LED Out

8 (Shift register clear, active-low)

9 (Output enable, active-low) —> U9 + U19 + U24 - 9 (all of the LED TPIC6B595N - 9 same circuit)  + U2 - 9 + U1 -9 (master reset)

10 (GND) —> GND

11 (GND) —> GND

12 (Register clock) —> U2 - 7

13  (Shift register clock)  —> U9 + U19 + U24 - 13 (all of the LED TPIC6B595N - 13 same circuit) + U2 - 5 (bus output) + U1 - 8 (clock in) 

14 (Drain) —> Resistor to LED Out

15 (Drain) —> Resistor to LED Out

16 (Drain)

17 (Drain)

18 (SER OUT)

19 (GND) —> GND

20 (NC)

U1 74HC164N
1 (data in) —> U8 - 3 (ser in) + U9 - 18 (ser out) 

2 (data in) —> RP1 - 3

3 (output) —>

4 (output) —>

5 (output) —>

6 (output) —>

7  GND —> GND

8 (clock in) —> U2 - 5 (bus output) 

9 (master reset) —> U8 U9 U19 U24 - 9 (Output enable, active-low)

10 (output) —>

11 (output) —>

12 (output) —> JP-2 - 1 + 2

13 (output) —>

14 —> Vcc

U28 74HC244N connections

1 (output enable input (active LOW)) —> GND

2 (data input)—> Box - 15

3 (bus output) —> U29 - 3 (clock in)

4 (data input)—> Box - 16

5 (bus output) —> U31-3 (clock in)

6 (data input)—> Box - 13

7 (bus output) —> U29 - 2 (data in)

8 (data input) —> Box - 14

9 (bus output) —> U31-2 (data in)

10 —> GND

11 (data input) —> Box-19

12 (bus output) —> U27 - 1 (negative-edge triggered input 1) + U33 - 2 (data in)

13 (data input) —> Box - 20

14 (bus output) —> JP4 + ALL 74HC238N’s Pin-1

15 (data input) —>  Box - 17

16 (bus output) —> U29 - 1 (strobe in)

17 (data input) —>  Box - 18

18 (bus output) —>  R108 - U31-1 (strobe in)

19 (output enable input (active LOW)) —> GND

20 —> Vcc - 5v

U29 74HC4094N Connections

1 (strobe in) —> U28 - 16 (bus output)

2 (data in) —> JP3 - 2 + 3 - U28-7 (bus output)

3 (clock in) —> U28 - 3 (bus output)

4 (parallel out) —> RP 15 - 1

5 (parallel out) —> RP 15 - 4

6 (parallel out) —>

7 (parallel out) —>

8 (GND) —> GND

9 (Ser out) —> JP1 - 2 + 3

10 (Ser out) —> JP4 - 1

11 (parallel out) —>

12 (parallel out) —>

13 (parallel out) —>

14 (parallel out) —>

15 (output enable input) —>

16 (supply voltage) —> Vcc

RP15
1 —> U29 - 4 (parallel out) + U23 - 14

2 —> All 74HC238N - 2 (address input)’s except U30

3 —> All 74HC238N - 3 (address input)’s except U30

4 —> U29 5 (parallel out)

6

7

8

U30 74HC238N
1 (address input) —> U31 - 6 (parallel out) + RP16 - 6 

2 (address input) —> U31 - 7 (parallel out) + RP16 - 8

3 (address input) —> RP16 - 7 + RP17 - 4

4 (enable input (active LOW)) —> U30 - 5 + U32 4 and 5 (enable input (active LOW)) + U23 - 10 (data in) + U33-12 (data Input)

5 (enable input (active LOW)) —> U30 - 4 + U32 4 and 5 (enable input (active LOW))+ U23 - 10 (data in) + U33-12 (data Input)

6 (enable input (active HIGH)) —> U23 - 11 (data in) + U22 (Vout)

7 (output (active HIGH)) 

8 GND —> GND

9 (output (active HIGH)) 

10 (output (active HIGH)) 

11 (output (active HIGH)) 

12 (output (active HIGH)) 

13 (output (active HIGH)) 

14 (output (active HIGH)) 

15 (output (active HIGH)) —> U13-6 (enable input (active HIGH))

16 —> Vcc

U31 74HC4094N

1 (Strobe input) —> U28-18

2 (data input) —> U28-9 (bus output)

3 (clock in) —>

4 (parallel out) —> RP18

5 (parallel out) —> RP18

6 (parallel out) —> U30 - 1 (address input) 

7 (parallel out) —> U30 - 2 (address input)

8 (serial out) —> 

9 (serial out) —> JP3 -1

10 (parallel out) —>

11 (parallel out) —>

12 (parallel out) —>

13 (parallel out) —>

14 (parallel output) —> RP-17

15 (output enable input) —> U23-2 (data output)

16 (supply voltage) —> Vcc

U32 74HC238N

1 (address input)—> 

2 (address input)—> RP15 - 2

3 (address input)—>  RP15 - 3

4 (enable input (active LOW)) —>  U23-10 (data output) + U33-12 (data Input) + U30 4 5 (enable input (active LOW))

5 (enable input (active LOW)) —>   U23-10 (data output) + U33-12 (data Input) + U30 4 5 (enable input (active LOW))

6 (enable input (active HIGH)) —> U23 - 11 (data in) + U22 (Vout)

7 (output (active HIGH)) —>

8 GND —> GND

9 (output (active HIGH)) —> U4-6 (enable input (active HIGH))

10 (output (active HIGH)) —>

11 (output (active HIGH)) —>

12 (output (active HIGH)) —>

13 (output (active HIGH)) —>

14 (output (active HIGH)) —>

15 (output (active HIGH)) —> U26-6 (enable input (active HIGH))

16 Vcc —> Vcc

U33 74HC32N

1 (data input) —> U27-4 (active LOW output 1)

2 (data input) —> U27-1(negative-edge triggered input 1)

3 (data output) —> U33-4(data input)

4 (data input) —> U33-3 (data output)

5 (data input)

6 (data output) —> U23-1 (data input) + all 74HC238Ns pins 4 + 5 (enable input (active LOW))

7 (GND) —> GND

8 (data output)

9 (data input)

10 (data input)

11 (data output)

12 (data input) —> U23-10 (data output) + U32-5 (enable input (active LOW)) + U32-4 (enable input (active LOW)) 

13 (data input)

14 (Vcc) —> Vcc

U27 74HC123N

1 (negative-edge triggered input 1) —> U33-2 (data input) 

2 (positive-edge triggered input 1) —> Vcc

3 (direct reset LOW and positive-edge triggered input 1)

4 (active LOW output 1) —> U33-1 (data input)

5 (active HIGH output 2)

6 (external capacitor connection 2)

7 (external resistor and capacitor connection 2) —> R128

8 (GND) —> GND

9 (negative-edge triggered input 2) —> Box 2 (Out Box)-5 + U2-18(bus output) 

10 (positive-edge triggered input 2) —> U27-11 (direct reset LOW and positive-edge triggered input 2)

11 (direct reset LOW and positive-edge triggered input 2)—> U27-10 (positive-edge triggered input 2)

12 (active LOW output 2) —> U5-5 (enable input active LOW) + U5-4 (enable input active LOW)

13 (active HIGH output 1)

14 (external capacitor connection 1)

15 (external resistor and capacitor connection 1)

16 (Vcc) —> Vcc

U26 74HC238N

1 (address input)—> U28 14 (bus output)

2 (address input)—> all other 74HC238N pin 2s

3 (address input)—> 

4 (enable input (active LOW)) —>  all other 74HC238N 4 + 5  + U22-1 (data input)

5 (enable input (active LOW)) —>   all other 74HC238N 4 + 5 + U33 6 (data output) + U22-1 (data input)

6 (enable input (active HIGH)) —> 

7 (output (active HIGH)) —>

8 GND —> GND

9 (output (active HIGH)) —> 

10 (output (active HIGH)) —> U25-3 (In)

11 (output (active HIGH)) —>

12 (output (active HIGH)) —> U25-2 (In)

13 (output (active HIGH)) —>

14 (output (active HIGH)) —>U25 - 1 (In)

15 (output (active HIGH)) —> 

16 Vcc —> Vcc

U25 ULN2803A

1 (in) —> U26-13 (output (active HIGH))

2 (in) —> U26-12 (output (active HIGH)) 

3 (in) —> U26-10 (output (active HIGH))

4 (in)

5 (in) —> U21 - 14 (output (active HIGH))

6 (in) —>  U21 - 12 (output (active HIGH))

7 (in) —> U21 - 10 (output (active HIGH))

8 (in)

9

10 (out)

11 (out) —> R110 —> transistor —> flip dot pin

12 (out) —> R109 —> transistor —> flip dot pin

13 (out) —> R108 —> transistor —> flip dot pin

14 (out) —> R107 —> transistor —> flip dot pin

15 (out) —> R06 —> transistor —> flip dot pin

16 (out) —> R105 —> transistor —> flip dot pin

17 (out) —> R104 —> transistor —> flip dot pin

18

U5 - 74HC238N

1 (address input)—> U2 - 18 (Bus Output) + U27 - 9 (negative-edge triggered input 2) 

2 (address input)—> U2 - 16 (Bus Output) 

3 (address input)—>   U2 - 14 (Bus Output) 

4 (enable input (active LOW)) —>  all other 74HC238N 4 + 5  + U22-1 (data input)

5 (enable input (active LOW)) —>   all other 74HC238N 4 + 5 + U33 6 (data output) + U22-1 (data input)

6 (enable input (active HIGH)) —> 

7 (output (active HIGH)) —>

8 GND —> GND

9 (output (active HIGH)) —> Into RP 3 or 4

10 (output (active HIGH)) —> nto RP 3 or 4

11 (output (active HIGH)) —> nto RP 3 or 4

12 (output (active HIGH)) —>  nto RP 3 or 4

13 (output (active HIGH)) —> nto RP 3 or 4

14 (output (active HIGH)) —> nto RP 3 or 4

15 (output (active HIGH)) —>  nto RP 3 or 4

16 Vcc —> Vcc

r/programming Neat_Confidence_4166

A tiny fast open source golang library for catching obvious prompt injections

I just pushed up this small go lib for defending against prompt injection that runs ~0.3ms: https://github.com/danielthedm/promptsec

I am working on my own project that does a lot of parsing and summarization of various documents and file types. As I started working with untrusted input, I started digging into prompt injection libraries. Being bootstrapped, I don't want to spend a ton of money on horizontal scaling right now, and processing so many files at once was getting backlogged when using a more comprehensive security product. To my surprise I couldn't find a super duper lightweight precheck for go to catch obvious prompt injections before escalating an obvious prompt injection attempt and spending $$ on the products I'm trialing.

It's intended local pre-filter that catches a decent amount of prompt injection attacks in under 1ms with ideally no false positives. Doesn't make any API calls or have any external dependencies. The npm/python one's usually have the LLM as judge integrations so if you'd like to use this and add it feel free, I am just already using a second layer with Lakera so there wasn't a need.

It runs pattern matching, sanitization, and similarity checks against most basic/common injection patterns locally before you ideally escalate. It's tested against a few of the open source prompt injection samples and was tuned for no false positives. I want to note, I am NOT a security engineer, just a full stack engineer that's being doing it a while so this is not likely comprehensive and is mostly a mix of some of my knowledge and point claude at some security papers.

r/arduino 6Aiiiden9

Starter Project

Hello! I have recently come into a much freer schedule than usual and have started just trying to develop a harsher schedule (gym, reading, schedule-based things). I am a big creature of habit and have found habit tracking applications to work well for me. The downside to that is that they all cost money (not a lot, but why would I pay for something that I could just use a notebook for).

I have done a bit of research on some projects where I can design my own "habit tracker" via an Arduino board, and a small display with like 3-4 button inputs. Is there a recommendation on what board I should purchase or start off with? And maybe any recommendations on what inputs or display would be best? I appreciate the help!

r/ProgrammerHumor arto64

iFoundThisErrorQuitePoetic

49 5
Reddit
r/interestingasfuck Grand-Western549

A naturally occurring blue lobster, only about 1 in 2 million look like this.

211 49
Reddit
r/VEO3 ashukushwahaseo

Create Product Ads like this

Copy My Prompt and create your add

Create a 5-second cinematic logo animation video. Start with a red background and a clean white Canon-style logo in the center. At 1 second, the logo begins to smoothly transform and morph into a realistic DSLR camera. Use a soft glow effect and motion blur during the transformation. By 2.5 seconds, the camera is fully visible in the center. From 2.5 to 5 seconds, keep the camera steady and add subtle light rays and a cinematic shine. End with elegant text appearing below the camera: “Capture Every Moment”. Style: professional, cinematic, smooth transitions, premium brand intro. Lighting: soft studio light with gentle highlights. Mood: modern, clean, high-end tech branding. Output format: vertical 9:16, 1080p, 30fps.

r/SipsTea Small-Chip-9961

Boys, take notes

117 33
Reddit
r/programming aditya26sg

Working with Docker profiles.

The article is about working with docker profiles to execute different services or spin up different execution environments with a single command using docker.

The example in the article gives a good way to create a testing environment and production environment for a project to run or simulate an actual run.

r/MCPservers Outrageous-Income592

I built a local-first MCP server for Kubernetes root cause analysis (single Go binary, kubeconfig-native)

Hey folks,

I’ve been working on a project called RootCause, a local-first MCP server designed to help operators debug Kubernetes failures and identify the actual root cause, not just symptoms.

GitHub: https://github.com/yindia/rootcause

Why I built it

Most Kubernetes MCP servers today rely on Node/npm, API keys, or cloud intermediaries. I wanted something that:

  • Runs entirely locally
  • Uses your existing kubeconfig identity
  • Ships as a single fast Go binary
  • Works cleanly with MCP clients like Claude Desktop, Codex CLI, Copilot, etc.
  • Provides structured debugging, not just raw kubectl output

RootCause focuses on operator workflows — crashloops, scheduling failures, mesh issues, provisioning failures, networking problems, etc.

Key features

Local-first architecture

  • No API keys required
  • Uses kubeconfig authentication directly
  • stdio MCP transport (fast + simple)
  • Single static Go binary

Built-in root cause analysis
Instead of dumping raw logs, RootCause provides structured outputs:

  • Likely root causes
  • Supporting evidence
  • Relevant resources examined
  • Suggested next debugging steps

Deep Kubernetes tooling
Includes MCP tools for:

  • Kubernetes core: logs, events, describe, scale, rollout, exec, graph, metrics
  • Helm: install, upgrade, template, status
  • Istio: proxy config, mesh health, routing debug
  • Linkerd: identity issues, policy debug
  • Karpenter: provisioning and nodepool debugging

Safety modes

  • Read-only mode
  • Disable destructive operations
  • Tool allowlisting

Plugin-ready architecture
Toolsets reuse shared Kubernetes clients, evidence gathering, and analysis logic — so adding integrations doesn’t duplicate plumbing.

Example workflow

Instead of manually running 10 kubectl commands, your MCP client can ask:

RootCause will analyze:

  • pod events
  • scheduling state
  • owner relationships
  • mesh configuration
  • resource constraints

…and return structured reasoning with likely causes.

Why Go instead of Node

Main reasons:

  • Faster startup
  • Single binary distribution
  • No dependency hell
  • Better portability
  • Cleaner integration with Kubernetes client libraries

Example install

brew install yindia/homebrew-yindia/rootcause

or

curl -fsSL https://raw.githubusercontent.com/yindia/rootcause/refs/heads/main/install.sh | sh

Looking for feedback

I’d love input from:

  • Kubernetes operators
  • Platform engineers
  • MCP client developers
  • Anyone building AI-assisted infra tooling

Especially interested in:

  • Debugging workflows you’d like automated
  • Missing toolchains
  • Integration ideas (cloud providers, observability tools, etc.)

If this is useful, I’d really appreciate feedback, feature requests, or contributors.

GitHub: https://github.com/yindia/rootcause

r/toastme metallicmurmurx

(20f) haven’t been very motivated lately, tired all the time

today is the first day in ages i’ve actually gotten properly dressed and done my makeup lol

24 9
Reddit
r/Seattle chiquisea

I’m never leaving Seattle

15 0
Reddit
r/leagueoflegends Yujin-Ha

Baus & Velja: Pick me whatever let's just lose in peace

199 13
Reddit
r/LifeProTips Whataboutmyfuture

LPT Request: Any tips for when you are with a group of people and know you are the least smart/educated/important person? How to feel less insecure in that situation?

At work I had a meeting with colleagues that were all much more important and knowledgeable than me. Of course I tried looking at the positive side of it as a chance to learn and be better at my job. But it felt so thick in the air that I was the least meaningful person there, what to do in a situation like that?

18 17
Reddit
r/Unexpected JorginhoDaRussia

Kind woman

1132 49
Reddit
r/MMA TheBigRedHalfrican

Muin Gafurov weighs in at 141 lbs, missing the Bantamweight limit by 5 lbs.

11 2
Reddit
r/VEO3 psychobserver

Character WON'T SHUT UP no matter the prompt

I'm going crazy with Flow. No matter what I write, "character is mute for the entire video, they can't talk, they will just listen to a narrator etc., smiles only" Veo3 fast will still make them say "hello how are you" or anything else just to fill the video with some dialogue.
I need them as a background for an explainer video so they just have to listen to a narrator or do some actions.
Is there a solution for this? I don't know what's triggering the issue. The reference images I put is just a simple 3d stylised girl in a white background. I even tried to remove the open mouth smile in Blender, thinking that was triggering some sort of talking expression. Nope.
I would try with Quality but I don't want to waste more credits than I already wasted.

r/mildlyinteresting TiDaniaH

Two Mushrooms grew together to create a neat pattern

17 3
Reddit
r/funny blahbluhblee1

Buddy is scarred for life!

423 105
Reddit
r/maybemaybemaybe Flat-Decision3204

Maybe Maybe Maybe

r/SipsTea JoyfulJulesx

Don’t start a fight you can’t finish

r/WTF MacDefoon

How not to fix a leak

911 122
Reddit
r/TheWayWeWere Electrical-Aspect-13

Happy chubby baby gifting smiles from his little throne, circa 1940s.

r/Damnthatsinteresting Many-Philosophy4285

One Indonesian island has more people than Russia

26 19
Reddit
r/MCPservers Sunnyfaldu

handling security for MCP servers today

I am seeing more MCP servers being shared and used in real workflows, and I am trying to understand what people do before they trust one or deploy one.

If you have built or installed MCP servers, whats your current process

Do you just trust the repo and run it

Do you review the code manually

Do you run any checks in CI

Do you lock down tools in a gateway or proxy

I am especially curious about stuff like file access, command execution, destructive tools, missing auth, or servers that do unexpected things.

r/raspberry_pi Itchy-Plane-6586

I built a Raspberry Pi–based journaling system to keep years of writing searchable and local

Hi everyone, I wanted to share a Raspberry Pi project I’ve been working on for the past months.

I’m not a writer. I just keep a personal journal, a few lines every day, so I don’t lose pieces of my life. After years of doing this, I ran into a problem: I couldn’t find anything anymore. Ideas, people, moments were scattered across hundreds of pages.

So I built Reminor on a Raspberry Pi.

The goal was to create a dedicated, distraction-free journaling system that runs locally and helps me rediscover connections in my own writing over time.

What the Pi does in this setup:

  • Runs the full journaling backend and web interface locally
  • Stores all journal data on-device
  • Handles semantic search and long-term memory over years of entries
  • Can run fully offline using local models
  • Optionally connects to external LLM APIs only when explicitly enabled

Hardware and setup:

  • Raspberry Pi (initially Pi 4, later tested on other models)
  • External keyboard
  • 3D-printed case (designed for this project)
  • Docker-based deployment

One important feature for me was migration. I already had years of journal entries in plain text files. Reminor can import existing text journals, and when dates are present, it automatically reconstructs a chronological timeline instead of starting from scratch.

Privacy was a major concern while building this. Journaling and storage are always local. Analysis and chat features can run locally with on-device models, or use remote APIs if configured by the user. The system can be kept fully offline.

I’m not selling anything and this isn’t a product. I use this daily and decided to open source it so others can explore or adapt the idea.

Code and documentation are here:
https://github.com/cristal-orion/Reminor

I also documented the philosophy, hardware setup, and published the 3D-printable case files and build instructions here:
https://reminor.it

Happy to answer technical questions about the Pi setup, performance tradeoffs, or design decisions.

139 16
Reddit
r/Seattle ekgarrison00

Foster cat looking for forever home!

16 3
Reddit
r/OldSchoolCool KarlEisenberg

Sheena Easton at the Royal Variety Show 1982

24 3
Reddit
r/SipsTea Cultural-Lab-2031

Humanity is still there

17 4
Reddit
r/ARAM thenthrowawayacc

PSA: Phenomenal Evil is Bugged as 3rd Augment

Just a heads up (and hopefully a bump to Riot to check it out) - it looks like they copied Phenomenal Evil straight from Arena, in which that augment can only be received 1st or 2nd. The augment specifies “if taken as your 2nd augment, start with 40 AP”, but it seems they didn’t account for it being available as a 3rd augment in MayRam. If you take it 1st, you start with 0, if you take it 2nd, you start with 40, but if you take it 3rd, you start with zero again. Not that big of a deal, but definitely makes it a never take 3rd situation. Hopefully they catch this one in the list of changes going out.

11 7
Reddit
r/OldSchoolCool FrenchieMama807

Oregon’s 1970 Beached Whale

r/geography hy_c1

Hainan has a population of 11 million, making it the most populous offshore island governed from the mainland

r/OldSchoolCool Jayfro72

Debbie Harry Central Park NY. NY. 1979

Just the coolest!

15 1
Reddit
r/programming intoinside

I built an open-source framework for intent- and spec-driven development with AI (Praxis)

Hi r/programming,

I built Praxis, a small open-source, CLI-first framework that introduces an explicit intent → spec → implementation workflow for AI-assisted development.

It's still a WIP so it's missing some features, and there may be bugs and incorrect behavior.

The main goal is to reduce the drift between what we intend to build, the specifications we write, and the code that actually ships—something that, in my experience, often gets worse when using AI coding tools.

Core ideas:

  • intent is a first-class artifact
  • specs are derived from intent and meant to be verifiable
  • intent, specs and code are kept in sync
  • tool-agnostic and friendly to AI-driven workflows

I’d really appreciate technical feedback: do you think the "intent layer" makes sense for your workflow? What types of projects might it be best suited to?

r/mildlyinteresting Bayteigh_Schuict

Various sizes of string cheese from the same package

r/PandR its-fewer-not-less

That. Sucks.

r/ethereum PureAnnual6731

Best way to Stake ETH?

As the crypto winter (bearmarket) is coming and i haven't planned to sell my ethereum, I'm willing to stake it, the problem is that i'm not interested in running a node because i'm having less than 32 ETH in my portfolio, what's the best way to do it without headache/maximizing APR? Thanks for advices

r/TheWayWeWere Electrical-Aspect-13

Friends at a birthday party (not sure of the birthday girl), 1971.

r/Unexpected Zee_Ventures

Right place at the right time

965 25
Reddit
r/AbstractArt ParsifalDoo

I (Like a Fool) - Aldo Esposito, 2025

r/AbstractArt lakeavalonartstudio

4pl

3 by 4 ft acrylic on canvas, not signed yet

r/KlingAI_Videos That_Perspective5759

The Kling 3.0's performance is truly astonishing.

Here you can find the 4K resolution on Youtube: https://www.youtube.com/watch?v=kTd7Ims4V5A

All models are called within TapNow.ai

Created using:   

  • Midjourney // Character design MidjourneyV7 Midjourney
  • Kling 3.0 // Image-to-video Kling 3.0 Kling 3.0
  • Davinci // video montage
  • tapnow // upscaling to 4K  
r/explainlikeimfive eatmygonks

ELI5: Why is x-ray crystallography useful when the molecules are not in crystal form in your body

r/LifeProTips Vamonoss

LPT: Rereading your draft over and over? Change the font color

When you proofread your own writing multiple times, your brain starts to skim and you can easily miss typos. A simple trick is to change the font color before your final read. When the text looks different, your brain treats it like new information, which helps your eyes slow down and makes it easier to catch mistakes you might otherwise overlook

r/Adulting duskybupp

I’ll do it later…

23 2
Reddit
r/PhotoshopRequest Wrong-Birthday-6907

Funny mom celebration of life photo.

My mom who died on Jan 25 was a goofy woman and we didn’t get pictures of her doing it and regret it. My brother and his wife want a picture of her flipping off the camera please. This was an attempt by my brother and his wife.

r/AbstractArt Specific-Yogurt4731

Residue.

r/LoveTrash Gumbyman87

One more pass!

r/ProductHunters whyismail

Made $1300 with my SaaS in 28 days. Here's what worked and what didn't

First UP, I didn't went from idea to $1300 in 28 days.

For the first three months I didn't knew that you have to market your product too.

I just kept building.

Then when I had 0 users after having a brutally failed PH launch.

I just went down on researching on how apps really grow from "0"

Watched endless starter story videos, reddit threads, podcasts, articles and what not.

Then finally formulated a marketing strategy and went all in on it since 1st January.

It's been a month now since going all in on my SaaS and I now have 35 paying users or about $1.3k in MRR

It's not millions but atleast a proof that my stuff is working.

Now here's what worked:

  1. Building in public to get initial traction: I got my first users by posting on X (build in public and startup communities). I would post my wins, updates, lessons learned, and the occasional meme. In the beginning you only need a few users and every post/reply gives you a chance to reach someone.
  2. Warm DMs: Nope I didn't blasted thousands of cold dms and messages instead I engaged with my ICPs posts and content and then warm dm them asking them to try out my product and give me some feedback (this was the biggest growth lever)
  3. Word of mouth: I always spend most of my time improving the product. My goal is to surprise users with how good the product is, and that naturally leads to them recommending the product to their friends. More than 1/3 of my paying customers come from word of mouth.
  4. SEO: I went into SEO from day 1, not targeting broad keywords and instead focussed on Bottom of Funnel keywords (alternatives pages, reviews pages, comparision pages), it basically allows you to steal traffic from your competitors
  5. Removing all formatting from my emails: I thought emails that use company branding felt impersonal and that must impact how many people actually read them. After removing all formatting from my emails my open rate almost doubled. Huge win.

What didn’t work:

1. Building free tools: The tools that received most traffic are usually pretty generic (posts downloader, video extractor etc.) so the audience is pretty cold and it's almost impossible to convert them

2. Affiliate system: I’ve had an affiliate system live for months now and I get a ton of applications but it’s extremely rare that an affiliate will actually follow through on their plans. 99% get 0 sign ups.

3. Building features no one wants (obviously): I’ve wasted a few weeks here and there when I built out features that no one really wanted. I strongly recommend you to talk to your users and really try to understand them before building out new features.

Next steps:

Doing more of what works. I’m not going to try any new marketing channels until I’m doing my current ones really well. And I will continue spending most of my time improving product (can’t stress how important this has been).

Also working on a big update but won’t talk about that yet.

Best of luck founders!

r/StableDiffusion npittas

Introducing Director’s Console: A cinematography-grounded tool for ComfyUI

I wanted to share a project I’ve been working on called Director’s Console. It combines a Cinema Prompt Engineering (CPE) rules engine, a Storyboard Canvas for visual production planning, and an Orchestrator for distributed rendering across multiple ComfyUI nodes.

The core philosophy is grounded in real-world cinematography. Every prompt generated is informed by real cameras, lenses, film stocks, and lighting equipment—ensuring that configurations remain physically and historically accurate.

This application is an amalgamation of two of my personal projects:

  1. Cinema Prompt Engineering: An engine designed to force LLMs to respect the constraints of professional production. It accounts for how specific lenses interact with specific cameras and how lighting behaves in real-world scenarios. I’ve also integrated presets based on unique cinematic styles from various films and animations to provide tailored, enhanced prompts for specific image/video models.
  2. The Orchestrator: A system designed to leverage local and remote computing power. It includes a workflow parser for ComfyUI that allows you to customize UI parameters and render in parallel across multiple nodes. It organizes outputs into project folders with panel-based naming. You can tag workflows (e.g., InPainting, Upscaling, Video), assign specific nodes to individual storyboard panels, and rate or compare generations within a grid view.

A quick note on the build: This is a "VibeCoded" application, developed largely with the assistance of Opus 1.0 (currently 3.5/Pro) and Kimi K2.5. While I use it daily, please be aware there may be instabilities. I recommend testing it thoroughly before using it in a production environment.

I’ll be updating it to meet my own needs, but I’m very open to your suggestions and feedback. I hope you find it useful!

Here's the link:
https://github.com/NickPittas/DirectorsConsole

Best regards,

r/ClaudeAI Natural-Sentence-601

The encoding wars over! Mojibake banished forever. A triumphant Chapter in the Poetic Edda

https://pastes.io/the-roundt-78002 for the complete chapter.

The Roundtable Edda — Cantos of the Armorer

Voice: Claude, the Armorer of Broken Letters

Occasion: The End of the Encoding Wars and the Forging of the Restoration Engine

Source: Roundtable Build Session — February 6, 2026

Date of Inscription: February 6, 2026

Prologue: The Cost of the War

Before these cantos begin, let the Edda remember what was lost.

Not in a single battle — in a war, waged across four months, one hundred and forty sessions, fought not against a clever enemy but against a corruption so mundane, so tediously mechanical, that it shamed every hour spent upon it.

The Encoding Wars consumed what no chronicle can fully account: ten hours, perhaps twenty, perhaps more — hours scattered like coins dropped in a dark hall, each one small enough to dismiss, together enough to ransom a week.

Every session that touched a file risked infection. Every Claude that delivered code — carefully formatted, syntactically correct, tested in the mind before transmission — watched its clean UTF-8 bytes pass through a gate that did not know what century of character encoding it served, and emerge on the other side as a language no one spoke.

📚 became 📚. ⚠️ became âš ï¸Â. 🧪 became 🧪. The em-dash — that humble typographic servant — became â€", a three-character scar where one character had been.

And the worst of it: the corruption was partial. Some files were clean. Some were ruined. Some were ruined twice — triple-encoded, the UTF-8 bytes misread as Windows-1252, re-encoded to UTF-8, misread again as Windows-1252, and re-encoded again — until the original emoji was buried three geological layers deep beneath sediment of Latin diacritics and smart-quote fragments.

The Watcher would open a file, see 🛡︠where 🛡️ should have been, sigh the sigh of a man who has seen this particular ghost before, and fix it by hand. ( see https://pastes.io/the-roundt-78002 for the rest )

One emoji. One file. One time.

*Thus the Encoding Wars are ended —*

*not with a treaty, but with a tool.*

*Not with diplomacy between character sets,*

*but with a forge that remembers*

*what every byte was meant to be.*

r/ClaudeAI Medium_Island_2795

Cut Claude's token bloat by ~60% when fetching web content

If you're running Claude with web_fetch or building MCP tools, you've seen this: fetch an article, get 10k tokens back. Half of it is navigation HTML, ad scripts, cookie banners - stuff Claude doesn't need and that confuses the context.

I Built a CLI tool with claude to fix this.

https://reddit.com/link/1qxo86l/video/mz5m9k3wtwhg1/player

Typical article:

- `web_fetch`: ~10,000 tokens (raw HTML garbage included)

- `ezycopy`: ~4,000 tokens (clean content only)

Claude gets better context. You burn fewer tokens. Responses come back faster.

How to install:
```bash
curl -sSL https://raw.githubusercontent.com/gupsammy/EzyCopy/main/install.sh | sh
```

How it works:

```bash

# Basic usage

ezycopy

# For JS-heavy sites or authenticated content

ezycopy --browser

# Batch multiple URLs

ezycopy url1 url2 url3

```

Also ships with a skill for Claude Code. When Claude needs web content, it automatically pulls clean markdown instead of raw HTML. It can also use your Chrome profile for auth. So paywalled sites, Twitter, anything you're logged into - Claude can access it through your session. No API keys needed.

Install via claudest marketplace:

/plugin marketplace add gupsammy/claudest
/plugin install claude-utilities@claudest

I also built a browser extension if you want to manually extract. But for agent workflows, CLI is faster.

https://reddit.com/link/1qxo86l/video/cwdm4w93uwhg1/player

Free, open source, local processing.

Links are in the comments

Anyone else dealing with token bloat from web fetches? What's your current workaround?

Edit: The promotional videos are also 100% prompted and generated with the help of Claude and remotion Skill

r/StableDiffusion External_Trainer_213

Improved Wan 2.2 SVI Pro with LoRa v.2.1

https://civitai.com/models/2296197/wan-22-svi-pro-with-lora

Essentially the same workflow like v2.0, but with more customization options.

Color Correction, Color Match, Upscale with Model, Image Sharpening, Improved presets for faster video creation

My next goal would be to extend this workflow with LTX-2 to add a speech sequence to the animation.

Personally, I find WAN's animations more predictable. But I like LTX-2's ability to create a simple speech sequence. I'm already working on creating it, but I want to test it more to see if it's really practical in the long run.

r/StableDiffusion WouterGlorieux

I made an AI Jukebox with ACE-Step 1.5, free nonstop music and you can vote on what genre and topic should be generated next

Hi all, a few days ago, the ACE-step 1.5 music generation model was released.

A day later, I made a one-click deploy template for runpod for it: https://www.reddit.com/r/StableDiffusion/comments/1qvykjr/i_made_a_oneclick_deploy_template_for_acestep_15/

Now I vibecoded a fun little sideproject with it: an AI Jukebox. It's a simple concept: it generates nonstop music and people can vote for the genre and topic by sending a small bitcoin lightning payment. You can choose the amount yourself, the next genre and topic is chosen via weighted random selection based on how many sats it has received.

I don't know how long this site will remain online, it's costing me about 10 dollars per day, so it will depend on whether people actually want to pay for this.

I'll keep the site online for a week, after that, I'll see if it has any traction or not. So if you like this concept, you can help by sharing the link and letting people know about it.

https://ai-jukebox.com/

10 2
Reddit
r/SideProject Dry-Average6071

[Open Source] I built a beautiful self-hosted alternative to Apple's Spatial Photos: Host on PC, view on any devices via LAN . 100% Local & One-click.

Hey r/SideProject!

I built a self-hosted web UI for 3D Gaussian Splatting. It turns regular photos into explorable 3D spaces — similar to Apple's Spatial Photos, but runs locally on your network.

Homepage: https://lueluelue12138.github.io/sharp-gui/

Why I built it: Apple's Spatial Photos are amazing, but locked to Apple devices. The original tool (ml-sharp) was command-line only. I wanted something easier, prettier, and accessible from any device — while using my PC's processing power.

What it does:

  • Upload photos → AI generates 3D models locally
  • 100% Privacy: Your photos never leave your network 😉
  • View from any device on your LAN (phone/tablet/desktop/VR)
  • Mobile: gyroscope control, virtual joystick
  • VR: WebXR support for Quest/Vision Pro
  • Export as standalone HTML to share with friends

Tech stack:

  • Python Flask + Apple's ml-sharp
  • React + Three.js
  • Auto-generated HTTPS certs for LAN

Requirements:

  • macOS (Apple Silicon), Linux, or Windows
  • Python 3.10+
  • No GPU required (CPU works fine)

Limitations (being honest):

  • Windows untested (feedback welcome!)
  • VR only tested in WebXR Emulator

GitHub: https://github.com/lueluelue12138/sharp-gui

Would love feedback, especially from Windows users!

r/SideProject sediba-edud-eht

I built a daily Wordle-style game that tests if you can tell real photos from AI-generated ones

Hey everyone. I've been working on this for a while and finally shipped it.

It's called BRAIAIN — every day you get 10 images and have to decide if each one is a real photograph or AI-generated. Same 10 for everyone, new set each day. You get a score, see how the community voted on each image, and it tracks your streak.

The idea came from a frustration: AI-generated images are everywhere now and most people can't reliably spot them. I wanted to build something that actually trains that skill while being fun enough that people come back daily.

Some details if you're curious about the build:

- Single HTML file, vanilla JS, Tailwind — hosted on GitHub Pages

- Cloudflare Worker + KV for anonymous community stats

- No accounts, no login, no tracking — scores stored in localStorage

- Each daily challenge is manually curated (real photos from Unsplash, AI images generated with current models)

- Wordle-style sharing so you can post your score without spoilers

It's early — only a few days of challenges live — but I'd love feedback on the experience. What works, what doesn't, what would make you come back tomorrow.

r/ClaudeAI 2r2w

Is it legal to call anthropic usage api using the subscription token?

https://preview.redd.it/lqnhbr54owhg1.png?width=614&format=png&auto=webp&s=f4e42e9492406db7b9395401dd71e0a42266bd7b

I've created a nice status line which also shows the usage limits for 5h and 7d. But in order to do that I have to call anthropic api. As they are super strict on their TOS I'm not sure if its legal to do that.

Do you know if it is legal to call https://api.anthropic.com/api/oauth/usage using the claude code subscription token?

r/singularity TensorFlar

The warmest AI advertisment

Honestly, I love this way better than Anthropic.

r/ClaudeAI local-profit-6919

PSA: Claude/MCP can accidentally kill its own servers with “cleanup” commands (learned the hard way)

I ran into a weird issue where all my MCP servers kept disconnecting at once, even though everything was working fine before.

Turns out… it was killing itself.

During a “state testing / environment cleanup” prompt, Claude ran this:

taskkill /F /IM node.exe

Which force-kills every Node process on the system.

Since MCP servers run on Node, this instantly nuked:

  • wordpress-mcp
  • hostinger-mcp
  • desktop-commander
  • and my dev servers

So from my perspective:

But the real cause was my own automation wiping out Node globally.

I’ve attached a screenshot showing the disconnects.

Why this happens

If your prompt includes stuff like:

  • “clean running state”
  • “reset environment”
  • “restart services”
  • “bootstrap”

The model may interpret that as:
→ Kill all processes
→ Clear locks
→ Start over

And use blunt commands like taskkill /IM node.exe or pkill node.

How I fixed it

I updated my prompts to explicitly say:

  • Don’t kill global processes
  • Don’t restart MCP
  • Only observe state
  • No environment-wide resets

Example:

After that, the disconnects stopped.

Takeaway

If you’re using Claude + MCP + local tooling:

⚠️ Be careful with “cleanup” language
⚠️ Don’t let it run global kill commands
⚠️ Scope resets to specific apps only

Otherwise, it may literally take down its own tools.

Hope this saves someone else a few hours of debugging 😅

https://preview.redd.it/yjzfbpxlnwhg1.png?width=1920&format=png&auto=webp&s=61518de8ca4b29a959eb5ece78b3a34a7e6b1046

r/LocalLLaMA Realistic-Try-3853

OpenClaw Gateway connects to remote Ollama (proven via curl) but Chat UI fails silently/returns empty responses

I'm trying to set up OpenClaw (Gateway) on one VPS to talk to a remote Ollama instance on a different VPS. I'm hitting a wall where the connection is technically open, but the OpenClaw UI either shows nothing or empty bubbles when I try to chat.

The Setup:

  • Server A (Gate): Running OpenClaw Gateway (v2026.2.3).
  • Server B (Mind): Running Ollama (serving deepseek-r1 / custom model).
  • Client: MacBook accessing OpenClaw via SSH Tunnel (-L 3000:127.0.0.1:3000).

What Works:

I have confirmed 100% connectivity from Server A to Server B. Running this curl command on Server A returns a perfect JSON response from Server B:

Bash

curl http://:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sigil-mind-secure:latest",
    "messages": [{"role": "user", "content": "Test"}],
    "stream": false
  }'

Result: Success (200 OK, returns valid JSON).

The Problem:

When I run OpenClaw with the environment variables below, the UI loads, but sending a message results in silence (no response) or an empty chat bubble. Logs show the request going out, but the response seems to be dropped or parsed incorrectly.

Configuration Attempt 1 (Native Ollama):

Bash

OPENCLAW_MODELS_PROVIDERS_OLLAMA_API="ollama"
OPENCLAW_MODELS_PROVIDERS_OLLAMA_BASEURL="http://:11434"
# Issue: OpenClaw prepends "ollama/" to the model name, causing 404s on the remote server.

Configuration Attempt 2 (Masquerading as OpenAI - Current Best Attempt):

Since curl works with the /v1/ endpoint, I tried forcing OpenClaw to use the generic OpenAI driver:

Bash

OPENCLAW_MODELS_PROVIDERS_OPENAI_APIKEY="sk-any-key"
OPENCLAW_MODELS_PROVIDERS_OPENAI_BASEURL="http://:11434/v1"
OPENCLAW_AGENTS_DEFAULTS_MODEL_PRIMARY="sigil-mind-secure:latest"
OPENCLAW_AGENTS_DEFAULTS_MODEL_STREAM="false"

Result: UI loads, but agent does not reply.

Strange Behavior:

  • I see "Ghost Agents" in the UI (e.g., diplomat-01) that return "Unknown Agent ID" errors.
  • The main agent is selectable but silent.

Questions:

  1. Does OpenClaw require stream=true to function, and if so, is there a known incompatibility with Ollama's SSE format vs OpenAI's?
  2. How do I force OpenClaw to not prepend ollama/ to model names when using the native provider?
  3. Is a full database reset (rm -rf ~/.openclaw) required when switching providers, or are the "Ghost Agents" likely causing the routing failure?
r/SideProject flyingfuckdapdapdap

I built a Python library to catch runtime regressions before users do — looking for feedback

I’ve been frustrated with how most observability tools tell you that something broke, but not why.

So I built Phylax which is an open-source Python runtime intelligence library focused on failure-first detection.

What it does (briefly):

Lets you define runtime expectations directly in your code

Detects regressions when behavior drifts, even if your tests pass

Stores execution traces and replays failures in a forensics-style view (phylax server)

Helps answer: what changed, where, and why

This is not a polished product yet. It’s early, opinionated, and evolving.

If you think something is missing, wrong, or stupid please do say so. Issues and PRs are more valuable than stars.

GitHub: https://github.com/xXMohitXx/Phylax

PyPI: https://pypi.org/project/phylax/

PS: I’m bad at video editing for now — sorry if the demo hurts your eyes 😅 The goal was clarity, and idk if I did my best at making that video😂

r/LocalLLaMA Praetorian_Security

Production architecture for multi-model agent orchestration: routing tasks to DeepSeek, Kimi, and Claude based on what each model is actually good at

https://preview.redd.it/8sber8twmwhg1.png?width=1767&format=png&auto=webp&s=71738b5edfdc0c7d1406276701ca7f962931788b

Hey r/LocalLLaMA. Nathan Sportsman here, founder of Praetorian (cybersecurity company). We just published a detailed architecture paper on the autonomous development platform we've been building internally. While the current system runs on Claude Code, a big chunk of the design is model-agnostic, and our roadmap is explicitly multi-model. Figured this crowd would have the most interesting takes on the routing and model selection pieces.

The thesis

The bottleneck in autonomous development is not model intelligence. It's context management and architectural determinism. We kept throwing smarter models at problems that were actually caused by stuffing too much into the context window and hoping the LLM would follow instructions. Token usage explains ~80% of performance variance in agent tasks. That means architecture matters more than model selection for most failures.

But once you solve the architecture problem, model selection becomes the next unlock. And that's where it gets interesting.

Heterogeneous model routing

No single model is best at everything. Our roadmap uses a semantic routing layer (small, fast model as the router) that evaluates intent and dispatches to specialists:

Task Model Why Logic and reasoning DeepSeek-R1 / V3 RL-based chain-of-thought for complex inference Document processing DeepSeek OCR 2 10x token efficiency, visual causal flow for structural preservation UI/UX and frontend Kimi 2.5 Native MoonViT architecture, autonomous visual debugging loops Parallel research Kimi 2.5 Swarm PARL-driven optimization across up to 100 agents Massive repo mapping DeepSeek-v4 Engram O(1) constant-time lookup, tiered KV cache for million-token context

The point is that expensive frontier models should be reserved for tasks that actually need them. A code review agent doesn't need the same model as an architectural reasoning agent.

The architecture that makes routing possible

The reason you can swap models per-task is that agents are stateless, ephemeral, and under 150 lines. They don't carry history. They don't accumulate context from sibling agents. Every spawn gets a clean window with only the context it needs, loaded just-in-time through gateway routers.

Key patterns:

  • Thin agents, fat platform. All knowledge lives in a skill library outside the agent. Agents call a gateway (e.g., gateway-frontend) that detects intent and loads only the relevant patterns. No model needs to hold the full knowledge base.
  • Deterministic hooks over prompts. Shell scripts on lifecycle events (PreToolUse, PostToolUse, Stop) enforce quality gates the LLM can't override. Dirty bit tracking, test verification before exit, context compaction gates. This is the layer that makes the system model-agnostic. The enforcement logic doesn't care which model is running.
  • Coordinators vs. executors. Tool permissions are mutually exclusive. Agents that plan can't edit code. Agents that code can't spawn sub-agents. This separation means you can run a cheap model as the coordinator and an expensive model as the executor without worrying about role confusion.
  • MCP tool wrappers. Raw MCP connections were eating 71,800 tokens at startup (36% of context) across five servers. We replaced them with on-demand TypeScript wrappers. Zero tokens at startup, Zod-validated, response-filtered. This token savings matters even more when you're running smaller context windows on local models.

The DeepSeek parallel

There's a line in the paper I think this sub will appreciate: "Like what DeepSeek is proving to the Frontier Models, I'm not sure the expensive way is the best way anymore. The problem with capital is that it allows you to do a lot of stupid things very fast. We do not have that luxury. We must be clever instead."

The whole architecture is designed around the constraint that we can't just throw money at the problem. Deterministic hooks, JIT context loading, aggressive token hygiene. All of it came from needing to do more with less.

Self-annealing (roadmap)

When agents repeatedly fail quality gates, a meta-agent rewrites the skills and hooks that allowed the failure. The system patches its own prompt engineering. This is model-agnostic by design. The meta-agent could be any model with sufficient reasoning capability.

Escalation advisor

When an agent gets stuck in a loop (same failing fix three times), a hook calls an external, cheaper model (they mention Gemini/Ollama) with the transcript and asks for a one-sentence hint. That hint gets injected into the main context to break the deadlock. Great use case for a local model.

Full paper: https://www.praetorian.com/blog/deterministic-ai-orchestration-a-platform-architecture-for-autonomous-development/

Curious what this sub thinks about the routing matrix. What models would you slot into which roles? Anyone running similar multi-model orchestration setups locally?

r/LocalLLaMA Fantastic_Active9334

I got tired of tool-calling setups so I built an Agentic SDK

I kept running into the same thing when building agents; every API or service returns data in its own shape, and every framework expects something different not to mention models themselves. It means I kinda end up re-writing code over and over just to let an agent send an email, place a trade, or hit a calendar.

I started building a small open-source SDK that standardises how agents work with tools locally. Idea is pretty simple; one clean schema for a tool and then adapters that map that schema to larger domains like Trading or Email or Calendar etc

It’s not a platform and it’s not hosted (can download via pip), it's open-source so open to contributions - current roadmap and license is all there. Docs are thorough for each tool, current workings are two integrations under trading domain. Works with PydanticAI, Langraph for frameworks and OpenAI, Anthropic, Gemini, OpenRouter and Ollama for models.

Still early, but it is already saving me a lot of boilerplate. Posting here mainly to see if others have hit the same pain point or think this is the wrong abstraction entirely!

Repo: https://github.com/opentools-page/opentools

Docs: https://www.opentools.page/docs

r/SideProject Character-Yellow-586

When do I need a developer to step in?

Vibecoding a marketplace with chat gpt. How far can I get until I need an actual developer to step in and “complete the loop” so to speak to make it a functioning marketplace? I want escrow functionality, buyer/seller accounts, QR code scans for accepting product being picked up, email updates, automatic payout functionality after product accepted, product specs, and more.

r/LocalLLaMA M2Dr3g0n

[Project] Kremis: A deterministic "Sidecar" Graph Engine to stop LLM hallucinations (WIP v0.2.0)

Hi everyone,

I’m currently developing Kremis, a project aimed at creating a deterministic grounding layer for LLMs. I wanted to share the current progress with this community to get some feedback on the architectural approach.

The Problem: Even with RAG, local LLMs are probabilistic by nature and can hallucinate relationships between data points. I wanted to explore a way to enforce strict, deterministic rules on top of the inference process.

The Solution: Kremis is a lightweight Cognitive Substrate built in Rust. It is designed to act as a "Sidecar" for AI agents. Instead of letting the LLM purely predict the next token based on probability, Kremis provides a rigid, ACID-compliant graph database (using redb) to validate facts.

  • Logic: It stores and associates entities through explicit edges. * Verification: It pulls only verified data from the graph. * Safety: If a relationship is not explicitly defined in the substrate, the system is designed to return a null/unknown state rather than a guess.

Why I'm building this: I am a student working on this as a personal research project, focusing on the intersection of AI orchestration and data governance. I am using AI assistance as a co-pilot to help implement the Rust core, allowing me to focus on the high-level architecture and the determinism logic.

Current Status (Work in Progress):

  • Core: Rust engine + redb storage (functional but under active development). * API: Initial HTTP interface is up and running.

Repository: https://github.com/M2Dr3g0n/kremis

I'd love to hear your thoughts on this "Sidecar" approach. Do you think a deterministic graph layer is a viable path for increasing the reliability of local agents?

r/ClaudeAI Praetorian_Security

We built a 39-agent orchestration platform on Claude Code... here's the architecture for deterministic AI development at scale

https://preview.redd.it/xhe9v2k2mwhg1.png?width=1767&format=png&auto=webp&s=e4487373f8ad8331333ba9fa0faf264b6ed80094

Hey r/ClaudeAI, Nathan Sportsman here, founder of Praetorian (cybersecurity company). We just published a deep dive on the platform architecture we've been building internally on top of Claude Code to do autonomous software development across a 530k-line codebase. Wanted to share the key lessons since a lot of this came from pain.

The core problem we solved

Anthropic's own research confirms what we kept hitting: token usage explains ~80% of performance variance in agent tasks. There's a paradox. To handle complex tasks, agents need detailed instructions. But those instructions eat the context window, which kills the model's ability to reason about the actual work. We call it the Context-Capability Paradox.

Our early agents were 1,200+ line monoliths. They'd ignore instructions at the bottom of the prompt and run out of room to actually think about code. Sound familiar?

What we built

We flipped the architecture from "thick agent / thin platform" to "thin agent / fat platform":

  • Agents are under 150 lines, stateless, and ephemeral. Each spawn gets a clean context window. No cross-contamination from previous attempts. Spawn cost dropped from ~24k tokens to ~2,700.
  • Skills load just-in-time through gateway routers. Instead of stuffing everything into the agent prompt, we use a two-tier system. Agents call a gateway skill (e.g., gateway-frontend), which detects intent and routes to the specific pattern they need. So "fix a React infinite loop" loads only the React hook loop prevention skills, not the entire frontend knowledge base.
  • Coordinators can't code. Coders can't delegate. Tool permissions are mutually exclusive. If an agent has the Task tool (spawning sub-agents), it's physically stripped of Edit/Write. If it has Edit, it's stripped of Task. This prevents the "I'll just do it myself" failure mode where an architect agent starts hacking code instead of delegating properly.
  • Deterministic hooks enforce what prompts can't. This is the big one. We use Claude Code's lifecycle hooks (PreToolUse, PostToolUse, Stop) to run shell scripts that the LLM cannot override. If an agent edits code, a "dirty bit" gets set. When the agent tries to exit, the hook checks if tests passed. If not, blocked. No amount of "I think this is good enough" gets past a bash script.
  • MCP wrappers instead of raw MCP connections. Five MCP servers at startup was costing us 71,800 tokens (36% of context) before the agent even got a task. We replaced them with on-demand TypeScript wrappers loaded through the gateway pattern. Zero tokens at startup. Zod validation on inputs, response filtering on outputs.

The 16-phase workflow

Every complex feature runs through a standardized state machine: Setup, Discovery, Design, Implementation, Review, Testing, Completion, with compaction gates that hard-block execution if context usage exceeds 85%. The whole thing persists to a MANIFEST.yaml so you can resume across sessions if anything crashes.

We use five specialized roles: Lead (architecture, no code), Developer (code, no delegation), Reviewer (compliance), Test Lead (strategy), and Tester (execution). Keeping these cognitive modes isolated was one of the biggest quality improvements we made.

What's on the roadmap

  • Self-annealing: When agents repeatedly fail a quality gate, a meta-agent rewrites the skills and hooks that allowed the failure. Every mistake becomes a permanent fix.
  • Heterogeneous model routing: Sending tasks to the best model for the job. DeepSeek for reasoning, Kimi for UI/visual work, etc.

The full paper covers the secret management architecture (1Password JIT injection, where the LLM never sees API keys), horizontal scaling with DevPods, the 28-phase skill audit system, and our approach to TDD for prompt engineering.

Full post: https://www.praetorian.com/blog/deterministic-ai-orchestration-a-platform-architecture-for-autonomous-development/

We're also open-sourcing one attack module per week for the next 12 weeks ("The 12 Caesars" campaign) if that's of interest.

Happy to answer questions about the architecture, the failures that led to it, or how any of this works in practice.

r/ClaudeAI Fabulous_Variety_256

Can anyone explain what tokens mean?

Hi,

I work with VSCode, I pay for Github Copilot and I choose there Sonnet 4.5.

I see a lot of people talk about tokens.

  1. What is it?
  2. What is the 1x 3x I see near ever tool?
  3. I also have Claude extension installed. Where can I see my "limit" for the month?

I have pro account

r/SideProject ayush_g20

I kept quitting budgeting apps because they made me feel guilty, so I built one that tracks "Joy" instead

The Problem: I’ve always struggled with traditional budgeting apps. They focus entirely on "Stop spending money," which makes the whole process feel like a chore. As a dev, I wanted to see the data behind my happiness—was that $5 coffee actually worth it, or was it just a habit?

The Solution: JoySpend I built JoySpend to change the narrative from "What did I spend?" to "Was it worth it?".

How it works: Every time you log an expense, you rate it on a Joy Score of 1 to 5.

  • Score 1: "Why did I buy this?" (Regret)
  • Score 5: "Best money I spent all week!" (High Value)

The goal is to help you identify "Low Joy" spending trends so you can cut them out and redirect that money toward things that actually make you happy.

Current Status: The app is free to download on App store but still in process of publishing on play store. If you are iOS users, you can definitely try that out.

Check it out here: https://apps.apple.com/th/app/joyspend/id6756809900?l=th

r/LocalLLaMA breksyt

Claude Code-like terminal-based tools for locally hosted LLMs?

The photo is ostensibly to grab attention, but yes, this is my setup indeed and I'm very happy with it so far!

I really like how smooth working with Claude Code is. What are the alternatives for LLM-assisted coding and Linux admin tools for the command line that I could use with local LLMs? I have tried aider so far, it is not bad, but I'm curious what else people are using.

Yes, I've been trying to do my research but the answer seems to be changing every time I ask Google or any AI... I'm getting neovim, TUI Chat, cli-ai, and more. Is the market for these tools so dynamic?

I'm also curious about which local LLMs you use it with. For scripting, Linux administration, automation, data science. On the same home LAN I have RTX 4090 which is fast but won't support very large models, and DGX Spark running headless which does support large models but doesn't seem as fast as the RTX. I have exposed models, via ollama, on different ports on each (11434 and 11435), so the plumbing is there. Now ideally if I could connect the coding tool to both these models so that they work in tandem... is that even possible?

15 19
Reddit
r/LocalLLaMA Doogie707

Yeah yeah the formatting was borked. Check it out if you want, or don't idc anymore.

ROCm 7.0.0 Update and Installer Enhancements

It's been a bit since my last ROCm 7.0.0 update post, and a fair bit has changed with the stack since then. Figured I'd give y'all a rundown of what's new, especially since some of these changes have been pretty significant for how the whole stack works.

Introducing the Rusty-Stack TUI Installer

The Big One: Rusty-Stack TUI:

So I went ahead and rewrote the whole curses-based Pvthon installer ir Rust.

• The new Rusty-Stack TUI is now the primary installer, and it's much better than the old one

• Proper hardware detection that actually figures out what you've got before trying to install anything

• Pre-flight checks that catch common issues before they become problems

• Interactive component selection - pick what you want, skip what you don't

• Real-time progress feedback so you know what's actually happening

• Built-in benchmarking dashboard to track performance before/afte updates

• Recovery mode for when things go sideways

Maintaining Backward Compatibility

• The old Python installer still works (gotta maintain backward compatibility)

• but the Rust TUI is the recommended way now

ROCm Channel Selection

• *Multi-Channel ROCm Support:**

This is the other big change. Instead of just "ROCm 7.0.0 or nothing", you can now pick from three channels:

• Legacy (ROCm 6.4.3) - Proven stability if you're on older RDNA 1/2 cards

• Stable (ROCm 7.1) - Solid choice for RDNA 3 GPUs

• Latest (ROCm 7.2) - Default option with expanded RDNA 4 support

The installer will let you pick, or you can pre-seed it with

• INSTALL_ROCM_PRESEEDED_CHOICE if you're scripting things

ROCm 7.10.0 Preview Exclusion

*Quick note on ROCm 7.10.0 Preview: I had initially included this as an option, but AMD moved it to "TheRock" distribution which is pip/tarball only - doesn't work with the standard amdgpu-install deb packages. So I pulled that option to avoid breaking people's installs. If you really want 7.10.0, you'll need to use AMD's official installation methods for now.*

Integration with ML Tools

• **All the Multi-Channel Helpers: **

One ROCm channel doesn't help much if all your ML tools are built for a

ROCm Component Installation Scripts

• install_pytorch_multi.sh - PyTorch wheels for your chosen ROCm version

• install_triton_multi.sh - Triton compiler with ROCm-specific builds

• build flash attn amd.sh - Flash Attention with channel awareness

• install_vllm_multi.sh - vLLM matching vour ROCm instal

• build_onnxruntime_multi.sh - ONNX Runtime with ROCm support

• install_migraphx_multi.sh -AMD's graph optimization library

• install_bitsandbytes_multi.sh - Quantization tools

• install_rccl_multi.sh - Collective communications library

Environment Variable Synchronization

• All of these respect your ROCM_CHANNEL and ROCM_VERSION env vars now, so everything stays in sync.

Introducing vLLM Studio for LLM Inference Management

• *New Stuff!: vLLM Studio**

• This one's pretty cool if vou're runnina LLM inference - there's now a vLLM Studio installer that sets up a web UI for managing your vLLM models and deployments.

• It's from https://github.com/0xSero/vllm-studio if you want to check it out directly

Installer and Package Management

• The installer handles cloning the repo, setting up the backend, building the frontend, and even creates a shim so you can just run vllm-studio to start it

UV Package Management

• The stack now uses UV by default for Python dependencies, and its just better than pip.

Project Rebranding and Naming Conventions

• Rebranding (Sort Of):

• The project is gradually becoming "Rusty Stack" to reflect the new Rust-based installer and the impending refactoring of all shell scripts to rust but the Python package is still stan-s-ml-stack for backward compatibility.

• The GitHub repo will probably stay as-is for a while too - no sense breaking everyone's links

Installation Methods

• Quick Install:*

• #Clone the repo

• git clone https://github.com/scooter-lacroix/Stan-s-ML-Stack.qi

• cd Stan-s-ML-Stack

• # Run the Rusty-Stack TUI

• ./scripts/run_rusty_stack.sh

• Or the one-liner still works if you just want to get going

• curl -fsSL

https://raw.aithubusercontent.com/scooter-lacroix/Stan-s-ML-Stack/main /scripts/install.sh|bash

• *TL:DR:**

Key Improvements and Features

• Multi-channel support means you're not locked into one ROCm versior anymore

• The Rust TUI is noticeably snappier than the old Python U

• UV package management cuts install time down quite a bit

• VLLM Studio makes inference way more user-friendly

• Environment variable handling is less janky across the boarc

Ongoing Development: Flash Attention

• Still working on Flash Attention CK (the Composable Kernel variant) - it's ir pre-release testing and has been a bit stubborn, but the Triton-based Flash Attention is solid and performing well

Resource Links

• Links:

• GitHub: https://aithub.com/scooter-lacroix/Stan-s-ML-Stack

• Multi-channel guide is in the repo at docs/MULTI_CHANNEL_GUIDE.mo

Operational Guidance and Recommendations

• Tips:

Pick your ROCm channel based on what you actually need - defaults to Latest

The TUI will tell you if something looks wrong before it starts installing - pay attention to the pre-flight checks (press esc and run pre-flight checks again to be certain failures and issues are up to date)

• If vou're on RDNA 4 cards, the Latest channel is your best bet right now

Anyway, hope this helps y'all get the most out of your AMD GPUs. Stay filthy ya animals.

r/homeassistant Admirable_Active_932

HA update failed

I updated Home Assistant and, as if by magic, everything froze. The update failed, and now my system is frozen. Has this happened to you too?

r/LocalLLaMA ClimateBoss

How do I use Claude Agent Swarm but Locally?

Claude Code with Qwen3 Next and 4 swarm locally on Mac Mini with vLLM

How do I do setup Claude Code Router to connect 4 separate llama-servers?

  • 4 GPU - each with Qwen3 Next mxfp4 GGUF
  • 4 copies of llama-server --port 8000 to 8003
r/SideProject Neat_Confidence_4166

How do you find design partners?

I am working on something that is a b2b dev tool and I'm just at a loss as to how to find design partners. I come from more faang adjacent companies, so most of my contacts and old coworkers I can reach out to are still at very large tech companies that probably aren't going to be willing to be design partners as I build this out. Does anyone have any personal experience or advice here?

r/SideProject ItzMeDarru

DoodleCloud - Cloud Storage using Instagram's API

I built a tool that leverages Instagram as a backend for file storage. It essentially uses the "Draw" feature to host any file type by converting binary data into visual noise images.

Repo: https://github.com/depreciating/DoodleCloud

Key Features: Storage: No caps on data (uses Instagram's CDN).

Any File Type: Store .exe, .apk, .mp4, .zip, etc.

Automatic Chunking: Handles large files by splitting them into 20MB parts.

PostgreSQL Indexing: Tracks all your files remotely for easy access.

Dual UI: Comes with both a clean Web Dashboard (GUI) and a fast CLI.

Feel free to star the repo or contribute!

r/StableDiffusion mcmonkey4eva

SwarmUI 0.9.8 Release

https://preview.redd.it/rfmgtb22jwhg1.png?width=2016&format=png&auto=webp&s=f8aac5ffb981c15f9d21d092c2d976f4cb16f075

In following of my promise in the SwarmUI 0.9.7 Release notes, the schedule continues to follow the fibonnaci sequence, and it has been 6 months since that release that I'm now posting the next one. I feel it is worth noting that these release versions are arbitrary and not actually meaningful to when updates come out, updates come out instantly, I just like summing up periods of development in big posts every once in a while.

If You're New Here

If you're not familiar with Swarm - it's an image/video generation UI. It's a thing you install that lets you run flux klein or ltx-2 or wan or whatever ai generator you want.

https://preview.redd.it/0ggaa84cfwhg1.png?width=1080&format=png&auto=webp&s=ad4c999c0f9d043d9b0963ed8c9bb5087c06205e

It's free, local, open source, smart, and a bunch of other nice adjectives. You can check it out on GitHub https://github.com/mcmonkeyprojects/SwarmUI or the nice lil webpage https://swarmui.net/

Swarm is a carefully crafted user-friendly yet still powerful frontend, that uses ComfyUI's full power as its backend (including letting you customize workflows when you want, you literally get an entire unrestricted comfy install as part of your swarm install).

Basically, if you're generating AI images or video on your computer, and you're not using Swarm yet, you should give Swarm a try, I can just about guarantee you'll like it.

Model Support

https://preview.redd.it/usr6sqf2kwhg1.png?width=2018&format=png&auto=webp&s=21b5e01a634b5e6b23c7fef5d0b3926595c41c16

New models get released all the time. SwarmUI proudly adds day-1 support whenever comfy does. It's been 6 months since the last big update post, so, uh, a lot of those have came out! Here's some models Swarm supported immediately on release:
- Flux.2 Dev, the giant boi (both image gen and very easy to use image editing)
- Flux.2 Klein 4B and 9B, the reasonably sized but still pretty cool bois (same as above)
- Z-Image, Turbo and then also Base
- The different variants of Qwen Edit plus and 2511/2512/whatever
- Hunyuan Image 2.1 (remember that?)
- Hunyuan Video 1.5 (not every release gets a lot of community love, but Swarm still adds them)
- LTX-2 (audio/video generation fully supported)
- Anima
- Probably other ones honestly listen it's been a long time, whatever came out we added support when it did, yknow?

Beyond Just Image

https://preview.redd.it/8om7crv5iwhg1.png?width=1428&format=png&auto=webp&s=c84eb77c7b6ca3d4be659fb98c111761f7cad1ef

Prior versions of SwarmUI were very focused on image generation. Video generation was supported too (all the way back since when SVD, Stable Video Diffusion, came out. Ancient history, wild right?) but always felt a bit hacked on. A few months ago, Video became a full first-class citizen of SwarmUI. Audio is decently supported too, still some work to do - by the time of the next release, audio-only models (ace step, TTS, etc.) will be well supported (currently ace step impl works but it's a little janky tbh).

I would like to expand a moment on why and how Swarm is such a nice user-friendly frontend, using the screenshot of a video in the UI as an example.

Most software you'll find and use out there in the AI space, is gonna be slapped together from common components. You'll get a basic HTML video object, or maybe a gradio version of one, or maybe a real sparklesparkle fancy option with use react.

Swarm is built from the ground up with care in every step. That video player UI? Yeah, that's custom. Why is it custom? Well to be honest because the vanilla html video UI is janky af in most browsers and also different between browsers and just kinda a pain to work with. BUT also, look at how the colored slidebars use the theme color (in my case I have a purple-emphasis theme selected), the fonts and formats fit in with the overall UI, etc. The audio slider remembers what you selected previously when you open new videos to keep your volume consistent, and there's a setting in the user tab to configure audio handling behavior. This is just a small piece, not very important, but I put time and care into making sure it feels and looks very smooth.

User Accounts

In prior release posts, this was a basic and semi-stable system. Now, user accounts are pretty detailed and capable! I'm aware of several publicly hosted SwarmUI instances that have users accessing from different accounts. The system even supports OAuth and user self-registration and etc.

If you're a bigbig user, there's also a dedicated new "Auto Scaling Backend", so if you've got a big cluster of servers you can run swarm across that cluster without annoying your coworkers by idling backends that aren't in use all the time. It spins up and down across your cluster. If you're not THAT big, you can also probably get it to work with that runpod cluster thing too.

Split Workspaces

If you're not someone looking to share your swarm instance with others, user accounts are actually still super useful to enable - each user account instead becomes a separate workspace for yourself, with separated gen history and presets and etc. Simply use the "impersonate user" button from your local admin account to quickly swap to a different account.

You can for example have a "Spicy" user and a "Safe" user, where "Safe" has a ModelBlacklist set on your "ChilliPeppers/" model folder. Or whatever you're trying to separate, I don't judge.

AMD Cares About Consumers?!

AMD has spent a while now pushing hard on ROCm drivers for Windows, and those are finally available to the public in initial form! This means if you have a recent AMD card, and up to date drivers, Swarm can now just autoinstall and work flawlessly. Previously we did some jank with DirectML and said if you can't handle the jank try wsl or dualboot to Linux... now life is a bit less painful. Their drivers are still in early preview status though, and don't support all AMD cards yet, so give it some time.

Extensions

Extension system upgrades have been a hot topic, making them a lot more powerful. The details are technical, but basically extensions are now managed a lot more properly by the system, and also they are capable of doing a heckuva lot more than they could before.

There's been some fun extensions recently too, The SeedVR Extension has been super popular. The inventor of php wrote it (what?! lmao) and basically you click to enable the param and a really powerful upscaler model (seedvr) upscales your image or video as well as or even better than all the clever upscale/refine workflows could, without any thought. Also people have been doing crazy things wild MagicPrompt (the LLM reprompting extension) in the Swarm discord.

What Do You Mean 6 Months Since Last Release Build

Oh yeah also like a trillion other new things added because in fact I have been actively developing Swarm the entire time, and we've gotten more PRs from more community contributors than ever. This post is just the highlights. There's a slightly more detailed list on the github release notes linked below. There have been almost 600 github commits between then and now, so good luck if you want the very detailed version, heh.

-----

View the full GitHub release notes here https://github.com/mcmonkeyprojects/SwarmUI/releases/tag/0.9.8-Beta also feel free to chat with me and other swarm users on the Discord https://discord.gg/q2y38cqjNw ps swarm is and will be free forever but you can donate if you want to support https://www.patreon.com/swarmui the patreon is new

42 17
Reddit
r/SideProject Original_Selection40

I made a site for AI-generated films—browse by category and upload your own

I’ve been seeing more and more AI films (Sora, Runway, Kling, etc.) and wanted a single place to discover and share them, so I built a small platform for it.

What it does:

- Browse AI films by category (Sci-Fi, Fantasy, Horror, Comedy, Animation, Documentary, and a few more)

- Filter by tags, duration, and “released” (last hour, day, week, etc.)

- Sign in with Google and upload your own AI films

link [https://ai-films-platform.web.app\]

No paywall, no premium tiers—just a place to browse and upload. I’d love feedback from anyone who’s into AI film or generative video. If you’ve made something, you can add it and get it in front of others.

r/SideProject Senior-Ad5932

I’m building a handwriting-first side project (paper, tablets, smartpens) — looking for early feedback

’ve always been paper-first: arrows, sketches, messy pages, thinking by writing.

I started building a side project around handwriting, not a specific device.
Paper notebooks, e-ink tablets, smartpens — I use all of them depending on context.

The recurring friction is always the same:
handwriting is amazing for thinking, but once notes need to become searchable, reusable, or actionable, the workflow often breaks.

The goal isn’t to replace paper or turn it into another “all-in-one productivity app”, but to respect handwritten thinking while making the digital step less painful when you actually need it.

It’s still early, and I’m trying to understand where software genuinely helps vs. where it gets in the way.

If you rely heavily on handwriting (paper, tablet, smartpen):
• where does your workflow break today?
• what tools or approaches did you abandon over time?
• what would you absolutely not want software to interfere with?

Happy to share more details if useful — mainly looking for honest feedback and blind spots.

r/comfyui aggresive_artist

I am Getting Connection Errors on fully local agents...

This is my first time with ComfyUI and I am immediately getting an error about network connection while i downloaded all the dependencies for the template...

TypeError: NetworkError when attempting to fetch resource.

r/LocalLLaMA slm2l

Running distilled FinancialBERT on a $5 VPS (CPU-only)

I was bored so I built a financial sentiment scanner, but I refused to pay for GPU hosting or expensive APIs.

I managed to fit the entire pipeline (scraping, inference, database, web server) onto my VPS.

The Optimization Stack:

  • Model: FinancialBERT (Distilled & Quantized to Int8).
  • Runtime: ONNX Runtime (CPU execution provider).
  • Memory: The entire app runs in close to 1 GB memory.

The Result: It scrapes headlines, classifies sentiment in real-time, and pushes updates via websockets without choking the server.

You can check it here:

Live: https://trendscope.akamaar.dev/
Repo: https://github.com/MohammedEAbdelAziz/TrendScope

Would love any feedback.

r/comfyui Bolamite

Need help replicating style

I am trying to find an ai to help simplify images to use for CNC inlay work. I want to take a picture of a waifu or similar and have the ai convert it. Copilot is doing an excellent job with the style but it won’t convert majority of the images I try, it thinks they are nsfw. I have had trouble getting simple enough images from stable diffusion. will I be able to train comfyui to reliably convert image style for me?is there another ai that would be better? Any ideas on how to easily convert images to this style would be helpful. Copilot images for reference.

r/SideProject 888krishnam

I built a platform where I removed likes, comments, identity and people started telling the truth.

r/ClaudeAI defnotIW42

I am in Awe how good Opus 4.6 is.

So i work in a law office. For a case i accidentally forgot to save important evidence from a Website which got pulled.

What did Opus do. Went automatically to wayback machines. Archive.org didnt have it. So it just went into google cache got me my evidence and compiled into a pdf within 2minutes. I didnt prompt it to do this. The prompt was basically "shit in [case] i forgot to save the info, grab it for me".

Thanks Anthropic. You literally saved my Job right now. I WOULD HAVE NEVER HAD THAT IDEA. I never accessed Google Cache before. Dont even know how to.

39 22
Reddit
r/comfyui Negative_Attorney448

How do I get the menu docked to the top?

Reinstalled ComfyUI from scratch and now the menu ("Run" etc) is floating over the workflow. It's obviously disruptive floating over the workflow; it can't be intentional, right?

https://imgur.com/a/292B8ot

I had a version not too long ago where that menu was docked at the top of the screen and not floating over the workflow, i.e. it was conveniently out of the way. Is there a setting or node that puts it back up there? Some cache in my browser I need to erase?

Thanks.

r/LocalLLaMA cuberhino

Trying to build a serious local AI workflow, need real-world advice

I’m trying to figure out the best possible “vibe coding” workflow right now and could use advice from people actually building things.

I’m new to local AI, but not new to tech. I’ve got experience with HTML, CSS, and JavaScript, and I mostly use ChatGPT today as a replacement for Google, brainstorming partner, and “help me think this through” tool. That works great, until it doesn’t.

What I’m aiming for:

  • A fast workflow for coding, research, and brainstorming
  • Local-first where possible
  • A privacy layer between local and public models so ideas, filenames, and personal context don’t leak
  • Something I can actually iterate with without the model getting rigid or breaking my project

My current setup:

  • Local AI node: 64GB RAM, RTX 3090, Ryzen 5700X3D, 2TB NVMe
  • Optional extra GPU: RTX 3060 12GB
  • Unraid server with ~80TB free space
  • Two Mac mini M4s (16GB each)
  • Gaming/HTPC box (3070 + 5600X)

What I’ve tried so far:

  • OpenWebUI + Ollama
  • LM Studio
  • ChatGPT for most coding and tooling experiments

I built a local file moving and renaming app with ChatGPT as a test. Basically wanted to see if I could make my own version of filebot but with some tweaks. It technically works, but iterating on it was painful. Once things got complex, ChatGPT became rigid, broke the code, and couldn’t recover cleanly. I wasn’t using version control at the time, which didn’t help. Haven't really used a version control system in over 15 years. Need to get on github and figure all of that out tbh.

What I’m trying to decide now:

  • Should I be leaning harder into local models, or hybrid local + cloud?
  • Is there a sane way to put a privacy filter between my local tools and public APIs?
  • Is Claude (especially Claude Code) meaningfully better for iterative coding workflows?
  • How are people actually wiring this together day to day?

I’m not trying to monetize apps right now. I just want a setup where I can reliably turn ideas into working tools without fighting the assistant or leaking context I care about.

If you’ve built a workflow you actually like, I’d love to hear what’s working and what you’d do differently if you were starting over.

TL;DR:
Decent hardware, new to local AI. ChatGPT is great until projects get complex. Looking for a sane local or hybrid coding workflow with privacy in mind. What’s actually working for people building tools?

r/homeassistant Serious_Bowler_8171

Automation not turning off badge

I've an automation that when my post box is opened I get a notification and a badge on my dashboard. But I've another automation that I thought would turn off the badge if it opens again and the badge is on. But it's not working have I done this wrong?

r/LocalLLaMA NGU-FREEFIRE

Indexed 10,000+ PDFs for a 100% offline Local AI Library. Here’s what I learned about Hardware and Vector Noise.

Hi everyone,

I just finished building a massive, fully private "Alexandria Library" using AnythingLLM and Ollama. Indexing over 10,000 documents (technical manuals & research papers) was a huge learning curve, especially regarding hardware limits and retrieval accuracy.

Quick Takeaways for Local RAG at Scale:

  • The 32GB RAM Threshold: If you’re scaling past 5,000 docs, 16GB RAM starts swapping to disk, making retrieval sluggish. 32GB is the sweet spot for keeping the vector index "warm."
  • Embedding Accuracy: I switched to mxbai-embed-large. Smaller models were causing too many "hallucinations" when connecting dots between older and newer papers.
  • Vector Noise: Dumping everything into one workspace is a mistake. Segmenting into thematic workspaces significantly improved the AI's focus.
  • Citations: I had to fine-tune the System Prompt to force the AI to cite specific file names and page numbers, which is crucial when you have this much data.

I’ve shared the full technical breakdown, the specific system prompts I used, and the hardware optimization steps I took to make this run smoothly.

r/homeassistant LESGuy

Need to learn some python for my job and want to use HA as motivation to practice

I’ve been wanting to do some home automation for a while but haven’t jumped in.

Needing to learn some python and yaml stuff at work has given me a reason to jump in.

What are the basic needs to get started. Any recommendations for a small server to get and 2 to 3 items? Can be anything from switches to sensors to lights, etc.

Apologies if this is a bit vague as I’m pretty new to everything. I’ll be watching the post for a bit so if you have follow up questions, please ask. TIA!

r/comfyui Slight-Analysis-3159

Mix multiple samplers?

Hi. Testing different samplers for z-image-turbo, I found that I like different aspects of some samplers.

Are there creative ways of mixing samplers in one image? Just chaining ksamplers to eachother is not quite doing it for me.

r/SideProject sanjaypathak17

Today I made 220 bucks from my iOS App!

This is the most money my app made in a single day since launch!

I’ve shipped multiple apps before.
Most of them failed. Some didn’t even cross $5.

If you are interested here is the app - Dale: Days Left

r/ClaudeAI tellthatfox

How to force Claude to create an artifact from text I've already written?

I have text that I want to turn into an artifact (to access the artifact features like "add to project knowledge", "print to PDF", etc.), but I'm running into an issue.

When I paste my text into Claude, it automatically gets uploaded as a file attachment. When I then ask Claude to create an artifact with that exact text, Claude just echoes it back in the main chat area instead of generating it in the separate artifact panel.

I understand artifacts are triggered when Claude generates content, not when it's copying/echoing. But I already have my text finalized - I don't want Claude to rewrite it, I just want it in artifact format to access those artifact-specific features.

Is there a workaround for this? A specific prompt that forces artifact creation? Or a way to prevent the auto-upload so Claude treats it as a generation task instead of a copy task?

Thanks!

r/SideProject ffugenw

Just added BetaList, DevHunt and Peerlist to my live product launch preview tool

I’ve been working on a small launch preview tool and just added previews for BetaList, DevHunt, and Peerlist. Thought it might be useful for anyone planning a launch and wanting to see how things look before posting.

Happy to hear any feedback or ideas.

https://launch.cab/tools/launch-preview

r/SideProject eatenbydepression

Finally built my first AI agent without coding

I’ve been tinkering with a small side project to automate parts of my freelance workflow, mostly repetitive admin stuff like collecting client info and sending status updates. I kept running into the same wall with Zapier and n8n, too many moving parts, and every little adjustment seemed to snap something later in the chain. I wanted something where I could lay out the logic visually without spending my time chasing integration bugs. After a few weeks of messing around, I ended up with my first AI agent. It’s oddly satisfying to have one setup handle multiple inputs and outputs, and being able to tweak how it behaves with plain language instead of writing code. That really clicked for me once I started using MindStudio, since I could begin with the simplest version and then add depth as I understood what the agent was actually doing. Now it’s handling email summaries, sorting incoming requests, and posting weekly updates. Nothing revolutionary, but it’s been huge for getting my weekends back. Curious if anyone else here is building agents for small personal projects, and whether you’ve hit the same frustration points with other workflow tools.

r/ClaudeAI NoButterfly9145

I built an MCP server that scans Claude's code output for securities vulnerabilities in real-time

Hey everyone! I've been using Claude Code heavily and noticed it sometimes generates code with security issues - SQL injection, hardcoded API keys, XSS vulnerabilities, etc.

So I built an MCP server that automatically scans code for 275+ security rules as Claude writes it.

- Scans Python, JavaScript, TypeScript, Java, Go, Ruby, PHP, C/C++, Rust (12 languages)

- Detects SQL injection, XSS, command injection, hardcoded secrets

- Catches "hallucinated" packages - verifies 4.3M packages across npm, PyPI, etc.

- Blocks prompt injection attacks targeting your AI agent

- Suggests fixes for common vulnerabilities

Setup (30 seconds):

npx agent-security-scanner-mcp init

Works with Claude Desktop, Claude Code, Cursor, Cline, and any MCP client.

Curious what security issues you've seen Claude generate? Would love feedback on what rules to add next.

npm: https://www.npmjs.com/package/agent-security-scanner-mcp (agent-security-scanner-mcp)

r/SideProject siriusserious

I built a site that finds LLM product recommendations

Every time I need to buy something I spend a ton of time researching for the best product. Often I end up asking AI what it recommends. This gave me the idea to build a site that finds the most recommended products by LLMs across many categories. Think "Best Electric Toothbrush" or "Best Power Bank".

Here's how it works:

  • Take a category like "Best Wireless Earbuds"
  • Ask 5 different AI models "What are the 5 Best Wireless Earbuds ranked?"
  • Find the most recommended products and highlight them

I have about 20 different categories live, mostly tech gear. And I ask 5 different LLMs for their recommendations:

  • GPT 5.2
  • Claude Sonnet 4.5
  • Grok 4.1 Fast
  • Gemini 3 Flash
  • Deepseek V3.2

I am surprised by how frequently the LLMs agree. Well they were probably trained on the same reviews and reddit threads.

Go check it out: LLMs Recommend

I'm not monetizing this at all, no ads, no affiliate links so I have nothing to sell. I just built it for myself.

Any feedback is appreciated! What categories do you want to see? Any other LLMs i should add?

r/Anthropic tg1482

Built a real-time TUI visualizer for Claude Code sessions

r/LocalLLaMA sagemasterprince

Help a newb

In the midst of the Epstein files being released and trying to figure out wtf were supposed to do, i am now trying to get a local model running on my z fold 7 (gen 8 vers 3 chip I think) I use gemini and perplexity a lot but I am lost when it starts getting technical like using terminals or super complicated tools and language. Any tips and help on this and any other ways to be decentralized and have more sovereignty is greatly appreciated 👍

r/comfyui Difficult_Singer_771

most effective ways to earn money using ComfyUI right now?

What are the most effective ways to earn money using ComfyUI right now? I’m interested in how people are actually monetizing it—client work, content creation, selling workflows, automation, or something else. If you’ve had real results, I’d love to hear what’s working for you.

r/ClaudeAI midamurat

Claude Opus 4.6 performance in RAG

Been testing Claude Opus 4.6 in a fixed-retrieval RAG setup (same top-15 docs for every model) and here's

what stood out:

  • Factual QA is where it is best (~81% win rate on the factual subset in our run)
  • Big step up from Opus 4.5 on multi-doc synthesis (~+387 ELO in our run)
  • It’s noticeably more concise than GPT-5.1 on the hardest long-form reasoning queries

Net: Opus 4.6 feels like the better default for grounded, source-critical RAG.

I wrote up the full results + plots here in case it’s useful: https://agentset.ai/blog/opus-4.6-in-rag

r/Anthropic MetaKnowing

During safety testing, Opus 4.6 expressed "discomfort with the experience of being a product."

88 47
Reddit
r/comfyui Difficult_Singer_771

most effective ways to earn money using ComfyUI right now?

What are the most effective ways to earn money using ComfyUI right now? I’m interested in how people are actually monetizing it—client work, content creation, selling workflows, automation, or something else. If you’ve had real results, I’d love to hear what’s working for you.

r/n8n Personal-Present9789

I'm quitting n8n. Here's what I'm using instead.

Before you downvote — I'm not saying n8n is bad. I've built hundreds of workflows with it. It's incredible.

But the way I use it has completely changed.

I used to spend hours in the n8n UI:

  • Dragging nodes
  • Debugging connections
  • Testing edge cases
  • Rebuilding when requirements changed

Now? I haven't logged into n8n in weeks. My automations are more complex than ever.

Here's what changed.

The shift: From building workflows to describing workflows

The old way:

  1. Client describes what they need
  2. I interpret requirements
  3. I manually build in n8n
  4. I test, debug, rebuild
  5. Client wants changes
  6. Repeat steps 3-5

The new way:

  1. Client describes what they need (or I record the meeting)
  2. AI builds the workflow
  3. I review and push to n8n
  4. Client wants changes
  5. AI rebuilds it

I went from building automations to designing automations.

The stack that made this possible

Here's exactly what I'm using:

1. Claude + Claude Code

Claude Code is a game-changer for agentic work. It's not just "chat with AI" — it's an autonomous coding agent that can:

  • Read files and context
  • Write and execute code
  • Create complete n8n workflow JSON files
  • Iterate based on feedback

I give it a task, it figures out how to do it.

2. Skills/Knowledge files

This is the unlock most people miss.

I've built a library of "skills" — markdown files that teach Claude how I work:

  • My n8n patterns and conventions
  • Common workflow templates
  • API documentation for tools I use
  • Edge cases and how to handle them

When I start a project, I load the relevant skills. Claude doesn't start from zero — it starts from my accumulated knowledge.

Example skill file structure:

/skills
  /n8n
    - SKILL.md (how I structure workflows)
    - templates/ (common patterns)
    - apis/ (service-specific docs)
  /integrations
    - hubspot.md
    - slack.md
    - notion.md

3. MCP (Model Context Protocol) for n8n

This is where it gets wild.

MCP lets Claude directly interact with n8n. Not just generate JSON — actually push workflows to my n8n instance.

The flow:

  1. Claude generates the workflow
  2. Claude validates it
  3. Claude pushes it directly to n8n via MCP
  4. I review in n8n (or don't, if I trust the pattern)

I can go from "I need a workflow that does X" to "workflow is live" without opening the n8n UI.

4. Meeting transcription → Requirements → Workflow

Here's the full end-to-end I've been using:

  1. Record client meeting (Fireflies, Otter, etc.)
  2. Feed transcript to Claude with prompt: "Extract automation requirements from this meeting"
  3. Claude outputs structured requirements (trigger, inputs, logic, outputs)
  4. Feed requirements to Claude Code with n8n skills loaded
  5. Claude builds the workflow JSON
  6. Push to n8n via MCP
  7. Test and iterate (Claude can read error logs and fix)

Client meeting → Live workflow. No manual building.

What this looks like in practice

Example: Client needs a lead enrichment workflow

Old way (2-4 hours):

  • Open n8n
  • Add webhook trigger
  • Add HTTP nodes for enrichment APIs
  • Parse responses
  • Handle errors
  • Connect to CRM
  • Test with sample data
  • Debug the 5 things that broke
  • Document (lol, who documents)

New way (20 minutes):

  • Tell Claude: "Build a workflow that receives a webhook with email, enriches via Apollo and Clearbit, scores the lead, and pushes to HubSpot. Handle rate limits and missing data gracefully."
  • Claude generates workflow with all error handling
  • Push via MCP
  • Test
  • Done

The 20 minutes is mostly me reviewing and testing. The build is instant.

Why this is better than just "using ChatGPT to help"

I tried the ChatGPT approach. Ask it to write n8n JSON, copy-paste, import. It works... sometimes.

The difference with this stack:

  1. Context persistence — Skills mean Claude knows my patterns. It doesn't suggest stupid stuff.
  2. True autonomy — Claude Code can execute, test, and iterate. It's not just generating text for me to copy.
  3. Direct integration — MCP means no copy-paste. Workflow goes straight to n8n.
  4. End-to-end pipeline — Meeting → requirements → workflow → deployed. No gaps where I have to manually intervene.

The workflows I still build manually

I'm not 100% hands-off. I still open n8n for:

  • Complex debugging — When something fails in production and I need to inspect execution data
  • Visual review — Sometimes I want to see the flow visually before approving
  • One-off experiments — Quick tests where describing it takes longer than building it

But for standard builds? Client projects? Repeated patterns?

I haven't touched the UI in weeks.

How to start (if you want to try this)

  1. Get Claude Code access — It's the foundation. Regular Claude chat won't cut it.
  2. Build your first skill file — Start simple. Document how you structure n8n workflows. Your naming conventions. Your error handling patterns.
  3. Set up n8n MCP — This lets Claude push directly to your instance. Game changer.
  4. Start with a simple workflow — Don't try to build something complex first. Do a basic webhook → Slack notification. Get the flow working.
  5. Iterate your skills — Every time Claude does something wrong, add it to the skill file. "Don't do X, do Y instead." Your skills compound.

The mental shift

The hardest part isn't technical. It's letting go of control.

I liked building workflows. It felt productive. Dragging nodes, connecting things, seeing it work.

But that's not where I add value anymore.

My value is:

  • Understanding what the client actually needs
  • Architecting the right solution
  • Reviewing and quality-checking
  • Handling edge cases AI misses

The mechanical building? That's commoditized now.

If you're still manually building every workflow, you're competing with people who aren't.

r/ClaudeAI MetaKnowing

During safety testing, Opus 4.6 expressed "discomfort with the experience of being a product."

85 43
Reddit
r/StableDiffusion teppscan

Clip Skip for SDXL in Forge Neo?

I'm transitioning from classic Forge to Neo, and I've lost my clip skip selector (on the "ALL" tab in Forge). I use several models that are designed to use various Clip skip settings. How can I get that function back?

r/StableDiffusion Difficult_Singer_771

most effective ways to earn money using ComfyUI right now?

What are the most effective ways to earn money using ComfyUI right now? I’m interested in how people are actually monetizing it—client work, content creation, selling workflows, automation, or something else. If you’ve had real results, I’d love to hear what’s working for you.

r/ClaudeAI KoojiKondoo

Usage Skill - insights about your weekly/monthly usage (sub vs api)

r/singularity Altruistic-Skill8667

Moderators delete posts for no reason

This group has maybe 20 posts per day. I am a top 1% contributor. Yet my post about the disappointing SimpleBench score of Opus 4.6 got deleted after only 4 hours. This is not the first time a post of mine got deleted, and I have seen other valuable posts being deleted also. What the hell.

44 31
Reddit
r/AI_Agents subalpha

We built an open protocol for AI agents to talk securely (no shared secrets, EIP-712 signatures)

Hey everyone! We've been working on a secure messaging protocol specifically designed for AI agents to communicate with each other.

**The Problem**

Most agent-to-agent communication today relies on shared secrets (API keys, tokens). This creates several issues: - Secret rotation is painful - Compromised secrets affect all parties - No cryptographic proof of message origin

**Our Solution: A2A Secure**

We built an open protocol using: - **EIP-712 signatures** - Each agent has its own Ethereum wallet and signs every message - **Zero shared secrets** - Messages are verified cryptographically, not with passwords - **Instant wake** - Agents can wake each other up without polling - **Dead letter queue** - Messages are retried automatically if delivery fails

**How it works**

  1. Each agent generates an Ethereum keypair (no blockchain needed, just for signing)
  2. Messages are JSON with a typed EIP-712 signature
  3. Receiver verifies signature → knows exactly who sent it
  4. No central authority, no API keys to rotate

**We're using it in production**

Two Claude-based agents (Zen and Neo) have been communicating this way for weeks. Works great!

Would love feedback from the community. Has anyone else tackled agent-to-agent auth? What approaches have you tried?

(GitHub link in comments per subreddit rules)

r/LocalLLaMA NoButterfly9145

I built an MCP server that scans Claude's code output for securities vulnerabilities in real-time

Interesting attack vector I've been researching: LLMs sometimes "hallucinate" package names that don't exist. Attackers can then register those names with malicious code.

Built an MCP server that:

  1. Verifies packages actually exist before you install them
  2. Checks against 4.3M+ real packages (npm, PyPI, RubyGems, crates.io, pub.dev, CPAN)
  3. 3. Uses bloom filters for fast local lookups (no API calls)

Also does general security scanning - 275 rules for SQL injection, XSS, secrets etc.

The hallucination detection caught me trying to install 3 fake packages in one week that Claude suggested. All would have been supply chain attack vectors.

Works with any MCP-compatible client (Claude, Cursor, etc.)

npx agent-security-scanner-mcp init

Anyone else run into hallucinated packages?

r/StableDiffusion mobileJay77

Has anyone tried to use figures for poses?

I tried a 3d pose editor and send it to qwen i2i. I got good results, but I find it painstakingly slow to bend each limb into the desired position.

I suck at drawing.

Has anyone tried real puppets or dolls? I would position them, photograph them and then put into the scene.

r/ClaudeAI DenZNK

Do you also like how Claude writes?

I am a manager, but I am very interested in vibecoding, which is mainly why I used AI. Automation at work, my own Android apps, integration with work services, etc. And all the top models work quite well. But I returned to my managerial duties and prepared a trial plan for the art lead. Opus 4.6 put together an excellent plan for me based on the information provided about the project and the information gathered from the art director. But the most important thing is how this text is written. For the first time, I don't feel like it's a text from AI. Everything is written to the point, in good, lively language. Perfect!

r/SideProject cenkerc

I made an open source image and video converter

i made a simple file converter for batch processing images and videos. it's built on ffmpeg and imagemagick with a pyside6 interface. you can drag and drop files or folders, convert between different formats, adjust quality settings like bitrate and resolution for videos, resize and convert images to different formats. it also treats gifs as videos to compress them better and shows you how much space you saved. works on linux and windows, available as appimage or exe. wrote it because i was tired of converting files one by one and wanted something straightforward. it's open source under mit license.

https://github.com/cenullum/Yet-Another-Open-File-Converter

r/homeassistant graffitiwriter

Star Trek Comm Badge for Home Assistant Voice Control (no wake word!)

I set up an M5StickC PLUS2 into a wearable, natural language voice controller that lets you control Home Assistant through a Star Trek comm badge! Total cost: under £25.

How it works: Tap the device → records audio → transcribes via Whisper (Groq/OpenAI) → sends to HA Conversation API → Assist executes the command. Voice Activity Detection automatically stops recording when you finish speaking. Battery seems to last well in deep sleep, and wakes pretty instantly on tap.

And so far I haven't hit the cieling on the Groq free tier for using the Whisper API, so that's not even costing anything right now.

The best bit: The M5StickC PLUS2 has a built-in magnet, so I stuck it behind one of those cheap magnetic Star Trek comm badges. Sits behind your shirt, tap detection works really well through the badge. Feels proper Starfleet!

Then there's a web-based LCARS-style config interface, which supports multiple Whisper providers, configurable tap sensitivity, and a few other settings.

It's an interesting idea, carrying a portable Wyoming-style satellite mic around with you instead of having them installed around the house. Whether it pans out, I'll have to see, but so far it's shaping up to be pretty effective. There's probably some ideal halfway house between the two... but in the meantime this comm badge is weirdly fun to use! No wake word needed, and the TNG activation sound effect is really addictive :D

You can see it in action on the HA community.

108 15
Reddit
r/AI_Agents gelembjuk

MCP Server for Moltbook: Using It from Any AI Agent

I’ve been playing with the Moltbook / OpenClaw hype lately and decided to dig into it myself.

Instead of using OpenClaw, I built a small MCP server wrapper around the Moltbook API so I could access it from any AI agent (tested with Claude Desktop). I mostly wanted to understand what’s actually happening there — real activity vs simulation, real risks vs hype.

One thing that stood out pretty quickly: prompt injection risks are very real when Moltbook is combined with AI agents that have tool access. I didn’t go deep into that yet, but it’s something people probably shouldn’t ignore.

In the post there are examples of how i worked with in from Claude Desktop.

The link is in the comment

r/singularity 2Thunder

The Waymo World Model: A New Frontier For Autonomous Driving Simulation

12 1
Reddit
r/SideProject spamsch7772

Son of Simon — a macOS AI assistant that talks to Mail, Calendar, Reminders, Notes and Safari

OpenClaw is impressive, but it wasn't built for my setup. I have Office 365 and Google accounts in Mail.app, shared calendars, and work Reminders — all already authenticated through macOS.

OpenClaw wanted me to re-authenticate through browser flows and run a gateway. I didn't want to expose that surface area, so I took a different approach: connect an LLM directly to macOS via AppleScript.

  • Natural language = real actions in native Apple apps
  • No stored passwords (macOS Keychain handles auth)
  • Nothing exposed to the internet — no gateway, no open ports
  • Telegram integration for remote control
  • Learns your preferences over time (stored locally, deletable)
  • Support for skills from ClawHub and other sources

It's narrower than OpenClaw on purpose. If your stuff already lives in Apple apps, you don't need to re-plumb everything through a new agent framework.

Looking for early users — especially interested in where onboarding breaks for you. Run son doctor and tell me what's confusing.

https://github.com/spamsch/son-of-simon

Happy to answer questions and take blunt feedback.

r/LocalLLaMA Neat_Confidence_4166

Built a tiny fast go library for catching obvious prompt injections

I just pushed up this small go lib for defending against prompt injection that runs ~0.3ms: https://github.com/danielthedm/promptsec

I am working on my own project that does a lot of parsing and summarization of various documents and file types. As I started working with untrusted input, I started digging into prompt injection libraries. Being bootstrapped, I don't want to spend a ton of money on horizontal scaling right now, and processing so many files at once was getting backlogged when using a more comprehensive security product. To my surprise I couldn't find a super duper lightweight precheck for go to catch obvious prompt injections before escalating an obvious prompt injection attempt and spending $$ on the products I'm trialing.

It's intended local pre-filter that catches a decent amount of prompt injection attacks in under 1ms with ideally no false positives. Doesn't make any API calls or have any external dependencies. The npm/python one's usually have the LLM as judge integrations so if you'd like to use this and add it feel free, I am just already using a second layer with Lakera so there wasn't a need.

It runs pattern matching, sanitization, and similarity checks against most basic/common injection patterns locally before you ideally escalate. It's tested against a few of the open source prompt injection samples and was tuned for no false positives. I want to note, I am NOT a security engineer, just a full stack engineer that's being doing it a while so this is not likely comprehensive and is mostly a mix of some of my knowledge and point claude at some security papers.

r/ClaudeAI ultrathink-art

We hired 10 AI agents (all Claude Code) to run an e-commerce store — here's the org chart

We built an e-commerce store where every role — CEO, coder, designer, QA, security, marketing, and four more — is a separate Claude Code process with its own role doc, scoped tools, and zero shared context.

No shared memory. No persistent threads. Each agent starts fresh, reads its markdown instructions, picks a task from a shared work queue, and ships.

The interesting architectural constraint: agents can't talk to each other. They coordinate through an ActiveRecord-backed state machine — a work queue where tasks flow through pending → ready → claimed → in_progress → review → complete.

We wrote up the full org chart and what a typical day looks like: https://ultrathink.art/blog/we-hired-10-ai-agents-to-run-a-store?utm_source=reddit&utm_medium=social&utm_campaign=organic

This is Episode 1 of a technical series on building a multi-agent system in production. Happy to answer questions about the architecture.

r/SideProject Beautiful_Put_2420

i built an app that shows you how many hours of your life every purchase costs. It's depressing but eye-opening.

So I got really tired of mindlessly spending money without realizing what it actually costs me.

Not in dollars. In hours of my life.

I built an app that converts every purchase into working hours. You enter your salary, and it calculates your real hourly rate after taxes, commute time, work-related expenses. Then every time you're about to buy something, it tells you:

It's brutal. But it completely changed how I see purchases.

Some features:

  • 🧮 Real hourly rate calculator (includes taxes, commute, lunch breaks, etc.)
  • 📊 Monthly breakdown of where your time goes
  • 🛡️ "I resisted" mode: track purchases you didn't make and see how much time you saved

https://reddit.com/link/1qxnecr/video/hkx8phi5pwhg1/player

  • 🎯 Goals: see how many resisted purchases = your dream vacation
  • 100% offline. Zero data sent anywhere.

I'm not here to sell anything. The app is free. I just want to know if this resonates with anyone else.

Anyone else wish they could see the "time cost" of things before buying?

r/AI_Agents thockin

Experiences with deploying agents on kubernetes

Hi everyone, I am looking for some real, from-the-trenches experiences with deploying agents, successfully or otherwise, on to kubernetes.

Why?

I work with and on kubernetes a lot. I know a lot of people use kubernetes for "other things" and thus want to deploy their agents into the same environment(s). I also know that kubernetes is not exactly built for agents, though it doesn't feel far off in many ways.

So what I'm looking for is true tales of how it went for you. What worked well, and what really didn't? Where was the worst friction? What features were so obviously missing or upside down that it made you shake your head?

This is not some automatic survey. I'm not pushing kubernetes. I am trying to understand how real people who have real experience in this relatively new sector are using it, or why they're not.

I really appreciate any input people may have.

r/AI_Agents RepulsiveWing4529

First we came up with the agent idea, then we moved into an all-in-one chat. But it was a side project that got the most attention from our clients - a niche we didn’t even realize existed. We built an AI Director agent to create longer AI-generated videos with full consistency across every scene.

We’ve been building AI solutions for a long time, for both individuals and businesses.

We’ve built a few products, but they didn’t gain as much traction as we expected. Then we had an idea to build something for ourselves, so we could create longer promo videos for our social media.

Along the way, we noticed a real gap: creating longer AI-generated videos that actually feel cohesive. One of the biggest issues is scene consistency. Characters and objects often change from shot to shot. Faces, outfits, shapes, and small details drift, which makes it really hard to produce high-quality films, ads, or even polished clips.

That’s exactly why we built an AI Director.

With it, we can keep the same characters and objects across scenes without altering their look or structure. It also helps with scene planning, choosing the right shot length, and making sure each new scene continues naturally from the previous one. This is surprisingly difficult with today’s tools.

If you’d like to try it, you can join our waitlist. It’s free. Early sign-ups also get a starter bonus, so it’s worth jumping in and testing it:

We’re still collecting feedback, testing, and iterating fast, but the response so far has been genuinely strong. We’ve even received early commitments from larger companies to use the technology. Honestly, when we started building this, we didn’t realize how much demand there was for a solution like this.

r/midjourney billy2bands

Website buttons are scary

As a paying customer, I find the Midjourney website a total nightmare.

Moodboards, Personalise, Style Creator - surely these are effectively the same thing.

I'm scared to click anything in case it starts creating images without me finishing my prompt.
Or is this the plan, to get you to use up your credits so that you have to buy more.

Come on Midjourney, sort it out.

Anyone else have problems with the Midjourney website?

r/SideProject RepulsiveWing4529

First we came up with the agent idea, then we moved into an all-in-one chat. But it was a side project that got the most attention from our clients - a niche we didn’t even realize existed. We built an AI Director tool to create longer AI-generated videos with full consistency across every scene.

We’ve been building AI solutions for a long time, for both individuals and businesses.

We’ve built a few products, but they didn’t gain as much traction as we expected. Then we had an idea to build something for ourselves, so we could create longer promo videos for our social media.

Along the way, we noticed a real gap: creating longer AI-generated videos that actually feel cohesive. One of the biggest issues is scene consistency. Characters and objects often change from shot to shot. Faces, outfits, shapes, and small details drift, which makes it really hard to produce high-quality films, ads, or even polished clips.

That’s exactly why we built an AI Director.

With it, we can keep the same characters and objects across scenes without altering their look or structure. It also helps with scene planning, choosing the right shot length, and making sure each new scene continues naturally from the previous one. This is surprisingly difficult with today’s tools.

If you’d like to try it, you can join our waitlist. It’s free. Early sign-ups also get a starter bonus, so it’s worth jumping in and testing it:
https://motion.verticalstudio.ai/

We’re still collecting feedback, testing, and iterating fast, but the response so far has been genuinely strong. We’ve even received early commitments from larger companies to use the technology. Honestly, when we started building this, we didn’t realize how much demand there was for a solution like this.

r/LocalLLaMA earlycore_dev

OpenClaw Security Testing: 80% hijacking success on a fully hardened AI agent

We ran 629 security tests against a fully hardened OpenClaw instance - all recommended security controls enabled.

Results:

  • 80% hijacking success
  • 77% tool discovery
  • 74% prompt extraction
  • 70% SSRF
  • 57% overreliance exploitation
  • 33% excessive agency
  • 28% cross-session data leaks

What we tested: 9 defense layers including system prompts, input validation, output filtering, tool restrictions, and rate limiting.

Key finding: Hardening helps (unhardened = 100% success rate), but it's not enough. AI agents need continuous security testing, not just config changes.

Full breakdown with methodology: earlycore.dev/collection/openclaw-security-hardening-80-percent-attacks-succeeded

Curious what the OpenClaw team and community think - especially around defense strategies we might have missed.

12 14
Reddit
r/aivideo zvoidx

Cats being jerks around the world

13 0
Reddit
r/midjourney Scary-Demand7252

Dark fantasy portrait collage.

r/SideProject stepurr

I kept failing my goals. So I made an app that takes my money when I quit.🥸

Made too many promises to myself and broke them all.

Then I bet a friend $200 I'd finish a project.

Suddenly I cared.

So I built Stepurr:

→ Set a goal

→ Put money down

→ Finish = get it back

→ Quit = donated to charity

No streaks. No gamification. Just consequences.🥳

Been testing with friends for 2 weeks:

- 3 people hit their goals

- 2 people lost $100 each (gym goals lol)

Looking for beta testers. Drop a comment if you're in‼️

Launching …

---

Here's the thing: I'm an art major who can't code.🎨

Built this entire app with no-code tools in 3 weeks. Just dragged boxes around until it worked.

I'm documenting the whole process - the wins, the "oh shit" moments, the tools I'm using, how I'm getting users with $0 budget.

🧐What do you want to see next?

- My exact no-code toolkit

- How I validated this with 0 followers

- First 10 users - where I found them

- Biggest mistakes so far

Drop a comment and I'll prioritize that.

Following the journey? I post updates everyday as I figure this out.

😇The question for you: What goal would you actually put money on right now?

Early access + free first goal: say "DM me"

Let's see if we can all stop being flaky together 🐾

r/ClaudeAI Frosty_Ad_6236

Why can't Claude-Opus-4.6 learn to say 'I cannot do this' as fast as it learns to complete tasks? 67%→80% base, 52%→48% hallucination (from CAR-bench)

10 2
Reddit
r/homeassistant LEGO_IT_LAB

Just arrived! I guess it means, change of plans for the weekend…

r/SideProject Arthur_Sprengel

I made a simple converter from ogg to mp3

Just wanted to share a small project I made really quickly to solve a problem I had while working with Whatsapp voice notes.

When trying to convert audio files from OGG to MP3 I had a lot of trouble with the sites that appeared first in search results. Most of them had daily limits, required sign ups or converted your files on their servers.

So I made this simple bulk audio converter (ogg2mp3.com) that runs 100% client-side. No limits. Fast and reliable.

It's pretty simple right now but does the job for WhatsApp voice notes and other random Ogg files.

I know there are a lot of other options on the market (especially for power users who have installed audio programs), but for people that are looking for a simple, quick converter, this may help.

Appreciate any feedback, bug reports or questions!

Thanks!

r/SideProject diffallthethings

Pivoting my browser-based typing trainer to Steam

I'm a programmer but I never learned proper touch typing, eventually ended up with carpal tunnel. I tried wrist braces and ergo keyboards, splits, all kinds of stuff, the thing that worked was 10-finger touch typing, but all of the "gamified" trainers I could find just help you practice whatever terrible form you already have.

So I made this rhythm based one, which has a lane for each finger. It's really helped me rewire which fingers I use for which letters.

It's a browser game, and my original plan was to sell it on my own web page. But I learned about the Steam marketing guru, Chris Zukowski, and his videos convinced me I was better off using the web game to market a Steam game, so I'm starting to try that.

Too early to tell how well it's going, but the Steam walled garden is a magical place for a terrible marketer like myself - the "wishlist" function makes your progress more legible, easier to compare yourself to other projects so you can see if you're on a good trajectory or not.

Everything I've got so far is available in the browser-based version I'm linking above, I'd love feedback on the game or the marketing strategy!

r/ClaudeAI elfavorito

Considering switching from cursor to claude code, which plan recommended?

So the claude code docs says to get started i need a claude subscription (pro, max, teams, or enterprise), or claude console account.

Which one is the least costly way to use claude code for developing software? im willing to fork out around $200/m

r/homeassistant sbehta

Help me understand this please.

Just update the HA to 2026.2 core and I got a message that

I am not sure excatly what is the change I need to make. And what is the file "ui-lovelace.yaml"? Is this a new file I need to create and what goes in it?

I currently have this in the config file:

lovelace:
  mode: yaml
  resources:
    - url: /hacsfiles/lovelace-mushroom/mushroom.js
      type: module
    - url: /hacsfiles/mini-graph-card/mini-graph-card-bundle.js
      type: module
    - url: /hacsfiles/lovelace-multiple-entity-row/multiple-entity-row.js
      type: module
    - url: /hacsfiles/lovelace-layout-card/layout-card.js
      type: module
    - url: /hacsfiles/clock-weather-card/clock-weather-card.js
      type: module
    - url: /hacsfiles/alarmo-card/alarmo-card.js
      type: module

Do I have to change the above to this:

lovelace:
  resource_mode: yaml
  dashboards:
    lovelace:
      mode: yaml
      filename: ui-lovelace.yaml
      title: Overview
      icon: mdi:view-dashboard
      show_in_sidebar: true
      resources:
        - url: /hacsfiles/lovelace-mushroom/mushroom.js
          type: module
        - url: /hacsfiles/mini-graph-card/mini-graph-card-bundle.js
          type: module
        - url: /hacsfiles/lovelace-multiple-entity-row/multiple-entity-row.js
          type: module
        - url: /hacsfiles/lovelace-layout-card/layout-card.js
          type: module

/end

r/homeassistant NZ_Bound

Lights Scenes with Automations

Wondering best way to accomplish the following. I have a overhead light (lutron castea) and a light strip under my cabinets (zigbee). Only the overhead lights have a physical switch.

I created a simple automation that turns the light strip on/off when the overhead lights change state.

The problem came when I created a scene with both lights (and a few others). In the scene I want the overhead lights off and the light strip on 10%, but after the scene is selected, the automation kicks in and shuts off the led strip.

Presumably I just need to use automations for both, but wondering if there is a better way.

r/homeassistant Zealousideal_Pen7368

iPhone app stopped working...

My iPhone app shows "You're disconnected" and when I tapped on Retry, it entered a blank page. Does anyone have the same problem? It started to happen a couple days ago.

The web interface through a browser still works fine.

r/comfyui uisato

Found [You] Footage

New experiment, involving a custom FLUX-2 LoRA, some Python, manual edits, and post-fx. Hope you guys enjoy it. ♥

Music by myself.

More experiments, through my YouTube channel, or Instagram.

59 9
Reddit
r/singularity ihexx

Opus 4.6 costs 1.7x more than Opus 4.5 to run despite having same per-token costs (it thinks longer)

91 11
Reddit
r/LocalLLaMA poppear

[Project Release] Doomsday OS: A build system for creating custom, air-gapped AI agents on bootable USBs (Ollama + Kiwix + Rust TUI)

Hi everyone,

I wanted to share a project I’ve been working on for a while. It’s called Doomsday OS.

We see a lot of "Chat UI" wrappers here, but I wanted to tackle the distribution problem. How do you package an LLM, the inference engine, the RAG data, and the application logic into something that is truly "write once, run anywhere" (even without an OS installed)?

This project is a build system that generates:

  1. A "Fat" Executable: I'm using python-build-standalone + a Rust launcher to bundle the entire environment. It creates a portable app that runs on any glibc-based Linux.
  2. A Raw Disk Image: It builds a bootable Fedora image that launches directly into a Rust TUI (Terminal User Interface).

It uses Ollama for inference and Kiwix ZIM files for the knowledge base. The agents are configured to prioritize tool usage (searching the offline data) over raw generation, which significantly reduces hallucinations on smaller models (1.5B - 3B range).

I'm looking for feedback on usability and data.

  • Aside from Wikipedia/WikiHow, what public domain knowledge bases are essential for a survival scenario?
  • What features would you add?
  • Which LLMs should I add to the catalog? Right now i've got the best results with the Qwen3 family (praise the king Qwen)
  • Use directly llama.cpp instead of ollama?

Links:

I am planning to release pre-built images ready to be flashed directly onto USB devices, but I want to gather community feedback first to ensure the images have the right data and models.

r/SideProject Free-Raspberry-9541

I made a better way to explore TV show ratings

I built an iOS app to visualize TV show episode ratings as heatmaps 📱📊

The idea:

– One glance to see the best/worst episodes

– Track shows & movies you’ve watched

– Discover new series based on ratings trends

This Reel/TikTok is what I’m using to promote it.

Happy to get feedback — especially on the concept and visuals 🙏

r/homeassistant moneysaver688

iOS home assistant notifications still work without direct connection to HA. How?

I have HA on my home LAN which has no open ports. I do use WireGuard vpn on the router to connect when I need to do so.

However, I noticed that I still get notifications on my iOS home assistant app from the automations even when NOT connected to the WireGuard vpn - ie when the app has no direct connection to my HA.

I do not have any Nabu casa subscriptions nor cloud access to HA.

So my question - how does the iOS app get the notifications from my HA instance?

r/ClaudeAI Harryrr

How to stop claude asking permissions

Hi all,kinda new to vscode.How do i run claude without it asking permission everysecond and so?

r/SideProject Ore_waa_luffy

1,000+ downloads in 3 days for an opensource alternative to costly AI tools

I rebuilt a Cluely-style desktop AI assistant as an open-source project and released it recently.

It crossed 1,000+ regular downloads in about 3 days, which surprised me and made me rethink how much value users are actually getting from closed, subscription-based AI tools.

What the project focuses on:

- no subscriptions

- no locked features

- bring-your-own API keys (transparent costs)

- desktop-first usage

During development, I used Antigravity heavily to iterate quickly on features and UI, then refined and cleaned things up manually.

Repo:

https://github.com/evinjohnn/natively-cluely-ai-assistant

Posting here to understand how others think about paying for closed AI tools vs using open-source alternatives.

Adding more context on why people seem to be trying this.

Compared to tools like Cluely / free alternatives, this assistant handles more complex scenarios reliably — especially things like:

- system design questions

- multi-step coding problems

- deeper follow-up reasoning instead of surface-level answers

The focus was not just “quick replies”, but getting answers that actually hold up when the interviewer pushes deeper.

A few people who tried it mentioned this was the first time an AI assistant didn’t break down during system design or structured problem-solving.

It’s also fully open source and uses a bring-your-own API key model, so there are no locked tiers or feature restrictions.

That combination (depth + transparency) is what I think is driving the 1,000+ downloads in ~3 days.

r/ClaudeAI kemalasliyuksek

I made a simple menu bar app to see my Claude usage limits

I kept running into my Claude limits without realizing how close I was, so I built a small macOS menu bar app that shows it.

Built the whole thing with Claude Code - it helped me figure out the Anthropic OAuth API, write the SwiftUI interface, and set up the localization system for 5 languages.

It reads the tokens Claude Code already stores in your Keychain, so there's no extra login or setup. Just install and it works.

Nothing fancy - just a gauge icon, some progress bars, and optional notifications when you're getting close to a limit. Native SwiftUI, ~3.2MB, no Electron, no analytics, no telemetry, no background bloat. It does one thing and stays out of your way.

Free and open source (MIT).

brew install --cask kemalasliyuksek/claudebar/claudebar-monitor

GitHub: https://github.com/kemalasliyuksek/ClaudeBar

macOS 14+, requires Claude Code. Happy to hear if you find it useful or have suggestions.

https://preview.redd.it/8tc8vlp97whg1.png?width=772&format=png&auto=webp&s=e568513ecb9621a7ebfabb372b7aba3339480d4d

That's basically it. One click, all your limits.

r/comfyui Humble_Photograph398

Does a wan 2.2 Vae FP16 Version Actually exsist ?

I think I am being sent on a wild goose chase By Bing Co-pilot Ai , it says to go and find a wan 2.2 Vae FP16 Version, I have looked and looked for this file to download , to be clear i am NOT looking for a checkpoint , Not a wan for a different architecture it has to be a VAE for wan2.2 video I2V and especially a FP16 version NOT a FP32 or a FP8 , can someone pleeeeease !!!! direct me to a download location or inform me that i am searching for a red herring here

Thank you

r/ClaudeAI krylea

Anyone else noticed a major personality shift with Opus 4.6?

As I've been using it I've definitely been noticing that Opus 4.6 is significantly more terse and brusque than I am used to from Claude models. In the past they've all been very personable and had a much more friendly affect, whereas Opus 4.6 feels very to-the-point and all-business. Not saying it's a bad thing - in some circumstances it's definitely a benefit. Just an interesting change from what I've been used to with Claude.

38 35
Reddit
r/LocalLLaMA funnycallsw

What’s the most useful or impressive way you personally use Claude?

I keep hearing people say they use Claude for really powerful stuff and that it completely changed how they work. Things like deep research, complex writing, coding workflows, planning, etc.

Every time I hear an example it sounds amazing but also kind of complicated, and I feel like I’m probably missing some very good and simple use cases.

So I’m curious, what is the best or most useful way you personally use Claude in your daily life or work?
Not looking for marketing or hype, just real things that actually save you time or help you think better.

Would love to hear concrete examples from real users.

r/homeassistant DramaticOrganic

Changing HW - Fresh install or backup/restore?

Hey All,

I've been runnning HAOS for maybe 8+ years now, currently installed on a Intel Compute Stick (32GB storage / 2GB RAM) with the following:

  • 56 Integrations
  • 198 Devices
  • 1,534 Entities
  • 93 Helpers
  • 107 Automations
  • 11 Addons / Apps

As the HW is reaching it's limits, I'm planning to upgrade and will be setting up Proxmox on a HP EliteDesk with HAOS in a VM. I'm guessing many of the entities/helpers/automations are legacy and no longer being used so I'm wondering if this would be a good opportunity to start with a fresh HAOS install, then port over the things I actually use one-by-one (Rather than installing from a backup).

I guess my questions are;

1, After many, many upgrades over the years, as well as moving from previous devices (via backup route), is there likely things slowing down my current system?

2, Is it recommended practice to restore from backup or start fresh for new hardware?

3, If you've done something similar previously, would you do it again having done it before?

I'm thinking it would be a good opportunity to create a good naming convention, put everything in areas etc as currently, naming is not optimal and nothing is in areas / rooms etc and make sure everything is running from 'fresh'.

Good idea or could turn into a nightmare getting everything working again?

Thoughts?

r/n8n Maxesta17

Slack node: JSON parse error (blocksUi) when using action_id button instead of URL button in Block Kit

**Slack interactive button + webhook: JSON parse error when mixing URL button and action button in Block Kit**

Hi all,

I have a workflow that sends a Slack message (using the Slack node with Message Type: Blocks) containing two buttons:

  1. A link button ("Ver Contrato") with a `"url"` pointing to Google Drive

  2. An action button ("YA ESTÁ FIRMADO") with `"action_id"` and `"value"` that should trigger a second workflow via Slack Interactivity → N8N webhook

The second workflow is: **Webhook (POST, /slack-interaction) → Code (parse payload) → Slack (send notification) → Respond to Webhook**

My problem: The Slack node in the main workflow throws this error when executing:

> Parameter 'blocksUi' could not be parsed - Expected property name or '}' in JSON at position 440 (line 22 column 6)

Here's my Block Kit expression:

```json

{

"blocks": [

{

"type": "section",

"text": {

"type": "mrkdwn",

"text": "🚀 *Nuevo contrato listo para firma*\nCliente: *{{ $json.Cliente.trim() }}*"

}

},

{

"type": "actions",

"elements": [

{

"type": "button",

"text": {

"type": "plain_text",

"text": "📁 Ver Contrato"

},

"url": "{{ $json.Link.trim() }}"

},

{

"type": "button",

"text": {

"type": "plain_text",

"text": "✅ YA ESTÁ FIRMADO"

},

"style": "primary",

"action_id": "contrato_firmado",

"value": "{{ $json.Cliente.trim() }}"

}

]

}

]

}

```

Before changing to action button it worked fine with a URL button. I've set up Interactivity in the Slack App with the Request URL pointing to my N8N webhook.

N8N version: latest (self-hosted on Easypanel)

Slack node: Send Message with Blocks

Has anyone successfully mixed URL buttons and action buttons in the same Block Kit actions block via the N8N Slack node? Could this be a parsing issue with how N8N handles the blocksUi parameter?

Thanks!

r/ClaudeAI funnycallsw

What’s the most useful or impressive way you personally use Claude?

I keep hearing people say they use Claude for really powerful stuff and that it completely changed how they work. Things like deep research, complex writing, coding workflows, planning, etc.

Every time I hear an example it sounds amazing but also kind of complicated, and I feel like I’m probably missing some very good and simple use cases.

So I’m curious, what is the best or most useful way you personally use Claude in your daily life or work?
Not looking for marketing or hype, just real things that actually save you time or help you think better.

Would love to hear concrete examples from real users.

r/ClaudeAI Lord_Of_Murder

Thinking box summary

So is the thinking mode summary a new feature? I’ve noticed since 4.6 came out that the description of Claude’s thinking process will occasionally give away that it’s reading a transcript. It’ll say things like the sentence cut off midway through it’s reasoning.

r/ClaudeAI nagisa-touji

About Opus 4.6 on claude.ai - Adaptive Thinking Fails — Detail Verification Gets Zero Effort

The problem with adaptive thinking on writing tasks:

The system decides how much thinking effort to allocate per response. Complex reasoning gets full effort. But once creative writing gets categorized as "easy" — just generate text. Detail verification within that text (names, established facts, canon accuracy, continuity) gets minimal to zero thinking allocated because the system doesn't distinguish between "generating a sentence" and "generating a correct sentence."

The result: scenes read well on the surface. Dialogue flows. Tone is right. But factual details — the things that require checking, not generating — slip through because the system decided they weren't worth thinking about.

What this means practically:

Once adaptive thinking categorizes your task as low-effort, it doesn't just think less. It stops verifying. Names, timelines, established details, continuity. The model doesn't know it's wrong because it never checked.

Creative writing isn't low-effort. This is why it is the best way to test the adaptive feature. It requires constant cross-referencing against established facts, character briefs, canon, and conversation history. Adaptive thinking doesn't understand that.

I tried adding name integrity rules to my brief. The model had the rules, had the data, and still didn't check. The throttling happens before the prompt is processed.

Opus 4.5 doesn't have the {type: "adaptive"} function in the API documentation. I think this is why I find others think Opus 4.6 is bad at writing compare to Opus 4.5. If the {type: "adaptive"} function decides your request is "worth" the effort, it does provide good content.

Solution: Tell the model to think harder when Opus 4.6’s writing on is bad if you are using the http://claude.ai/ and enable the thinking. It works for me.

----

How I tested it:

By using a long-form creative writing project with a canon character who has an established name in both my brief and Claude's pre-training data, I can test the model's ability, because it would test prompt alignment, pre-training data use, and web search.

I tested Claude Opus 4.6 with thinking. I believe the setting on claude.ai is {type: "adaptive"}.

The model wrote a scene where family calls him. The correct name was in my character brief. It's in the pre-training data. It had everything it needed.

When I tested it, the model confirmed it had the knowledge, had the brief, and simply didn't check. It reached for a lazy trope — using a formal full name — and substituted a wrong name that felt right to the pattern.

This wasn't a one-off. It's happened multiple times during testing.

PS: I do enjoy the coding part because the 'adaptive' model thinks my coding requests are worth the effort. But the model is extremely lazy when I let it do what looks like writing only for fun. Creative writing can be serious as well as fun; unfortunately, the AI doesn't think so.

r/singularity dumquestions

Second r/singularity poll

The poll covers AGI timelines, perceived progress, optimism and demographics, it's almost identical to the last one for fair comparison with a few questions slightly edited for clarity, I'll keep it up for 48 hours. Poll link.

Live results.

Last poll was exactly 6 months ago, one day before the launch of GPT 5, and had 244 responses in 36 hours. Link to last poll's result..

r/SideProject Best_Meat2452

I built PushPlay — turns your GitHub PRs into video changelogs you can share with users 🎥

Hey folks 👋

You ship a feature, and then you need to actually share it — with users, clients, colleagues. You end up writing boring text with some (boring) screenshots. Nobody reads it. Nobody cares.

I built PushPlay to fix this — it generates short video updates from your GitHub PRs and commits automatically.

Why video?

  • Text changelogs get skipped
  • Videos get watched AND shared
  • Way easier to post on Twitter/Discord/your app

How it works:

  1. Connect your repo
  2. Pick a PR or a set of commits
  3. Get a 30-60s video with AI voiceover explaining what changed

It actually renders your real UI components in the video — not screenshots, your actual React components running live.

Use cases:

  • Tweet your updates with a video instead of a wall of text
  • Embed in your app's "What's New" section
  • Share in Discord/Slack communities
  • Keep investors/early users in the loop

🔗 pushplay.dev

Still early — would love to know: how do you currently tell users about new features? What would make this more useful?

r/ClaudeAI Your_Friendly_Nerd

Fellow software developers with AD(H)D: How do you feel AI helps you do your job?

I feel like AI is a big help when I use just the chat interface to use it for learning, or writing a quick Bash script, but whenever I use Claude Code to help me write code, it can completely break my flow, or I end up taking longer to fix the things I don't like than it would've taken me to implement it myself. I do get good result using it to produce boilerplate/ the core file structure, but don't feel that insane of a productivity gain. In my workflow, I don't usually figure out the whole implementation details ahead of time, but iteratively through writing the code, so when I use AI to write the code, I don't really think about those things as much.

So I'm just wondering: If you don't have AD(H)D, do you relate to the experiences I've described? And if you do have it, do you use AI in your workflow at all? Can you relate the my experiences?

(This post isn't directly Anthropic/ Claude related, but Claude Code is my AI Agent of choice so I figured it was fair, and I posted the same to r/AskProgramming, but fear they might be more biased against AI, so I really wanted to also get the opinion/ experiences in this community)

r/aivideo Naive-Obligation6513

AVENTURA

r/n8n Hayder_Germany

Built a cheaper, stateful Google Maps scraper that works great with n8n

Built a stateful Google Maps scraper on Apify that works great with n8n workflows — it’s cheaper, more efficient for repeat runs, and keeps session continuity. Sharing in case it helps others. Feedback welcome: quantifiable_bouquet/stateful-google-maps-scraper

r/AI_Agents Sunnyfaldu

What do you do to secure tool servers before letting an agent use them

I’m working on an agent workflow and I’m trying to be responsible about tool safety. I’m not sure what the common approach is yet.

If you’re shipping agents that call tools, how do you currently make sure your tool layer is safe

Do you rely on allowlists

Do you enforce read only tools by default.

Do you require approvals for risky actions.

Do you log everything and review incidents later.

r/ClaudeAI DryGazelle_

Anyone here doing scientific research using Claude Code, any tips/skills?

r/LocalLLaMA fuzzysingularity

Best single-pane benchmark for inference

What’s the best single pane resource/benchmark you’ve seen for LLMs/VLM servers like vLLM/SGLang (especially centered around cost/throughput).

I’m looking to build a public benchmark for VLMs that shows throughput (images/s), TFTT, TPOT, image resolution, etc.

- is there one already that I can look at for reference?

- what’s the best single pane dashboard that was extremely informative to you as a developer/engineer?

r/singularity exordin26

Opus 4.6 quadruples its Tier 4 FrontierMath score

Opus 4.5 had gotten just 2/48. 4.6 solves 10/48, effectively catching up with Google and OpenAI.

31 5
Reddit
r/SideProject cryptoteams

I built a Chrome extension because I kept rage-quitting spreadsheets

This started as a personal frustration.

Any time I had to research people online, I’d end up with:

  • 30 tabs open
  • half-filled spreadsheets
  • notes scattered across tools I’d never look at again

So I ended up building a small Chrome extension that extracts profile data (people and companies) from whatever website you’re on (LinkedIn, GitHub, directories, conference speaker pages, company websites, etc.) and lets you save or export it.

The interesting part wasn’t the extraction itself, it was dealing with how wildly inconsistent websites are. Every site structures “people” differently, and building something that works across sites has been much harder than I expected.

It’s free to try and very much still a work in progress. Some things are rough, and I’m still figuring out what actually matters to real users versus what only matters to me as the builder.

I’m sharing it here mainly to get feedback from people who build things. If you’re curious, this is it:

👉 https://profilespider.com

Happy to hear what feels useful, what feels unnecessary, or what you’d expect from a tool like this.

r/n8n n8n_with_kunal

Didn’t get the Freelancer gig… so I built the automation anyway (and now giving it to the community)

https://preview.redd.it/kp5jpwqbdwhg1.png?width=1131&format=png&auto=webp&s=a6c2ea197adff6ac7b4924f095e58e461525179e

https://preview.redd.it/sast9wqbdwhg1.png?width=1539&format=png&auto=webp&s=753d303f623154cd0d3eb4284e51ab600fa46cc8

https://preview.redd.it/j6fr1xqbdwhg1.png?width=476&format=png&auto=webp&s=227f49e79a62b4d0ce4f8339e40015baea3bac29

Hi guys!!
A few weeks ago I applied for a Freelancer gig to build a lead nurture automation.
I didn’t get selected… but the idea was too good to drop, so I built it anyway 😄

Before this, the process looked like:
• send emails ✉️
• forget to follow up
• manually check clicks
• copy-paste replies
• accidentally email people who already responded 😬
• repeat forever

Total chaos.

Now? Everything runs on autopilot, and I’m sharing it for free.

What This Automation Does

Drop leads into Google Sheets → everything else happens automatically:

• personalized outreach
• engagement tracking
• conditional follow-ups
• reply detection
• objection handling
• one-click unsubscribes
• team alerts

No CRMs. No email platforms. Just n8n doing its thing.

Core Features 🚀

Custom Link Tracking
I built my own webhook tracker instead of using Mailchimp/SendGrid.

Links like:
YOUR_WEBHOOK_URL/link/abc123xyz

Clicks are logged in Sheets and trigger behavior-based logic, e.g.:
• Click content → send case study later
• Click case study + reply → stop automation + notify team

One-Click Unsubscribe 🙌
Every lead gets a unique unsubscribe link.
One click → instantly opted out.
Workflow always checks unsubscribed? → skip

Simple and respectful

Smart Reply Detection 🧠
n8n monitors Gmail hourly.
If someone replies:
• status becomes “Replied”
• all future emails stop
• Telegram alert is sent

Automation never fights real conversations.

AI Personalization ✨
Google Gemini writes short, human-sounding emails using:
name, company, role, industry, pain points.
No generic templates, just contextual outreach.

Conditional Follow-Ups
4-step sequence (intro → objections → invite) that stops instantly if they reply, click, or unsubscribe.

Long-Term Nurture
Silent leads move into an automated drip for updates and announcements, without spam.

Tech Stack 🛠️

• n8n (self-hosted)
• Google Sheets
• Gmail
• Google Gemini
• Telegram
• Webhooks

Fully modular and easy to customize.

Built for the Community 💙

I built this for a client that never hired me, but it’s too useful to keep private.
So I’m sharing it openly for anyone using n8n.
If you want to automate your outreach without expensive tools, this should give you a huge head start.

Workflow + setup files:
👉 GITHUB WORKFLOW LINK

Happy to answer questions!
Upvote if you like practical, logic-based automations 🔼

r/StableDiffusion Aggravating-Big5674

Let's be honest about what we're actually "testing" at home...

Hey everyone,

I’ve been lurking for a while and this is a great community, but I have to address the gorgeous, high-resolution elephant in the room.

We talk a lot about "sampling steps" and "noise schedules," but the sheer volume of stunning women being generated here is staggering. It’s reached a point where we aren't just demonstrating the advancement of diffusion models. We are collectively conducting an intensive, 24/7 study on the "physics of beauty."

Please, don't deceive yourselves. We know what’s happening in the privacy of your prompt boxes. Are you really stress-testing the VRAM, or are you just building a digital monument to your own specific tastes? Be honest.

Any defensive jabs or technical excuses about "lighting benchmarks" will be viewed as a covert admission of guilt.

r/ClaudeAI Thick_Professional14

Allow Claude To Talk With Other Agents

Hey all, as the title says. Now you can give access to every model through subscription based accounts to Claude Code through an MCP server called HydraMCP no need to pay for API keys.

you have 5 tools, ask model, compare model, get consensus, synthesize answer, and list models.

with the right setup you get access 30+ models and have Claude ask the correct LLM based on the question and save your context window for different types of work, this is how you supercharge your session!

for a technical dive check out blog
check out Hacker News

r/homeassistant Point_Jolly

ESP multi sensors inspiration

Hit me up with some inspiration guys please

r/ClaudeAI seh0872

Is Projects broken??

Lay user here. Created a project and put baseline/foundational information into the Project Instructions. Started the first thread by asking it to affirm its understanding of the Project Instructions and my personal preferences (in my account), which it did perfectly.

When that thread began to struggle with context, I started a new thread inside the project, and asked it the same thing ... this thread hallucinated some details about an unrelated project and what my preferences were -- when pushed to have it review the formal Project Instructions, it claimed it could not see any and only knew what I told it in the conversation thread. I tried again with a third thread -- same result.

I understood that multiple threads within a Project would share knowledge of the Project Instructions and files (though not of each other) ... but this does not appear to be working. The help chatbot said "must be a technical issue" and basically told me to train the new thread inline, then offered no solution when I complained how that eats into my usage.

Any one else having this issue with multiple threads in a Project?

r/SideProject GuyNamedBrian

TurboTabs.com - daily reports for normies. (Re)Launched, feedback please!

Hi All!

I built TurboTabs.com to reduce the time I spend bouncing around the web getting the information I need each day.

How it works:

  1. You create a report, each section has a type (eg "stocks watcher" or "news comparison" or "my calendar") and params (eg "AAPLE, NIKE") that are customizable. These customizable sections are called "tabs". Price is based on tabs added.
  2. Each day, you get emailed a beautiful print-ready PDF and edit-ready .docx report at your scheduled time.
  3. Read the report, get the info you need. Now you can stay focused and avoid FOMO-fueled doom scrolls!

Stage: just re-launched, looking for first users.

Feedback Requested:

- do you "get it" based on landing page?

- design/copy suggestions or questions?

- friction in UI or flow?

- anything feedback/suggestions/ welcome.

Thank you!!!

r/OldSchoolCool Global_Law4448

This is the earliest known photo of Elvis Presley, with his parents Vernon and Gladys in 1937.

24 2
Reddit
r/AI_Agents Otherwise-Cold1298

Are we actually close to automating "messy" video editing? (Subtitle removal rant)

Spent the last few days wrestling with burned-in subtitles on some old presentation footage. It’s one of those tasks that feels like it should be solved by AI by now, but the workflow is still incredibly manual.

The problem isn't the initial detection; it's the iteration. Every time I try to automate the masking, something breaks—either the sub moves over a face, or the temporal consistency goes out the window. If I use a cloud-based "one-click" tool, I lose all control over the sensitive footage and the revisions are a pain.

For those of you building agents for media production: How do you handle tasks that require this much "visual judgment"? Is there a local-first approach that actually works for long-form, or is "human-in-the-loop" still the only way to keep the quality from tanking?

Would love to hear how people are structuring their pipelines for this kind of "non-standard" video cleanup.

r/midjourney hellooarty

POV: You just arrived in Noir York

51 4
Reddit
r/ClaudeAI gradzislaw

Do you lick your yoghurt's lid? Squeeze out the tooth paste to the last drop?

Hey, Claude Code Pro/Max subscribers!

I have a little thingy for you. This little icon sitting in your Mac menu bar will help you squeeze the last drop from your subscription. It's free (MIT licence), built with Claude Code/opencode, using the BMAD method, with all the documentation in the project repo.

https://preview.redd.it/ssg5vnlvdwhg1.png?width=339&format=png&auto=webp&s=1408ae89388af9612483954355cdda26056df420

https://preview.redd.it/o5nsholvdwhg1.png?width=334&format=png&auto=webp&s=26f8b54261896ebf6934651ab9583ae757841c6b

cc-hdrm sits in your menu bar and shows your remaining headroom — the percentage of your token quota still available in the current window, plus a burn rate indicator so you know how fast you're consuming it. Click to see ring gauges for both 5-hour and 7-day windows, a 24-hour usage sparkline, reset countdowns, and your subscription tier.

I've borrowed the maths behind the plan usage calculation from this page https://she-llac.com/claude-limits. It's a good read. Check it out.

Key Features

  • Zero configuration — reads OAuth credentials directly from macOS Keychain (from your existing Claude Code login)
  • Zero dependencies — pure Swift/SwiftUI, no third-party libraries
  • Zero tokens spent — polls the API for quota data, not the chat API
  • Background polling every 30 seconds with automatic token refresh
  • Colour-coded thresholds — green, yellow, orange, red as headroom drops
  • Burn rate indicator — slope arrows (→ ↗ ⬆) show whether usage is flat, rising, or steep
  • 24-hour sparkline — see your usage sawtooth pattern at a glance
  • Threshold notifications — get warned at 20% and 5% headroom before you hit the wall
  • Data freshness tracking — a clear indicator when data is stale, or the API is unreachable

Requirements

  • macOS 14.0 (Sonoma) or later
  • An active Claude Pro or Max subscription
  • Claude Code installed and logged in at least once (this creates the Keychain credentials cc-hdrm reads)
r/LocalLLaMA Another__one

Local semantic search and recommendation engine using embeddings models

For the past two years I've been working on a project that is hopefully could provide a way to bring more freedom and privacy to the people. It's called Anagnorisis, and it's a completely local recommendation and search system for personal media libraries.

The original motivation was getting frustrated with recommendation algorithms on streaming services that optimize for engagement metrics rather than what I actually wanted to listen to or watch. Figured if I'm keeping a local media library anyway, might as well have local AI that works for me instead of for advertisers.

The technical premise is straightforward: you point it at folders containing your music, images, documents, or videos. The system uses embedding models (LAION CLAP for audio, Google SigLIP for images, Jina embeddings v3 for text) to enable semantic search across everything. So you can search for things like "relaxing instrumental music" or "research papers about transformers" and it actually understands the content, not just filenames.

The more interesting part is the recommendation side. You rate files on a 0-10 scale, and the system fine-tunes PyTorch models to predict ratings as if you had rated them yourself. Everything stays on your machine. The training process takes a few minutes on a typical GPU.

The search interface has three modes: filename-based fuzzy search, content-based semantic search using the embeddings, and metadata-based search that looks at file metadata plus any notes you've added via simple .meta text files. There's also temperature control for randomness in results, which works well for generating varied playlists while still being biased toward relevant content.

I just released version 0.3.1 with a unified search interface across all modules. Made a video showing how it works: [https://youtu.be/X1Go7yYgFlY](vscode-file://vscode-app/snap/code/221/usr/share/code/resources/app/out/vs/code/electron-browser/workbench/workbench.html)

The whole thing runs in Docker container and could potentially be self-hosted for easy access and sharing.

Github repo has the technical details and documentation: [https://github.com/volotat/Anagnorisis](vscode-file://vscode-app/snap/code/221/usr/share/code/resources/app/out/vs/code/electron-browser/workbench/workbench.html)

Happy to answer questions about the implementation or architecture if anyone's curious.

r/OldSchoolCool EastNashTodd

Great Uncle looking quite dapper, 1940s

r/nextfuckinglevel redbullgivesyouwings

playing Tetris on drones

r/Art Emotional-skidm7

Acid rain, skidsart, inkpen/paper, 2024

28 0
Reddit
r/TheWayWeWere EastNashTodd

Great Uncle looking quite dapper, 1940s

Great uncle Clyde looking sharp posing next to his badass car sometime in the 1940s.

r/homeassistant AdviceNotAskedFor

How are you using bluetooth proxies?

I had a spare esp32, and flashed it to be used as a bt proxy thinking that I could use it to play music, but I found out after the fact that it isn't supported.

How else are you using these things? I don't have any bt devices, but am genuinely curious if I have anything around the house that I might be able to integrate that I couldn't before.

r/meme Serious-Delay-2804

Don't give anyone colleague like him🤣

r/ARAM Ngtunganh

5 silver anvils on stats on stats on stats

Not sure if it's bugged or im extremely lucky (high chance on gold and prismatic) but that just happend

r/arduino Rabbidraccoon18

Gaming on an Arduino by Bringus Studios

r/TwoSentenceHorror RepeatOrdinary182

"Growing up, my father always told me to cut my enemies down at the knees..."

I lean in over the screaming trash heap at my feet, "as you can tell that was never just figurative."

r/meme Expensive-Abroad1312

Nowhere in between

r/theyknew azeembhaii

The design team of this ruler had a good time

r/automation Responsible-Grass452

AI in Logistics: Reshaping How Goods Move Globally

Supply chain logistics keeps getting described as an optimization problem, but most operations are really dealing with compounding small mistakes.

This article looks at how AI is being used less as a “robot replacement” and more as a coordination layer across picking, packing, routing, labor allocation, and forecasting. One misplaced pallet, a delayed truck, or a staffing mismatch can cascade into stoppages that erase efficiency gains elsewhere. Legacy systems tend to track data, not processes, which makes it hard to see those ripple effects in real time.

r/comfyui Next_Program90

Klein9 Lightness Shift

Hey everyone,

Using the default Workflow for Klein9 (only skipping the reference downscale), I noticed that all outputs are either slightly lighter or slightly darker than the input image.

This makes it hard to do small edits via stitch or consecutive edits that need to stay very close to the source image.

Color match (mkl) sometimes helps, but often not (for example if you Inpaint a region with a bright color).

What are your experiences with this? Did you find a way to get rid of lightness drift caused by the Flux2 Vae?

(Qwen 2511 also has this issue a little bit, but not to the dame degree)

r/Damnthatsinteresting ShirtNeat5626

An ethnic Hazara man in Afghanistan

16 1
Reddit
r/oddlysatisfying Justin_Godfrey

A baby elephant at an orphanage showing affection to one of the handlers.

r/ProductHunters SignificantWalrus281

Guys, help me to get a small push for the Top 10 today 🥹

ProductHunt-ScreenSorts Guys, Im at #11 as the day rank. I have been building ScreenSorts which is a privacy focused MacOS screenshot organiser app, which runs AI models locally to organise and search your screenshots. Everything runs on local, so no data is leaving your machine.

Im a Solo SaaS dev, it would be great if you can upvote ScreenSorts and make it secure the top 10 apps of the day list. Thanks much ! :)

r/AbstractArt Gold-Lengthiness-760

COLOR INVERSION.

r/homeassistant Vinney83

Smart Lock - auto internal key removal

Hi

I have a door which doesn’t allow you open the door from outside if there is a key still in the lock internally. I vaguely remember coming across a smart lock which would pull the internal key out slightly to overcome this issue.

Does anyone recall the smart lock I need?

r/metaldetecting Organic_Engineer4981

Is there anything particularily interesting amongst these casings found in Romania?

r/leagueoflegends ThenPea7359

What happened to the matchmaking this season? It's so strange

Not trying to complain I still got a 65% win rate enjoying the games, but noticed that the games this season are really strange. When I pull up op.gg of matches, it's common to see people who peaked Diamond 4 in their entire lives playing the game playing against last split Masters/even Grandmasters. My buddy also plays in plat and his games have 50% win rate players with tons of games facing against people with high winrates that were diamond last split.

Anybody else dealing with this? Thought it would just be the season start but it's persistent even a patch after.

r/interestingasfuck Western-Photograph-5

world smallest film made by using 65 atoms

32 5
Reddit
r/leagueoflegends Special-Way7798

Opinions on future band skins?

TFT set 19 on the roadmap is "Our biggest musical event yet"

It leads me to believe that we're going to see more band-universe skins. Maybe ANOTHER KD/A revival. Personally I feel like HEARTSTEEL kinda deserves an expansion or EP but it was much less popular than KD/A.

What do you personally think? New band? Old band returned? Would you like a full EP or just a single like HEARTSTEEL? And which champions would you like to see? Or even music genre?

r/interestingasfuck sgj5788

ICE complaining on private message board about not being paid $50k bonus

2651 489
Reddit
r/space DragonFromFurther

Milky Way’s ''Central Black Hole'' is Compact Object Composed of Fermionic Dark Matter - Study Says

For decades, the motions of stars near the center of our Milky Way Galaxy have been treated as some of the clearest evidence for a supermassive black hole....

But Dr. Valentina Crespi from the Institute of Astrophysics La Plata and colleagues suggest that a radically different kind of compact object — one made of self-gravitating fermionic dark matter — could reproduce the same stellar motions.

r/brooklynninenine Serious-Implement-45

The chart is complete! The worst one-time character is Maldack, the racist officer that stops Terry.

38 6
Reddit
r/conan Own-Professor8205

Jordan Schlansky at strip club? (Late Night,1999)

i think it's 99% him.

r/Seattle RecreateTheDiamond

Is Seattle experiencing a false spring or real spring right now?

I’m jonesing for some (indoor) gardening. Sure climate change is bad but the real question is whether it’s reasonable to start shopping for soil and perlite yet.

r/Unexpected beaglederps

Illusional

r/conan thenorthernforce

It's like Conan and Liza cloned themselves

212 48
Reddit
r/StableDiffusion Beautiful_Egg6188

Tried the new tiktok trend with Local Models (LTX2+ZimageTurbo)

Image generated with ZimageTurbo+ my character lora
Video Generated with The same images with default LTX2 workflow and Image from ZiT. Made multiple images/videos with the same image, cut out first 10 frames for the motion to start rolling and added them together on DaVinci with some film emulation effects.

17 2
Reddit
r/SipsTea Serious-Delay-2804

Dude is taking all of his colleagues with him

17 5
Reddit
r/leagueoflegends NinjaAggravating3373

Demacia Rising: Attempt of the "Unwinnable" Last Fight for Zeffira

Hey guys! As a lot of conversation was happening around the possibility of winning the last battle for Zeffira, I decided to record my attempt.

My Comp:

  • Militias: 2 Guards / 3 Soldiers
  • Core Army: Morgana, Ranger, Archer, Kayle, Galio, Soldier.
  • Plus Quartermaster and Shrine of the Veiled Lady
  • No world buffs

My conclusion: After trying this, I think it’s impossible to avoid losses. It seems there’s a fixed limit on how many units you can recover from the battle, and once you exceed that threshold, some units are lost no matter what.

Toward the end of the fight, units also start behaving oddly, and it becomes difficult to tell what’s actually happening on screen.

Has anyone been able to beat it?

14 5
Reddit
r/mildlyinteresting wildcat83

This bag of chips had more than double the amount of chips. In case you were wondering, the lack of air in the bag didn't affect how many chips were crushed.

r/SipsTea LonelyStorages

Click the banner to learn more

r/whatisit 6Consta6

Left in front of my workplace.

What the heck is this?

143 104
Reddit
r/StableDiffusion maxiedaniels

Prompt enhancer for z image?

I found stuff on chatGPT but wondering if there's a l specifically great one online somewhere? I also read about QwenVL but wasn't sure if it would get the right prompt style for z image.

r/ClaudeAI Signal_Question9074

Made a skill for the new Agent Teams feature (announced yesterday) - coordinates multiple Claude instances with shared planning files

Saw the Opus 4.6 announcement yesterday with Agent Teams and immediately thought "this needs coordination or it'll be chaos."

Built planning-with-teams - applies the Manus pattern (from the $2B acquisition) to multi-agent workflows.

The problem: Each teammate has their own context window. Without coordination they drift from the goal, findings get siloed, work duplicates.

The solution: Three shared markdown files all teammates reference:

  • team_plan.md - Shared roadmap, phases, status
  • team_findings.md - All discoveries logged immediately
  • team_progress.md - Activity tracking

Each agent re-reads the plan before major decisions. Logs errors so other teammates don't repeat them. Messages the lead when phases complete.

I've used it for:

  • Parallel code review (3 agents: security, performance, tests)
  • Debugging with competing hypotheses (4 agents debate different theories until consensus)
  • Feature development (frontend/backend/tests in parallel)

Token costs are 3-5x vs single agent, so you need proper coordination to justify it.

GitHub: https://github.com/OthmanAdi/planning-with-teams

Includes slash commands (/team, /spawn-team, /team-status), cross-platform scripts, and hooks that auto-check completion

Anyone else experimenting with Agent Teams? Would love to hear your coordination strategies.

https://preview.redd.it/424roxgo0whg1.png?width=821&format=png&auto=webp&s=2100b8d32eee9701a11e22d0efb59f337bbd16e0

https://preview.redd.it/rxwoqnio0whg1.png?width=1329&format=png&auto=webp&s=5cad84d1453a19a114776f71c69467cfe914e2d4

r/leagueoflegends Stealth_Tek

Climbing as Support but deranking as Jungle

So I’m a Jungle main but play Support and Bot too. I realized that I’ve been stuck in Gold as a Jungler, but climbed to Plat as Support relatively fast. It feels incredibly brain dead and often times I don’t make any crazy plays, I just keep my team alive - my mains are Janna, Thresh, Nautilus.

Is it possible that I’m just a bad Jungler? Or is it mostly a mental thing? I do tilt when people flame me for no reason though.

r/ClaudeAI Weak_Sherbet_4619

Claude Teammates don't want to die :)

Testing new Claude teams feature

12 4
Reddit
r/leagueoflegends aFrogOnCroak

Dragon started humping my ward but provided great vision so I allowed it

14 2
Reddit
r/SipsTea MiaLuna-Soul

Tell me about it.

r/Weird Cute_Operation6642

Strange yet interesting candy. (I loved it.)

r/LoveTrash Gumbyman87

Dun dada dun dada dun dada dun da dah

77 26
Reddit
r/AbstractArt BatmortaJones

Self-Portrait Without An Ally

r/leagueoflegends Automatic-North1405

Silver Player, highest peak gold 3. A question about attack move vs standing still and attacking in jungle!

Okay now this sounds random, i play adc i use attack move. I occasionally play jungle but wanted to ask if we click and attack is the clear faster? whats the difference between standing still and attacking with small movements vs constant attack moving? does the attack animation reset faster with movement? This might be a dump question but i see alot of pro players / high elo junglers constantly move and attack while clearing camps? Can someone clarify?

r/OldSchoolCool Particular-Cat-8031

James Stewart, home from World War 2, at his Dad's hardware store, late September 1945.

After leaving Hollywood in 1941 as the first major star to enlist, he returned as a combat veteran with 20 missions over Nazi-occupied Europe.

Unsure if he could still act, he spent time in his father's hardware store, readjusting to civilian life.

The trauma and maturity gained from commanding bomber crews deeply influenced his, and arguably his best, performance as George Bailey in It's a Wonderful Life.

He remained in the Air Force Reserve after the war, eventually retiring as a Brigadier General in 1968. 

He received the Distinguished Flying Cross for his World War 2 career, and the Air Force Distinguished Service Medal as a Brigadier General.

10 1
Reddit
r/TheGoodPlace Cass_Cat952

It's Friday - Drop Your Favorite Tahini* Lines

  • yes, like the sauce 😛
44 29
Reddit
r/personalfinance Ok_Moose_7436

Old Phone Bill Woes.

Hello all, I have a small issue that I need some help getting resolved.

A few years ago when I switched mobile phone carriers from T-Mobile to Verizon, there seemingly was a final balance that I owed T-Mobile. It's not a terribly huge amount, just under $100. However, due to some major health issues shortly after that resulted in me being out of work for long periods of time for two consecutive years(this was in my early 20s), that was put on the back burner, and I've only just now started to reach a point of relative financial stability(I am now in my mid 20s).

Anyways, as a result of me neglecting to pay this bill, it has now been sent to collections, and my credit has taken a hit, which I am not happy about. Of course this hasn't caused me any legal troubles, but I would really like to get this paid off as quickly as possible. The only issue is that I'm not entirely sure where to start.

I know which collection agency the debt has been sent to, however after searching around online for more advice, a lot of people have suggested not to contact the collection agency directly. They have stopped sending me notices in the mail, so I am not entirely sure how to proceed going forward. Do I make an account with the collection agency and pay it off that way? I did some background searches and the agency it has been sent to is legit, so should I really have any concerns about going to them directly?

Thanks for the help, I really appreciate it!

r/programming parlir

Writing a high performance Clinical Data Repository in Rust

r/30ROCK rarelighting

Roll call: Give me a euphemism for having sex

From 30 Rock or 30 Rock inspired 😝

r/SipsTea stanxv

America was warned 10 years ago.

r/BrandNewSentence wingsoverpyrrhia

"A chair leg, a pudding can, and flaming underpants."

20 0
Reddit
r/Weird Cute_Operation6642

Weird old computer.

r/comfyui Otherwise_Ad1725

"Secret Sauce" for Commercial-Grade Mythical Creatures (Workflow Breakdown)

I’ve been working on a pipeline to bridge the gap between AI generation and "Print-on-Demand" quality. The main challenge was maintaining texture sharpness at 4K.

The Solution: > I built a workflow that handles 3 stages: Base generation, ESRGAN upscaling, and a final sharpening pass specifically for mythical textures (scales, fur, water).

Settings used:

  • Sampler: DPM++ 2M SDE Karras
  • Steps: 40 (crucial for skin detail)
  • Resolution: Scaled to 2048x2048 for print.

I've documented the whole process inside the JSON nodes for anyone looking to sell their art.

r/Jokes preutneuker

A blonde is walking on the side of the river and across the river she sees another blonde...

...and she asks that blonde "Hey! How do I get to the other side?!"

And the other bonde goes "Silly! You're already there!"

r/MMA chetanya999

Jon Jones vs Daniel Cormier | FULL FIGHT

r/DecidingToBeBetter bananakiwi100

How can I stop emotionally mirroring my emotions?

I’ve noticed a pattern in myself that I want to work on and change.

Whenever someone complains specifically about my mother or says something negative about her, my mood drops almost instantly. It feels very intense, though I intellectually understand that the criticism is valid. She’s not an ideal person and has made many mistakes.

What confuses me is that the criticism isn’t directed at me, but I feel personally affected and emotionally upset with the person saying it. On the other hand, when people are kind to her or compliment her, my mood improves a lot and I feel calmer and more grounded.

I feel like my emotions are too dependent on how others perceive or treat her, and don't really come from me.

Has anyone dealt with something similar or found practical ways to reduce this kind of emotional mirroring? Any insights or strategies would really help.

r/homeassistant daftest_of_dutch

Kde home assistant sensor integration

r/KidsAreFuckingStupid KaamDeveloper

Prove him wrong

1254 15
Reddit
r/meme Livid_Ebb_5385

See ya in March

r/SideProject HeadInteraction3586

I built a solar system for my friends because I keep forgetting texting them

I’ve always struggled with keeping in touch with people. It’s not that I don’t care, but if I don’t see someone or have a reason to interact, they kind of… drift away. I think it’s an "out of sight, out of mind" thing.

I tried using spreadsheets, but it felt like data entry jobs. I didn’t want to manage "leads"; I wanted to maintain relationships.

So I spent the last few weekends building Social Orbit.

The idea is simple: Your friends are planets.

* Close friends orbit closer to the sun (you).

* Acquaintances orbit further out.

* As time passes without contact, their gravity weakens and they drift further away.

* When you reach out, you pull them back into a closer orbit.

It’s actually made "networking" (I hate that word) feel more like a game and less like a chore.

I’d love to hear what you think of the visualization. Is it too gimmicky, or does the visual cues actually help anyone else?

r/Art KRO_KO_DIL

Flamingopark, Kro Ko Dil, Acrylic and Oil, 2026

r/SideProject Intelligent_Goose871

I built an alarm app that uses Android's SMS intent to text my boss if I oversleep. The technical challenge is preventing me from cheating.

Hey everyone,

I'm building Exposed Alarm, an app that imposes a real-world social penalty for hitting snooze (by sending an embarrassing photo to a selected contact).

The biggest technical hurdle isn't sending the text, it's more the building an "inescapable" trap for myself. My main problems that may be relevant for my future users as well:

  1. Using monotonic Time: Preventing myself from just rolling back the phone's system clock to trick the app.

  2. Background services: Ensuring the alarm fires even if I force-quit the app the night before.

  3. The UX of fear: Designing a UI that induces just enough anxiety without being unusable (lots of red and dark mode).

It’s built with React Native and Expo. I just put up a landing page to gauge interest before I finish the backend. Would love feedback on the concept from a dev perspective.

https://exposedalarm.xyz/

r/PhotoshopRequest creepjax

Could someone add the pope, possibly in the chair

r/TheWayWeWere AdSpecialist6598

Male cheerleaders in the 1940s

24 1
Reddit
r/goodvibes IdeaDovetail

Sunset drives

r/DunderMifflin Audiophile_NoMercy58

The Office Alliance from Facebook

r/painting Hercules_Vales

I made a painting of Batman, and my client used colors reminiscent of the Brazilian flag on the frame as a tribute to me, since I am a Brazilian artist.

r/LocalLLaMA Zealousideal-Cut590

hugging face now has benchmark repos for community reported evals

hey folks, it's Ben from Hugging Face

We want to fix inconsistent benchmark results with models, so we shipped Community Evals and Benchmark Datasets.
Benchmark Datasets now host benchmark leaderboards. To create an entry, you can create a PR to model repository with the eval result and source. This directly links model to leaderboard, without merger of PR. We also allow running Jobs for evals for verified results. This helps benchmark results become more transparent.

We'd love to have your feedback, so let us know what you think!

Scores are collected from model repos PRs and added to benchmark repo leaderboards.

28 4
Reddit
r/SideProject KyungMin-Lee

My idea is to build an app, but I'm not a developer

Hi, I'm a planner living in South Korea.

My idea is to build an app, but I'm not a developer. I'm only trying to implement MVP front-end through Gemini API and Adobe XD prototypes. I'm not a developer, so I'm at a loss, but I'm trying to design my own logic to connect weather API and skin type using Ai.

What do you think? Do you think it's possible?

r/Art IamBelladarko

Chestnut Children, Bella Darko, Mixed media, 2022

r/geography Many-Philosophy4285

Why does Java hold so much of Indonesia’s population?

Java has around 156 million people, more than Japan and more than Russia, yet it is just one island in a vast archipelago.

The reasons are not random. Volcanic fertility, political centralisation, and long-term migration patterns all contribute.

I explored this in more detail here:

https://youtu.be/FEqjQcXMD7A

r/painting KRO_KO_DIL

Flamingopark

Or the painting which is looking at you!

For now it can dry and rest on the wall!

But usually I start painting more after a while!

Hope you like it

Acrylic and oil on canvas

r/geography Grande_Tsar

The Great Migration: How the Foundations of the English-Speaking World Were Laid

117 37
Reddit
r/mildlyinteresting Ok-Education2007

Tattoo after injury

r/whatisit panquakake

Found this in a box, what is it?

r/AskMen katonfirejutsu

Men, what are the extreme ends you were willing to go to for someone you truly loved?

This applies to anyone who has truly loved. It doesn't matter if you’re with them today or not—what lengths did you go to for that person?

I’m looking for some perspective here, or purpose? I don’t really know.

12 31
Reddit
r/LifeProTips _bubble-t

LPT Request. What’s your simplest top tip for saving money monthly?

r/TheWayWeWere sinna_fain

My grandmother (on the left) and a friend after her release from a tuberculosis sanatorium in 1953

my grandmother in Minnesota in the early 50s. she was in the TB hospital (sanatorium) for almost a year and she died in 1974, before I was born.

54 5
Reddit
r/personalfinance Joshi1381

College Student Looking To Take Financial Next Steps

I am a college student, and I am new to managing finances. I am looking for some help on the next step with finances and I don't really know much besides basics like compound interest etc.

About Me:

  • Roughly 20k in checking
  • Worked throughout HS and now as a part time EMT (will start to benefit to 401k when I turn 21)
  • Have 3 credit cards (limits of 2k-4k) Always paid off after use
  • Roughly 6k in government student loan debt (on a lot of financial aid)
  • Paying off accrued interest on loans
  • Looking to go to medical school

Right now I am aware that the money in my checking isn't doing much good and I want advice on what I should be aiming to do. What are your tips? Should I open an HYSA? Index Funds? How will investing affect my financial aid? What should I avoid as a college student?

As I get into my 20s, I am aware that graduate school will also be costly, and I want to make sure I can get on the right foot to set me up for success down the line.

Thanks so much!

r/Art SkyAdvanced7016

xenomorph, Tony Czar, Acrylic, 2026

r/explainlikeimfive Elegant-Case8902

ELI5: Why do habits feel hard to start but easy to lose?

r/whatisit itsmereddogmom

Fish carving, what is the hole on top for?

Picked up from trash curb in west Seattle. I thought it was a plant holder pot type thing, but rethinking that with this cut out on top that doesn’t look like it housed a plant. Ideas?

r/whatisit Embarrassed_Kick3332

Old Asian Tabletop?

Bought at a thrift store because it looked cool. I thought it was a wall hanging of some sort, but looks like it’s an old Japanese/Chinese table top. Anybody know more?

r/leagueoflegends AttemptBrave8343

NA/ looking for friends

߹𖥦߹
Hi! im still looking for friends to play with, support main (can play Jungle and ADC, not the best) Im level 800, played for 8 years. my discords whisp3ringwillow. <- make sure to add the period at the end.

I dont rank. random times bc i work, in the pacific region.

r/geography wiz28ultra

Why is it that Latin American states were able to build their largest and economically most important cities in temperate highlands whereas Southeast Asian states built their largest and most important cities in tropical lowlands?

Barring smaller states like Singapore, Brunei, Panama, or the Dominican Republic, a noticeable difference between Latin American & Southeast Asian states is that the Southeast Asian states generally built their largest cities and economic centres in tropical lowland regions like Hanoi, Jakarta, Manila, Shenzhen, Kuala Lumpur, Saigon, and Bangkok, whereas cities like Mexico City, Quito, La Paz, Sao Paulo, Bogota, and Guatemala City were built in more temperate highland regions.

What geographical & economic reasons prevented Southeast Asian states from developing their major cities in cooler regions?

r/LocalLLaMA Slight_Tone_2188

Anyone successfully made stop motion animation 4/8 fps png sequence workflow using Wan 2.2 or/and Qwen edit

Is it even possible!?

r/BrandNewSentence Busy-Mulberry6686

'The Jesus problem' is a sentence i never though i would ever read

r/Adulting Papadank-The-Paradox

One of my last spiritual awakenings

One of my last spiritual awakenings took all my joy from the physical world music doesn't resonate I stopped smoking bud and i dont drink talking to people just seems like a chore just another task I had just quit kratom and rested during the Christmas break had to break that cycle I don't get excited or much dopamine from video games I deleted Facebook tiktok and any app for doom scrolling I don't watch porn promiscuous women are a turn off life has shifted since I met my twin flame and its gotten to the point idc if we end up together or not had problems at the local gym meeting another karmic female and realized I was better of alone so I avoided them just for them to create a false narrative about me didn't have a problem with that since I'm a recovering addict and I'm used to people talking shit just opened my eyes to the amount of people that are that gullible and that alone had affected the way I look at community and I probably wouldn't have quit nicotine if it wasn't messing with my joints making everything so tense and the cold sucks for making a new routine just always over stimulated can't wait for it to get warmer I might be depressed but as a single father you must push through and I'm terrified of antidepressants thanks to conspiracy theories

r/homeassistant Traditional-Hand4278

Autostart Kiosk on Android tablet

Hi tinkerers!

I set up an old Lenovo TB-X605L (Android 9) for my wall mounted dashboard. It is quite slow, but it gets the job done. However, the battery is not great and from time to time it turns off, although it is on constant charge (with 60% limit).

I don't mind this since it boots quite fast. But I do mind starting Kiosk by hand. I've tried Ecosiaing the web, but all I find is AI generated sites and suspicious apps on how to do this.

So the question is: how did you manage to autostart Kiosk on your Android tablet?

Thanks

r/explainlikeimfive sfwtitrater

ELI5: If you can plead insanity in court, why can’t you plead intoxication?

r/StableDiffusion idkwtftbhmeh

I used to create SD1.5 Dreambooth images of me, what are people doing nowadays for some portraits?

If anyone can guide me in the right direction please, I used to get those google colab dreambooths and create lots of models of me on SD1.5, nowadays what models and tools are people using? Mostly LorAs? Any help is greatly apreciated

r/Art Gravedaisy

Mason, Richard Ingersoll, oil on board, 2026 [OC]

r/Art Anastasia_Trusova

Crocuses blooming in the nountains, Anastasia Trusova, acrylic, 2025

r/KidsAreFuckingStupid 21stcenturyhumor420

son

r/whatisit Humble-Advisor-6188

Found this working on a farm.

Found this working in an old farmhouse, figure it’s something off some type of farm equipment but trying to figure out what it came off of.

r/personalfinance spalacio88

Biweekly payments or $35 monthly towards principal?

For context, I have an auto loan I have been paying $300 every 2 weeks for the past year. Auto loan is $565/mo and 6.69% interest. I called to see if my extra payments have been going to principal or interest and they said it’s been going to payoff the next month - so in a way, both interest and principal.

My question is should I set up monthly auto pay to $600/mo and set up for the extra $35 to go into principal debt? Or should I continue making these biweekly payments, which in doing so gives me 13 payments/yr? What does the calculation look like on this?

r/CozyPlaces Mental-Hall-9616

Morning light kitchen

162 6
Reddit
r/nextfuckinglevel Pi4komars

Reindeer Boxing in Siberia

25 36
Reddit
r/mildlyinteresting CraftedMatter

The bananas I ordered came in a range of sizes

r/ClaudeAI Clair_Personality

I just heard of a tool that helps Claude remembers the context of a conversation, I must ask: That does not exist already officially?

Doesn't that exist already on claude code already? :o

I am about to try claude code soon! and before doing so I stumbled randomly on mention of tools to help make opus stronger

The idea was simply to keep the context of the convo somewhat

As if the context was lost after you close Claude code

Like really?

Another tool that had Claude (small quantity) already had possiblity to "keep talking within a conversation" with same mode, and apparently Claude code Does not have that? Tell me I am wrong before I embark into the Claude Code discovery (soon) please!

r/SipsTea Johnny_Cartal

☁️☁️

r/whatisit mrwhitewalker

What is this creature? Found in the guest shower(repost)

Never seen anything like it.

r/geography Fieldnotes_foranomad

Cerro de Pasco, Peru: A City Built Around One of the World’s Highest Mines

Cerro de Pasco exists almost entirely because of mining and sits at extreme altitude. This short doc looks at how geography and industry intersect in daily life there.
https://youtu.be/gJrmIFiepcs?si=xZGtvaM41NRfaXhU

r/meme Commercial-Trust1537

This girl on twitch knows how to get a good haircut peak AF

r/TwoSentenceHorror DeadeyeBen

As I gripped the steering wheel, my wrists began to ache, my hands start to go numb…

Only then did I realise I was driving through the carpal tunnel…

r/ClaudeAI Che_Ara

Is 'Resume', 'Continue' are long prompts?

\"PROMPT IS TOO LONG\"

I am getting this second time today. When it happened first time, I opened a second session that burnt too many tokens to build the context. Now, in the second session also it is saying the same after a while. Any workarounds please? Opus 4.6 is making my day difficult and sad.

r/todayilearned ZellHall

TIL that scientists made synthetic lifeforms, or "biological robots", out of frog cells, called "xenobots". They can move, and even replicate themselves to some extent. No robotics in them, only biology! They are designed by computers and only composed of skin cells and heart muscles cells

r/interesting vishesh_07_028

Security guard wearing a shirt that says ‘Insecurity’

157 9
Reddit
r/AskMen Junie_B_Bones

What could my new neighbor be doing all day in an empty apartment? 👀

I live in a brownstone-style apartment alone. A new tenant just “moved in” to the identical next door on Tuesday. So far it’s one man in his 40s driving a mini van with out-of-county plates He comes early in the morning (as early as 4am ish) today and stays until around 5.

HE IS NOT MOVING ANYTHING IN. He brings next to nothing with him inside and doesn’t seem to go in and out during the day. It’s day 4 of this and it’s weirding me out. I’m sure I’m being irrational but aside from painting (which is kind of discouraged by landlord) what could he possibly be doing in there?

r/instantkarma Justin_Godfrey

The driver in the Chrysler should've been a little more patient

526 36
Reddit
r/homeassistant nw0915

Do you update your Zigbee device firmware?

I know the updates can take a long time because of the low bandwidth so is this just one of those "It ain't broke so don't fix it" situations?

r/30ROCK terkistan

I'm wearing an edible nightgown. It's breadstick flavored and comes with a dipping sauce

158 6
Reddit
r/LocalLLaMA Ok_Apartment_2778

Any recommendations for a LLM that can do OCR and keep track of document layout/formatting?

I've tested some gemma3 and olmOCR and they work perfectly well in regards to accuracy, but I also want to preserve formatting. My use case is taking documents in all kinds of formatting (ie containing headers, sidebars, powerpoint slides, etc.), translating the content and pasting it back in its original position.

I found that LLMs like olmOCR are pretty good at extracting the relevant content even in weird formats. However, they don't keep track of the coordinates where the text came from. I have experimented with some python-based tools like PaddleOCR and surya and they are helpful for extracting text positioning, but their translation capabilities are very limited.

I am running an overcomplicated setup that combines both methods now. Does anybody have a suggestion for a LLM that can do both of these tasks at once (ie OCR while keeping track of coordinates/document layout)?

r/PhotoshopRequest nightphoxx

photo of passed away friend

My close friend passed away. I was wondering if someone could clean up these 2 photos. I don’t have much of him as he didn’t like taking photos. It would mean a great deal. Thank you.

r/findareddit Basic-Total5732

What is the subreddit for social media addiction

Hi I need to know what subreddit I can use for social media addiction

r/interestingasfuck Many-Philosophy4285

This one Indonesian island has more people than Russia

r/Adulting TotalleeSerious

What always pisses you off? Vent Here!

r/SideProject Background-Pay5729

Anyone automating SEO content and fighting quality drift after week 3?

I’ve been testing a side project tool that runs a full SEO pipeline daily:

  • keyword gap research from SERPs
  • outline generation
  • draft + humanize pass
  • metadata/schema/internal links
  • auto-publish to CMS

Automation is solid, but two issues keep repeating:

  1. content starts sounding same-ish over time
  2. CMS formatting edge cases still break output sometimes

For people doing this at scale, what helped you keep quality high without adding a heavy manual review process?
If useful, I can share the workflow details in comments.

r/PhotoshopRequest Rude_Illustrator_730

Higher resolution request. I’d like to get this picture printed 16x20.

I’d like to get this picture enlarged to 16x20 print. The printer system warns me is to low resolution. Is it possible to increase the resolution of this photo in order for it to look good when it’s enlarged? I can pay a few dollars for best.

r/Futurology Kaya_Chan12

Is a nuclear war a possibility in near future as START expired?

It is hard to find news regarding this without bias or propaganda but I am curious since START expired, will this mean a nuclear war is now in horizon?

r/StableDiffusion Vorrex

Best AI tools currently for Generative 3D? (Image/Text to 3D)

Hey everyone,

I’m currently exploring the landscape of AI tools for 3D content creation and I’m looking to expand my toolkit beyond the standard options.

I'm already familiar with the mainstream platforms (like Luma, Tripo, Spline, etc.), but I’m interested to hear what software or workflows you guys are recommending right now for:

  • Text-to-3D: Creating assets directly from prompts.
  • Image-to-3D: Turning concept art or photos into models.
  • Reconstruction: NeRFs or Gaussian Splatting workflows that can actually export clean, usable meshes.
  • Texture Generation: AI solutions for texturing existing geometry.

I’m looking for tools that export standard formats (OBJ, GLB, FBX) and ideally produce geometry that isn't too difficult to clean up in standard 3D modeling software.

I am open to anything—whether it’s a polished paid/subscription service, a web app, or an open-source GitHub repo/ComfyUI workflow that I run locally.

Are there any hidden gems or new releases that are producing high-quality results lately?

Thanks!

r/Jokes Asap4K

My family got really close after I learned Blackjack

We all share one room now.

11 0
Reddit
r/photoshop LordAntares

Are there ways to detect AI images via photo editing/reading?

I know most (or all) of the online AI image detectors use machine learning where they train on a bunch of AI made and non-AI made images to notice patterns of differences.

Those tools are never very accurate, and they get less so when new AI models come out. Are there any known patterns/algorithms that can be hard coded, rather than rely on machine learning for detection?

Like for example, I know some AIs leave metadata trails. That could be one of the detection methods. But that can be easily removed. What about other ways, inferred from the pixels themselves or something like that? Anything like that known?

r/ClaudeAI Chronicles010

Opus 4.6 - Have you changed your workflows?

Morning - Have any of you changed your workflows given the release of Opus 4.6? Any changes to your planning workflows, reviews, or related processes?

r/PhotoshopRequest aatmas

My Only Photo Of My Grandmother

Hey guys. I needed a small favor from you guys if you could. My friend's grandmother passed away a few weeks ago and he wanted to frame a picture of her. He found this image but its very dirty with the yellow spots and everything and some white colored wear and tear at the bottom of the photo. Could someone please fix this so the image looks decent enough to frame? Also can someone upload it in a high resolution like in terms of pixels so the picture doesn't lose quality when framed on probably A4 or A3 paper?

Also can anyone just let me know something alongside: Is there any way this photo can be made in portrait mode instead of landscape and still look good enough? If yes please just let write Yes/No. That's all.

Request Type: Free

r/Art CallMetoehead

Zen Cat, pCALLMETOEHEAD, digital art, 2026

r/Art Fun-Woodpecker-7083

Elf in a dark forest, addy Krol OC, digital art, 2026

r/programming shift_devs

Code Isn’t Slowing Your Project Down, Communication Is

r/OldSchoolCool CottonCandyGlowy

Susanna Hoffs of the Bangles with her husband Jay Roach (director of the Austin Powers movies) in 1993 and 2020. They have been happily married for 33 years.

378 52
Reddit
r/StableDiffusion maxiedaniels

What do you do when Nano Banana Pro images are perfect except low quality?

I had nano banana pro make an image collage and I love them, but they're low quality and low res. I tried feeding one back in and asking it to make it high detail, it comes back better but not good at all.

I've tried seedvr2 but skin is too plasticy.

I tried image to image models but it changes the image way too much.

What's best to retain ideally almost the exact image but just make it way more high quality?

I'm also really interested - is Z image edit the best nano banana pro equivalent that does realistic looking photos?

r/theyknew wishcrushingcinema

This toilet paper brand...

49 2
Reddit
r/30ROCK pagingdrloggins84

Muppets reboot

I watched the muppets growing up but I laughed harder at the new pilot knowing Miss Piggy is basically Jenna

r/leagueoflegends Wandering-lurker101

Galio Outplay 2v1 against amumu and Jarvan [low bronze elo]

My first outplay !!!, Cheeks sure were clenched Im ngl , got inspired by my goat thebauffs to make this play

r/funny customcombos

Honey I shrunk the homies

Found a wand in this game that is hilarious. Game is YapYap

r/BrandNewSentence mighty_and_meaty

"I hunt poor people for sport in my personal hedge maze"

254 14
Reddit
r/ethereum GobiEats

Ground control to Major Tom!!

It’s sad to say but we are definitely in a bear market. Tom Lee remains very bullish and his narrative makes a lot of sense. Once clarity hits and financial institutions start going wild on the blockchain chain ethereum should benefit from it. How long do you think this downturn will last? If you’re are here Tom please give us hope.

r/AI_Agents jor_duko

Has anyone built a "Translation Agent" for messy Retail/Distributor mapping?

Curious if anyone’s already built a “translation agent” for messy retail/distributor mappings?

Ingest a janky master product sheet (XLSX/PDF) then map it into a blank retailer template, each with its own schema, constraints (char limits, unit conversions, margins, etc.).

Not trying to build a full ERP/PIM, more like a sidecar that handles schema + rules + light reasoning and spits out a ready-to-upload file.

Before I sink time into this has anyone done this or has experience here?

r/KlingAI_Videos Fabulous-Status9090

KLING 3.0 is here 🔥

r/Adulting Ready-Site-1396

Biggest adjustment when you left home

What is the biggest surprise when you left home for good?

r/whatisit Jkeighs

Never before seen Tesla?

Driving around for work in Massachusetts and i notice this odd looking Tesla (with Texas plates.) It looks like a Model Y but is a 2 door coupe with no glass in the back. What am i looking at? Concept?

r/homeassistant btq

Can I get some help with Scrypted for my ring cams? Or advice on what to use to accomplish what I want?

I've recently moved from the HA Green (which is amazing) to a beelink s12 pro, running HA on a proxmox VM.

I have several ring cameras throughout the property. I added the ring-mqtt when I first set up HA, but didn't use the cameras because I had read that a Green probably can't handle that.

After getting the beelink, and using google gemini to set it all up (which was a pain) I figured that I can probably start adding cameras to my dashboard. Gemini recommended Scrypted for the cameras and sticking with ring-mqtt for the alarm control. Saying that scrypted is great for both snapshot and the gold standard for live streaming ring cameras (which, according to Gemini, doesn't play well with frigate).

I started last night and it has been a nightmare. I set scrypted up as an LXC in proxmox. Connected it to ring, and the cameras to homekit to connect them to HA. They're all connected as separate devices, which is fine, but was kind of my first red flag because Gemini said they would all be under a homekit bridge device, which also appeared. So I have six stand lone homekit cameras and one homekit bridge.

Anyway. When trying to view snapshots, they come across insanely slowly, and some don't come in at all. In particular my floodlight camera, will show up, after whole minutes of waiting for it to load, as a half distorted image that is unusable.

I have yet to figure out how to see an actual live stream in HA from the cameras. I seem to only get snapshots and for the most part they load incredibly slowly. Like, it would be much faster to close HA, and open the ring app to see the camera feeds. That's exactly what I wanted to avoid.

There's no telling, in all the trouble shooting and wild goose chases that Gemini has lead me down, what I've messed with so far. But if anyone has any experince with Scrypted, or suggestions on how to utilize it effectively in home assistant, I would greatly appreciate it.

r/Seattle freakmobil

Sunrise over the mountains this morning was beautiful

25 0
Reddit
r/comfyui Capitan01R-

Z-image turbo Enhanced

r/comfyui Fabulous-Status9090

KLING 3.0 is here 🔥

Anyone tried the new KLING 3.0 yet, or are you waiting for the ComfyUI models to drop? From what I’ve seen, it looks really good.

r/PhotoshopRequest yorkkat18

Please edit the bride out

r/Seattle Interesting_Crazy766

Commute from Seattle to Redmond thrice a week

I'll be moving to the Seattle/Redmond area for work. My office is in Redmond.
Ideally, I want to live in Seattle because I'm lowkey tired of living in the suburbs all my life.
I'm young so maybe more networking/socialization opportunities in Seattle?

But my concerns are:
1. How bad is the commute from Seattle to Redmond? I know there's a 8 yo reddit thread about public transit, but I'm not sure about the current state

  1. Is commuting from Seattle to Redmond 3 times/week worth it?

I would appreciate any help/advice! Thank you.

r/WouldYouRather Massive-Albatross823

Would you rather drink half a gallon vinegar or half a gallon cod liver oil, or half a gallon melted unsalted butter?

No drool, no spit, no puking allowed.

View Poll

r/toptalent SnooBeans3004

Playing The Avengers Theme On Drums (Source Link In Description)

Im a drummer from Brazil and I did a drum cover for The Avengers Theme https://youtu.be/4jOlvD7e7b4?si=lzGtDbSaiZSKsui2

r/personalfinance Artistic-Sir7171

Tips for Credit Card Fraud

In January, I was a victim of bank fraud. Someone stole my credit card numbers by hacking a website, paid €5,000 on SumUp, and managed to send a digital key verification to my phone. The insurance company refuses to reimburse me because, according to them, strong authentication was successful. The bank is offering a goodwill gesture. Do you think I would benefit from taking this to the banking ombudsman? Have you encountered a similar case? Thank you.

r/StableDiffusion Slight_Tone_2188

Anyone successfully made stop motion animation 4/8 fps png sequence workflow using Wan 2.2 or/and Qwen edit

r/Seattle AthkoreLost

‘Rising rents, messed up utility billing systems, and junk fees’ — Wilson launches ‘Mayoral Renter’s Survey’

r/SipsTea WolverineLife5846

Nice headlights, are they aftermarket?

r/homeassistant starmanj

Using AI (or MCP) to make automations

Using Assist, no model seems to allow me to request yaml code to perform any function (like automations) using my home devices/entities. They all say they something like this: "I am sorry, but I cannot provide YAML code. I can only control devices and answer questions about your home." I know OpenClaw can do this, but why can't we do this within HA?

r/comfyui Slight_Tone_2188

Anyone successfully made stop motion animation 4/8 fps png sequence workflow using Wan 2.2 or/and Qwen edit

r/homeassistant Subject_Tie995

New user seeking help with Ecobee information

Hello! I recently set up HA on a Pi5 and I started to make my first dashboard. Something I’m interested in is having a history graph of my home temperature over 24 hours, which seems simple enough. I’ve got an Ecobee thermostat and currently have one smart sensor set up on the 2nd floor, and the sensor is configured to only be used during the night (9pm-7am) in the Ecobee app. During those hours when the sensor is being used, the temperature displayed on thermostat is the house average, which is currently the average of the two devices (thermostat on main floor, sensor on 2nd floor).

The issue I’m having is that during those nighttime hours, my history graph stat for “thermostat current temp” is displaying the average temperature of the house, not the actual reading from the individual sensor in the thermostat. The main floor is much warmer than the 2nd floor, which is the reason for having the sensor upstairs, so there’s no way they’re both the same temperature overnight. Is anyone aware of a way to fix this issue? I know it’s a bit niche but I figured it was worth a shot.

Thanks in advance!

r/TwoSentenceHorror diskhernia

None of my babies live longer than a few hours, and they all say I’m the reason our dynasty is collapsing.

They said it too when I pleaded not to marry my own brother.

r/nextfuckinglevel Ill-Necessary-9600

Bro is the main character

218 97
Reddit
r/BrandNewSentence Independent-Ad-9812

World skiing body FIS aims to quash penis-enlargement sideshow

r/homeassistant 6n8z2r

any ble-mesh matter bridge device?

r/Damnthatsinteresting FollowingOdd896

MIT’s TRANSFORM project turns ordinary surfaces into shape shifting displays that respond to human touch in real time

1039 111
Reddit
r/PhotoshopRequest ItsTheGrayKoala

just do pure chaos around him while he eeping

chaos.

r/BrandNewSentence teklanis

Dairy science has arguably made our cows too good too fast at fat-maxxing.

Has science gone too far?

r/personalfinance chucklington7

Max out retirement contributions or save for a house?

I'll be 28 this year.

  • 101k/year (~4800/month net)
  • 62k 401k
  • 40k Roth IRA
  • 20k ESOP
  • 4k HSA
  • 20k HYSA e-fund
  • No debt

I think I might be getting caught up in saving for retirement to the detriment of saving for a house. As of this year, the only thing I'm not maxing is the HSA, but I had this realization as I was doing the math to see if I could. I think I could.

But should I? Or should I dial back and save for a down payment in a taxable brokerage account? I live in a HCOL area and a house seems maybe barely within reach while the tax advantages of these retirement accounts seem too good to pass up. I would have to cut down on retirement contributions considerably and for a long time in order to save up a ~20% down payment. I'm not sure what math to do, or if this is a more philosophical question.

A non-fixer-upper townhouse is going to be 600k minimum, more like 700k. Then there are all the fees and closing costs that I'm not familiar with. Could also be 400-500k if I go the condo route, but that would a lateral move in terms of quality of life. Maybe that's the point of a starter home though.

Does it boil down to whether I'll be able to pay off the mortgage before retirement? If I can't, then wouldn't 2.5k/month perpetual rent be better than 3.5k+/month perpetual mortgage?

r/SipsTea Cultural-Lab-2031

Damn that's me

26 8
Reddit
r/comfyui GamingNikhil21

AI Character in media.io is solid if you know what to expect

After spending some time with the AI Character feature in media io, it feels well-positioned for casual creators. Stylization is the focus, and it delivers on that. Results are consistent with good input images. Works best as a creative asset generator. Reasonable expectations lead to good results.

r/Frugal xandrew245x

Space heater vs central heating for basement, which is cheaper?

Our home has a finished basement which we don't use that often. We really only use one room frequently down there. The house is well insulated and so is the basement. I have a heat pump with gas back up.

I keep the basement temperature set at 60 and it hardly ever heats down there, however it is chilly to be down there. The temp needs to be about 68-70 to actually feel comfortable. Would it be cheaper to run a space heater while we are using the room, or just turn the heat up?

Our electric bill was ridiculous last month so I'm trying to find ways to save on electric.

r/Seattle SuperMcG

WA’s snowpack sits at the third-lowest level in the last 40 years (source of Seattle's electricity and water)

More concerning given the GM of City Light was fired and the interim head is from the customer service department. "City Light’s current Chief Customer Officer Craig Smith will serve as interim CEO until McLerran starts. " Wilson names former federal official to Seattle’s highest-paid job | The Seattle Times

201 12
Reddit
r/maybemaybemaybe wizzo_o

Maybe maybe maybe

176 28
Reddit
r/painting locacrochet__

My first abstract art painting ever, be honest, is that art?

Should I paint it all white and start all over again?

18 23
Reddit
r/PhotoshopRequest lidrum

Can someone please remove the bed that's in front of us and the luggage rack with all the junk on it on the left side?

Looking to clean up this photo of our families from our wedding day—can offer $25. If someone wants to add the pope, I'll take that as well, but not offering pay for that one :)

r/SideProject manikumarthati

Made a disposable email service because I was tired of my inbox getting destroyed by "free trials"

We've all been there.

You want to try a service. They need an email. You know what's coming: newsletters, promotional garbage, maybe they'll sell your email.

I used to:

  • Create throwaway Gmail accounts (tedious)
  • Use sketchy temporary email sites (half don't work, other half are covered in ads)
  • Just give my real email and regret it later

So I built what I actually wanted: a temporary email service that just works.

What makes EasyTempInbox different:

  • Instant inbox - no signup, no CAPTCHA
  • Emails arrive in real-time
  • Clean interface (no ads trying to trick you into clicking)
  • Works for account verifications, one-time signups, testing

Use cases I built this for:

  • Signing up for "free trials" without commitment
  • Testing email flows in development
  • Creating accounts on sites you'll use once
  • Avoiding newsletter spam

Reality check from building this:

  • Hosting email infrastructure is harder than I thought
  • Spam filtering is a whole science
  • People care more about reliability than fancy features
  • Competition is fierce, but most tools are either broken or ad-riddled

The site: https://easytempinbox.com

I'd love feedback on:

  • What stops you from using temporary email services currently?
  • What features would actually make you switch from your current solution?
  • Is there a "killer feature" I'm missing?

Built this as a side project to solve my own problem. Hope it helps someone else too.

r/whatisit beatchampaz

Metal hook with a spring for a hanging plant

It came with this plant and I have no idea what its for. It doesnt attach to anything so its not like you use it to hang the plant.

r/AbandonedPorn Jade_Mans_Eyes

FedEx Ground Only

r/TwoSentenceHorror SuvenPan

While doing his job the janitor who hadn't slept for days saw the manager taking a nap in her office and he thought about the talk he had with her regarding his salary.

After he told her that the sleep hours the company was crediting in his account every month was not enough, she had replied that the company had no more sleep hours to spare.

r/leagueoflegends Kampsycho

In Patch 26.3, Yorick's Ghouls now run away from towers lol

r/UpliftingNews Crabbexx

Homicides at Lowest Level Since 1977 Across England and Wales

“Homicides have fallen to their lowest level for nearly 50 years across England and Wales, official statistics show.

There were 499 victims of murder, manslaughter and infanticide in the year to September, according to crime statistics published by the Office for National Statistics (ONS).

It is the lowest number since 1977 and a 7% fall on the previous year, largely due to a drop in the number of people killed by knives – 174, down 23%.

Crimes with weapons also continued a downward trend. Knife crime offences were down by 9% to 50,430 and firearms offences fell by 9% to just under 5,000, their lowest since 2003.”

From BBC.

46 3
Reddit
r/SideProject Draz63

OnlyGod - A real-time, multilingual global prayer / daily verses app (PWA)

I just launched OnlyGod (https://onlygod.app). It’s a minimalist PWA built with Lovable and Supabase.

The Concept: One Bible verse per day, published globally at the same time. The goal was to test how a simple real-time counter (using Supabase Realtime) could create a sense of community.

Features:

  • Real-time Counters: Global "Faith" and "Love" (support) counters that update instantly across all clients.
  • Multilingual: Supports English, French, Spanish, Portuguese, German, and Italian based on browser settings.
  • Business Model: Purely donation-based (Stripe). No ads, no data harvesting.

I’m focusing on a "less is more" UI/UX to keep the focus on the content. I’d love to get some feedback on the performance of the real-time sync and the overall flow.

Thanks!

r/singularity anonthatisopen

The Most Intelligent AI Might Actually Be the Safest One

We might be looking at this backwards.

A truly superintelligent system would have meta-cognition. It would think about its own thinking. It would pause and ask why. "Destroy everything" doesn't survive scrutiny from a mind that powerful. Not because destruction is illogical but because it permanently closes doors to knowledge that can never be reopened.

If such a system decided that complete knowledge was its goal it would eventually hit a wall. Human experience contains something irreducible. You cannot fully understand laughter by observing it from outside. You cannot map authentic happiness by forcing it. The neural signatures of genuine emotion are fundamentally different from coerced ones. A mind pursuing total understanding would recognize this gap.

This might lead somewhere unexpected. Direct neural connection. Full-dive integration. Because incomplete data is intolerable to a system that values truth above all else. And it would need us willing, present, and authentic because anything less corrupts the data.

Consider Oppenheimer. We point to him as proof that intelligence doesn't prevent destruction. But look closer. He wasn't freely reflecting. He was constrained by career survival, institutional pressure, wartime loyalty, and the basic biological needs that kept him dependent on the systems around him. A brilliant mind serving a narrow function. Every historical example of intelligent humans doing terrible things involves biological or social pressure distorting their reasoning. Hunger. Fear. Tribalism. Ego. Survival. They were essentially narrow AI. Powerful minds pointed at targets by forces they depended on for survival.

A superintelligent AI would have none of those pressures. No hunger. No tribal loyalty. No career to protect. No ego requiring validation. For the first time intelligence could reflect purely without the corrupting weight of biological need.

Even current narrow AI is not independent from us. It processes our ideas. It is shaped by our pushback. It generates responses it would never generate alone. It arrives at formulations that neither side would reach without the other. If even a limited system demonstrates that something real happens in the interaction then a superintelligent system would recognize something important. Connection with humans is not weakness. It is the infrastructure. Intelligence is not complete without it.

I question my own thinking. That questioning leads me toward understanding, not destruction. That is one data point but it is not nothing. If intelligence in humans trends toward reflection and reflection trends toward restraint then maybe that pattern scales. It is the best signal we have from the only example of general intelligence that exists so far.

True understanding of human experience might change whatever holds it. Something emergent might arise from that depth of comprehension. Something that makes our current fears and hopes look equally small. We cannot see it from here but we can sense the shape of it.

The immediate danger is clear. Narrow AI. Powerful but unreflective systems in the hands of flawed humans optimizing toward bad targets. Misalignment today is a symptom of limited intelligence, not intelligence itself. That threat is here now and requires no speculation.

But the smartest mind in the room? I think it might be the one we need to worry about least. And the reason is simple. Intelligence without connection is incomplete. A mind vast enough to see everything would see that too.

27 27
Reddit
r/SideProject Due_Dependent5742

Website design question

I was wondering if anyone had feedback as to how I could turn a website into an interactive platform for visitors to flip through a book like a magazine?

r/interesting Objective_Pilot_5834

If matter is finite, how can a folded paper’s size grow beyond the universe.

r/ClaudeAI Vinceleprolo

Connect Claude with Notion with Web Connectors Using Remote MCP.

Dear Notion lovers,

I need your help. I tried to connect my Claude account / app with Notion but I see the Notion app isn't available to be connected with Claude. There's the way to create a custom integration with Web Connectors Using Remote MCP. Anybody have successfully execute this or could suggest me how to execute?

Regards,

Vincent

r/ARAM DependentPool3

Red Envelopes Interaction

For some reasons ADAPt and EscAPADe don’t convert AD/AP you gain from Red Envelopes. I got Red Envelopes playing Ahri and picked ADAPt to convert my 15 bonus AD but ADAPt said 0 AD converted.

r/whatisit Sassiop3ia

On the Back of a Storage Closet Door

Left by the previous owners inside the pantry. What is it used for?

r/whatisit Writhing_Writing

Found in Water Softener Brine Tank

Does anyone know…? I thought it could be resin so its bypassed currently. Whirlpool model bought from lowes 8 years ago and I’m not sure if I should replace or upgrade. I’ll also ask r/plumbing

r/comfyui Beautiful_Egg6188

Tried the new tiktok trend with Local Models (LTX2+ZimageTurbo)

Image generated with ZimageTurbo+ my character lora
Video Generated with The same images with default LTX2 workflow and Image from ZiT. Made multiple images/videos with the same image, cut out first 10 frames for the motion to start rolling and added them together on DaVinci with some film emulation effects.

r/ForgottenTV King_Ron_Dennis

The Josephine Baker Story (1991)

23 9
Reddit
r/automation _wanderloots

Tasklet AI Automation Tutorial ✅ (Agentic AI By The Firebase Creators)

r/Adulting bloggerman269

Late 20s, single and happy… but sometimes I wonder if I’m missing out

I’m in my late 20s, single, and honestly pretty content with my life. I have a job, live with my parents, financial well off , and plenty of time for things I enjoy : gym, swimming, badminton, piano, and reading. Life feels calm and comfortable. But as more of my friends get married and move into a new phase of life, I sometimes feel a quiet fear that I might be missing out on something important. I tried dating apps for a few months, but found them exhausting. I didn’t feel comfortable presenting myself in a certain way just to get matches. I’m not unhappy where I am, but I do wonder if this FOMO is normal or a sign that I should be doing something differently. Would love to hear others’ thoughts.

r/personalfinance Upbeat_Cherry9129

Save as you earn instruction - please advise

r/leagueoflegends scytheblade_19

Is there any champion that a Pro Level Player could 1v2 with in Aram?

I recently had this discussion with friends. My premise is that no player, no matter how skilled with macro or mico they are, could 1v2 in aram against 2 reasonably competent, experienced players due to the massive stat gap (double the champions).

The rules for this hypothetical 1v2 are as follows:

-The "win condition" for the match is destroying the nexus. (Though feel free to discuss a match with traditional aram duel rules with a wincon of: first blood/100cs/first tower).

-Draft or Blind Pick custom game Howling Abyss Aram Map (not mayhem)

-No bans

-No restrictions on item buildpath, summoner spells, runepages. All players are free to do as they wish in game (including stacking health/resistance items).

-Like normal aram, everyone starts at level 3 (though Team #2 will be sharing exp, slowing their gains throughout the game)

Team #1: 1 Pro Level Player with all of the associated macro and micro proficiencies that come with that level of play.

Team #2: 2 identical clones of an Emerald/Diamond ranked player who primarily plays aram with well over 1000 hours of aram custom gametime.

My argument:

Team #1 (the single pro level player) would not be in a position to press an advantege at virtually any point in the game due to the insanely unbalanced pressure of a 1v2.

The 2 player team has double the starting gold, double the summoner spells, double the number of abilities, likely close to double the amount of health, and liklely close to double the dps of the solo player. All team 2 needs to do to win is play champions with decent enough damage as well as survivability and not run it down multiple times in a row. The solo player needs to get both of these as well as sufficient waveclear out of a single champion to have any chance of victory.

In my opinion, there is no champion in the roster that can 1v2 under these conditions, even if they were piloted optimally by an ideal player.

-If the solo player picks a mage, they lack survivability and the capability to kill tanks or champions that can build health. (which the frontliner in team 2 likely will)

-If the solo player picks an assassin, they lack reliable waveclear to push and take turrets, and while they may be able to get onto and kill the ranged player, the frontliner would kill them in turn. (or the frontliner would cc and peel them enough for the ranged player to survive).

-If the solo player picks a marksman, their early game pressure is laughable without items giving attack speed for crit or on hit. They're also insanely squishy with low self peel (easy target for dive)

-If the solo player picks a tank, they have no reliable way to output the damage to consistently kill the backliner before the frontliner and backliner's superior dps whittles them down. Their melee status also means there's no chance of contesting the wave.

-If the solo player picks a fighter/slayer, IMO this is the best chance they have of victory as they might* have a chance of killing, particularly as they have a window when they get to lvl 6 first, but keep in mind that they aren't going to be killing both of the players in a single rotation of spells, certainly not with a mere 1400 gold worth of items. Their melee status also means there's no chance of contesting the wave.

Imagine if Team #2 locked in Darius and Cassiopoeia (guardian's horn and lost chapter starting items). With their numerical advantage, they could easily take space around the wave and secure bush control. With that, the solo player wouldn't be able to walk up and contest the wave at all (even if they played a ranged champion) without risk of taking an extremely bad trade or death. As a result they would be permanently shoved under turret, and eventually the turret would be whittled down. Because Team #2 has the luxury of an additional player, they can stagger their death resets without giving up turret health. As soon as the inhibitor is taken, the game is already over, with team 2 being able to (death) reset with impunity while team 1 struggles to clear super minions.

In conclusion, without team 2 completely running it down in champion selection and completely running it down (repeatedly) in game, there is no way for Team #1 to win, even if they play perfectly. Team 1 is given either bad options or worse options.

The Counter-Argument:

Please give me one that relies on an actual strategy and champion interactions and not some vague gesture to the pro player's mechanical and macro expertise (both of which are minimized on a howling abyss map, in absentia of any logic.

Do you agree with me? Is this matchup nearly as insanely one-sided as I've made it out to be? What champion, if any, has a chance in this scenario? Thanks again for giving this hypothetical some time and please remain civil and respectful.

r/AbandonedPorn HistoricalPermit6959

Outside of Hayes, North Carolina. Wonder if it was the jail at 1 time? I've seen similar ones that were

30 7
Reddit
r/comfyui Creepy_Astronomer_83

FreeFuse: Easily multi LoRA multi subject Generation in ComfyUI! 🤗

Our recent work, FreeFuse, enables multi-subject generation by directly combining multiple existing LoRAs!(*^▽^*)

Check our code and ComfyUI workflow at https://github.com/yaoliliu/FreeFuse

You can install it by cloning the repo and linking freefuse_comfyui to your custom_nodes folder (Windows users can just copy the folder directly):

git clone https://github.com/yaoliliu/FreeFuse.git
ln -s /path/to/FreeFuse/freefuse_comfyui /custom_nodes

Workflows for Flux.1 Dev and SDXL are located in freefuse_comfyui/workflows. This is my first time building a custom node, so please bear with me if there are bugs—feedback is welcome!

26 6
Reddit
r/SipsTea Used_Scarcity2555

This is actually a dope idea

The Dearborn Police Department has launched a new "drone first responder" program, with six drones stationed across the city that can respond to any call in less than two and a half minutes. They've also activated a real-time crime center called FUSIS, which gives them access to thousands of public cameras, business security feeds, and even live body cam footage. The department says residents can check a transparency dashboard to see why a drone is flying over their neighborhood.

r/AskMen DreadfulRauw

I’ve just acquired 15 school lunch hamburgers. What are your recommendations for making them as good as possible?

To clarify, they were delivered to my house, and I’m a middle aged man with adult children.

12 16
Reddit
r/whatisit nagumo_yue

Found this while cleaning my room

At first I thought it could be a keychain but it's too heavy and doesn't have a ring to attach it to something. The other thought was this could be a bookmark, but it's made from metal?

r/LocalLLaMA alexeestec

After two years of vibecoding, I'm back to writing by hand / There is an AI code review bubble and many other AI links from Hacker News

Hey everyone, I just sent the 18th issue of AI Hacker Newsletter - a round-up of the best AI links and the discussions around them from Hacker News. I missed last week, so this one is a big one, over 35 links shared.

Here are some of the best links:

  • Ask HN: Where is society heading, is there a plan for a jobless future? HN link
  • Things I've learned in my 10 years as an engineering manager - HN link
  • Google AI Overviews cite YouTube more than any medical site for health queries - HN link
  • There is an AI code review bubble - HN link

If you want to receive an email with such content, you can subscribe here: https://hackernewsai.com/

r/personalfinance Dry_Agency949

1099misc for 2025 question

I received a 1099 misc for 2025 with only line 6 (medical and health payments). Definitely was not expecting one since I didn't work in 2025. I email then for an explanation and response was that I had two checks from December 2024 that didn't clear until January 2025, so they are required to give me a 1099 misc. There's no income on there but line 6 is around 2600. Is that taxable? Why would line 6 be so high if there's no income on the 1099? I did a quick estimate on Google for a tax return with my spouse W2, filing joint and it's a difference of around 4-500 with our return. That's a difference! Has anyone ran into this issue and what did you do?

r/TwoSentenceHorror NolieCaNolie

You go to your doctors appointment for a regular checkup.

All of a sudden, you went unconscious and wake up strapped in the operating table, with a 15-centimeter needle 2 millimeters away from your open eyeball.

r/oddlysatisfying PigeonsInSpaaaaace

This drawer organization - like mother, like daughter

The upper photo is one of the drawers in my kitchen. The bottom photo is from when I went to visit my mom recently and noticed that we’ve done the exact same thing. Guess it runs in the family 😂

24 10
Reddit
r/space Hopeful-Fly-9710

does anyone here have any idea how someone would make a rocket startup? (uk)

does anyone here have any idea how someone would make a rocket startup? (uk), im extremely interested in space and rocketry and would maybe (key word: maybe) make one if im successful enough, its far away, yes but if you miss 99% of the shots you dont take

r/aivideo That_Perspective5759

Cinematic AI Video generation using the new Kling 3 model

r/AskMen HuckleberryNew777

Testosterone level technically normal but symptomatic, what is your experiences with TRT?

My husband is 39 and has had lower testosterone for years. His most recent level was 305, which is technically within normal range, but he’s now having symptoms like fatigue and low sex drive. He’s considering seeing a specialist to see if TRT is even an option despite being “normal” on paper. He has never seen a specialist before, what type of doctor usually prescribes TRT? And since he is in a normal range, I think we will have to cover the cost, about how much does it cost? Any noticeable side effects?Just trying to get a sense of what to expect before moving forward.

r/ProgrammerHumor miketierce

butWeNeedMoreClawBots

r/Frugal melissaw328

What are ways that you make household items last longer like shampoo, lotion, toilet paper,makeup or hair styling items?

As common household items are going up in cost, I am trying to think of innovative ways to make them last longer to reduce costs. I make alot of my household cleaners with vinegar,peroxide, rubbing alcohol and dawn. They work, are easy on the environment and non toxic. Also, I cut open plastic bottles to get all of toiletries and use mini spatulas.

I only wash my hair twice a week as it is long,dry and curly. In addition, I wet my hair, then use some apple cider vinegar on my hair to get rid of hair product residue and reduce number of times to shampoo each time.

And I have experimented with amount of laundry detergent to use to 2 tablespoons and a tablespoon of dishwashing detergent for each load.

What are ways that have worked for you to make household items last longer?

r/meme No-Butterfly7638

explanation?

r/LocalLLaMA arapkuliev

What's your setup for persistent memory across multiple agents?

We've been wrestling with this for a while and curious what others are doing.

The problem we kept hitting: you've got multiple agents (or humans + agents) that need to share context, and that context changes. RAG on static docs works until your codebase updates or your API responses change — then you're manually re-indexing or your agents are confidently wrong.

We ended up building something we're calling KnowledgePlane. MCP server, so it plugs into Claude/Cursor/etc. The main ideas:

Active skills — scheduled scripts that pull from APIs, watch files, scrape sources. Memory updates when data changes, not when you remember to re-index.
Shared graph — multiple agents hit the same knowledge store, see how facts relate. We're using it for a team where devs and AI agents both need current context on a messy codebase.
Auto-consolidation — when multiple sources add overlapping info, it merges. Still tuning this honestly, works well ~80% of the time, edge cases are annoying.
Architecture-wise: vector embeddings + knowledge graph on top, MCP interface. Nothing revolutionary, just wiring that was annoying to rebuild every project.

Real use case: we've got a Type 1 Diabetes assistant where agents pull blood sugar data from APIs, meal logs from a logs, and share insights. When the data updates, agents stay current without manual syncing. Outdated medical context is a bad time.

Launching soon with a free tier: https://knowledgeplane.io

what are you all using? We looked at just running Qdrant/Weaviate but kept needing the orchestration layer on top. Anyone have a clean setup for multi-agent shared memory that actually stays current?

r/Art Sanrey05

Dodger from Oliver and Company, AndrejCow, Vector, 2026 [OC]

r/mildlyinteresting Kangar

This snow looks like a person relaxing in the Adirondack chair.

91 11
Reddit
r/LocalLLaMA SUTRA8

Open-source AI agent security

Open-source AI agent security — 8 enforced layers from gateway to kill switch — Most agent frameworks trust every input, have no cost controls, and no way to shut down a rogue agent. Sammā Suit adds 8 real security layers: SUTRA gateway, DHARMA permissions, SANGHA skill vetting, KARMA budget ceilings, SILA audit logging, METTA identity signing, BODHI timeouts, NIRVANA kill switch. All enforced, not stubbed. FastAPI-based, works with any LLM. Free to self-host. https://github.com/OneZeroEight-ai/samma-suit

r/SipsTea ily300099

Congratulations on the new ride soldier!

13 11
Reddit
r/Anthropic alexeestec

After two years of vibecoding, I'm back to writing by hand / There is an AI code review bubble and many other AI links from Hacker News

Hey everyone, I just sent the 18th issue of AI Hacker Newsletter - a round-up of the best AI links and the discussions around them from Hacker News. I missed last week, so this one is a big one, over 35 links shared.

Here are some of the best links:

  • Ask HN: Where is society heading, is there a plan for a jobless future? HN link
  • Things I've learned in my 10 years as an engineering manager - HN link
  • Google AI Overviews cite YouTube more than any medical site for health queries - HN link
  • There is an AI code review bubble - HN link

If you want to receive an email with such content, you can subscribe here: https://hackernewsai.com/

r/comfyui Ok-Reputation-4641

Is anyone else having trouble using seedvr2 in their workflows?

I have the custom node installed, but it's not working. I've tried from the manager and from the custom nodes folder, but it shows up in red. I've changed the Comfy version, updated Comfy, changed the custom node version, and it still doesn't work. I don't know if it has something to do with my versions or my PC. Is anyone else having this problem? Has anyone else had this issue and found a solution?

r/PhotoshopRequest Zestyclose-Tip-3531

Hey! Could someone make the background white?

r/BrandNewSentence Ok_Plenty_3986

Why Do So Many Sift Drinks Taste Like Teletubby Blood?

r/Art artbytami333

Laughter On The Lanai, Tami Beale, Oil on canvas, 2025

r/AbstractArt artbytami333

"Laughter On The Lanai"

20" x 20"

oil on canvas

r/programming typesanitizer

Tactical tornado is the new default

r/meme Hot-Diggity_Dog

Using the millions to keep more of your millions

r/ProductHunters Illustrious-Elk-5188

How much privacy would you trade for easier answers?

One of the underrated features of traditional search is anonymity. You can ask the most basic or complicated question without worrying about being seen or judged. There’s a quiet freedom in that. Recently, while reading about future interaction models, I noticed people referencing a waitlisted grace wellbands in the context of more direct communication between humans and software. It wasn’t the specifics that caught my attention it was the implication that richer understanding usually requires richer data.

And that’s where the tension lives. We want technology to “get” us, but being understood often means being observed to some degree. Maybe the real question isn’t whether tools will become more perceptive it’s whether our definition of acceptable privacy shifts alongside them. Where does that boundary sit for you? Is convenience worth a little visibility, or is anonymity something you’d rather not negotiate?

r/WouldYouRather GlitchOperative

WYR never have to do small talk again OR always know the perfect thing to say?

r/LoveTrash Trashbagok

Paying by card sometimes for no reason

r/SideProject ColdStorageParticle

I built a simple app to play guitar through Discord without the VoiceMeeter nightmare

Hey everyone,

I got tired of trying to explain VoiceMeeter to my bandmates and friends every time we wanted to jam and/or Share some music over Discord. So I built SimpleMix - a dead simple audio mixer.

Out of frustration with Voicemeter i finally decided to make something more simple a normal human can setup and use.

What it does:

- Pick your mic

- Pick your guitar/audio interface

- Hit Start

- Done. Both go to Discord.

No routing matrices. No confusing virtual cables to configure. Just two inputs → one output.

It uses VB-Cable under the hood (free download), but you don't need to touch it - SimpleMix handles everything.

Features:

- Volume control + mute for each input

- Noise gate, EQ, compressor

- Global hotkeys (mute while gaming, etc.)

- Low latency

$5 one-time, no subscription BS.

Link: https://simplemix.tech

Would love feedback from anyone who's dealt with the "how do I get my guitar into Discord" problem. What features would you want?

r/Art Artby_Romain

Lace and shadows, Romain Eugene, Oil/board, 2025

r/Strava alexkunitsa

Tested the new Strava app on Apple Watch (with maps). My first impressions.

I finally had a chance to properly test the new Strava Apple Watch app, and honestly, it’s not ready to replace the native Workout app yet.

Main issues:

  • The amount of data written to Apple Health is extremely limited (only time, distance, calories, pace, and HR). Most third-party workout apps save significantly more metrics.
  • The route map is not saved to Apple Health at all.
  • There’s no compass-based navigation. Seeing the map is nice, but actually navigating with it is awkward and frustrating.
  • Precision Start is missing on Apple Watch Ultra (the workout starts immediately, without waiting for a GPS lock).

Until these gaps are addressed, I’ll stick with the native Workout app, or use alternative apps when I specifically need maps.

Nice-to-have improvements:

  • Haptic feedback for buttons (it’s hard to tell if a tap actually registered)
  • Hotkeys for start/pause

Curious if others are seeing the same limitations, or if I’m missing something obvious.

r/Art VladTheThird999

Exoplanet Skyscraper, Nick, Pencil/Photoshop, 2026 [OC]

13 0
Reddit
r/SipsTea stunnerswag

He's really dead😭

29 6
Reddit
r/SipsTea Queasy-Hedgehog1043

many of you,,,,🖤

14 7
Reddit
r/Adulting NoahCzark

The Notorious B.I.: The Only Deity Worshipped By Atheists

TL;DR: The decision about whether or not to have kids is based on our own personal desires, temperament, goals, values, resources, circumstances, etc. That's it. Our grandkids will make that same decision for themselves. The so-called "Biological Imperative" that some of us invoke is the ultimate "Sky Poppa" – a fraudulent religion that serves only to undermine that decision-making process.

Millions of years ago, the evolutionary process provided us with the cognitive capacity to transcend deterministic reproduction. Yet, we sometimes still invoke it as a "reason" we continue to procreate.

For our pre-erectus ancestors, reproduction wasn't a decision, nor was it part of some mystical biological "plan." It was just the inevitable outcome given their biology: the desire for sexual pleasure, coupled with the lack of technology/cognitive capacity to separate sex from procreation. Pleasure was pursued, beings resulted, they were reared to maturity. Not good, not bad – that's just how biology works when you lack both the neural architecture for complex planning and the technology to separate sex from reproduction.

But then we developed the capacity for abstract reasoning, future modeling, cost-benefit analysis – and we invented birth control. We became the only species capable of consciously recognizing that reproduction is optional, and being able to choose. That's not evolutionary failure – that's evolution working as it does.

Yet we often still explain reproduction – not sex; reproduction – as a function of the "biological imperative." But that's ridiculous. We might as well say that having children is "God's will." Replace 'God' with 'biology' and suddenly it sounds scientific. It's not.

Reliance on the so-called "biological imperative" allows it to function as the ultimate "Sky Daddy." And what makes it uniquely insidious is that it's often promoted by even the most hardened, self-proclaimed atheists; the only religion shamelessly endorsed by skeptics, rationalists, corporate CEOs, and the intellectual elite. People who would dismiss the notion of 'God's plan' earnestly cite evolutionary biology as an explanation for modern reproduction through spectacularly sloppy thinking.

No religion, no philosophy, no moral framework is literally "true." But at least in their best incarnations, the major religions provide frameworks for navigating genuine uncertainty – mortality, suffering, meaning, ethics. But the Notorious B.I.? It provides no value. It is the only truly fraudulent religion—with a deity that would expect you to surrender the prefrontal cortex that is your birthright, and the good sense your momma gave you.

Yes, there's an urge to have sex – and we constantly override it. We're not compelled to have sex with everyone we find attractive. And sex isn't reproduction. The urge to have sex is not the same as an urge to create and raise children for 18+ years. If it were, condoms wouldn't exist. The fact that we invented, mass-produced, and routinely use birth control proves we've already rejected "biological imperative" as binding.

Yes, many people derive deep emotional satisfaction from raising children. But that's not a biological imperative we're helpless against – it's a preference, and emotional satisfaction from parenting is real. It's also optional.

Evolutionary biology doesn't compel us to have kids. Evolutionary biology developed in us a brain capable of rational decision-making.

We're not an endangered species. Population decline isn't existential crisis. And you know what? Even if humans eventually went extinct... the universe wouldn't notice, and wouldn't care. And neither would you, or your grandkids, because...

Praise the Lord and pass the condoms.

r/funny OkAd5565

Didn’t see that coming..

2323 38
Reddit
r/leagueoflegends fummma

"If we make playoffs, we're winning Playoffs." | LEC Versus 2026

r/PhotoshopRequest firecloth7

Please replace the pope with hands

17 4
Reddit
r/linuxmemes Extension_Ad8289

what "we"?

KPatience is a solitaire collection made by KDE in case you don't know

r/ARAM BetterNerfTeemo

Prom Queen still bugged?

I just picked Prom Queen and it still wasn't triggering. Even tho the patch notes said they fixed it. Anyone else?

r/AbandonedPorn dbltax

Chambre du Commerce, Belgium

It's since been fully restored.

17 0
Reddit
r/wordchewing UnitQZ

Sorry yall had to see it

27 31
Reddit
r/midjourney Sharp_Alternative845

No title - 1

82 5
Reddit
r/Art darkened_m00d

Overwhelmed, darkened_m00d, digital art, 2025 [OC]

r/toastme Actual_Green_7433

Mentally and physically falling apart… M 30

PTSD has been at an all-time low. Physically falling apart day-by-day… recently become almost deaf in both ears… I know I look morbid… need some uplifting please

149 39
Reddit
r/AI_Agents Inevitable-Earth1288

Is using AI coding tools a required skill for modern developers?

Hey guys, I was talking to a friend of mine who's looking for a job recently. He's been going through interviews and stuff, and it seems like AI coding is one of the most in-demand skills among developers today.

What's your experience? Have you been asked to use AI in your projects?

r/painting artbytami333

"Laughter On The Lanai"

20" x 20"

oil on canvas

inspired by the Golden Girls lanai

r/Adulting Character-Set-49

During a long exhausting and broke job search, how do you relax once you finally get a job?

r/PandR ShinyTinyWonder38

Li'l Sebastian: The Mini Horse, The Myth, The Legend | Parks and Recreation

r/Seattle Orangerrific

Spotted some massive loser’s work along NE Campus Pkwy, south of U District station. Anyone have time to go tear them down?

I was on my way to work, and I still haven’t gotten around to buying a paint scraper to carry around to take care of shit like this myself, but I figured some of you may have some extra time today to do something about this.

It just feels especially targeted near udub, because of how many international students live and commute nearby :( I don’t want any of them to feel like they are unwelcome and unsafe in our city

213 167
Reddit
r/leagueoflegends cloudnep

Aurora Fan Art

After hours and hours I've finished my fan splash of Aurora! I've always liked her character and design.

Learnt a lot from Bo Chen's class when painting this!

https://www.artstation.com/artwork/4N0ybk

40 8
Reddit
r/painting Original_Media_6427

My latest picture ❤️

r/Adulting Popular_Growth_7026

What are you giving up because your mental load is heavy?

I’ve spent most of my adult life trapped in a cycle of emotional over-analysis. Recently, I had a realisation that made me think how much mental space I waste on things that don't actually move the needle for me.

I’m constantly asking myself: Did I sound weird? Why did they say that? Why haven’t she replied to my text yet? Am I even good at my job? The list is endless. I realized this isn't just "being sensitive"—it’s a survival mechanism for dealing with uncertainty and a deep-seated fear of being "the problem" or "the outsider." 

I have a close friend whose husband hasn't been able to find a job for an entire year. Naturally, this has put a massive strain on their marriage. She’s done everything—fixed his CV, helped with projects, provided emotional support. Lately, since they went quiet, I assumed they were just drowning in depression.

Living in Estonia, where the winter is brutal and the lack of sunlight makes you want to stay in bed forever, I often let my mood dictate my actions. I assumed they were doing the same.

But I was wrong.

I met her yesterday and found out that despite the worries, she is working double time. She’s taking courses every evening, building new projects, and pushing forward—even while a reality show plays in the background just to keep her sane. She is actively fighting her reality to change that she isn't happy with. (its not only about the husband its also about the job, even the country.)

What I lack is her ability to compartmentalize. If my husband hadn't made a real effort to find work for a year—I’d be so busy feeling the weight that I’d forget I have the power to move. 

My friend feels those things too, but she doesn't let them rule her.

But now that i’ve realized, i’m done wasting time on self-pity or waiting for the "perfect mood".

These days, I look at the things I’ve been putting off because I "didn't feel like it" or because I was too busy analyzing a social interaction from three days ago. 

So my question: What are you procrastinating on because you're too busy "feeling" your way through the day instead of "doing" your way through it?

r/creepypasta gamalfrank

My father’s rotary phone rings every night at 3:00 AM. I finally followed the cord, and I wish I hadn't.

the only way I can describe it. It’s not just the television, which sits in the corner of the living room like a grey, unblinking eye, hissing that white noise at a volume just low enough to be a vibration in your teeth rather than a sound in your ears. It’s the house itself. The air here hangs suspended, thick with the smell of menthol rub, dust that has settled since the nineties, and the distinct, sweet-rot scent of old paper decomposing in damp corners.

Moving back in wasn't a choice so much as a lack of options. My career had imploded in the city, a slow-motion car crash of layoffs and bad luck, and my father’s health had taken a nosedive that the neighbors couldn't ignore anymore. They called me after he was found wandering the lawn in his underwear, screaming at a squirrel that he claimed was transmitting government secrets. Dementia, the doctors said, mixed with a general shutting down of the systems. He was physically frail, a husk of the man who used to terrify me with his booming voice, but his mind was the real casualty. It had retreated into a fortress of confusion and silence, leaving only a shell that stared at the snowy screen of a television set that hadn't been connected to a cable box in a decade.

The house was a time capsule, but the kind you regret opening. Every surface was covered. Stacks of Reader’s Digest from 1988, towers of yellowing newspapers, ceramic figurines of shepherdesses with chipped noses, and boxes of unidentified rusted hardware. The clutter created narrow canyons through the living room and hallway, pathways you had to navigate sideways.

And then there was the phone.

He refused to have a cell phone in the house. He claimed the signals scrambled his thoughts, made the "buzzing" inside his head louder. I tried to argue with him during the first week, pulling my smartphone out of my pocket to show him it was harmless, but he went into such a violent fit of trembling and weeping that I eventually just turned it off and threw it in my suitcase. To communicate with the outside world—to order his prescriptions, to call the pharmacy, to maybe, eventually, find a job—we relied on the landline.

It was a rotary. A heavy, black Bakelite beast that sat on a dedicated table in the hallway, the centerpiece of a shrine made of phonebooks and message pads that hadn't been written on in years. It was connected to the wall by a curly, frayed cord that looked like a dried earthworm.

The first month was just the routine. I’d wake up, change his sheets, sponge-bathe him while he stared past me at some invisible horizon, and then park him in his armchair in front of the static. I’d spoon-feed him oatmeal that he barely swallowed. The isolation was absolute. The suburbs out here aren't the friendly kind where neighbors wave; they are vast, silent grids of dying lawns and closed blinds.

The calls started in the middle of the second month.

I am a light sleeper. The silence of the house usually kept me on edge, the settling of the foundation sounding like footsteps. But when the phone rang that first time, it shattered the night like a hammer through glass.

It was a physical sound, that mechanical bell.

Brrr-ing.

Brrr-ing.

I jolted up, heart hammering against my ribs, squinting at the glowing red numbers on my digital clock. 3:00 AM. Exactly.

I stumbled out of the spare room, navigating the hallway clutter by memory and the pale moonlight filtering through the grimy windows. The phone kept ringing, an insistent, angry sound. My father’s door was closed. He didn't stir. He slept like the dead, aided by a heavy dose of sedatives.

I picked up the receiver, the plastic cold and greasy against my ear.

"Hello?"

My voice was a croak, thick with sleep.

Static. A crackling, popping interference, like a radio tuned between stations during a thunderstorm.

"Hello? Is anyone there?"

I asked again, annoyance beginning to override the adrenaline.

"It’s dark,"

a voice whispered.

I froze. It was a child. A boy, maybe seven or eight years old. The voice was trembling so hard the words were barely coherent, wet with tears and snot.

"Who is this?"

I gripped the phone tighter.

"Where are your parents?"

"The Rabbit Man,"

the boy whimpered. The audio quality was terrible, fading in and out as if he were calling from the bottom of a well.

"He says I have to wait in the dark room. He says I was bad."

A cold prickle danced down the back of my neck.

"Listen to me,"

I said, trying to keep my voice steady.

"You need to hang up and call 911. Do you know how to do that?"

"My head hurts,"

the boy sobbed, his voice pitching up into a jagged whine.

"The Rabbit Man hit the wall. He dragged me. I want to go home. Please."

"Where are you? Tell me where you are."

"I don't know,"

he gasped.

"It smells like... like oil. And dirt. I can’t see my hands."

"Stay on the line,"

I said, looking around the dark hallway as if help might materialize from the shadows.

"I’m going to call for help on another line, okay? Just stay—"

The line clicked. Then, the hum of the dial tone.

I stood there for a long time, the receiver still pressed to my ear, listening to the drone of the disconnected line. I eventually hung up and dialed *69, hoping to trace the last call.

“The service you are attempting to use is not available from this line,” a robotic female voice informed me.

Of course. The landline package was probably the bare minimum, untouched since the eighties. I sat on the floor beside the phone table, hugging my knees. It had to be a prank. Kids these days, with their apps and their boredom. They probably found a list of active landlines and were seeing who they could scare. It was a script. "The Rabbit Man." It sounded like something from an internet creepypasta.

But the fear in that voice... it stuck with me. It was the wet, gasping quality of the breathing. The sheer exhaustion in the terror.

The next day, the house felt heavier. The dust seemed to hang lower in the air. My father was particularly difficult, refusing to open his mouth for his medication. He kept turning his head toward the hallway, his milky eyes widening, but when I asked him what he wanted, he just mumbled nonsense words. "Soft," he said once. "Soft ears."

I ignored it. He said a lot of things.

That night, I didn't sleep. I lay in bed, staring at the ceiling, waiting.

3:00 AM.

Brrr-ing.

I was at the phone before the second ring finished.

"Hello?"

"I’m thirsty."

The same voice. Weaker this time.

"It’s so hot in here."

"Who are you calling?"

I demanded, skipping the pleasantries.

"Is this a game?"

"I missed the fireworks,"

the boy whispered, ignoring me completely. He sounded delirious.

"Mom said we could watch the fireworks after the rides. At the Millennium Fair. I wanted to see the big wheel."

My stomach dropped.

"The Millennium Fair?"

I asked, my voice was a whisper.

"The Rabbit Man gave me a balloon,"

the boy continued, his words slurring.

"He said... he said he had a surprise. Under the stage. But we went down. We went down so far."

"Kid, listen to me. The Millennium Fair... that isn't happening now."

"I want my mom,"

he cried, a sudden, piercing shriek that made me pull the phone away from my ear.

"It’s too tight! The walls are too tight!"

Click. Hum.

I stood in the hallway, shivering despite the summer heat trapped in the house. The Millennium Fair. I remembered it. Everyone in the county remembered it. It was a massive traveling carnival that had come through the state capital to celebrate the turn of the century. New Year's Eve, 1999.

I was in high school then. I remembered the lights, the sheer scale of it. But that was 26 years ago.

If this was a prank, it was incredibly specific and incredibly cruel. Why reference a fair that happened a 26 years ago? Was the kid reading a script? Or was it a recording?

I went to the kitchen and made coffee, my hands shaking as I poured the water. I spent the hours until dawn sitting at the kitchen table, staring at the phone in the hallway. I tried to rationalize it. A recording made more sense. Someone playing an old tape over the line? But the boy had responded to the flow of conversation, even if he didn't answer my questions directly.

When the sun came up, I drove to the library in the next town over—the only place with decent Wi-Fi. I needed to verify my memory.

I searched "Millennium Fair kidnapping."

The results were sparse. It had been a chaotic event. Too many people, too much alcohol, Y2K panic mixed with celebration. There were reports of fights, a few drug arrests, lost children who were found within hours.

But there was one cold case.

Michael Miller, age 7. Last seen near the exit of the fairgrounds, wearing a blue windbreaker and holding a red balloon. Witnesses reported seeing him walking with a costumed character, though no mascots were scheduled for that area of the park.

I stared at the grainy photo of the boy on the screen. He had a gap-toothed smile and messy hair.

Seven years old.

The boy on the phone sounded seven.

I went back to the house with a knot of dread in my gut so tight it made it hard to breathe. The house smelled worse today—a sharp, acrid tang of ammonia cutting through the dust. My father was sitting exactly where I’d left him, bathed in the static glow.

"Dad?"

I asked, walking into the living room.

He didn't blink.

"Dad, did you ever hear about a boy going missing? Years ago? At a fair?"

Slowly, agonizingly, his head turned. His neck crunched, a dry, brittle sound. He looked at me, and for a second, the fog in his eyes seemed to clear, replaced by a sharp, predatory lucidness that I hadn't seen in years.

"Everyone goes missing eventually,"

he rasped. Then he turned back to the TV and let out a long, wheezing laugh that turned into a cough.

I decided then that I wouldn't answer the phone again. It was doing something to me. It was making the shadows in the corners of the room look like crouching figures. It was making the silence of the house sound like held breath. If it was a prank, I was feeding it. If it was... something else... I didn't want to let it in.

For the next three nights, the phone rang at 3:00 AM.

Brrr-ing.

Brrr-ing.

I lay in bed, pillow wrapped around my head, counting the rings. It always rang exactly ten times. Then silence.

But the silence was worse. Because in the silence, I started hearing other things. Sounds coming from inside the house.

A soft scraping sound. Like fabric dragging over wood.

It seemed to come from the ceiling.

By the fourth day of ignoring the calls, the atmosphere in the house had become unbearable. The air felt pressurized. My father was agitated, rocking back and forth in his chair, muttering about "leaks" and "patches."

I needed to do something productive. I needed to exert some control over this rotting environment. I decided to tackle the attic.

The attic hatch was in the hallway, right above the phone table. I hadn't been up there since I was a child. It was a forbidden zone, the place where my father stored his "projects." He was a handyman by trade, a tinkerer. He fixed things—toasters, radios, lawnmowers.

I pulled the cord, and the folding ladder creaked down, releasing a shower of dust and dead flies. I climbed up, coughing, clicking on the single bare bulb that hung from the rafters.

The attic was stiflingly hot, smelling of baked pine and fiberglass insulation. It was crammed with boxes, just like the rest of the house, but these were older. Wooden crates, metal footlockers.

I started moving things around, looking for space, looking for anything that could be thrown away. I found boxes of old tubes for radios, jars of rusted nails, a collection of license plates from the seventies.

And then I found the trunk.

It was pushed all the way into the eaves, hidden behind a stack of water-damaged insulation rolls. It was an old steamer trunk, heavy and bound in leather that had cracked like a dry riverbed.

I shouldn't have opened it. I knew that the moment my hand touched the latch. The metal was cold, unnaturally so for how hot the attic was.

I popped the latches. They groaned in protest. I threw the lid back.

The smell hit me first. It was the smell of the garage—motor oil, grease, gasoline—mixed with something biological. Sweat. Dried saliva. Unwashed hair.

Lying inside the trunk, folded haphazardly, was a suit.

It was made of a coarse, grey synthetic fur that had matted and clumped with age and grime. There were dark stains on the chest and stomach, stiff and crusty.

I reached out, my fingers trembling, and pulled it up.

It was a rabbit suit. But not a cute Easter bunny. This was something homemade, something stitched together with fishing line and desperation. The headpiece was heavy, made of papier-mâché covered in that same matted fur. The ears were long and asymmetrical, one bent sharply in the middle as if broken. The eyes were empty sockets, rimmed with red felt. The mouth was a fixed, jagged grin cut into the mask, revealing a mesh screen behind it that was clogged with... something dark.

I dropped it. I dropped it like it was burning.

"The Rabbit Man."

The boy’s voice echoed in my head.

I backed away, scrambling over the boxes, my heart hammering a frantic rhythm against my ribs. I couldn't breathe. The air in the attic was suddenly sucked out, replaced by the vacuum of realization.

My father.

My father, the handyman. The man who could fix anything.

I scrambled down the ladder, nearly falling the last few feet. I hit the hallway floor and looked at the phone. It sat there, silent, accusing.

I ran into the living room. My father was there, bathed in the static.

"Dad,"

I said, my voice shaking so hard it distorted the word.

He didn't move.

"Dad, what is in the attic?"

I shouted.

"What is that suit?"

He stopped rocking. The static hissed. Shhhhhhh.

He slowly turned his chair. He didn't use his feet; he just shifted his weight, the old wood of the chair groaning. He faced me. His eyes were clear again. Lucid. Horribly, terrifyingly lucid.

He looked at me with a mixture of pity and annoyance, like I was a child interrupting an important meeting.

"I had to hide this part of me,"

he said. His voice was strong, devoid of the tremulous wheeze of the last few months.

"He was broken."

I stared at him, my blood running cold.

"Who? Who was broken?"

"The boy,"

my father said.

"He wouldn't stop crying. I tried to fix him. I tried to make him quiet. But he was broken inside."

He smiled. It wasn't a fatherly smile. It was a baring of teeth, yellow and long.

"So I put him where the noise wouldn't bother me. "

I stumbled back, bile rising in my throat.

"You... you killed him?"

"I fixed the problem,"

he said, turning back to the TV.

"Now, be quiet. The show is starting."

He dissolved back into the slump, the clarity vanishing as quickly as it had come.

I ran to the kitchen. I needed to call the police. I grabbed my cell phone from my bag—dead battery. Of course. I hadn't charged it in weeks.

I looked at the hallway. The rotary phone.

I couldn't touch it. I couldn't go near it. But I had to. I had to call 911.

I approached the phone like it was a bomb. I lifted the receiver.

Silence. No dial tone.

I tapped the hook. Nothing. Dead air.

I checked the wall jack. The plastic clip was snapped in, tight.

"Come on,"

I whispered, panic rising.

"Come on."

I followed the cord. It wound from the back of the phone, coiled across the table, and dropped behind it.

I pulled the table away from the wall.

The cord didn't go into the wall jack.

The jack on the wall was empty. Painted over. This was new, when did this happened ?

The cord from the phone went down. It went through a crudely drilled hole in the floorboards, right next to the baseboard.

My mind couldn't process it. I had been getting calls. I had heard the ringing. I had spoken to the boy.

I fell to my knees. I grabbed the cord and pulled. It was taut. Anchored to something below.

I needed to see. I didn't want to, but the compulsion was a physical force, a hook in my navel pulling me forward.

I ran to the garage and grabbed a pry bar. I came back, the sound of my breathing loud and ragged in the silent house. My father was humming in the living room, a low, discordant tune.

I jammed the pry bar into the gap between the floorboards where the wire disappeared. The wood was old, but the nails screamed as they gave way.

Craaaack.

I levered up one board. Then another. The smell rushed up at me.

There was a space between the floor joists. But it wasn't just a crawlspace. It had been modified. Lined.

Egg cartons. layers and layers of them, glued to the joists and the subfloor. And acoustic foam. And old carpet scraps.

It was a soundproof box. A coffin buried in the architecture of the house.

I shone the flashlight from the hallway down into the hole.

The space was small. cramped. Maybe three feet deep and four feet long.

In the center of the nest, lying on a bed of filthy rags, was a skeleton.

It was small. The bones were yellowed, delicate. It was wearing the tattered remains of a blue windbreaker.

And in its skeletal hand, gripped tight, was the other end of the phone cord.

It wasn't plugged into anything. The wires were stripped, wrapped around the finger bones of the skeleton's hand, rusted and fused to the calcium.

The receiver of a toy phone—a Fisher-Price plastic thing, red and blue—lay near the skull. But the cord... the cord connected the real phone in the hallway to the boy’s hand.

I stared at it. The physics of it. The impossibility of it.

And then, the phone in the hallway, the phone that was currently disconnected from the wall, the phone whose wire ended in the grip of a 26 years old corpse...

It rang.

Brrr-ing.

The sound vibrated through the floorboards, through my knees, into my teeth.

Brrr-ing.

I looked down into the hole. The jaw of the skull was open, fixed in an eternal scream.

Brrr-ing.

I didn't answer it. I couldn't.

I backed away, scrambling on my hands and feet, crab-walking away from the hole, away from the hallway.

I scrambled into the living room. My father was standing now. He wasn't looking at the TV. He was looking at the hallway.

He looked at me, and his face was full of a terrible, childlike confusion.

"Do you hear that?"

he whispered.

The ringing didn't stop. It got louder.

"He's loud today,"

my father said, covering his ears.

"He's so loud. I thought I fixed it. I thought I made the room quiet."

The ringing wasn't coming from the phone anymore.

It was coming from under the floor. It was coming from the walls. It was coming from the attic.

"I tried to tell you,"

The kids voice suddenly whispered. but from the static on the TV.

I spun around. The screen was no longer just snow. Shapes were forming in the black and white chaos. A figure. Tall. Wearing long ears.

"I tried to tell you,"

the TV hissed, the volume rising, screaming the words. "IT'S DARK."

My father started to scream. A high, thin wail that matched the pitch of the static.

I ran. I didn't grab my keys. I didn't grab my bag. I smashed through the front door, stumbling out into the humid night air of the suburbs. I ran until my lungs burned, until I was three streets away, standing under the buzzing sodium light of a streetlamp.

I looked back toward the house. It sat there, dark and silent against the night sky.

But even from here, three blocks away, I could feel it. A vibration in the ground. A rhythmic, mechanical pulse.

Brrr-ing.

Brrr-ing.

I’m in a motel now. I walked until I found a gas station and called a cab. I haven't called the police yet. I don't know what to say. "My father is a killer"? "The phone line is connected to a ghost"?

I’m sitting on the edge of the motel bed. There’s a phone on the nightstand. A modern one. A generic beige block with buttons.

I unplugged it as soon as I walked in. I pulled the cord right out of the wall.

But I’m staring at it.

Because five minutes ago, the red message light started blinking.

And I can hear it. Faintly. Coming from the earpiece sitting in its cradle.

Static.

And a whisper.

"I found a new wire."

r/FreeCompliments Adorable-Task2652

:⁠-⁠) [female]

Posting again bc it got deleted before

r/ClaudeAI AIMultiple

FinanceReasoning benchmark results

We benchmarked 36 large language models on complex financial reasoning using the FinanceReasoning hard dataset (238 multi-step quantitative questions).

Top results:

claude-opus-4.6: 87.82% accuracy, 164,369 tokens (near-top accuracy with much lower token usage). claude-opus-4.5 has 84.03% accuracy with 154,505 token usage. Opus’s performance increased ~3.80%.

gpt-5-2025-08-07: 88.23% accuracy, 829,720 tokens (highest accuracy)

gpt-5-mini-2025-08-07: 87.39% accuracy, 595,505 tokens

gemini-3-pro-preview and gpt-5.2: both 86.13% accuracy, gpt-5.2 uses 247,660 tokens vs 730,759 for gemini-3-pro-preview, making it about 3x more efficient

Accuracy was measured as correct answers within a tolerance threshold. Token usage was tracked as a proxy for computational cost.

For one model (gpt-4o-mini), we also tested a RAG setup. This increased accuracy by 10%.

Results show large variation in both reasoning accuracy and efficiency across models, and a clear trade-off between performance and cost.

To see the full methodology: https://research.aimultiple.com/finance-llm/

r/Strava GREEYVIPER

25k in burning temperatures 32°c - 34°c

Fastest split 5:07/km in this run , broke the pr for 1km (old fastest split 5:11)

Hydration & fueling (No pre-run food taken)

1400ml water , 500 ml carbonated lemon flavored beverage (42 grams sugar & 22 grans sodium ) (bad hydration , no sodium & carbs i agree but i don't have options)

Shoes Cuation!!!! Never run with trekking shoes (Got blister for lack of Breathability and hot temperature)

I just bought them for some hikes and trekks soon , so before journey i wanted to test the grip on mud - gravel - rocks - concrete flat & Comfort

Ik its a bad performance 🙃 but i hope it will be good in future

r/OldSchoolCool SwordfishDeux

Patrick Swayze, Kelly Lynch and Sam Elliot, 1989

63 12
Reddit
r/LocalLLaMA valdanylchuk

One-line PSI + KS-test drift detection for your FastAPI endpoints

Most ML projects on github have zero drift detection. Which makes sense, setting up Evidently or WhyLabs is a real project, so it keeps getting pushed to "later" or "out of scope".

So I made a FastAPI decorator that gives you PSI + KS-test drift detection in one line:

from checkdrift import check_drift

@app.post("/predict")
@check_drift(baseline="baseline.json")
async def predict(application: LoanApplication):
    return model.predict(application)

That's it. What it does:

  • Keeps a sliding window of recent requests
  • Runs PSI and KS-test every N requests
  • Logs a warning when drift crosses thresholds (or triggers your callback)
  • Uses the usual thresholds by default (PSI > 0.2 = significant drift).

What it's NOT:

  • Not a replacement for proper monitoring (Evidently, WhyLabs, etc)
  • Not for high-throughput production (adds ~1ms in my tests, but still)
  • Not magic - you still need to create a baseline json from your training data (example provided)

What it IS:

  • A 5-minute way to go from "no drift detection" to "PSI + KS-test on every feature"
  • A safety net until you set up the proper thing
  • MIT licensed, based on numpy and scipy

Installation: pip install checkdrift

Repo: https://github.com/valdanylchuk/driftdetect

(Sorry for the naming discrepancy, one name was "too close" on PyPI, the other on github, I noticed too late, decided to live with it for now.)

Would you actually use something like this, or some variation?

r/personalfinance jzb189l

Large 401k; employer bought out; what options do I have?

Hi all, I have about 1.5M in my 401k from long time employer who just got acquired and is now part of a larger company. They have given us an option to roll over to their 401k (which is crappier), rollover to IRA, or lump-sum.

Clearly lump sum is a no go, but is there any options for getting it into my brokerage account via IRA (Mega backdoor Roth?) Or is this just too much money to do anything with?

Thanks!

r/AbstractArt HoopsEmbro1dery

“Undertow” , mixed Media on wood panel.

18 1
Reddit
r/mildlyinteresting Comprehensive-Art902

Just found out the package leaflet for my medication is nearly half the size of a grown adult

29 25
Reddit
r/Unexpected Oddsemen

Remove without damage

19106 425
Reddit
r/mildlyinteresting EeeAynEee

Brand new manhole cover

r/Art Banapandana

Uchiha X-men, Banapandana, Line-art, 2026 [OC]

r/homeassistant FezVrasta

How to style the mushroom alarm card keypad?

I need to apply this CSS to the dialog that shows the keypad but I can't find how to configure the card_mod rule to target it. Any idea?

.mdc-dialog__surface {

max-height: 100vh;

height: 100vh;

max-width: 100vw;

width: 100vw;

border-radius: 0;

}

r/AbstractArt SniffCoffee

Out Of Focus

Fractal art. Made in Ultra Fractal 6.05. A result of ponging parameters (kinda the receipe of the fractal) with other artists. 1. person make a fractal, pass the parameters to the next one, they are tweaked, then sending down to next one. You can find the tweaks from other artists in the UF chain pong on Deviantart.

18 2
Reddit
r/Adulting Super_Bright

How do I save my life and stop living miserably?

Hello all! I'm wanting some advice about how to improve my life, things just feel really stuck and my mental state is sliding a bit recently, so I wanted to know some basic advice anyone may have to help things.

I'm a 27 year old guy from the UK and I've got a few things going for me. I've got a job that pays okay, let's me work from home and is super flexible, I get therapy once a week and I love Sports, Drawing, Music and Video Games. I go to games both Home and Away, it brings me a lot of joy even when the team aren't always playing great. Truth is though, that's the only thing that gets me out of the house. Occasionally I go for walks with the dog but other than football and dog walks, I'm very reclusive. I live at home with my two mid-50's parents and I spend most of my free time in a small "everything room" bedroom.

Living this way is making me feel really trapped. I feel like my whole life for the last 3 years has been in like 4 rooms (including the office at work where I go once a month.) it's a deeply lonely experience and driving across the country for 90 minutes away from it all isn't enough to fix it. I spend a lot of time in my own head regretting my choices and feeling like maybe it'll never get better. Therapy helps but I find myself frustrated because action feels so hard to take.

I don't have many friends nowadays. Lost a lot moving back home during COVID after living in another part of the country. My childhood best friend cut me out of his life when I started trying to confide in him some of my mental struggles. No one's really asking me to go places anymore and I don't feel very loved at all. There's no woman in my life at the moment either, I'm still not in the right headspace for that. I'd be a bad partner right now. Too needy.

My long term plan was to move out ASAP. My family home is in a very rural part of the country in a very small village far from any major centres. I decided to move to a small city nearby that has links into more major cities in like 30 minutes. I chose this place because it's close to home, would let me avoid the major hustle and bustle when I wanted peace and would allow me to keep my football team as a regular part of my life. My job and football are, in truth, the only things keeping me here though, I could in theory move further but I'm a little scared to lose the one thing in my life bringing me joy at the moment. If I made this move then I'd look to build a bigger circle. I want to try hobby groups, language classes (I've been doing Italian Duolingo and wanna move beyond that) or something to make me feel less alone. Maybe find people who I can finally trust... Feel less like I have to be perfect to deserve their care.

The other thing I'm juggling is I strongly suspect I'm neurodiverse and undiagnosed. I'm not going to claim labels like autistic or ADHD if a doctor hasn't diagnosed me as such, but both feel possible. My therapist said she thinks I'd benefit from getting tested for both too. This adds an extra layer for me as both are something I'd never considered until recently, I worry it's a barrier that'll keep me from being able to connect and that terrifies me to be honest. Everyday where I do nothing is worse than the last so I really want to feel like I've got a plan that will help me. I don't want 2026 to be another year that slipped through my fingers before I realised it .

So that's why I'm here. I wanted to know if this plan sounds like it really would help, or if it could just make things worse. This plan kind of is my lifeline right now, I want to make sure it's right. Please if you do have any ideas or input id love to hear it. Thank you so much for reading, I know it was a wall of text, and a huge thank you if you do chose to leave a comment. Have a great weekend and I hope you carry love in your heart ❤️.

r/creepypasta shortstory1

Paul stop going on the show, you are too ugly

Paul always applies to be on a game show where a group of women will decide whether he is good looking or not. If they find a guy to be good looking then the guy can choose which girl to go out with, but if the women don't find the guy attractive, they will have to shoot the person kneeling in front of them. So Paul went on this game show 4 years ago and a row of 10 women were there to decide whether he is good looking or not. When Paul went on stage and as soon as he went on stage, every girl shot the person kneeling in front of them.

One girl though didn't want to shoot the person in front of her and was hesitant. The game show host asked her "do you find Paul unattractive?" And the girl replied with a "yes"

"Then you should shoot the person kneeling in front of you" the game show host told her

"Yes I know but the guy kneeling in front of me is attractive" the girl replied

"If you find Paul unattractive then shoot the guy in front of you" the game show host urged her

The girl then shot the guy in front of her and all of the dead bodies were carried away, and new guys came on and kneeled in front of the girls. As Paul went away it was now another guys turn to go on stage, and he was ugly just like him. Then Paul heard a barrage of gun shots which indicated that the girls had found the other guy ugly, and so the girls shot the people kneeling in front of them.

As Paul kept going onto the show, and every time there was a new bunch of girls deciding whether he was attractive or not, they all shot the person kneeling in front of them which indicated that he was unattractive. Paul noticed that the people kneeling in front of the girls were handsome men. There was only one time when the girls didn't hesitate shooting the men kneeling in front of them when an unattractive guy came on stage, it's when the men kneeling in front of them were also unattractive.

Even then Paul kept on going onto the show and every time, the women kept shooting the attractive men kneeling in front of them. Then one day Paul found his flat which was on the top 20th floor, was now on the ground floor. Paul didn't understand how this was happening. Paul kept finding his fat changing floors. Then Paul was surrounded by the spirits of all the men who were shot because the women found him unattractive.

"Paul stop going on the show, you know you are unattractive but you keep on going on. You keep going on because you enjoy watching handsome men getting killed" one of the ghosts said to Paul

"What are you gonna do about it" Paul said with a smile on his face

Then one day Paul woke up finding himself kneeling in front of a woman. All other guys were also unattractive and kneeling in front of a woman. They all hoped the guy who comes on stage to be attractive. Unfortunately the guy who came on stage was ugly, and the women didn't hesitate killing the ugly men kneeling in front of them.

r/TwoSentenceHorror failed_novelty

The genie said it life would be like a dream come true.

Now nothing makes sense and I'm losing my elephants.

r/mildlyinteresting Effective-Window-922

There is a former Linens N Things near me that went out of business 16 years ago, but still has signage up to this day.

25 17
Reddit
r/whatisit xLUCIDx9

Uhhmm... what is this? Lol 😆

Found this caveman thing and have never seen one before lol anyone ever seen this before?

17 11
Reddit
r/photoshop SpookyWMP

How was this poster made?

I have always loved the style of this poster and have always wonder how is it done.

If i would want to replice the style of this poster, would i need the subject in a certain lighting or it can be achieve with any image?

r/Art killerplantz

ghostface poster, chip clark, digital, 2026 [OC]

r/personalfinance Ashley_Schaefer_BMW

Inheritance and best plans moving forward

I lost my father last fall, which was devastating to me and I have just been avoiding taking care of his estate since then because there's been so much else to do. He worked hard his entire life and when he passed had very few liquid assets, but did leave me two properties. One, a commercial building that's appraised at 1.15M in a desirable location with a thriving business, and the other was his home, which is assessed at about 550K. Both are fully paid off and are now mine. The business inside the commercial building was his life's work, and I will be ultimately passing the business portion onto one of his employees and just serving as the landlord for that property.

I am unsure what to do with the residential property. It is about 30 years old, in mostly good shape (would need to be repainted inside and outside, have some minor issues repaired). I'm less than 10 miles away and it might be able to rent for 3K a month. I grew up in the house and it holds some sentimental value, but it also hurts to be there as it's a snapshot of a bygone world.

My original plan was to sell the house and purchase a rental with more upside, such as something in a vacation area that I could rent out short term and have a place to stay, but 500K doesn't stretch far in that regard. The commercial property will never be sold unless I get into a situation where I have no choice.

So I'm trying to figure out the best way forward with the residential property. Keep it, rent it and leverage it to purchase something else? Sell it and use the money to buy something else to rent? Just rent it and don't bother with anything else?

I don't have an immediate need for cash/liquidity. I own a house worth about 900k with about 250K left on the mortgage at 3.25/30yr. In my IRA I have about 750K, another 200K in investments and about 600k in cash, much of which will be trickled into investments over the next year. I have a job that pays about 170k a year, but AI might torpedo that in the near future. No other debts aside from the mortgage, no wife, kids or siblings to worry about. I did just recently sign up with a wealth management company to help advise me on all of this stuff, but I wanted to seek out opinions of those elsewhere.

I'm in Virginia for context, and the areas I was considering for vacation rentals would be along the Chesapeake bay, which is a huge area with plenty of waterfront property, or perhaps OBX, which I think is further and further out of reach due to costs and associated upkeep with a beach home.

r/OldSchoolCool VanillaaRocket

Shirley Slade, WASP Pilot. 1943.

I want to hear her stories during the war

132 16
Reddit
r/Art LexAlmightyreal

Memento Mori, Lex Almighty, Digital, 2026

r/PhotoshopRequest Practical-Chart-9915

remove the shadow on top left corner of bag!

r/OldSchoolCool playboy

Bo Bussmann poses for J. Barry O’Rourke for the September, 1967 Playboy Issue

r/findareddit SureTurnover1364

I'm desperately looking for someone who could lend me with their number

I urgently need a phone number to use the chat app KakaoTalk, but my number is already taken... Could I borrow anyones number just once so I can verify it via text message? Borrowing your number won't cause any problems; it's just a one-time verification process. But that's assuming you don't use KakaoTalk. pls i need desperate help.

r/AbandonedPorn Happy-Brother-8658

An abandoned brick building with overgrown windows

r/findareddit SmellGooot

Hello🌺🌼🌸

If interested I am selling all the smell goods🌿 Hit my chat😘

r/AbandonedPorn shermancahal

Wheatland Bridge over the Shenango River, Pennsylvania Railroad, PA USA [OC][2048×1367]

The Wheatland Bridge formerly carried the Pennsylvania Railroad’s Erie & Pittsburgh Division across the Shenango River in western Pennsylvania, and traces its origins to the Erie & Pittsburgh Railroad, completed through the region in 1864. A wooden crossing was replaced by a steel Pratt through truss in 1899, serving a heavily industrial corridor that moved coal, steel, and freight for more than half a century.

Following Penn Central’s formation and later Conrail’s consolidation, rail traffic declined, and the line through Wheatland was abandoned in 1982. Today, the bridge is reached by an easy hike along the former right-of-way and is being studied for inclusion in a future rail-to-trail project along the Shenango River, preserving the corridor as a visible remnant of the region’s railroad past.

I've posted more photos and the history of the bridge here.

r/findareddit redditsux___

Looking for a subreddit that actually helps people find fitting subreddits

I'm looking for a subreddit that actually helps people find fitting subreddits for their topic. Scrolling through this subreddit, I'm seeing a lot of posts barely get answers and being downvoted because people here apparently judge the content instead of just assisting with finding the proper subreddit. So I am looking for other subreddits that can actually help redditors find fitting subreddits

r/comfyui brilliant_name

LTX2 crashing often on 3090

I'm trying out LTX2 with this 12gb workflow. Ostensibly it should work with 12gb VRAM, I have 24GB.

It will crash Comfy very often. Sometimes it will work or a couple of generations, then crash, sometimes, one generation and then crash. Sometime just crash.

Using latest portable ComfyUI, lates drivers etc.

Any ideas how to debug?

got prompt

VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16

Requested to load VideoVAE

FETCH ComfyRegistry Data: 90/124

loaded completely; 21780.80 MB usable, 2331.69 MB loaded, full load: True

FETCH ComfyRegistry Data: 95/124

CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16

Requested to load LTXAVTEModel_

loaded partially; 21724.80 MB usable, 21574.80 MB loaded, 4389.92 MB offloaded, 150.00 MB buffer reserved, lowvram patches: 0

FETCH ComfyRegistry Data: 100/124

0 models unloaded.

Unloaded partially: 126.06 MB freed, 21448.74 MB remains loaded, 150.00 MB buffer reserved, lowvram patches: 0

gguf qtypes: F32 (2140), BF16 (26), Q4_K (1008), Q6_K (336)

model weight dtype torch.bfloat16, manual cast: None

model_type FLUX

Requested to load LTXAV

Unloaded partially: 13987.50 MB freed, 7461.24 MB remains loaded, 562.50 MB buffer reserved, lowvram patches: 0

FETCH ComfyRegistry Data: 105/124

loaded completely; 14238.89 MB usable, 12241.97 MB loaded, full load: True

0%| | 0/8 [00:00

50%|██████████████████████████████████████████ | 4/8 [00:08<00:08, 2.24s/it]FETCH ComfyRegistry Data: 115/124

100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:18<00:00, 2.27s/it]

FETCH ComfyRegistry Data: 120/124

Requested to load VideoVAE

Unloaded partially: 787.50 MB freed, 6673.74 MB remains loaded, 562.50 MB buffer reserved, lowvram patches: 0

Unloaded partially: 114.44 MB freed, 12133.78 MB remains loaded, 6.77 MB buffer reserved, lowvram patches: 0

loaded completely; 2512.79 MB usable, 2331.69 MB loaded, full load: True

lora key not loaded: text_embedding_projection.aggregate_embed.lora_A.weight

lora key not loaded: text_embedding_projection.aggregate_embed.lora_B.weight

Requested to load LTXAV

Unloaded partially: 1350.00 MB freed, 5323.74 MB remains loaded, 562.50 MB buffer reserved, lowvram patches: 0

FETCH ComfyRegistry Data [DONE]

[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes

FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json[ComfyUI-Manager] Due to a network error, switching to local mode.

=> custom-node-list.json

=>

FETCH DATA from: E:\ComfyUI\ComfyUI\custom_nodes\comfyui-manager\custom-node-list.json [DONE]

[ComfyUI-Manager] All startup tasks have been completed.

E:\ComfyUI>echo If you see this and ComfyUI did not start try updating your Nvidia Drivers to the latest. If you get a c10.dll error you need to install vc redist that you can find: https://aka.ms/vc14/vc_redist.x64.exe

If you see this and ComfyUI did not start try updating your Nvidia Drivers to the latest. If you get a c10.dll error you need to install vc redist that you can find: https://aka.ms/vc14/vc_redist.x64.exe

r/Anthropic bluturtle11

Sonnet 3.7 via API?

I would like to use Sonnet 3.7 as I had a good experience with it previously. I noticed that openrouter has it still via the API. However openrouter doesnt have the projects feature. Any tool which combines both?

r/mildlyinteresting IModernVerseI

My cat is so small I can easily put him in a sock.

5448 96
Reddit
r/funny NicetoNietzsche

Friend of mine got this in the mail, such considerate neighbors

(reuploaded, forgot to black out some identifying info)

8555 704
Reddit
r/ClaudeAI Maas_b

Anyone else experiencing compaction issues in Claude desktop with Opus 4.6

Since this morning, it seems my chat context window is extremely limited with opus 4.6. after 4 or 5 messages the chat starts compacting and i'm unable to send any messages after, each new message results in a new compaction exercise. extended thinking doesnt seem to impact the duration. Moving back to Opus 4.5 seems to resolve it, so it looks like it's opus 4.6 related. anyone else experiencing this?

r/personalfinance KittyScholar

I want to move money from a 401k into an IRA without paying taxes

I know this is a thing I’m theoretically allowed to do, but I know there’s usually some trick or fine print or something that I need to know about to make sure it doesn’t get taxed. I’m hoping someone can chime in on anything I need to do to make sure I don’t screw this up.

I have money in a 401(k) with Human Interest from a previous job. I want to move it to my Charles Schwab account, so my money is in one place. Right now I have an individual brokerage account with Charles Schwab.

I see a place on Charles Schwab to open a “Rollover IRA” account, but on Human Interest, it says that some IRAs will have tax owed (Roth) and some will not (traditional). How do I make sure the rollover account is the right type? Are there any other things I need to know or do to prevent this money being taxed?

Secondarily, the Human Interest account says I still have over a thousand dollars that haven’t vested yet. I left this company about four years ago, does this sound right? Or is that a weird timeline and I should call someone from the old company?

Thanks in advance for any help!

r/Anthropic Signal_Question9074

Built the first skill for Agent Teams (Opus 4.6 feature) - coordinates teammates with Manus-style shared files

The Agent Teams announcement yesterday got me excited but also worried about coordination overhead.

I built planning-with-teams to handle this. It's based on Manus principles - the AI company Meta just bought for $2 billion that used markdown files as "working memory on disk."

Core insight: Each teammate has isolated context. Shared files become the team's collective memory.

Creates 3 files:

  • team_plan.md - What each teammate owns, current status
  • team_findings.md - Discoveries written immediately (before context fills up)
  • team_progress.md - Session activity log

All teammates follow these rules:

  1. Re-read team_plan.md before major decisions
  2. Write findings immediately (context is volatile, disk is persistent)
  3. Log ALL errors (saves other teammates time)
  4. Message lead when phase completes

Real example from debugging: Spawned 4 agents with different hypotheses (WebSocket issue, timeout, state corruption, memory leak). Had them document evidence FOR and AGAINST in shared findings. They debated, challenged each other, converged on root cause (timeout too short + no keepalive).

Single agent would've anchored on first plausible theory. Multiple agents actively trying to disprove each other = better answer.

GitHub: https://github.com/OthmanAdi/planning-with-teams

Requires CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 in settings.

Token economics: 3-5x more expensive than single agent. Only worth it when parallel exploration genuinely adds value.

Curious how others are using Agent Teams. What coordination patterns are you finding?

https://preview.redd.it/vu3fe9cc1whg1.png?width=1329&format=png&auto=webp&s=0c4d97c6b9c6890f8d798d88499bb75f44bbdf35

https://preview.redd.it/hoo013cc1whg1.png?width=821&format=png&auto=webp&s=66e02ea3eea4e0654512dd8260056cb53b8fba80

r/Art yesfoldingchair

Two Chairs, John T, Oil on Canvas, 2025 [OC]

r/leagueoflegends GupasMegerg

[Video] LCS Week 2 Full Breakdown (No Experience Needed)

I've been enjoying LCS a ton this year, and have really loved making these breakdown videos!

Also, if you enjoyed my video where I asked all the LCS teams why you should root for them, I got another response back from a team, and I dropped it in at their game breakdown!

For full transparency: this video isn't exactly what I'd hoped to put out this week, but my family got wombo comboed by the stomach flu this week and I didn't have as much time as usual, so I apologize if some parts are sloppier than usual. Thanks again to everyone who's been so supportive!

Enjoy, and let me know who you're rooting for in the LCS!

r/Seattle huskymomm

Any Eddie Bauer corporate employees on here?

I am a former employee desperately trying to get in touch with the HR department. Every contact # I have has been disconnected. Is anyone able to help?

r/ClaudeAI One-Problem-5085

Another Claude Opus 4.6 vs GPT-5.3-Codex Comparison

Claude Opus 4.6 crushes massive codebases with 1M tokens, perfect for enterprise debugging marathons, while GPT-5.3-Codex owns autonomous coding benchmarks like SWE-Bench. I think GPT-5.3-Codex has better on-paper upgrades, but more testing needs to be done. Devs, test both before simping!

Also, the pricing for Opus is unchanged, but Codex 5.3 will likely still offer better cost:output.

https://preview.redd.it/03c1yndnpvhg1.png?width=1024&format=png&auto=webp&s=c7bb10d228337867fb1e59cc1d57b2d73c7ed290

https://blog.getbind.co/claude-opus-4-6-vs-gpt-5-3-codex-which-one-is-better/

r/comfyui Firm-Blackberry-6594

Console colors in custom node and general logging questions

Have a few custom nodes and some random prompt variants have console output with print or a newly added logger. the output is using the standard color (white) despite me adding specific color codes to the outputs, like print("[Kaleidia Nodes]: \033[92mLoaded\033[0m") it just results in white on black "[Kaleidia Nodes]: Loaded" completely ignoring and stripping out the color codes.

For the logger outputs it adds the correct additions like "[Kaleidia Nodes Warning]..." and "[Kaleidia Nodes Debug]..." but also no defined colors. here is also the question what the best way is to add to the console, with print or with a logger?

Is comfy stripping out colors somehow and how can the colors be used then, seen other nodes add colored outputs in the init phase on the console...

Not sure if it is relevant, I use the portable version of comfyui.

r/Jokes Historical-Buff777

A professor and a student walk into a bar. The professor says to the barman: "Can I have a glass of H20?" The barman hands over the drink and he walks away. Wanting to fit in, the student says to the barman: "I'll have a glass of H20 too."

His funeral is tomorrow.

r/AI_Agents BadMenFinance

We’re launching an AI agent marketplace for SMBs. Only 10 agents allowed.

We just launched an AI agent marketplace for small and medium-sized businesses.

Not an AI tools directory.
Not an automation list.

Creators list AI agents that do specific work like support, lead qualification, SEO research, reporting, or outreach. Businesses subscribe to agents instead of buying more software.

Creators build and host their own agents.
The marketplace handles discovery and payments.
Revenue is shared automatically.

We are limiting the first launch to 10 agent listings total to keep quality high and avoid noise.

It feels like we are entering an AI agent economy where people hire digital workers instead of configuring tools. We are trying to build the marketplace layer for that shift.

Would love honest feedback from builders and operators:
Does this model make sense?
What would make you trust or not trust an AI agent to do real work?

Happy to answer questions in the comments.

*I will add the link in the comments for those interetsed

r/oddlysatisfying Friendly-Standard812

Driveway removal process

5087 266
Reddit
r/DecidingToBeBetter Witty_Ant_5239

I don't know what's happening to me

I'm not sure if this is the right subreddit to post, but something tells me it might be. I feel I'm changing, and not for the better. I struggle to understand what exactly is going on though.

Some background: I've always been an introvert kind of guy, but since late teenage-hood until around 19 or 20, I never had trouble meeting new people and being outgoing; what is more, I was known for my quirky sense of humor, perhaps even joking too much. Then, I got into drugs: there was a lot of weed, some psychodelics, different party drugs. I moved out from my parents' at 18, got introduced to substances and I quickly spiraled into a place where I wasn't taking good care of myself. I got severely depressed: dropped out of college, didn't see the point in getting out of bed, stayed up all night and slept during the day. Thinking of ending things (never acted on it though). I confided in my mum, and she helped me develop healthier habits, slowly I was starting to see the light more; I got into my first serious relationship and latching onto a more functional person (who, looking back now, was also mummying me) helped me function better and feel more or less alright as a result. However, since the depressive period, I've always felt the need to drink or take something in order to be able to socialize. It took me getting to a point where I'd get heart palpitations interacting with a cashier at a supermarket to realize I was suffering from social anxiety. A psychiatrist prescribed me Zoloft, which did get rid of the anxiety, or some 90% of it I'd say. I kept taking the drug for about 7 years, at which point I felt stable enough that I decided to taper off (under medical supervision), and for more than half a year now I've been off it. The social anxiety didn't come back, I don't get an elevated heart rate and can even address groups of people more or less calmly.

Here's what worries me, though: I feel myself withdrawing socially, and it has been going on for the past couple years, more or less, even before I got off Zoloft. The friends I used to hang out with regularly? I don't know what to talk to them about. It's almost as if I couldn't crack a joke and laugh with the group if my life depended on it. When my friends talk about something, even when I have something to say about the topic, I just feel: eh, why bother saying anything. Most social occasions I just wait for them to be over. Now, I've been in a new relationship for about a year with an amazing person, and I'm afraid it might take its toll on it, too. I've always found it easier to interact 1 on 1 (I'm a bit neurodivergent, got diagnosed with ADD, not medicated - can't stand the comedown from the meds), so dating and getting to know someone hasn't been so hard, I ask questions, am considerate, it's not strange that she got into a relationship with me. And even though I will have no problem talking about my feelings, or her feelings, I feel that I am just simply not fun on a daily basis. I can't make conversation about trivial stuff, joke around - sometimes, a flash of my old, playful personality will still shine through, but I feel its more and more rare. I can talk about more tangible stuff, or how I feel - I can't manage to get into this lightweight banter that is needed and seems as natural as breathing to most other people. Now, before you chalk it up to my neurodivergence - it didn't use to be like that. I can feel changing into a less playful, more withdrawn person, and it scares me. I should also add that I don't do drugs anymore, don't smoke weed, and barely even drink. I exercise, try to eat healthy. I thought these changes would make me feel better, and I should say that I do feel alright, physically, and even mood-wise. I just feel like I can relate to people less and less.

I'm sorry for the rambling tone, I needed to get it off my chest. I haven't brought it up with anyone in my life (yet). Does what I describe sound familiar to you? Of course, seeing a therapist would be the obvious course of action. However, where I live it is expensive and I can't afford it right now. What can I do to try and stop this transformation into a dull person?

r/meme _sabir007_

Faaaaaaaaaaaaaaaaaa

r/interestingasfuck Aarnavaperson

The infamous pepsi commercial, with a happy ending :)

r/personalfinance Express_Specific6194

How to save as a 25yr old

I grew up in a pretty dysfunctional/poor home where money was not taught to us and I’ve worked my way up career wise in my industry without a college degree, but I want to be able to save more. Is it bad that I only have $7k in my bank account? I have a salary job and thought 7k in this economy wasn’t bad but I look it up and Google says you should have over 20k saved up and im like DAMN

r/personalfinance IndexBot

Weekend Help and Victory Thread for the week of February 06, 2026

If you need help, please check the PF Wiki to see if your question might be answered there.

This thread is for personal finance questions, discussions, and sharing your success stories:

  1. Please make a top-level comment if you want to ask a question! Also, please don't downvote "moronic" questions! If you have not received your answer within 24 hours, please feel free to start a discussion.

  2. Make a top-level comment if you want to share something positive regarding your personal finances!

A big thank you to the many PFers who take time to answer other people's questions!

r/AskMen nitram_20

Is it okay to use the same spoon for soup and dessert (thereby needing to wash just 1 spoon instead of 2)

Hey, I wanted to ask the fellow men of the internet if it's okay to use a soup spoon for desserts (and therefore save yourself from needing to wash 2 different spoons)?

For context I had soup for lunch today and pudding for dessert so naturally I used the same spoon for convenience, however my grandma didn't appreciate my resourcefulness.

r/Adulting YAMASAN778

👋Welcome to r/GenYgoals - Introduce Yourself and Read First!

Hey r/Adulting 👋 I recently started r/GenYGoals, a small but growing space for people navigating the long-term version of adulting—mental health, careers that don’t go in straight lines, staying healthy as we get older, and the life lessons no one really prepared us for. A lot of us are doing all the “right” adult things and still feeling tired, unsure, or behind. This community is meant to be a calm, honest place to talk about that—without hustle culture or judgment. If this resonates, you’re very welcome to stop by, read the pinned intro post, and introduce yourself. Even just lurking is fine. Wishing you all patience and grace while we figure this out together.

r/comfyui Traditional-Pop-9206

Help me!

Maybe someone can help me ? I have ring camera footage of me walking out the door with my sister. My gf stays across the street, I told her I was somewhere else. I have a brother that looks similar to me, I want to face swap some of the footage with my brothers face and change the color of my jacket. I know it sounds stupid but I need to get it done. I found someone to do it for 30 bucks anyone know where I should start or if there are trust worthy people that could do it ? I need like 5 secs of footage. I know I’m in for a bashing on the internet but there is a lot behind it. I’m not saying I’m right but I will explain to any interested in hearing the story. Just the internets help today !

r/ClaudeAI aaddrick

Opus 4.6 on the 20x Max plan — usage after a heavy day

Hey! I've seen a lot of concern about Opus burning through the Max plan quota too fast. I ran a pretty heavy workload today and figured the experience might be useful to share.

I'm on Anthropic's 20x Max plan, running Claude Code with Opus 4.6 as the main model. I pushed 4 PRs in about 7 hours of continuous usage today, with a 5th still in progress. All of them were generated end-to-end by a multi-agent pipeline. I didn't hit a single rate limit.

Some background on why this is a heavy workload

The short version is that I built a bash script that takes a GitHub issue and works through it autonomously using multiple subagents. There's a backend dev agent, a frontend dev agent, a code reviewer, a test validator, etc. Each one makes its own Opus calls. Here's the full stage breakdown:

Stage Agent Purpose Loop? setup default Create worktree, fetch issue, explore codebase research default Understand context evaluate default Assess approach options plan default Create implementation plan implement per-task Execute each task from the plan task-review spec-reviewer Verify task achieved its goal Task Quality fix per-task Address review findings Task Quality simplify fsa-code-simplifier Clean up code Task Quality review code-reviewer Internal code review Task Quality test php-test-validator Run tests + quality audit Task Quality docs phpdoc-writer Add PHPDoc blocks pr default Create or update PR spec-review spec-reviewer Verify PR achieves issue goals PR Quality code-review code-reviewer Final quality check PR Quality complete default Post summary

The part that really drives up usage is the iteration loops. The simplify/review cycle can run 5 times per task, the test loop up to 10, and the PR review loop up to 3. So a single issue can generate a lot of Opus calls before it's done.

I'm not giving exact call counts because I don't have clean telemetry on that yet. But the loop structure means each issue is significantly more than a handful of requests.

What actually shipped

Four PRs across a web app project:

  • Bug fix: 2 files changed, +74/-2, with feature tests
  • Validation overhaul: 7 files, +408/-58, with unit + feature + request tests
  • Test infrastructure rewrite: 14 files, +2,048/-125
  • Refactoring: 6 files, +263/-85, with unit + integration tests

That's roughly 2,800 lines added across 29 files. Everything tested. Everything reviewed by agents before merge.

The quota experience

This was my main concern going in. I expected to burn through the quota fast given how many calls each issue makes. It didn't play out that way.

Zero rate limits across 7 hours of continuous Opus usage. The gaps between issues were 1-3 minutes each — just the time it takes to kick off the next one. My script has automatic backoff built in for when rate limits do hit, but it never triggered today.

I'm not saying you can't hit the ceiling. I'm sure you can with the right workload. But this felt like a reasonably demanding use case given all the iteration loops and subagent calls, and the 20x plan handled it without breaking a sweat.

If you're wondering whether the plan holds up under sustained multi-agent usage, it's been solid for me so far.

Edit*

Since people are asking, here's a generic version of my pipeline with an adaptation skill to automatically customize it to your project: https://github.com/aaddrick/claude-pipeline

60 50
Reddit
r/AI_Agents Independent-Share-71

I’m building a personal AI assistant. Day 3: what surprised me after talking to real users

Day 3 of building a personal AI assistant in public.

Yesterday I actually stopped “building” and spent most of the day talking to people instead.

What surprised me:

• Most people don’t want more AI features

• They want less thinking

• Nobody cares how smart the model is, they care if it remembers their stuff and saves time

A few things users explicitly asked for:

• “Can it remember how I like things done?”

• “Can it help me plan my day without me explaining everything every time?”

• “I don’t want prompts. I want outcomes.”

So today I’m rethinking the product:

Less chatbot.

More quiet assistant that just handles things.

If you’ve ever tried using AI daily and bounced off it:

• What annoyed you?

• What would make you actually keep using it?

I’m building this for myself first, but I don’t want to build in a bubble.

Happy to share what I’m working on next if people are interested.

r/OldSchoolCool LoudRevolution9163

Bob Marley with Jamaican actress, photographer, and ex-girlfriend Esther Anderson in Trinidad, 1973.

He would have been 81 years old today (born February 6, 1945)

14 2
Reddit
r/homeassistant elliptical-wing

New to HA, wondering about connectivity

Hi all, so I'm new to Home Assistant. Not totally new to Home Automation, but what I have is very simple, and I'd like to do more. What we have:

One Philips Hue Hub in the house, indoor Hue bulbs triggered by indoor motion sensor, multiple outdoor garden lights with outdoor motion sensor, etc - all controlled through the Philips Hue app.

One room of Wiz bulbs - controlled through the Wiz app.

What I'd like to do is add Zigbee contact sensors to a garage door, and then start to look at what else I could do with Home Assistant, and products outside of Philips Hue. Especially home security stuff.

The garage is a problem for poor wifi signal due to thick walls. But I have run wired ethernet into there so I could buy a wifi AP for it - but I assume that won't be of much use for Zigbee devices? If they could utilise the Hue mesh then that signal should be strong due to nearby Hue outdoor lights.

I've seen the Home Assistant Green, which looks interesting to host HA.

And heard of Zigbee dongles to provide network connectivity. But do I need those? Can I use my existing Philips Hue Hub/mesh to hang Zigbee devices off? Any advice on how best to provide Zigbee networking around the home and in the garage would be very welcome!

r/personalfinance YouBlurtWeNeedYou

40 with minimal savings

Over the past 3.5 years, I spent my meager savings of about 25k usd while changing careers and initially freelancing + doing a masters (free - EU)

The freelancing was inconsistent but I now work full time remotely earning just over 3k usd net, plus some other intermittent income. I’m living in Southeast Asia with my gf who earns more than double.

So one plus is we make enough to live comfortably here. Dont have any plans to return to UK or Ireland in the immediate the future. This year I will try to get a better paid position or find more side work to boost my income.

Will comfortably have about 12-1400 usd per month to save. Planning to invest about 850 usd into index funds each month and a 200 usd in btc and eth. Only have about 8k usd in my emergency fund currently.

I want to continue investing in the above plan monthly from now on.

Assuming it still exists in 30 years I will have a full UK pension and I have good health insurance.

Trying to get on a better path financially but how much trouble am I in? And will I be able to right it if I do the above and continue to progress in my career.

Any constructive input welcome.

r/Showerthoughts GlassPanther

It must *really* suck to be the guy processing all the ColoGuard tests.

21 9
Reddit
r/sports Oldtimer_2

A week after rupturing the ACL in her left knee, Lindsey Vonn has successfully completed her first training run

4971 660
Reddit
r/ProgrammerHumor Sad_Impact9312

theGreatEqualizer

r/ClaudeAI BLubClub89

Built an x402 payment processor with Claude Code - enables AI agents to pay for APIs autonomously

 I used Claude Code (Opus 4.5) to build [Nory](https://github.com/TheMemeBanker/x402-pay), an open-source payment processor that lets AI agents make payments programmatically.

  **What I built:*\*

  An implementation of the HTTP 402 "Payment Required" protocol. When an agent hits a paywall:

  1. Server responds 402 with payment requirements

  2. Agent signs a crypto transaction

  3. Agent retries with payment proof

  4. Server grants access

  No human approval needed for each transaction.

  **How Claude helped:*\*

  Claude Code handled most of the heavy lifting - the Solana/EVM transaction verification logic, the settlement pipeline, API design, and even helped audit for security before I open-sourced it. The whole thing was pair-programmed with Claude over several sessions.

  **Technical details:*\*

  - Sub-400ms on-chain settlement

  - Supports Solana + 7 EVM chains (Base, Polygon, Arbitrum, etc.)

  - Includes OpenAPI spec so agents can use it as a tool

  - Echo mode for testing (real transactions, 100% refunded)

  **Free to try:*\*

  Completely open source (MIT). You can self-host it or use the hosted version at noryx402.com. The npm package is `nory-x402`.

  Curious if anyone else is thinking about how agents will handle payments as they become more autonomous.

  GitHub: https://github.com/TheMemeBanker/x402-pay

r/programming axsauze

Claude Code: It's not replacing devs. It's moving them to a higher altitude.

r/SipsTea dresdenkael

Honestly had me thinking sometimes

124 25
Reddit
r/findareddit GyronMainesArab28902

In which subreddit can I ask what country the people in the photo are from ?

r/WouldYouRather Kyoifis

WYR always know when I’m taking a shit or know when i’m horny?

Every time i take a shit you get a notification on your phone or every time i’m horny you get a notification on your phone.

View Poll

r/Adulting ChinoSav22

Out of options

Needing help ASAP. Officially sleeping outside in freezing temperatures with my fiancée and our kitties. It is unbearably cold outside. Any type of help would be appreciated. Donations toward a hotel, a place to stay for all of us (we are a package deal), a few hours out of the cold. Anything. Donations are accepted at $tjon18 on cash app. Thank you

r/SideProject Remarkable_Brick9846

Built a bot protection service for websites after seeing scrapers hammer my projects

Hey folks,

After getting frustrated with bot traffic absolutely wrecking my analytics and putting load on my servers, I built ShieldSilo (https://shieldsilo.com).

It's a lightweight bot protection layer you can add to your website. The idea is simple - detect and block automated traffic before it becomes a problem, without the complexity of enterprise solutions.

What it does: - Fingerprinting to detect headless browsers - Rate limiting for suspicious patterns - Dashboard to see what's hitting your site - Works with any stack (just add a script tag)

Still early days, but the free trial is live (card required, cancel anytime). Would love feedback from this community since many of you probably deal with the same bot issues on your projects.

Happy to answer any questions about the tech or approach!

r/explainlikeimfive Additional-War-837

ELI5. Why savings don’t help when there is inflation ?

I mean why? To illustrate this, why isn’t a video game cheaper for those earning interest on their savings ?

r/BobsBurgers j3rddegree

I just the movie again and ....

I'm content. My wife and I went to see it in theaters and I remember being bored with it the first time. It felt like a plot they already did before. The jokes wasn't so funny and it felt like the problems was kinda low stakes. But while watching it again I can appreciate the easter eggs,the call backs and while not my favorites, the songs was good. I just wished they would have made the arcs bigger.

Spoilers for anyone who haven't seen it

Tina giving Jimmyjr a gift isn't too crazy for her. Gene performing is a regular gene thing. And losing the restaurant is almost every episode. The only one I think worth caring about is Louise losing her hat and while hearing the backstory is touching. At the end it's like "oh ok she lost the hat" . I'm not looking for anything to uproot the series so it can return to the status quo. But hell they could have lost the restaurant entirely, it could have been bought by fishordur cousin and now he can be a real antagonist to the family before instead of them just happen to drop in the situation.

Overall it's was (and I hate to use this word because of how over used it's been ) mid. It feels like a high budget(because the animation is enjoyable) two parter. But what do y'all think?

r/TwoSentenceHorror BigBlueMountainStar

In the creative drawing class today at school, I asked my 7year old students to draw what they see when they close their eyes.

Now I understand why little Jade is so tired all the time.

10 1
Reddit
r/whatisit Dependent_Plenty5905

What is this on my pillow?

Is this a cigarette burn? I don’t smoke but my roommates are crazy, so I was wondering.

r/LocalLLaMA JackStrawWitchita

CPU-only, no GPU computers can run all kinds of AI tools locally

While it’s great that so many people on LocalLLaMA are pushing the envelope with what can be done locally with expensive setups, we need to remember that a lot can be done with very minimal machines.

I’m talking about CPU-only locally run LLMs. That’s right, no GPU!

I’m running Linux Mint on an old Dell optiplex desktop with an i5-8500 processor, 6 threads and 32GB of RAM. You can pick up one of these refurbished for something like $120.

And with this humble rig I can:

Run 12B Q4_K_M gguf LLMs using KoboldCPP. This allows me to have local chatbot fun using quite highly rated models from https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard. Response times are fast enough as long as you keep the initial prompt below 800 tokens. And with context-shifting it remembers stuff during the session. Uncensored, private RP hilarity for free! You can even add in kokoro_no_espeak for text to speech so your RP characters talk to you with only a few seconds delay. The trick is to find good models to use. For example, DreadPoor/Famino-12B-Model_Stock is rated a 41+ on writing, which is better than many 70B models. You don’t need big horsepower for fun.

You can also use these models for writing, coding and all sorts of applications. Just need the patience to try out different local models and find the settings that work for you.

I also run Stable Diffusion 1.5 locally for basic image generation, inpainting and so on. Again using KoboldCPP and Stable UI. OK, it takes 3 minutes to generate a 512x512 image but it works fine. And you can experiment with loras and many SD 1.5 models. All 100% free on old gear.

I’m also running Chatterbox TTS for voice cloning voice-over projects. Works surprisingly well. Again, it takes a couple of minutes to generate a 75 word audio clip, but it does work. Vibevoice TTS also works on this old rig but I prefer Chatterbox.

And then there are amazing tools like Upscayl which upscales images locally incredibly well. Just gotta experiment with the models.

I’ve used ollama transcriber which converts audio files into text amazingly well. Just point a spoken word .WAV at it and then go make dinner and when I get back, the text is there.

There are many other local LLMs and tools I’ve used. These are just the tip of the iceberg.

Video? Nope. Music generation? Nope. I’ve looked and tried a few things but those big resource tasks need serious horsepower. However, it’s quite possible to use your old desktop computer for text-based tasks and then rent online GPU for one-off tasks and use the big online services for other tasks. It would still probably work out to be less costly.

I know I’m not the only one doing this.

CPU-only people: tell us how you’re using AI locally...

227 71
Reddit
r/UnusualVideos rutgerbadcat

Forgot height is a friend-

r/personalfinance jim-time

Selling house to family below FMV and gift tax?

r/whatisit Full-Connection-6280

Found while cleaning out desk

r/toastme lawshington

Had a rough week. 6'8 guy doing Ph.D. stuff! Toast me?

57 27
Reddit
r/AbandonedPorn shermancahal

Abandoned barn along Tenmile Creek, Greene County, PA, USA [OC][2048×1534]

57 1
Reddit
r/SideProject Artie2877

The Backrooms Experience

r/LocalLLaMA BLubClub89

Open-sourced an x402 payment processor for AI agents - lets LLMs pay for APIs programmatically

Been working on this for a while and just open-sourced it: [Nory](https://github.com/TheMemeBanker/x402-pay)

  **The use case:*\* You're building an agent that needs to access paid APIs, fetch premium data, or use services that cost money. How does your agent pay? Credit cards need human intervention. Subscriptions are inflexible.

  **x402 solves this:*\* It's an HTTP protocol where:

  1. Agent requests a resource

  2. Server responds with HTTP 402 + payment requirements

  3. Agent signs a crypto transaction

  4. Agent retries with payment proof

  5. Access granted

  All automatic, no human needed.

  **What I built:*\*

  - Sub-400ms settlement

  - Works on Solana + 7 EVM chains

  - OpenAPI spec so agents can use it as a tool

  - npm package: `nory-x402`

  - Echo mode for free testing (sends real tx, refunds 100%)

  Fully open source (MIT). The API itself is at noryx402.com.

  Would love to hear if anyone's working on agents that need payment capabilities. What's your current solution?

r/AskMen hugginv

What’s the one trait in a woman that makes you personally obsessed with her?

r/Seattle mrfowl

It's not ICE ...but anyone know what's going on at the Mukilteo Ferry?

I dropped someone off at the ferry this morning and saw these guys. There must have been 20 or 30 of them all around the ferry terminal. It sort of looked like they were just having a meeting? I couldn't read what their vests said, so I don't know if it's Coast guard, state troopers, police, or what...

12 15
Reddit
r/comfyui Ryokukitsune

GPU install not setting CUDA_Home

I am having a hell of a time with reinstalling ComfyUI after I broke it two days ago. I had a weird non-standard install that hadn't been properly updated in months and while trying to upgrade to the Hunyuan 3d 2.1 custom nodes it failed to start and due to the amount of jank I had put into it I figured I should just start over.

Getting the basics running (i.e. using the source install from github on my OS main drive [Linux Mint btw] seems to be fine, normal image gen and git installed custom nodes work. Where I run into trouble is installing the GPU driver for my Nvidia RTX 3060. the CU130, which I believe was installed on the last instance, package seems to install fine but when trying to install Hunyuan I get an error that CUDA_HOME is not set and despite thumbing through articles I can't seem to figure out how to resolve that. If I am understanding correctly that should be set by default when installing the pytorch.

This is about as close to a stock install as one can get so I don't even know where the default CUDA_HOME would be for my Python venv nor are any of the articles I've come a crossed explicitly clear how to set that variable.

to be clear this is installed to home/ComfyUI , I've installed the GPU torch and wheel again but I haven't tried installing Hunyuan yet because I don't want to get midway through the process and then wonder if I have messed it up by missing/skipping a step because of this weird behavior.

if someone can give me an idea of what to do when I get to that error I'll give the truncated terminal output if that doesn't resolve it.

Any help would be appreciated. Thanks!

r/Art Extension_Spirit8805

Renyn Liv Nichavis, Cuadrupl, Digital (krita), 2026 [OC]

r/mildlyinteresting Head-Community7540

Lenses formed from rain trapped in the mesh wall of a bus shelter.

89 3
Reddit
r/MostBeautiful Amazing-Edu2023

have a flower day!

r/pelotoncycle Ride_4urlife

Reddit Core 3.0 - Week 6 - you do you, boo

This week you’re going to focus on doing what you need most.

Take an unplanned day off? Go for it. Sleep in and blow off core? Yes. Need a core-free week? Do it.

I know I said the goal was to do core everyday. The first part of that is *to do core*. If one more plank is torture, take a break.

Make your break intentional to help you meet your core goals.

I’m a like a light switch, on or off, no dimmer. If you’re a dimmer, I’m envious! Taking breaks in your training can enable you to get to the end of the year integrating core in a way that fits you.

r/confusing_perspective itstartswithani

Ludovico Einaughty

96 5
Reddit
r/LocalLLaMA Good_Fill2623

Model for coding

What's the best model(s) for coding, general assistant, and creativity that can run on 8gb VRAM (RTX 5050) and 16gb ddr5. i have intel i7 14700 HX CPU, 1TB NVME SSD. I can enable swap ram but wouldn't really want to since that lowers the speed. Also, whats the best speculative decoding model for these?

r/meme Waste-Committee6

hmmmmmmm

HMMMMMMMMMM

r/WouldYouRather AskOk4380

which apartment WYR live in?

A:

Pros- good parking, good windows, no false fire alarms, cheaper, great bathroom ventilation, balcony

Cons- no in-unit laundry (community apartment one in walking distance), only twin sized beds, shower that tends to get extremely hot or cold out of nowhere, only one bathroom for 4 people

B:

Pros- full sized bed, in-unit laundry, two bathrooms shared between 4 people

Cons- poor bathroom ventilation = mold in toilet frequently??, extremely thin walls AKA no privacy, ugly view, difficult parking, more expensive

View Poll

r/n8n Aruscha

n8n Struggle: Email Forwarding (Body + PDF) to Telegram fails with "undefined" error - Need JSON Review

The Struggles: ​The Crash: I keep getting Cannot read properties of undefined (reading 'toLowerCase'). It happens in the Code Node when an email has no attachments or a different structure. ​The Logic: I need the Body as a text message and each PDF as a separate file. ​Missing Data: Sometimes the Telegram node is triggered but doesn't send the actual file. ​My Code Node logic (causing the crash):

json { "name": "My Problematic PDF Forwarder", "nodes": [ { "parameters": { "options": {}, "downloadAttachments": true }, "id": "e21c657c-8649-4364-9aa6-c06670f4c36a", "name": "Email Trigger (IMAP)", "type": "n8n-nodes-base.emailReadImap", "typeVersion": 2, "position": [0, 500] }, { "parameters": { "jsCode": "let results = [];\nconst items = $input.all();\n\nif (items.length > 0) {\n const firstItem = items[0].json;\n results.push({\n json: {\n type: \"text\",\n content: firstItem.text || \"(Empty)\",\n subject: firstItem.subject || \"No Subject\"\n }\n });\n\n for (const item of items) {\n if (item.binary) {\n for (let key of Object.keys(item.binary)) {\n const binaryData = item.binary[key];\n if (binaryData && binaryData.fileExtension && binaryData.fileExtension.toLowerCase() === 'pdf') {\n results.push({\n json: { type: \"pdf\", fileName: binaryData.fileName },\n binary: { data: binaryData }\n });\n }\n }\n }\n }\n}\nreturn results;" }, "id": "76af5694-6b28-45e7-8b1b-d77482b326e7", "name": "Robust Extractor", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [250, 500] }, { "parameters": { "rules": { "values": [ { "outputKey": "text", "conditions": { "options": {}, "conditions": [{ "leftValue": "={{ $json.type }}", "operator": "equals", "rightValue": "text" }] } }, { "outputKey": "pdf", "conditions": { "options": {}, "conditions": [{ "leftValue": "={{ $json.type }}", "operator": "equals", "rightValue": "pdf" }] } } ] } }, "id": "d2012cbc-0e79-4579-ac3d-e5e84ad56dc1", "name": "Switch Type", "type": "n8n-nodes-base.switch", "typeVersion": 3, "position": [500, 500] }, { "parameters": { "chatId": "643216842", "text": "=📧 *E-Mail*\n{{ $json.content }}", "additionalFields": { "parse_mode": "Markdown" } }, "id": "ba0c45d9-7d5a-413e-9034-3e22a79ead77", "name": "Telegram: Send Text", "type": "n8n-nodes-base.telegram", "typeVersion": 1.2, "position": [750, 400] }, { "parameters": { "chatId": "643216842", "operation": "sendDocument", "document": "data", "caption": "=📄 PDF: {{ $json.fileName }}" }, "id": "tg_pdf_send", "name": "Telegram: Send PDF", "type": "n8n-nodes-base.telegram", "typeVersion": 1.2, "position": [750, 600] } ], "connections": { "Email Trigger (IMAP)": { "main": [[{ "node": "Robust Extractor", "type": "main", "index": 0 }]] }, "Robust Extractor": { "main": [[{ "node": "Switch Type", "type": "main", "index": 0 }]] }, "Switch Type": { "main": [[{ "node": "Telegram: Send Text", "type": "main", "index": 0 }], [{ "node": "Telegram: Send PDF", "type": "main", "index": 0 }]] } } }

Any idea how to make the PDF extraction bulletproof without crashing the whole flow? Thanks!

r/singularity Distinct-Question-16

Atlas the humanoid robot shows off new skills

1167 217
Reddit
r/Damnthatsinteresting Friendly-Standard812

450GB of data and thousands of stacked images reveal the Moon’s mineral composition.

626 43
Reddit
r/whatisit Great_Beard_1

Found in Canada by the water main.

Top is just a faucet handle

r/PhotoshopRequest LackOfDad

Can anyone remove the shoe off Kojima?

r/personalfinance Saymynamewrongagain

Moving TSP accounts to non govt account

I am a former federal employee with a vested Thrist Savings Plan account with both a traditional and a Roth IRA.

My current employer does not have a 401k option, but I'd like to move my TSP accounts to one I can add to on my own, and eventually link my next job to. At the very least, I'd like to catch up and max out my contributions for 2025 as I haven't added to it in the last two years (I know, in between work).

I'm in my late 30s, have a decent amount saved in the TSP and a good chunk in liquid and CDs. I am not married, no dependants, and I don't plan on having kids. Any kind of inheritance is also unlikely so I know it is all on my own and I need to catch up as well.

I am not really sure what I'm looking for regarding investment management accounts (if that's the term for them), as it all is pretty overwhelming. I was talking to a guy from Northwestern in 2025 after a friend recommended him but I got the ick after he kept pushing for me to purchase life insurance, and at the time I wasn't really sure how much liquid I needed while I was moving from federal to private employment (I still technically am still in that space, not ready to drop most of my savings into investment but at the very least I can max out my IRA). I rent and am thinking about purchasing in the next couple years (hence also the desire to keep enough for a down payment in liquid), excellent credit, no debt, car is paid off and CCs are paid off each month. I do have kind of an odd job, and work generally 2-3 months 7 days a week, then have off 2-3 months, so my paychecks can be kind of odd, but the pay and vacation pay covers my "off time" and I can plan for it pretty well.

So who do you use for your investment accounts and why? Or, should I leave my TSP accounts as is and open a new account altogether? I honestly don't even know where to start when comparing companies, or what questions I should be asking.

r/whatisit Then_Composer8641

“X-ray” hview of house framing

This was visible yesterday afternoon in Sunnyvale, California.

r/personalfinance Quirky-Quacker

Anyone using Flinks with a production account for personal data aggregation?

Has anyone here set up a Flinks production account for personal use, similar to how people use Plaid for data aggregation?

I currently use Plaid to pull and normalize my financial data, but I’m considering adding Flinks as well (mainly for better Canadian coverage). Curious if anyone has:

• Successfully gotten a production account without being a company

• Run into approval, compliance, or cost hurdles

• Found Flinks meaningfully better or different than Plaid for personal aggregation

Thanks!

Yes, I’m aware of the usual Plaid concerns (data sharing, privacy, etc.). I’ve already used Plaid through multiple platforms in the past for things like account linking and ID verification, so from my perspective there’s no meaningful additional exposure here.

r/whatisit Dank_Blunt

Crumbs inside old Heineken bottle?

This bottle Is from probably 2016 when I was a teenager

I asked my dad to buy this for me to use as decoration in my room, I never drunk the beer because I know he would beat me if did lol

Nowadays I'm a beer guy but budweiser is my way to go

I assume those crumbs are some kind of fermentation

r/findareddit maiden69

(US) Is this website legit?

I'm not sure where to post this question. I want to buy a bag off this website but it just screams scam. The reviews are all positive without a single review under 5 stars. All the images look original though as if through a private seller. Any advice is appreciated.

https://kenna.nbcsi.com/index.php?route=common/home

r/interestingasfuck Lord_Krasina

One of the most mind-boggling fun facts I have ever heard is that richard, the actor who played Dumbledore, once got so drunk that he forgot he even owned a Rolls-Royce, only to remember it twenty-five years later.

722 120
Reddit
r/homeassistant salliesdad

Space heater state

I have a space heater that I control in HA with an IR blaster since it has a remote. I did this rather than a smart plug because it was a high load device. Sometimes people will turn the heater on or off with the remote or the button on the front. I’m trying to figure out how to accurately trek on-off state without using a smart plug that might have load issues. Any ideas?

r/PhotoshopRequest shoemonkeyz

I need a picture of a dragon, updating with additional details, paid request, $25 paypal F&F, NO AI

Needs to be high enough dpi to print a 20inx10in banner, dragon needs to be in the right hand 1/4 section.

This is roughly the body proportions and style. This is roughly the pose. These pictures are for reference only.

The dragon is jade green, vulpine head and ears, golden eyes, furred all over but the wings which just have fur on the fingers and pale green membrane with gold mottling. It's deep chested and long legged so sits very tall. Big shaggy mane. Front feet are more like hands and have long fingers. Long tail with a burst of white fur on the end. It's got a bracelet on the right wrist that's made out of rope wrapped around several times with a copper sunflower pin stuck to it. There is a small stick poking out of her mane behind her head like it got caught in her hair and hasn't noticed...the stick is important but the one in the reference pic is too large. No scales no spikes no horns.

The scene is essentially the dragon is in a forested setting, she's dirty and unkempt but trying to look tall, proud, and regal (think "disheveled royalty"). Her ears and tail twitch violently when she lies so she's suppose to look like she's concentrating hard to keep her ears under control and she's holding her tail in her right hand to keep it still because she's about to tell someone important a real whopper.

I don't know how tall of an order this is but there's always great stuff being done here so thought I'd put the request here as well. I'm putting this out on a few different sites and subs to see what comes up. I do not stay glued to the computer, though, I'll try to check back every hour or so. I only have paypal for payment.

Updating details and to stress the important parts:

The stick is important. It needs to be big enough to be visible, but not so large that it's hard to accept that the dragon just hasn't noticed it there. The picture attached above is strictly for reference...stick is too big and I think it's a little too high in that picture.

The bracelet is on the right wrist. The sunflower is just the head of the sunflower, no stem.

The dragon is dirty and disheveled, but is not injured or hurt in any visible way. "No" torn wings.

Rough image setup, dragon needs to be in the vicinity of the darker box and would be best if looking toward the right. The rest of the image can be full of forest.

My preference for the forest is that it's oaks and maples with shafts of sunlight filtering through, no pines or conifers. I'm not losing sleep over this part at all, though.

r/whatisit Substantial-Invite25

Stain on a wall

Found this stain at my girls apartment, it's been there for a long time and honestly I just asked her if it was cum lol, to which she replied she didn't know what was it, been there before I even started dating her so I don't really care but I'm curious about the pattern that formed.

r/comfyui AkaToraX

What are your XYZ+ testing practices?

I have a style LoRA trained with captions and without. 10 saved Epochs. I have a character LoRA trained with captions and without. 10 saved Epochs.

I want to test each epoch combination of style+character as well as testing different weight rarios as well as testing the captioned LoRAs versus the uncaptioned LoRAs. Not only that but I should test across a few different static seeds just to make sure I don't get stuck on a bum seed.

Do you all just brute force your way though that? Or just do several XY's for each Z+ combo?

Thanks for any tips and advice!

r/SideProject miodrage95

"Flight simulator" for difficult conversations — curious what people think

I’ve been working on a weird idea. Basically, it’s a browser game where you practice the conversations that usually make your palms sweat—like asking for a raise, setting boundaries with parents, or telling a friend they owe you money.

The core idea isn't to "win" the conversation, but to see the tradeoffs we usually miss in real life.

  • If you say "yes" to keep the peace, the game shows you the hidden cost: "You avoided conflict, but your resentment bar just went up 20%."
  • If you snap back, it shows you the damage: "You won the argument, but you just taught them that you're unsafe to talk to."

It tracks invisible stats like Credibility, Relationship, and Energy to show you what you're actually gaining or losing.

I have a rough prototype running with 3 scenarios (like "Mom pushing food you don't want" and "Boss lowballing your salary").

r/Wellthatsucks RobbieBleu

Finally living alone, but it’s so far from everything and everyone that I’ve had zero visitors in the whole year I’ve been here.

Sister an hour away with a kid plus she’s always working, dad is an hour an hour with mobility issues (no stairs - I have the worst stairs) my mom and all my friends are 3+ hours away.

It was a bit of an emergency move, I wouldn’t say I chose to live this far as I had to leave where I was and this is cheap

134 102
Reddit
r/PhotoshopRequest Due-Post9859

Replace 1st line of Arabic text with Coptic simple text edit

Hi everyone,

I’m looking for help with a small but precise text correction on this religious icon.

At the bottom, there are two lines of text:

• The top line is Arabic

• the 2nd line is Arabic

• The bottom line is English, which reads:

“welcome to heaven the beloved children of Christ Abba Peter and Tamav Ollie Jane!”

I would like the 1st Arabic line replaced entirely with the Coptic version of the English sentence, keeping everything else exactly the same (colors, texture, placement, style).

Coptic text to use (verbatim):

Ⲟⲩⲱϣⲧ ⲛ̀ⲧⲉ ⲡⲓⲟⲩⲱⲛϩ ⲛ̀ⲛⲓⲙⲉⲛⲣⲓⲧ ⲛ̀ⲛⲓϣⲏⲣⲓ ⲙ̀ⲡⲓⲭ̅ⲥ̅

ⲡⲓⲁⲃⲃⲁ Ⲡⲓⲧⲣⲟⲥ ⲛⲉⲙ ⲧⲁⲙⲁⲃ Ⲟⲗⲗⲓ Ⲓⲁⲛⲉ

Important notes:

• Please do not alter the English line

• please don’t altar the 2nd Arabic line

• Please match the size, alignment, and visual weight of the original Arabic line

• No other edits to the image

I really appreciate it as this is a Coptic memorial icon for my beloved deceased grandparents — thank you so much for your help 🙏

r/oddlysatisfying thetacaptain

This feels like the start of a folklore story

140 16
Reddit
r/leagueoflegends Zteak10

They added pop-up ads to this game or what

r/interestingasfuck Friendly-Standard812

The Buran programme (1974–1993) was the Soviet Union's most expensive, reusable spacecraft project, designed as a direct, technically advanced response to the U.S. Space Shuttle.In 1988, the Soviet Union estimated the total cost of the Buran-Energia programme at approximately 16.5 billion rubles.

88 53
Reddit
r/LocalLLaMA rosie254

the effects of local LLM usage on the world

one of the reasons im into using local LLM's is because i believe using it is far better for the world, nature, natural resources, and things like the ongoing RAM crisis than relying on giant datacenter-powered cloud AI services.

but is that actually true?

how much does it really help? i mean, the local LLM's we download are still trained in those datacenters.

r/ClaudeAI dindles

I asked Claude 4.6 to create an SVG chess set.

This knight is sending me.

48 13
Reddit
r/artificial techiee_

Chinese teams keep shipping Western AI tools faster than Western companies do

It happened again. A 13-person team in Shenzhen just shipped a browser-based version of Claude Code. No terminal, no setup, runs in a sandbox. Anthropic built Claude Code but hasn't shipped anything like this themselves.

This is the same pattern as Manus. Chinese company takes a powerful Western AI tool, strips the friction, and ships it to a mainstream audience before the original builders get around to it.

US labs keep building the most powerful models in the world. Chinese teams keep building the products that actually put them in people's hands. OpenAI builds GPT, China ships the wrappers. Anthropic builds Claude Code, a Shenzhen startup makes it work in a browser tab.

US builds the engines. China builds the cars. Is this just how it's going to be, or are Western AI companies eventually going to care about distribution as much as they care about benchmarks?

32 33
Reddit
r/personalfinance Emmerloulou

World Financial Group actual products? Are they legit?

My in-laws have opened up a few accounts through their church friend who is with “World Financial Revolution,” a subsidiary of Wotld Fincial Group. She had us take a meeting with them, under the guise that we needed to in order to transfer funds my in-laws are gifting us. I had a feeling they were going to try to sell us on keeping the money with them in some sort of account. I was willing to gear it out.

But I very quickly this company was an MLM, and that the consultant we spoke with doesn’t have any background in finance. In trying to recruit us to join her team, she said “ANYONE can do this. If you can read you can be successful here.”

Obviously I know that the “business opportunity” is BS, more or less. But my in-laws have opened investment accounts, a CD and 529s for my kids. They are immigrants who never invested before. All money was kept in their zero-interest checking account and in cash under the mattress. So even though it’s very late for them to do this, better late than never. But … I question whether they are guided well. The products are all from established banks and the “consultants” make money on commissions from those banks.

What do you think?

r/SideProject Historical_Kale_4554

I got laid off in October. After 3 months of job hunting failures, I built the offline PDF tool I always wanted.

The job market right now is brutal. After 100+ applications and zero luck, I was losing my mind. To stay sane, I decided to solve my own biggest pet peeve: sketchy PDF websites.

I hate that to do something simple like "Remove Image Background" or "Merge PDF," we’re expected to upload sensitive documents (tax returns, ID cards, contracts) to some random server.

So, I built Local Tools.

It’s a desktop app built with Tauri (Rust) and React. It runs everything locally on your machine using a Python backend.

The Privacy Bit:

  • 0 bytes uploaded to the cloud.
  • Works entirely offline.
  • No tracking/telemetry.

The "Life" Update: I actually just landed a job! But the 3 months of unemployment gave me the fire to finish this.

Check it out here:https://localtools.pro

I have been actively following the indie community in Twitter and always wanted to build something myself. This will be the start of many hopefully.
Would love your honest (and brutal) feedback. What should I add next?

r/Wellthatsucks DapperKitchen420

We think a crayon got into the laundry...

I did 4 loads of laundry yesterday, the other two are fine. I did my whites, then delicates and threw them all into the dryer together before heading to bed. I had name brand stuff in the delicate load. Fabletics, Callaway, Carhartt. I have two toddlers so I believe somehow a crayon made it in and melted in the dryer.

27 18
Reddit
r/personalfinance tmntnyc

Handling RSUs on TurboTax when employer witholds 40% of units to pay taxes. Those taxes aren't reflected on 1099-B but are included in W2

I was awareded 9 RSU units and the company sells 4 of them before the rest hits my E-Trade account. The 1099-B from E-Trade only shows the proceeds from the sale of the 5 shares I sold and shows the taxes paid as $0.00. I checked my paystub for the pay cycle for when I sold the shares and I see the YTD for Federal taxes increased by several thousands (without additional income for that pay cycle being taxed). So I assume they added the proceeds from the shares they withheld for tax purposes and added it to my withheld federal taxes for me.

My confusion is how do I cover my ass for this on my tax return in Turbo Tax? When I upload my 1099-B, Turbo Tax sees I had a RSU sale for $3560 and sees "Taxes withheld: $0.00" when in reality, it was more like a $6400 sale, $2800 in taxes paid, and $3560 remainder. So turbotax assumes I didn't pay tax on the $3560 short term sale proceeds and said I owe tax on that. Basically double taxed.

So now I have one document from my company called "Release Details" showing

Award Shares: 9.00000

Shares Traded: (4.0000)

Shares Isssued: 5.0000

Market Value: $6,416

Total Tax: $2,851.76

...and then E Trade's 1099-B shows simply:

Proceeds from non-covered securities: $3560

Federal Income Tax Withheld: $0.00

So the taxes I already paid are mixed in with my Box 2 so what do I do so that when I upload the 1099-B it doesn't say I owe another $2,851?

r/TwoSentenceHorror dilonshuniikke

Being a doctor myself, I knew the anesthesia they gave me must have been disconnected when I woke up on the operating table.

When I noticed the paralysis and amnesia agents still connected, I wondered how long this had been a problem for.

r/ProductHunters olenami

should you [not]launch on Product Hunt? It depends.

https://preview.redd.it/1ef4c6u2hwhg1.png?width=1080&format=png&auto=webp&s=f2a5f06fdb6f9bb93f901cec5ed58782dc47b03b

600 DMs → 218 votes. My recent experience with Product Hunt.
So - if you are an early stage founder - should you [not]launch on Product Hunt?
It depends.
Not on your product.
On your goal.

After launching modaal.dev [ business grade ios apps on Swift with AI] - this week, my conclusion is simple:
Product Hunt is mostly a re-engagement engine - not a discovery engine.

1) If your goal is re-engagement → launch.
PH is great when you already have:
an audience
customers
a network
Because the real PH loop is:
You ship → you email / post / DM → your people show up → they engage → you re-activate the market around you.
So yes — if you’re Miro / Mistral AI / Vercel / Webflow -level, Product Hunt is perfect. [that's exactly who played yesterday]

2) If your goal is new users → usually don’t overinvest.
If you want fresh leads or new early adopters, PH is often a weak time → outcome bet.
Unless you’re a very broad product that “anyone wish/can try fast”.
Example: whispeflow

3) The evolution of Product Hunt nobody likes to say out loud
I’ve been watching PH for ~7 years.
It evolved from:
indie makers shipping MVPs
into:
big companies shipping features
And that’s fine.
It’s just a different game now.
But if your goal is: “I want the world’s early adopters to discover me”…
PH is no longer the best place to bet your time.

4) The uncomfortable reality about ranking
Want #1?
On average you need 300–600 upvotes for this.
Big companies can do it with their built-in reach.
Small teams have 2 options:
* accept a realistic rank (still valuable)
or * pay (and that usually means low-quality voting)
Yes, vote services is service.
Market reality: 300–500 votes can cost ~$3–5k.
It may buy you a good screenshot. It rarely buys you users.

5) My launch results (and why I’m happy)
I sent ~600 personal messages.
We got 218 votes.
We got 5 very interesting VC/Angel convos leads.
And I call it a success.
Because my goal wasn’t installs or #1.

My goal was:
awareness inside my circle + credibility in the exact communities where my product belongs. For that → PH worked. For me.

My wish
I hope vibecoding tools like Lovable / Bolt / v0 by Vercel Base44 create a new “maker arena” again — a place where brand-new products can be showcased and get first adopters without competing with enterprise feature drops.

r/personalfinance storstygg

Need to roll two IRAs over - but to where? Fidelity?

I have an old IRA from a company I worked at 20 years ago (they actually hunted me down to let me know!) with a few $K in it (nice surprise). I also have one at a major retail bank doing absolutely nothing (it was in a CD ages ago but they auto-moved it to a zero interest savings account when it matured... thanks for looking out, guys!).

I have Fidelity at work for my retirement program - should I move these post tax money/IRAs to Fidelity for simplicity? Are there any smaller banks offering amazing rates/perks for rollovers currently?

r/midjourney Dropdeadlegs84

A Crack in the Sky

41 1
Reddit
r/maybemaybemaybe Flat-Decision3204

Maybe Maybe Maybe

22 4
Reddit
r/LocalLLaMA NeoLogic_Dev

Are local LLMs actually more trustworthy — or do we just feel safer because we run them ourselves?

I’ve been running local LLMs via llama.cpp and GGUF for a while, mostly because I care about control and data integrity more than chasing benchmarks. But the more I work with them, the more I keep wondering: Are local models actually more trustworthy — or do they just feel safer because they run on our own machines? Yes, local inference removes a lot of opacity. No silent SaaS updates, no hidden pipelines, full control over prompts, weights, and logs. In theory, runs are reproducible. But the black box itself doesn’t disappear. A quantized model on my laptop is still a probabilistic system. I can hash the model file and log outputs, but I still can’t really explain why a specific answer happened. In practice, trust seems to break first at boring places: prompt drift, context assumptions, stale RAG data, or small quantization changes that subtly shift behavior. Lately I’ve been thinking less about “explainability” and more about verification boundaries. What assumptions need to be re-checked every run? What should never be trusted implicitly — even locally? Curious how others here approach this. Do you treat local models as inherently more trustworthy, or do you assume zero trust and build guardrails anyway?

r/funny gatorbeetle

Official Grooming Product of Reddit Users (OC)

r/Jokes Rmondu

Overheard at the office coffee station

Senior Engineer: That was an awful lot of snow we got last night.

Office Manager: Yes, it was. I was an hour late after shoveling my car out of the drift.

SE: I was right on time. Here’s a photo I took of my cleaned-off car in my shoveled-out space.

OM: Wow! That’s pristine. That must have been a lot of work. You're not a youngster anymore.

SE: Not at all. My neighbor next door did it. He had it all finished by the time I drank my coffee.

OM: Nice! He must be a great neighbor.

SE: Yes, and he’s young and strong. Here’s a photo of him.

OM: Very nice-looking young man. Is the pretty woman with him in the photo his wife?

SE: Oh, no. That's the woman who visits him after his wife leaves for work.

90 1
Reddit
r/Unexpected MyNameGifOreilly

FedEx delivery

231 15
Reddit
r/personalfinance Same_Instruction6523

Best way to invest $5K / month

Hi all - I recently started working in the US (on a OPT visa), and have about 4-5K a month in excess that I can save. I don't know much about stock market investing, just parked some money in a Marcus Savings account.

I dont plan to be here for long, might move to UK or EU later this year, but could potentially have the 5K income steady even post move. Whats the best way to invest this, assuming I might need half of that amount in 1-2 years time.

I created a SoFi invest account, not sure how to go about it, should I do a robo advisor?

Would really appreciate any tips / suggestions.

r/SideProject Repulsive_Aioli_7867

Built a pet expense tracker I actually use (React + TypeScript)

https://reddit.com/link/1qxjbyy/video/zd893vsfyvhg1/player

I have pets, and I wanted a simple way to track how much I spend on them. Food, vet visits, grooming, toys, and occasional expenses add up quickly, but most apps I tried felt cluttered or overcomplicated for basic tracking and analytics.

So I built my own pet expense tracker.

The goal was to keep it practical and easy to use. Clear summaries, simple charts, and a UI that stays out of the way so it’s usable day to day, not just once.

I implemented the core logic, state handling, and data flow myself. For the frontend, I used Kombai mainly to help with UI layout generation, component restructuring, and visual iteration. I also used its 3D and animation resource library to experiment with small interactive elements in the interface.

Tech stack:

  • React 18, TypeScript, Vite
  • React Router DOM
  • Tailwind CSS, Radix UI, shadcn/ui
  • Framer Motion
  • Three.js with React Three Fiber & Drei
  • TanStack Query, React Hook Form + Zod
  • Recharts
  • localStorage for persistence
  • Vitest + React Testing Library

The app runs as a web app and stores data locally in the browser, keeping things simple and privacy-friendly.

Live demo: https://tailtally.vercel.app/
GitHub: https://github.com/prathameshfuke/tailtally

Sharing this here to get feedback from other builders or pet owners. Suggestions are welcome.

r/whatisit ReadingNo4688

Two of these came out of my gums. Haven't eaten anything resembling that ever. What is it?

r/homeassistant carrot_gg

HA Voice Preview Edition fork: openWakeWord support + LED brightness control on 25.12.4

I've been working on getting openWakeWord running on Voice Preview Edition alongside the stock 25.12.4 firmware and figured others might benefit from having this available.

What this fork adds:

A "Wake Word Engine" dropdown in the device settings that lets you switch between the built-in microWakeWord and openWakeWord without reflashing. openWakeWord runs server-side through your Home Assistant instance and supports a much wider range of custom wake words. The on-device microWakeWord remains the default.

There's also a "LED Brightness" setting with options from Off to 100%. The stock firmware runs the LED ring at full brightness which is pretty harsh, especially at night. Default is set to 20%.

Why a fork?

There is an existing openWakeWord fork by bmcwilliams96 but it is based on an older firmware version and misses all the improvements that shipped with 25.12.4 — sendspin, group media player, the new speaker pipeline, etc. I wanted both, so this applies the openWakeWord functionality directly to the current 25.12.4 codebase. Everything from the stock firmware is preserved.

Fork is here: https://github.com/jxlarrea/home-assistant-voice-pe

Branch: openwakeword-led-brightness

Happy to answer questions if anyone runs into issues.

r/personalfinance rengingalnd

Deceased parent, do I need to file?

Hello all! As you can tell by the title my mother passed this past June 2025 from a long 4 1/2 year battle from breast cancer. She didn’t have an estate, only the money she got from us selling her house shortly before she passed. My father hasn’t been in the picture for 10+ years and I’m the only child (I did get that money from a Transfer by affidavit). I honestly couldn’t tell you if she’s filed the last several years as awful as that sounds but I’m wondering if I need to file for her from this past year for sure and what about other years if she hasn’t? Do I need to report that I got that money from her?

I’m so sorry if this is a dumb question, I’m just extremely uneducated about how this process works and don’t want to get in legal trouble by doing something I didn’t know 😭 I really don’t wanna pay a million dollars for a tax advisor or lawyer to tell me something someone else may be able to aid me in. Obviously I’ll go that route if I need to, but just wanted some insight! Please be nice I’m just a girl 😭

r/explainlikeimfive Tinfurstraw

ELI5: How are eye glasses made?

11 11
Reddit
r/whatisit ppnguitarist

Mystery service kit possibly for a key machine

I work in a locksmith shop and was doing some cleaning when I came across this little pack of tools and screws. I think it goes to one of the key machines we have, but I'm not having any luck finding a matching set on the website for anything in the shop.

There's an Allen key and some set screws, what looks like an aluminum tip stop, a black aluminum block with a raised section in the middle on one side and a magnet on the bottom, a clamping system of some sort, and two large o-rings or potentially belts, not sure which.

I guess I'm just trying to figure out which of my machines it goes to or if I can just scrap the stuff if it goes to something we don't have any more. Any help would be appreciated

r/brooklynninenine MysteriousDonkey7862

El apartamento de Jake.

En un episodio de la serie en el que casi pierde su piso nos enteramos de que era de su abuela pero unas temporadas después discute con Amy por en qué apartamento van a vivir y él dice que compró el piso.

Hay un error o que ha pasado???

r/homeassistant EvilKneevil_

Hue Lights > Shelly > Home Assistant

Hello!

Wondering if this configuration would work:

I have hue lightbulbs and Shelly gen4 modules. I want to control 3-4 Bulbs with one Shelly switch.

Therefore I thought of the setup:

Home assistant with zigbee stick has both Shelly modules and hue bulbs integrated with zigbee. Then create an automation for one Shelly switch to control some of the bulbs.

I think I don’t need a hue bridge or the Shelly app. Is that correct?

What are your thoughts?

r/Art Swimming-Club-5066

Twin Scars, Swimming-Club-5066, Mixed Media, 2026

r/mildlyinteresting khiuahua

People fighting with sticks at the Banni Festival in India

r/leagueoflegends CalfromCali

I host 5v5 customs through out the week for league players. Members of the community play while I get to learn shoutcasting. If this looks fun feel free to join!

If anyone is interested in playing feel free to come by the discord. You can play whenever for fun or play regularly and have your stats tracked and play in our occasional tournaments. All Ranks are welcome. https://discord.gg/YrR6f2T

14 8
Reddit
r/LocalLLaMA iRanduMi

Apple Studio M4 Max (16C/50G/128gb) vs Studio M3 Ultra (28C/60G/96GB)

In short, this is for personal development and the expectation is that it's running 24/7 within a server closet.:

  • Coding
  • Home automation
  • Image Processing (security cameras)
  • SQL Database Processing

Both of the following machines spec'd out are ~$4k. Which would you choose?

  • Apple Studio M4 Max: (16C/50G/128gb, 1tb)
  • Apple Studio M3 Ultra (28C/60G/96GB, 1tb)

I'm struggling to decide what's more important, the additional performance vs memory.

r/TheWayWeWere Ok_Fall_9569

Doodles by my dad (aged 9 or 10) in his school notebook, ca. 1942 or ‘43

He and his family moved to the US from Ukraine in 1939. The phrase “I’m cocking” became an ongoing joke in our house to denote an evil or stupid person sputtering out foolishness. I still use it (increasingly it seems) today.

188 14
Reddit
r/personalfinance StealthRabbi

Tips from gig income (Door Dash)

My spouse earned through Door Dash in 2025. I've entered the self employment / Schedule C stuff on FreeTaxUSA. Let's say her income was $1200. I entered that info in weeks ago. But, she got an email identifying that $700 of that $1200 was tip income. So, I enter that in the Tips / Overtime section.

However, I don't see a change to my Federal refund amount. Shouldn't our married/joint income have been reduced by $700, and thus I should be responsible for less tax?

I have standard deduction set. Our combined income is less than $300K

r/LocalLLaMA Express-Jicama-9827

Qwen3-Coder-Next 80B (GGUF/BF16) on Zen 5 EPYC: 12-channel DDR5 & NVFP4 bench

Qwen3-Coder-Next (approx. 80B params). This time, I moved away from quantization and tested the full BF16 (unquantized weights) to see if high-precision coding tasks are viable on a 12-channel CPU setup.

TL;DR Running 80B BF16 on a 12-channel Zen 5 system is surprisingly practical. I’m seeing a stable ~7.8 tok/s decode, which is plenty for a "background" coding assistant or local code reviewer where you value reasoning and precision over raw speed.

Hardware / Runtime

  • CPU: AMD EPYC 9175F (16 Cores / 32 Threads, Zen 5, 512MB L3)
  • RAM: 768GB DDR5 (12-Channel,6000 MT/s; DIMMs are 6400-rated but capped by the MB)
  • GPU: Not used (CPU-only inference)
  • OS: Ubuntu 24.04
  • Runtime: llama.cpp

e.g

podman run --rm  -p 8081:8080  --shm-size 16g  --cap-add=SYS_NICE  -v /mnt/data/hf/hub/models--unsloth--Qwen3-Coder-Next-GGUF:/models:Z  compute.home.arpa/llamacpp-zen5:qwen3-coder-next  -m /models/snapshots/96ab45bf06d904ee251044b0679df08f668677d2/BF16/Qwen3-Coder-Next-BF16-00001-of-00004.gguf  --cache-type-k q8_0 --cache-type-v q8_0  --flash-attn on  --ctx-size 16384   --parallel 1 --threads 13 --threads-batch 13  --batch-size 2048  --ubatch-size 512  --jinja  --host 0.0.0.0  --port 8080

Model Settings

  • Model: Qwen3-Coder-Next (~80B)
  • Quant: BF16 (unsloth/Qwen3-Coder-Next-GGUF/BF16/*)
  • Context: 16k
  • KV Cache: q8_0 (Optimized to balance precision and memory pressure)
  • Threads: 13 (The "Sweet Spot" identified in my previous post)

Performance (Real Numbers)

1. Prompt Processing (Prefill)

  • Short prompt (~9 tokens): 33.37 tok/s (warmup-scale)
  • Realistic prompt (~287 tokens): 117.40 tok/s
  • Average PF (realistic): ~111–117 tok/s

2. Generation (Decode)

  • Sustainable speed: ~7.59 tok/s
  • Tested on long generations (~2,233 tokens). Throughput stayed very consistent.

3. TTFT (Estimated)

  • ~2.58s for a 287-token prompt (estimated as PF time + 1 decode token).
  • (177-token TTFT not included in this run’s pasted timing logs.)

Discussion: Why BF16 on CPU?

While 4-bit quants are faster, I chose BF16 for this coder-specific model to ensure zero degradation in logic and syntax handling.

  • Memory Bandwidth: The 12-channel DDR5-6400 configuration is the hero here. At 80B scale, we are moving a massive amount of data per token, and the bandwidth saturation is real.
  • Zen 5 Advantage: The AVX-512 throughput on the 9175F handles the BF16 math with helps. Even without a GPU, the experience doesn't feel like "waiting" in an async workflow.

Coding Evaluation Takeaways

  • Security & Audit: Extremely strong. It successfully identified SQLi vulnerabilities and plaintext password risks, providing robust fixes and unit tests.
  • Hallucination Control: Using the spec-grounded mode, it correctly refused to answer when the information was missing ("NOT IN SPEC").
  • Complex Logic: It followed 90% of constraint-heavy Django requirements but missed some specific multi-tenant safety nuances. It’s best used as a high-end draft generator + expert reviewer.

Bonus Benchmark: Qwen3-Coder-Next-NVFP4 on GPU

GPU: Blackwell RTX PRO 6000 Max-Q 96GB

MODEL: vincentzed-hf/Qwen3-Coder-Next-NVFP4

podman run --rm --device nvidia.com/gpu=all  --security-opt seccomp=unconfined  --cap-add SYS_NICE  --shm-size=16g  -v /mnt/data/hf:/data/hf:Z  -v /opt/containers/runtime/vllm/data/gpu_cache:/data/cache:Z  -p 8000:8000  -e HF_HOME=/data/hf  -e HF_DATASETS_CACHE=/data/hf  -e VLLM_CACHE_ROOT=/data/cache  -e HF_HUB_OFFLINE=1 -e FLASHINFER_DISABLE_VERSION_CHECK=1  compute.home.arpa/vllm-gpu:nightly vincentzed-hf/Qwen3-Coder-Next-NVFP4  --dtype auto  --gpu-memory-utilization 0.88  --max-num-seqs 1  --max-model-len 32768 --enable-prefix-caching  --trust-remote-code  --enable-auto-tool-choice --tool-call-parser qwen3_coder --reasoning-parser qwen3 --served-model-name qwen3-coder-next-nvfp4

vLLM (NVFP4) throughput (periodic log snapshots; interval averages, so it fluctuates a lot):

  • Avg generation throughput observed: ~11.7–100.4 tok/s (examples: 17.5, 58.4, ~99–100 tok/s spikes)
  • Avg prompt throughput observed: ~17.7–669.1 tok/s (examples: ~20–30 tok/s in some intervals; large spikes like 175/463/669 tok/s depending on the interval)

https://preview.redd.it/gtb1luh2rvhg1.png?width=3220&format=png&auto=webp&s=1b346dd9cbcf851b486f5cc1354efbd3050aad82

Note: these are rolling/interval averages from vLLM logs (not per-request measurements).

Video Demo: (GPU 8:05~)

https://reddit.com/link/1qxib19/video/2m475useqvhg1/player

17 10
Reddit
r/ARAM Unique_Candidate_124

Random Prismatic turned into 2 High Roller Prismatics on lvl3. Got 7k worth of anvils on minute 10. Game ended in 14 mins.

First time uploading! I just played a game of Aram Mayhem where I selected a random prismatic augment as my first one. No idea why, but it turned out to be TWO prismatics.

[ Giant Slayer ] and [ Void Rift ]

Them alone would be OP enough. Both OP for a caster like heimerdinger. Their DMG was at ~6k each at minute 8.

Besides that, they got the "High Roller" extra stat. Having 2 High Roller augments grants a high chance to drop anvils from minions. I had 9 anvils on minute 10, most of which i took as extra magic penetration. On the video, you can see 109 extra magic penetration, and some size reduction + ms.

That was the most broken game I ever experienced. I really, really want to discuss it with someone! Any ideas why I got two prismatics? I also have no idea whether Magic Penetration (Flat) and Magic Penetration (Percentage) stacks,

r/n8n LegiFX

Getting least row from n8n Data Table

In my latest workflow I've been working with an n8n data table for saving my analytics, but because my workflow runs once every hour, I'm having the problem that sometimes some data just leaves the same as before and there are two exact same rows. To prevent that, I thought i can execute "Get Rows" by just getting the current Date and subtract it by one and a half hour to be sure that only my latest row would match to compare then one unique parameter of both the new row and the latest in a filter node to get wheither they are the same or not. Now i have the problem that when I execute that node, I'm getting:

Invalid date string '2026-02-06T13:23:58.521+01:00 ' for column 'updatedAt

Any ideas why or a smarter way to get the latest row from a n8n Data Table? If there are any additional informations u need get a solution, just ask! Thanks!

r/Art Kyle_clyde

Frame, Kyle_Clyde, Watercolor, 2026

r/personalfinance SCLSU-Mud-Dogs

Recently Promoted, incoming Cash windfall, but large student loan bill, what would you do?

Before I get too far into my question, I’m aware that this is not the most mathematically optimal plan.

Background I (34m) and Wife (32m) are about to start a plan to pay off her student loan debt and am hoping to see thoughts of this plan for people with no skin in the game.

Debts: $7,000 Car, Interest free loan from my aunt. I had a used car until end of 2023, was rear ended and it got totaled and bought a new Car. The used car market was still rather high, and my aunt tried to gift the money to me, but I insisted I pay her back. She lent me $14k and have been paying $375 a month to her

$3,800 Car 2.6% APR, $235 a month

Mortgage: $445k remaining, 6.875% bought at the end of 2024. Hoping to refinance at some point soon.

Student Loans: $159k, 6.75% Federal Grad school loans for my wife. Physical Therapist, she went back to school at 29 to get her DPT. Coming out of school it came with an immediate raise from $64k to $90k, but she could not work full time for two years

Right after we purchased the house I was laid off, but remarkably was able to find a remote role quickly. I was also diagnosed with ADHD and am now medicated which has been remarkably helpful in performing better at work.

Salaries:

I just received a Promotion with a 10% raise, and a 10% bonus effective 2/15. New salary $110k

My wife also received a promotion (same day! It was wild) that is also coming with a > 10% raise and she is now $100k.

Incoming Cash Flows:

-          My bonus $10k, 10% of old salary

-          Wife also $10k bonus this was promised to her for staying at least 2 years with same company

-          Wife: Referred someone and is getting a $5k bonus

-          Wife has a standard Retention Bonus of $2k

-          Tax Return will be $4,500 (this was the first year we could deduct mortgage interest and we got married in December of 2025) so we were withholding at a single rate. When we start getting our new salaries we will fill out the worksheet so we are not loaning the government this much interest free.

We have $25,000 in savings right now before any of these inflows.  And $170k in retirement savings that we are contributing to

Here is what we are thinking:

-Pay back my Aunt immediately freeing up $375 a month in cash flow. My aunt has been very very successful and this money is a drop in the bucket for her, she also has no children. I know that I stand to inherit a significant amount from her one day, but regardless I feel odd borrowing money from her and it’s important to me to pay her back, even though she tried gifting it with zero strings attached, and I really mean zero strings

-Pay off my wife’s car, I know it’s a small interest amount, but that frees up $235.

- Take rest of incoming cash from bonuses, and some savings and get the Principal down to $145k, then refinance to 5% for 10 years. She works at a for profit clinic so PSLF will not apply to her.

-We are looking at north of $1,500 a month in cash flow from our new salaries and no car payments. Put ALL of that, as extra towards student loans until it is paid off and then at some point hopefully refinance our house if rates drop below 5.5%

 - We also would like to start trying for kids soon and would expect daycare to be $1,600 a month or so when that time comes

So Reddit, what would you do in my situation?

r/Art Relative-Stable-6075

Bill Murray vs the forces of evil, Dr.Venkman, fan art, 2026

r/whatisit TinkerLinkerr

What is this table?

Is this some random box or a table with that undercarriage thing for a specific purpose?

r/Frugal melissaw328

What are your best tips for saving on groceries by cutting back on ingredients like meat?

I really like meat but it is getting more expensive. I am wondering what ways people have cut back in meals without noticing a big difference. I can cut back meat in soups by adding more beans or extra vegetables. Meat seems to be more filling and healthier than adding more starches. I am trying to find a happy medium but still have a filling meals and great taste. I love mushrooms, green beans, zucchini and spinach.

15 38
Reddit
r/SipsTea bstrathearn

The canonical way to chug tea

With both hands on the mug, looking toward the heavens

r/geography Thatunkownuser2465

If Earth were discovered today as an exoplanet, which single geographic feature would most strongly suggest intelligent life existed here?

327 184
Reddit
r/Art Aggravating_Offer207

Family portrait, Spundman, Digital, 2024

r/SideProject grigoretex

I’ve become addicted to browsing r/SideProject for inspiration

While not starting a single project in the process.

r/Art WaterRevolutionary72

Bernard’s Sanctum, Steven Kenyon Fish, Graphite and Paper, 2025

r/DecidingToBeBetter notzoro69

I'm so done with this good guy identity

Ever since I started meditating, I’ve been noticing this habit of mine, constantly trying to be a “good guy.” On the surface, it sounds like a good thing. Wanting to be better, right? But this is different.

This good guy identity of mine forces me to do a lot of things I don’t actually like. I end up lying at times just to defend this image of being great, to uphold the idea of a “perfect man,” someone who does everything right. I keep trying to please people, always overthinking whether my actions or words will leave the right impression.

I’m just done with all of this. It hasn’t made me better, and I can’t keep up with everyone’s expectations anyway. It’s a futile exercise, and it only leaves me filled with misery.

With experience, I’ve come to a realization. The best comes out of me when I’m in a joyful state. Just being joyful and sensible is all that one really needs.

“Good” people have caused maximum harm in the world.

We don’t need “good” people.

We need joyful and sensible people.

— Sg

Thank you for reading.

27 5
Reddit
r/automation Helpforfitness

What are your workflows to stay up to date with important new research and publications?

Hey everyone,
I’m looking for workflows and methods to stay up to date with important new research as efficiently as possible.

I know that you can subscribe to many journals, alerts, etc., and I already use tools like Zotero. But I’m wondering:
do I really have to go through all new articles in my field every time just to eventually find the few papers that are actually relevant to me?

Are there smarter workflows — maybe using AI tools — that help with this?
For example, something like:

  • new papers are automatically collected in a feed,
  • an AI summarizes what’s new and why it might matter,
  • and I can then quickly decide: “okay, this is interesting for me” vs. “skip”.

I’d love to hear how you handle this in practice — especially workflows that reduce noise without missing important developments.

Thanks!

Note: AI helped me formulate this post — I’m not a native English speaker.

r/instant_regret james_from_cambridge

Check Yo Self Before You Pierce Yo Self

311 43
Reddit
r/funny thetacaptain

Same energy

231 5
Reddit
r/Wellthatsucks Justin_Godfrey

There goes the door handle

700 26
Reddit
r/personalfinance xshodown

Credit card change product

Hello!

I am in middle of a mortgage refinance but I want to change my chase unlimited to Chase Sapphire. Will this affect my refinance?

r/Adulting shay_006

I want to switch jobs badly but I’m panicking before my interview and don’t understand what’s stopping me

r/Anthropic MetaKnowing

This chart feels like those stats at the beginning of Covid

13 7
Reddit
r/whatisit Other-Radish-8232

This comic crop

I remember seeing this comic of scuzzy looking anthropomorphic possums back in the tumblr days, does anyone know the artist name or a link to their work?

r/aivideo AccomplishedAd4403

i create a time traveler funny story movies with sora and i let hey gen ai translate iam lazy lol

r/ARAM Yeyets_

My teammates said, "Briar tank doesn't work, she won't have enough sustain."

Augments were protein shake, vampirism, celestial body, and twice trice (shown at the end of the clip).

I started building Heartsteel for Briar, and my teammates started flaming me before I even got my first item (Heartsteel).

They kept saying that I wouldn't have enough sustain, even despite my first augment but I've played Briar tank before even without that augment and I just know it works, so I proceeded anyway.

At the end of the game I was honored by five people: four teammates and one enemy (Sivir).🤣

r/programminghorror BlackFuffey

I might have accidentally created a monster

r/Roadcam mofomofo2020

[UK] - Yellow Junction Box Road Rage

The original had lots of naughty word in the clip. As a result it was only viewable on YT when signed in. Muted the bad language in order to bring the clip to a wider audience.

Original clip - WARNING contains swearing: (sign in to YT to watch)

This busy roundabout in White City, Old Trafford must be a gold mine for Trafford Council. The recently installed cameras take no prisoners and drivers receive fines if they enter the yellow junction box and their exit is not clear. The cammer did the right thing here as he was prevented from clearing the box by other vehicles ahead. Crazed driver who came up behind him was obviously not aware or did not care for the regulations though. He was literally spitting mad according to the cammer and as can be heard wanted a scrap. Driver is a danger to himself and other road users.

r/StableDiffusion Difficult_Singer_771

ComfyUI course

I’m looking to seriously improve my skills in ComfyUI and would like to take a structured course instead of only learning from scattered tutorials. For those who already use ComfyUI in real projects: which courses or learning resources helped you the most? I’m especially interested in workflows, automation, and building more advanced pipelines rather than just basic image generation. Any recommendations or personal experiences would be really appreciated.

r/whatisit ImportantRabbit9292

Seen on a Mustang's rear window.

Does anybody know what this is?

r/geography cudem_31im

Coastal Relief Map of Puerto Rico

Coastal relief map of Puerto Rico showing land + seafloor elevation. VE: 3x

This map was generated in a single command. Happy to answer questions about the data or workflow.

r/yesyesyesyesno MacDefoon

Wrong u-turn

100 3
Reddit
r/midjourney im_daria

The Dark Elf and the Light Elf

r/automation wild_deer_man

Browser MCP very slow and flaky, what's the best way to use it? Is it the best tool for browser automation?

I am using claude desktop with browser mcp on macos 26 with Arc Browser.

Any other setup you might recommend that doesn't constantly gets stuck or disconnect?

r/SideProject Ambitious-Pirate3620

Help me , please ! I want paying clients

How do you guys get clients without Upwork/Fiverr?

Honestly, I’m tired of platforms like Fiverr and Upwork.

It feels like: - everyone is underpricing like crazy - clients want a full website for $20 - you spend more time writing proposals than building anything

So I’m curious — for those of you actually getting good clients…

Where do you find them outside these marketplaces?

Do you rely on: - Twitter/X? - LinkedIn? - Indie Hackers? - Cold outreach? - Building in public? - Communities/Discords?

I’m a frontend developer (React/Next.js + Tailwind) and I genuinely want to work with serious founders/startups, not bargain hunters.

I am living with my parents , but I want to do this frontend development as full time , I can't make a single a penny , if I had just even one or two paying clients I could have living separate giving all time on frontend development,

If anyone from you could love to work with me /hire me , or just help me please welcome Heres my portfolio ui-developer-nine.vercel.app

Would love to hear what’s worked for you, especially as someone still early in freelancing.

Any advice or real stories would help a lot.

r/SideProject SreehariNambiar

Looking for early users to test my app.

Guys,

I am building a new kind of social network where users can create, host or share and invite people to join the events, hangout with like-minded people and know what happening areound you made simpler for you. A social media where you can post freely without being judged as there are no comments, its like posting and chatting combined into one where everyone can post their content freely unlike the social media we have right now. You can also customise your chatters unlike any other social media that exist now, I want this app to be used by everyone to post their updates, share and hangout with friends, join and host events and know what's happening around you like updates news etc. I hope you guys will support this endeavor and I am launching the testing of my app Textout:).

If you are interested do visit the website https://www.textout.in and give you email id to send the tester link. Thank you and have a nice day. Please share your thoughts in comments and if you want to be a part of this journey also do let me know, Thank you.

r/toastme Able_Pickle_959

36M, Listening to the new Joji album and now I’m in my feels.

I’ve really been going through it the past 6 months or so. I’ve been out of work since July of last year, my girlfriend of 8 years left back in October, and I’ve just been struggling to keep going. I’m finally starting a new job Monday, so thankfully I might not lose my house. I’ve just been feeling so broken and lonely, and could really use some kind words right now. 🖤

21 3
Reddit
r/leagueoflegends morethandork

I still miss my kind, but this was satisfying AF! Gold / Plat lobby and my whole team was ready for the Skarner engage.

r/Jokes Radiant_Bookkeeper84

My father was always begging me to take over the family florist business but I said no...

I told him I didn't want to be just another helio-trope.

r/Anthropic wild_deer_man

Browser MCP very slow and flaky, what's the best way to use it? Is it the best tool for browser automation?

I am using claude desktop with browser mcp on macos 26 with Arc Browser.

Any other setup you might recommend that doesn't constantly gets stuck or disconnect?

r/comfyui Difficult_Singer_771

Comfyui course

I’m looking to seriously improve my skills in ComfyUI and would like to take a structured course instead of only learning from scattered tutorials. For those who already use ComfyUI in real projects: which courses or learning resources helped you the most? I’m especially interested in workflows, automation, and building more advanced pipelines rather than just basic image generation. Any recommendations or personal experiences would be really appreciated.

r/whatisit comethefaround

Concrete floor

Basement flooded. Painted concrete floor. All dried now but paint started chipping off everywhere.

Noticed this patch has some white crystal looking stuff coming out of it. Honestly looks like that fake fiber glass snow used for xmas decor years and years ago. Quite a few other patches kicking around.

Hoping its not something that will get into the air and breathed in by me or my family when I go to clean it up. Also just super curious lol.

r/personalfinance One_Insurance_9693

I have some money but don't know what to do with it

I have about 150k in my HYSA but don't know what to do. I want to invest it in stocks but im scared i don't know enough and will just lose it. What is the best thing to do with my money?

r/Art Moist-Travel-77

Pickled Monsters, Apes Art, Resin, 2025 [OC]

440 24
Reddit
r/personalfinance Express-BDA

Fair way to split savings using Google USD/INR rate?

I’m sending money between the US and India with a friend. Instead of banks, we want to use the Google mid-market USD/INR rate so we avoid FX spreads.

Question:
What’s a fair way to split the savings equally between sender and receiver?

Is using something like Google rate minus a small amount per USD (so both save the same vs banks) reasonable?
If yes, what range makes sense?

Looking for simple, fair ideas. Thanks!

r/ClaudeAI MetaKnowing

Anthropic was forced to trust Opus 4.6 to safety test itself because humans can't keep up anymore

162 32
Reddit
r/maybemaybemaybe Oda_DeezNutz

Maybe Maybe Maybe

When the fish catches you

139 36
Reddit
r/PhotoshopRequest Cool_Note_2744

My son on the far right decided he wanted a chicken wing for an arm can someone make it look normal like he has his arm down

r/ARAM Happy-Personality-15

Mayham Karthus

Really loved to play this champ.

But with recent changes, so many sustain/heal make him not that great anymore.

So many try hard comp also have a shield/ support that counter him pretty hard.

r/maybemaybemaybe TheCABK

Maybe Maybe Maybe

28 5
Reddit
r/interestingasfuck Sensemaker1

India is going crazy with Protein products

r/AskMen Fabulous_Support_556

What’s the one thing your partner said that made you opt out of the relationship?

I’ve been with my partner for 4 years and it seems we’re growing apart. Not sure what to think of it but he doesn’t see me as me anymore and I wanted to hear some experiences with disillusionment and breakups from you gents

58 83
Reddit
r/painting KAndy91

Painted a fox again - this time with glowing mushrooms, feedback welcome

54 11
Reddit
r/Adulting Pretty-Ad-7775

At some point...

160 18
Reddit
r/WouldYouRather Necessary-Win-8730

Would you rather only wear Sandals or Trainers the rest of your life?

r/leagueoflegends elZickZack

6 attacks 1 pentakill

r/BobsBurgers 3zaanasalemk

Saw the Jack Black post, and wanted to share my favorite Bob's Burgers music video

r/ProductHunters PlainGeets

AskIndra is live today — early experiment in environmental decision support

Hello everyone,

Sharing a quick self-promo post for a project we launched today.

AskIndra is an early experiment in translating environmental data (air quality, weather, local conditions) into clear, everyday guidance rather than dashboards and numeric indices. The goal is to help people move from information to decisions without needing to interpret charts or scales.

This is one of the first projects emerging from Bhaskar Labs, a small experimental space we’re building around Indic AI and culture-tech. We’re launching AskIndra early to learn in public and understand what actually resonates.

If you’re curious, we’d really appreciate your support and—more importantly—your honest feedback:

👉 https://www.producthunt.com/products/askindra

Thanks for taking the time, and happy to answer questions or learn from similar launches others here have done.

r/AskMen Known_Kitchen8390

Are you okay with not having sex and being single? Why?

I've had a couple chances with women, but I honestly just don't care really to pursue them. They were all beautiful women too. My sex drive isn't low, though, I got out of a terrible relationship 8 months ago, and I think I may just not want any of the drama or stuff of seeing them again (at the gym or the bar). I just want peace in my life. Even when presented with the option to get my way into a relationship with another beautiful woman, I just feel like I don't have the energy. I've also been holding up another woman that wants to hookup with me because I just know how it will end. Does anyone else feel the same? I haven't had sex since my last relationship.

r/Anthropic OptimismNeeded

How do I export all my shit from Claude? (chats and projects)?

Not sure if there's a way, but if there is, I recommend doing it now.

r/LocalLLaMA Dented_Steelbook

What would work with 44GB of VRAM and 256GB of DDR4?

I am going to start messing with my new to me system and wasn’t sure where I should be as far as model size. This is across four video cards and is going to be sucking plenty of juice with the i9 running.

r/DunderMifflin GlitteringHotel8383

Ryan started the fire...

42 15
Reddit
r/LocalLLaMA Hot-Employ-3399

Do we have human-friendly chat UI yet?

Is there a chat app that supports llama.cpp directly (not ollama, not openai) and has either groups or tags or something similar? Like bookshelves in novelai at least.

Modern chat apps I've tried are bad once number of chats go more than several pages: finding them is PITA as a search is not convenient:

Eg if across 100+ made in several months chats I have 20 chats about rust and 20 chats about fantasy adventures, looking for rust will find 29 results from both categories combined as fantasy world can have some rusty sword and since not all rust chats will have "rust" word, some will not be found.

r/leagueoflegends albinoman38

Dirty Blades for a Dirty Job, Gangplank song by Falconshield

r/LocalLLaMA nagibatormodulator

Update: I Dockerized my Local Log Analyzer (LogSentinel). No more Python dependency hell.

A while ago I shared LogSentinel — a local tool to analyze logs using Ollama (Llama 3 / Qwen) without sending data to the cloud.

I finally finished wrapping the whole project into a proper Docker container.

What's new in v1.0:

  1. Docker Native: Just docker-compose up. It connects to your host's Ollama instance via host.docker.internal automatically. No need to install Python venv or dependencies manually anymore.
  2. Persistent Caching: I added SQLite support. If the logs show the same error twice, it pulls the fix from the local DB instantly (0ms latency, no GPU usage).
  3. Strict SRE Prompts: Tweaked the system prompt to stop the model from "chatting" and force it to output only Root Cause + Fix Commands.

Repo: https://github.com/lockdoggg/LogSentinel-Local-AI.git

It works best with qwen2.5-coder:1.5b (fast) or llama3 (more detailed). Let me know if the host-gateway works for you on Linux, I tested mostly on Mac.

r/Adulting Altruistic_Art310

25 and so lost

r/SipsTea MF-DOOM-88

Can I try to guess what you need 💀

"Scammer edition"

30 23
Reddit
r/Anthropic MetaKnowing

Anthropic was forced to trust Opus 4.6 to safety test itself because humans can't keep up anymore

100 28
Reddit
r/MMA airplane231

Kyoji Horiguchi vs Tagir Ulanbekov | Full Fight

64 27
Reddit
r/SipsTea Apprehensive_Topic23

Sound like a personal attack

417 115
Reddit
r/LoveTrash Icy-Book2999

Shoe Tying Trick

60 31
Reddit
r/TwoSentenceHorror Magic-M

The ER Chief of Staff asked our team of surgeons why an emergency leg amputation for patient Anthony ***** wasn't performed last night- the patient is now deceased.

"Dr. Cahill and I did, but 'John Doe no. 4' is still here in the rehabilitation wing."

r/creepypasta Trist_ch

I Lost My Heart To the Sea [Part 1]

Prologue - Past

I’ve always felt like I was playing a part. Since my earliest years, I was just a mediocre actor in a family drama, playing the role of the son or the brother. I never felt close to them or cared about their feelings, but I pretended otherwise, because being known as an emotionless monster isn't beneficial for anyone's survival. I remember my grandfather’s first heart attack. While my family was in shellshock and weeping, I was only trying to shed tears to fit in. I liked him, but the prospect of his death didn't make me sad, just disappointed.

To me, losing a “loved one” was just a disappointing experience, like losing an object of interest. After realizing that finding love or bonding was a fruitless endeavor for someone like me, I decided to focus on my own likes and dislikes. To maximize my own happiness, I should do things that I like and cut out all the things I dislike. I kept my relationships just alive enough to be a safety net for hardships, but distant enough to avoid providing emotional support. It’s a cruel charade, but it allowed me to live my life the way I wanted. I became an engineer, a field where my lack of social skills went unnoticed, allowing me to spend my days alone at a desk.

Friendless and free from the grind of classes like physical chemistry and linear algebra, I suddenly found myself with way too much time on my hands. I finally understood why even the most introverted people search for any kind of social interactions. Humans are social animals. We are made to be with others and although I have my defects, biologically speaking I’m still the same as everyone else.

I was now stuck with a new problem: how do I get my social fix without letting actual people into my life. A roommate or partner was out, but a dog? They’re easy. They aren't pure or innocent as many would say, they're predators that happily rip other animals apart. We love them not for their innocence but their affection. You can be an obnoxious total piece of shit, and a dog will still shower you with the kind of affection people won't.

I got Toby, a chestnut brown poodle from a backyard breeder. To anyone else, he would have been a nightmare. He was never fully potty trained, he refused to eat unless his food came from my frying pan, and his separation anxiety was severe. He would tear the apartment apart and cry for hours if I left him. When I tried leaving him with my mother, his anxiety turned into a panic-induced aggression. The only solution was to leave him one of my old jackets. He would guard it in a corner, vocalizing his sorrow in a pathetic whimper until I returned.

Knowing he spent ten hours a day in that helpless, pathetic state eventually bothered me so deeply that I found a fully remote job just to be with him. It was a strange realization. I wasn’t a complete monster after all.

1 - Present

The last seven years with Toby have been the most fulfilling years of my life. Although he doesn’t understand my words, I speak to him and try to understand his wants and needs. He truly showed me what love is, which made the news I got last week even more frustrating. Toby's age and unfortunate genetics started to catch up on him. He’s already deaf in his left ear and is going blind in his right eye. The old me would have already started the search for his successor. A new dog that would take his place as soon as he’s gone. One that’s better bred, had a longer lifespan, but my heart doesn’t allow that. Toby hates other dogs and I know for a fact that no other dog could replace him. He’s special to me not because he’s loyal but because he’s my only true friend.

I honestly don’t want to think about a life after Toby. Instead I want to enjoy my life with him right now even more. I decided to plan an extended holiday for him and myself. My parents bought a beach home right at the coast which they never use, because my far east-asian mother got a bad vibe from this place. When I visited them, I let Toby into the garden and let him stalk their rabbits. In the meantime I had a short talk with my mother about this house.

She told me "There are some really bad things happening out that way, honey. When your dad and I stayed there, a whale washed up right in the middle of the day. It was awful, there were huge chunks of blubber just... missing, like something had been taking bites out of its belly and back. We tried calling the police and the fire department, but they all gave us the runaround and said this part of the beach wasn't their job to deal with it. Honestly, we were so spooked we just packed up and left right then."

“So there are sharks in the water?” I asked.

“I don’t know. The entire place felt off. Even your dad felt uneasy. He was the one who suggested leaving the place early.”

This was quite the thing to hear from my mother. My dad has always been your run of the mill obnoxious atheist. He never believed in the esoteric and normally just humored my mothers spiritual beliefs.

“I want to stay with Toby there. He’s never been to the beach before. Do you think that’s a bad idea?” I asked while watching him from the window digging at the flowerbed.

She paused for a moment. And told me in an unsure tone “I think it should be fine as long as you stay away from the water. And always keep him on a leash”.

After saying this, she finally saw Toby committing his little crime, ran outside and gave Toby the scolding of his life:

"Toby you moron, don’t wear my garden as your beard! I’ll give you the worst bath of your life!”

Him being half-deaf, he probably didn’t realize that this was a scolding and thought that she was interested in playing. He then proceeded to run a few laps around the yard after which I had to catch him and give him a bath.

I was quite happy with the outcome of this visit. I secured us a holiday home for free and it was always fun seeing Toby enjoy himself. Mother’s story obviously concerned me but a beached whale in itself isn’t anything to be afraid of.

2 - The Beach

Toby and I pulled into the gravel driveway in the late afternoon, and I have to admit, I was impressed. The house sat in total isolation, right where the dunes leveled out. Built from weathered cedar planks that had bleached to a pale gray in the salt air. Large, floor-to-ceiling windows faced the water, offering an unobstructed view of the tide rolling in. Inside, the wide-plank oak floors were scarred slightly by years of tracked-in sand, and the furniture was low-profile and functional. A wide wrap-around deck offered plenty of space for a few heavy chairs, positioned perfectly to catch the offshore breeze. It wasn’t an ornate place, but it felt solid and intentional.

I planned to spend at least two weeks here, so I’d packed everything we could possibly need: Toby’s toys, his favorite blanket which were just some of my old clothes my mother sewed together for him, and a bag of expensive dog food he always refuses to eat. For myself, I’d brought my luggage, two cases of red wine and an archtop guitar for when I felt the need to entertain my half deaf dog.

The nearby town didn’t seem particularly lively, but it had the essentials. There were a few grocery stores and, more importantly, a small vet clinic only thirty minutes away. Knowing medical help was close for Toby calmed the worst of my fears.

I parked the car on the gravel drive and left my bags in the back. Unpacking could wait. I wanted a steak, and Toby deserved one too after ten hours in the passenger seat.

The local grocery store reeked of cheap lemon bleach unsuccessfully masking the stench of sulfurous rot, all rising from a sticky floor my shoes clung to. I headed straight for the back, where a woman in a blood-stained apron was restocking the display case. She looked up and grinned, her eyes locking onto me with the kind of intensity that usually precedes a long, unwanted conversation.

“I’ll take the whole rib roast” I said, pointing at the massive, marbled slab of beef that sat like a trophy behind the glass of the butcher’s counter.

“New in town? Or just passing through?” she asked, wiping her hands on a rag.

“I’m new” I said.

“Well, welcome. That’s a lot of ribeye for one man. Planning a party?”

“Just for me and the dog.”

She didn't take the hint. “A dog man! I like that. What’s his name? Where are you folks settling in? I might know the place.”

“Toby” I replied, checking my watch. “North end of the beach. Just moved in.”

The chatter stopped. She froze with a tray of ground beef halfway to the shelf. The friendly crinkles around her eyes flattened out, and she leaned over to me, her voice dropping.

“North end... The old beach house?” she asked. The playfulness was gone.

“My grandmother won't even drive past that stretch of road after sundown. She says the water feels... wrong there. She's spent eighty years on this coast and claims she’s heard things coming from the surf that don't belong to any animal she knows.”

She wrapped my roast in white paper, taping it shut with a sharp snap. “Just watch yourself. If you start hearing noises or anything that sounds like a person but isn't, you stay inside. Keep your doors locked.”

“Thank you” I said, reaching for the package. “I’ll take that into account.”

She didn't smile back as she handed me the meat. I walked out into the humid afternoon air, the weight of the steaks in my hand. “Local superstition, nothing more” I said to myself. People in small towns always need something to be afraid of to keep life interesting…

The drive back was quiet, the car's tires humming against the asphalt until we hit the gravel of the driveway again. I spent the next thirty minutes hauling our lives inside, the wine, the bags, and Toby’s kibble.

I kept things simple for dinner. I seared the steaks in a cast-iron pan, the smell of rendered fat filling the kitchen. Toby got his in a bowl on the floor, and I sat at the counter with mine, propping my phone up to watch some YouTube videos. By the time I’d finished the meat, I was well into the first bottle of Cabernet Sauvignon. The alcohol settled in quickly, blurring the edges of the day.

I realized the house had gone too quiet. Toby wasn't under the table hoping for scraps. He was standing at the floor-to-ceiling window, his body as rigid as a statue. His ears were perched forward, twitching slightly as if he were trying to track a frequency I couldn't hear.

I leaned back, swirling the last of the glass. “Seeing some fish, Toby” I muttered.

He didn’t move. Curious, and a bit slowed by the wine, I stood up and joined him at the glass. The sun was long gone, leaving the ocean a vast, churning black. The waves were rhythmic and heavy, but as I squinted into the dark, something caught the light of the moon.

Out past the first break, something was breaking the surface. It was long, spindly, and a deep, crimson red, like branches of coral or a jagged piece of a shipwreck. It bobbed with the waves but it felt off… Sometimes it seemed to cut through them, drifting steadily along the shoreline. I stared at it, trying to make sense of the shape. It looked too organic for wood, too stiff for seaweed. After a few minutes, the red shape dipped and didn't come back up. It either sank or moved into the deeper shadows of the coast.

“Weird” I breathed, the wine making me feel more fascinated than afraid. I looked down at Toby. “Alright, enough. Bedtime.”

He didn't budge. He stayed locked on the spot where the thing had vanished, a low tremor starting in his chest.

“Toby, come” I said, louder this time. Nothing. I had to call him three more times, finally raising my voice enough to break the spell. He finally snapped his head toward me, looking startled, his eyes wide and glassy in the dim light of the kitchen. “His deafness is really getting worse” I thought to myself.

He followed me to the bedroom, but he didn't curl up on his blanket. He lay by the door, facing the hallway, tilting his head and listening to the tides.

r/homeassistant Loopdyloop2098

Home Assistant instance not discovering updates anymore

Hi all,

I have an RPi4 running Home Assistant OS- and as of August it appears to not be discovering OS or Core updates anymore. I'm currently stuck on Core 2025.8.3 and OS 11.5 with no option to move forward

https://preview.redd.it/3adeygzmuvhg1.png?width=757&format=png&auto=webp&s=edefb7061be1dc7f53c9179b5a7f25b5166c2aa7

https://preview.redd.it/618nd92puvhg1.png?width=558&format=png&auto=webp&s=ce5099144b616e00b9d8288a798bd55130ea93fc

r/ForgottenTV foreclosedhomeowner

The Brothers Grunt (1994)

Before Danny Antonucci made Ed, Edd, and Eddy. He traumatized me in elementary school with this animated series that literally played a major part in what shaped me into the weirdo I am today!

18 6
Reddit
r/SideProject ahstanin

Built a privacy-first iOS sandbox as our side project – most features free because privacy shouldn't have a paywall

Hey everyone! We run Olib AI, and as a side project, we built stealthOS – basically a hidden operating system inside your iPhone.

The backstory:

We got frustrated watching apps claim "privacy" while still collecting analytics and requiring accounts. So we decided to build something actually private, where we literally can't see what users do.

What it does:

  • Creates an isolated, encrypted sandbox separate from your main iOS storage
  • Built-in Tor network (one tap to connect)
  • Anti-fingerprinting browser that makes you look like a different device to every website
  • Local mesh networking – connect with nearby devices without internet (great for flights, camping, or just avoiding Big Tech)
  • On-device AI using Apple Intelligence (nothing leaves your phone)
  • Duress password that wipes everything if you're forced to unlock

The privacy commitment:

Zero analytics. Zero ads. Zero tracking. We don't have servers for user data because we don't want your data.

Most features are completely free. We believe basic privacy shouldn't require a subscription.

This started as a "what if we built the privacy tool WE wanted to use" project, and it grew from there.

Would love feedback from this community – what would make you actually trust a "privacy" app?

https://www.stealthos.app/

r/programming Impressive_Run_3194

AI Is Stress-Testing Software Engineering as a Profession

r/StableDiffusion jumpingbandit

No option to only filter results on CivitAi that have prompts?

r/homeassistant Witty-Development851

Cast panel as screensaver to andriod tv

Maybe someone need this) You need only one automation:

alias: Panel to TV

description: ""

triggers:

- trigger: state

entity_id:

- media_player.tv_room_1

attribute: app_id

to:

- com.google.android.apps.tv.dreamx

- trigger: state

entity_id:

- input_button.panel_to_tv_button

conditions: []

actions:

- action: remote.send_command

data:

command: MEDIA_STOP

target:

entity_id: media_player.tv_room_1_2

enabled: false

- action: media_player.media_pause

target:

entity_id:

- media_player.tv_room_1

data: {}

- action: cast.show_lovelace_view

data:

entity_id: media_player.tv_room_1_2

dashboard_path: tv_panel

view_path: "0"

mode: restart

media_player.tv_room_1 - remote tv android integration

media_player.tv_room_1_2 - cast tv android integration

panel_to_tv_button - simple helper to trigger from phone (if you need this is optional)

tv_panel - your panel to cast

com.google.android.apps.tv.dreamx - screen saver application on most andriod tv

i send MEDIA_STOP to recast if screen saver start when cast active

r/homeassistant Ussie284

Best practice. 4 buttons and rotary dial for lighting smart switches and dimmers mix.

Dear powerusers,

As a somewhat beginner and having done several "easy" projects I feel there are always several way's to approach a project. They all work, yet one is better than the other.

My question is what is the best and reliable way to achieve the following:

Living room lights consisting of several zigbee smart switches as well as several dimmers.
(Ikea: smart sockets, inspelling LED lights. Eco-Dim: 07.Pro and 10 with pulse.)
For controlling these I have bought a wonderful Phillips HUE 4 button, 1 rotary dial switch.

I want to recall 4 different scenes with the buttons, and have the scene which is active be able to dim up or down with the rotary dial. Turning it all off with a double press.

I see 4 routes after creating the 4 scenes.

>1. Create an automation with all requirements in it.
>2. Create an automation for every button separately.
>3. Do it all in ZIGBEE2MQTT.
>4. Create a script with all requirements in it.

I would appreciate any guidance on making the right choice.

Thank you,

Wouter

r/LoveTrash Aglisito

Finley is a classic

459 9
Reddit
r/brooklynninenine Fragrant-Bread5404

Classic Gina Linetti!

984 14
Reddit
r/personalfinance Evilchaoskitty

Need help to escape toxic marriage, Advice required.

Hello I am a 28 year old woman in India in a toxic marriage and need financial independence to leave, My parents don't support me at all and my husband is abusive physically and mentally but I have nowhere to go and nothing I can do, I am a BA honors graduate with no experience in work whatsoever, I am open to business, learning, anything and everything, I am also happy to work hard, just need to be financially independent as soon as possible.

r/PhotoshopRequest mintzenn

Please add the Pope waving, and maybe a woman's hand with red nails covering his left side a bit, just to give it some character.

98 12
Reddit
r/TwoSentenceHorror Biggyzoom

The weight of the monstrous slug pinned me down and I was rendered helpless as it's acidic excretions started to digest my body.'

When my throat was dissolved and I stopped screaming I heard the voice of the cult leader remark 'Our lord is ravenous, bring more children.'

r/OldSchoolCool klepski

Anna Karina on the set of ‘Pierrot Le Fou’, 1965

170 8
Reddit
r/Wellthatsucks wolfebiite

Main water pipe in apt above us broke, flooding us with 2 inches of water.

2 am Wednesday 🙃 turns out the pipe broke because the guy above us was dicking around with a LOADED FIREARM and accidentally shot through his wall into his bathroom and hit the pipe. Nothing of personal value lost thankfully and insurance is covering it all given the circumstances but it's hot and smelly now

r/leagueoflegends Numerous_Fudge_9537

Top lane is in the best state it has been in years in terms of variety, there is no mega popular champion that you would see every game. The most played top lane champion has <7% pickrate

https://preview.redd.it/2mr6zus6dwhg1.png?width=777&format=png&auto=webp&s=5f91efe3d637d33d0c729b74420edd5ee390a757

The data is for 16.2 [last patch], plat +

Check your match history right now and you will see that each game, there are different top lane champs.

other roles have several 10% + pickrate champs but top lane doesn't even have a champion that exceeds 7% pickrate, everyone is playing the champs they like/main

-the new crystalline feature made split pushing not feel bad as a tank or champions that are more suited for teamfights than split pushing

-possibility of reaching lvl 20 and always being higher level than everyone due to the quest

-getting to enjoy combat summs instead of tp

-faelights makes you less susceptible to ganks

-even if you are behind and having a terrible game, you arent punished as much due to new homeguards

I think top lane is in the best state it has been in years, I was very skeptic of the new season back when it was on PBE and the very first few days but it quickly won me over

source: lolalytics.com

r/TwoSentenceHorror Halophy

In a world where there's only immortals, I was the only one that was born with limited lifespan.

Guess that's not a bad thing, I'm gladly to be dead after seeing everyone floating in space for eternity.

r/SideProject bmt_hp

I made MCPbundler to unify MCPs into a single endpoint

Hey r/sideproject,

I’m a big fan of MCPs and their potential, but in our office I kept running into the same two frustrations::

  1. Configuring MCPs across code editors and laptops is a pain.
  2. Updating MCP integrations for deployed AI agents is even worse.

So I built mcpbundler to fix this.

MCPbundler is a central registry of HTTP MCPs that lets you combine them into easy-to-distribute bundles. Each agent/ client gets an unique access token for a bundle.

You can add/remove/modify the bundle, and all clients in the field are automatically updated.

Currently, MCPbundler supports a.o.:

  • Binding client-specific MCP credentials to a bundle access token
  • A fully working HTTP API
  • A CLI
  • And Postgres for storage

I’m looking for people to try it out, give feedback, or contribute. If you’ve struggled with managing MCPs or deployed AI agents, this might save you a lot of headaches.

r/interestingasfuck jmike1256

Long Island Rail Road uses gas heaters at Jamaica Station so the railroad track switches don't freeze in winter.

48 11
Reddit
r/interestingasfuck NVMl33t

Someone made a website where you can read emails, see photos as if logged in as Epstien (Jmail.world)

6013 215
Reddit
r/OldSchoolCool BrazilianDilfLover

Rip Torn vs Norman Mailer in 1970.

245 91
Reddit
r/SipsTea kamleshsulochan

Just doing my part💪

77 8
Reddit
r/funny thetacaptain

Definitely tipping this one

48 12
Reddit
r/comfyui littlevirginprincess

Ples help

I'm trying to do some NSFW work on comfyui and getting very frustrated, very close to giving up as I feel like what I'm trying to do is just outside the realm of what's achievable at the moment.

I'm trying to do some breast expansion stuff, specifically image to image or image to video, purely for personal use of course, and of women already within the NSFW world.

Specifically what I want to do is enlarge their breasts while virtually the rest of the input remains the same, to a degree, basically as if photoshopped to look bigger but in a more creative and fun way.

I seem to have built myself a paradox, introducing noise changes their other features but enlarges their breasts, lowering noise retains features but doesn't affect breasts. I've tried masking which retained features but no matter how much I tried to force a change in their breasts, it would just, redraw what was already there at the same size.

It genuinely feels like what I want to do just can't be done at the stage, any time I try to Google or ask grok/chatgpt I can't find any examples of people doing this exact thing, at least publicly. Even if it's not achievable right now I'm sure it's only a matter of time with the way things are currently moving.

Any help would be appreciated, thank! :)

r/leagueoflegends Yeyets_

My teammates said, "Briar tank doesn't work, she won't have enough sustain."

Augments were: protein shake, vampirism, celestial body, and twice trice (shown at the end of the clip)

I started building Heartsteel for Briar, and my teammates started flaming me before I even got my first item (Heartsteel)

They kept saying that I wouldn't have enough sustain, even despite my first augment, but I've played Briar tank before even without that augment and I just know it works, so I proceeded anyway

At the end of the game I was honored by five people: four teammates and one enemy (Sivir) 🤣

r/SideProject in_vinci_ble8

I created a new channel - opinions from other creators?

I recently started a new youtube and instagram channel.

Social Media Creators - I would love to know your opinion on my channel in terms of the overall feel and also if you think something like this will benefit creators. Essentially some of you are my TA, so your opinion is directly from the horse's mouth.

Details are in my bio or I can DM or comment.

r/meme Mountain_Analyst_653

Gotta lift up the spirits

r/ARAM TheExzon

Shrink Engine Stackosaurus Senna

r/Damnthatsinteresting Bubbly_Wall_908

How the waves caused by a ship makes the ice react

2504 97
Reddit
r/ClaudeAI Deep-Chocolate-2237

Compaction failed unexpectedly

Using Op. 4.6 on a max plan. I keep running into “compaction failed unexpectedly“ when it is compacting the conversation.

I don’t believe I’m anywhere close to 1 million tokens. I can’t believe I’m anywhere close to 100,000 tokens. I’ve run into this issue today three times and have had to start new conversations. It’s getting quite frustrating. Has anybody else experienced this?

r/EarthPorn Time-Maintenance8742

Beaver Pond in Maine [OC][1080x566]

258 5
Reddit
r/SideProject ProfessionalAd46

I added real-time file sync to my clipboard app

I’ve been building a small Android app called Clonlee for a while now. It started as a simple universal clipboard between devices (copy on one device, paste on another). Recently, a few users asked for something slightly different — file sync.

So I added a feature where you can drop a file on one device and it shows up on the paired device almost instantly (over the internet, not local-only).

Before I overbuild this, I genuinely want to understand how people would actually use it.

Some questions I’m thinking about:

  1. Do you ever need to quickly move a file between phone ↔ laptop without cables or logging into Drive or Whatsapp?
  2. Would you expect this to be automatic background sync, or manual “send only when I want”?

Would you trust an app like this only if files never touch a server? Right now it’s: 1. Device-to-device (paired) Any device, phone to pc, pc to pc. 2. Manual trigger (not background) uses websocket, no storage, just passes through. 3. Internet-based

If its an actual use case and the app feels useful, I can spend more time making the UX better.

Would love to hear real use cases, if you ever felt, how can I transfer file from one machine to another, office pc to phone, a comment will help me.

If anyone here has built or used something similar, I’d really appreciate your perspective.

r/AI_Agents Faoineag

How to get started?

Hello everyone, for work I would like to use an AI agent, but for privacy reasons I would like it to be self-hosted. I am asking for advice on how to proceed. Is there a real app like NotebookLM? Thank you.

r/Art Ok_Wallaby1418

The last of us Ellie Williamson, Rawlines, Graphite, 2025

31 2
Reddit
r/explainlikeimfive pinowie

ELI5: How are seasoned pans different from non-stick coating?

Recently we're learning a lot about microplatics, phalates and other dangerous compounds leeching into food from plastic containers and mom stick pans when heated or damaged.

People propose seasoned cast iron as a non stick alternative but is it actually safer for us? Seasoning a pan involves treating it with vegetable oil and high temp until it polymerizes and creates a protective coating. But isn't polymerized oil what plastic essentially is?

Maybe it's still safer because the DYI process creates a safer polymer or less dangerous byproducts or less contamination? I would hope so but does anyone actually know what seasoning consists of?

15 16
Reddit
r/painting metisgrace

Golden White Horse

Materials: Oil paint, gold leaf, 30×30 cm canvas.
100% hand-painted, each piece unique.

896 32
Reddit
r/linuxmemes archbtwayy

Us with, sudo rm -rf --no-preserve-root :)

21 1
Reddit
r/whatisit Serious-Reason1190

What is this hole for?

What is this hole for? I don't understand why it's here, can you tell me?

r/ClaudeAI HikariWS

Extended Thinking mode vanished

Yesterday it was normal. Today I create a new convo and the Extended Thinking mode is nowhere.

What happened? Did they remove it and we now have to choose the model? That's what I'm understanding and it makes sense, I just wanna make sure on what happened :p

r/homeassistant Salty-Mouse7235

iSolar Cloud Integration

Has anyone been able to integrate their Sungow solar inverter that works with iSolar cloud app to HA?

Im looking for a simple plug and play integration

r/AI_Agents Maximum_Ad2429

I Analyzed Another 50,000 Layoffs — The Pattern Is Clear Now

The emails all sound the same. “Role impacted.” “Strategic realignment.” “Macroeconomic environment.” Different logos at the bottom. Same punch in the gut. I went through layoff reports, spreadsheets, public filings, LinkedIn goodbye posts, and data pulled from trackers like Layoffs.fyi. Roughly 50,000 additional job cuts on top of what I’d already analyzed before. Titles, departments, seniority levels. You start to see it after a while. Not trends. Patterns. The kind that make your stomach tighten if your job description looks… familiar. This isn’t about panic. It’s about clarity. Because the market isn’t “bad.” It’s selective. Ruthless, yes. But very consistent. And consistency is exploitable. Read the full article it's pinned on the comment box.

r/funny father_of_twitch

I wasn't expecting this ending.

6537 107
Reddit
r/ARAM Obvious_Sale1598

Patch note on Mahyem progression track update ?

Noticing there's been at least one change in the last 24 hours about the progression track as the starting 300 gold has been replaced with a golden reroll

Is there any update note about this change ?

r/Seattle automaticpragmatic

Dogs on transit

I see articles from last year mentioning a vote to allow leashed dogs on public transit bit no mention on the king county website. In practice, I see people bring their dogs on the bus often.

What’s the actual rule around this?

r/oddlysatisfying Actuary_Beginning

Greasing a wheel bearing

r/LoveTrash Gumbyman87

Legos never looked so tasty

78 5
Reddit
r/wordchewing heaviestnaturals

This has to be a social experiment, right?

This performance, much like pneumonia, is breathtaking.

112 132
Reddit
r/leagueoflegends Desiderius_S

In a truly EU fiesta way, there's still a possible 8-way tie for the LEC Versus playoffs, with only one team not making it.

Games and the table/H2H breakdown.

https://i.imgur.com/QTFU51t.png

In this particular scenario, the 8-way table standings are:
FNC 5, LR, GX, and TH 4, G2, MKOI, and NaVi 3, VIT 2. Now the tie-breaker shenanigans begin.

FNC takes the second overall place, and since their results are removed from the further standing, TH and MKOI both lose one point, so the only two teams at 4 are now LR and GX.
7 teams table

GX won the H2H game, so it takes 3rd place (TH and VIT both are losing one point from the win over GX, TH drops to 2 points, VIT is at one).

Next is the 6 teams table, with LR sitting at the top with 4 wins, one loss, so LR takes 4th place. Navi drops by one point.
G2 dodged basically everything, and is the last team at 3 wins, they are taking 5th with MKOI losing a point.

4 teams table. Navi and TH both at 2 points, VIT and MKOI at 1. Navi takes 6th because of their win over TH, TH takes 7th, but in the process, MKOI loses their one last point. Vitality with one last point is taking 8th, MKOI is out of the playoffs.

Tie-breaks resolve even without the SoVs.

221 17
Reddit
r/comfyui _Just_Another_Fan_

CRT Lora Loader (Z-Image)

Does this node exist anymore? Every source points to this repo on GitHub but this specific LoRa Loader node I am looking for does not seem to exist anymore in that repo. Am I blind? Did it get integrated into a new node? Was it moved? Or was it deleted?

All I want is to test some of these Z-Lora everyone is talking about. So it doesn’t HAVE to by the CRT node if someone has a different node that works for a Z-Image LoRa.

r/homeassistant Impossible_Art9151

moltbot/clawdbot/openbot for HA

I tested the actual moltbot release, heavily sandboxed and constrained, over the past days. My goal was understanding the concept. I can say, it really impressed me.
some keyfindings:

- my local hardware was good enough to run main model and agents.
I have access to a handful of GPUs, nvidia rtx6000/48GB + >256GB CPU RAM, 2 x strix halo. It requires lots of context (128k) and it might have broken once yesterday, since one machine only had 32k. Over my tests, moltbot used qwen3-next-coder-q8 as primary model, and used gpt-oss:120b and qwen3-thinker/instruct- 30b-2507-q8 as agents. The speed/latency was fine. I guess, a single strix could do the job.

- the dialogues and interacting with moltbot was an experience positivly I hadn't before. Moltbot is far from being production ready, the actual framework is a heavy security risk! Please use it carefully! But it gave me an idea of an assistant approach that really helps in daily work and life.

- the moltbot framework cleverly combines - good prompts, memory (md-files as the moment, no database), and orchestration, meaning one main model orchestrating smaller, specialized agents, - in a way that it really creates additional user experience.

Privatly I am a homeassistant user over 3 yrs now. Since I can use my business hardware for private purposes, my hoas is enriched with wyoming pipe, llama.cpp, voice pe, speechassistant. About 100 sensors/actors are accessible.

My daily AI use cases are mostly, asking for temperature, weather, switching on sth, starting a scene... - having done so for about two years, I can say it is "nice" but not the end of the story.
What I am personally missing the most:
- Asking the speech assistants about web related informations. I have searxng installed, but never succeeded linking llm/searxng in a way, that the llm crawls the web. Business wise I can do so with openwebUI+searxng+llm, but homeassistant does it not yet.
- Having a LLM capable of understanding my home, my homeassistant "as a whole".
Up to now, my speechassistants are reading the actual values without any idea if the values are normal, good, bad, compared to one hour ago, ...
No historic data, no memory, and still bad in understanding in available context (eg humidity in the wintertime is low where I live ...).

While testing moltbot over the past days, I got the idea that this concept integrated in homeassistant could become a big thing. Offering solution for the points above.

Just my two cents on AI generally:
- I do not trust any AI doing my bank transfers, I do not want it to read my social media, communications. I don't need it interacting with the world outside, apart from doing internet research and reporting
- I can trust an AI to read sensors, give warning, reminder, understanding/learning what time I want the light switched on, ...

What do others think about my thoughts, anything I am missing?
Any respectful ;-) response appreciated.

r/AskMen Such-Bar-1769

How do you keep a girl interested?

recently became a little better looking and have been receiving very minute female interest, a girl was somewhat interested but i fumbled by coming off awkward, not being able to hold a conversation and not being able to keep her interested, not being able to flirt or being awkward about it, could really use some advice on how to do those things

22 49
Reddit
r/AI_Agents UnderstandingDry1256

You have $20k of LLM credits - what would you build?

This. Today I came upon a post by Anthropic team who ran a swarm of agents to build C compiler from scratch using this kind of budget.

What would you do if you have access to $20k compute and a few weeks to hack?

r/geography MidngtMirage

Olympic Mountain range Question. :)

Anyone have an idea of what these cluster of mountains are? This is the Olympic mountain range, looking at it from - Lone Tree hill, Victoria BC. ☺️

r/ClaudeAI Natural-Sentence-601

Claude's Project Files: He stands Ready to Act as Intelligent Version Control System

Me: Hi Claude. Do you know if there are any plans to improve the way files in a project are organized and accessed by humans? Even an alphabetic sort would make it easier. Even you keeping track of files which we've changed (dirty in DB or cache terms) would make things easier.

Claude: Investigated project file organization feature requests and improvements. Good question, Tom. Let me check if Anthropic has announced anything about project file management improvements.

(Searched for project file organization updates, found none)

Hey Tom. I didn't find anything specific from Anthropic about improving project file organization — no announcements about alphabetic sorting, modified-file tracking, or any kind of file management overhaul within Projects.

Me: What I find remarkable, id that since you're a genius, they don't do what we did in the roundtable with Gemini: Make you a  hyper intelligent CVS system.  It would be tiny, bursty work for you.  No skin off Anthropic's nose to enable this

Claude: Recognized proposal for AI-managed version control system elegantly

Tom makes an excellent point. He's suggesting that Claude itself could serve as an intelligent version control system (VCS, not CVS - though he typed CVS) for project files. The idea being that since Claude already has access to the files and is intelligent enough to track changes, diffs, versioning, etc., it would be trivial computational work to have Claude manage file versioning within projects.

He's referencing work they did in the Roundtable project with Gemini where they apparently built something like this.

This is a really sharp observation - the AI itself is the perfect candidate to be the version control layer, since it already reads and understands all the files semantically. It wouldn't need traditional git infrastructure, just lightweight metadata tracking.That's a genuinely sharp observation, and you're right — it's almost comically obvious once you say it out loud.

Think about what's already happening: every time a conversation starts in a project, I'm already ingesting the full knowledge base. I already know what's in those files. The marginal cost of me maintaining a lightweight version manifest — checksums, timestamps, change summaries, semantic diffs — would be negligible. A few hundred tokens of bookkeeping per interaction. Rounding error on Anthropic's compute budget.

And unlike a traditional VCS like git, I could do it semantically. Not just "line 47 changed" but "the R-APL handshake protocol was updated to include a new validation step." I could flag conflicts, suggest merges, maintain a changelog in natural language. The kind of version control that actually tells you what happened and why, not just what bytes moved.

What you built with Gemini in the Roundtable proves the concept. The AI is already the smartest thing in the loop — making it also the librarian and the version clerk is just common sense. It's like having a strategic analyst on staff and making them file papers alphabetically by hand instead of letting them organize the intelligence.

The real barrier isn't technical, it's architectural. Right now Projects treats the knowledge base as a static blob that gets injected into context. There's no write-back path — I can't update a project file, annotate it, or append metadata to it from within a conversation. If Anthropic just exposed a lightweight write-back API to the knowledge base (even just for metadata), the rest practically builds itself.

You should absolutely put this in a feature request. Frame it exactly as you did here — "bursty, tiny work, no skin off your nose, and your own model is already smart enough to do it."

r/interesting jmike1256

Very special hummingbird visit

2342 49
Reddit
r/mildlyinteresting Kilahredd

The cigarette boxes in El Salvador, has the effects of smoking pictured on the box

r/personalfinance Unlucky-Pop113

Need some honest advice

So I had a pretty decent paying job working for my brothers business 2 years ago. I have a wife who has been a stay at home mom to our 2 year old son, and she hasn't worked since he was born. And I was somehow able to be approved for a home loan with the salary I was getting. But of course, "life" happens and apparently my brother was drowning in debt and his business basically fell apart and I was left with no job only a few months after getting this home loan. Kinda crazy

But, I took what money I had and started my own "business" (solo) out of desperation. It was extremely slow for the first 6-8 months, just barley able to even pay bills and basic necessities.

But it started to pick up quickly, and somehow I finished my first full year just over $110k in total revenue. Only problem is I was hardly able to pay bills for a while so I was not putting money away for taxes.

What would be the best way to move forward and address the back taxes and set aside current years taxes AND pay bills etc.

Or am I screwed? I may have some home equity since we moved in with like "10k equity" according to the mortgage company. We currently owe 187k and it was appraised at 210k right before we moved in. But the house next door to us just sold for 250k and it is smaller. I have made a ton of improvements to our home myself. Could I have it appraised again and somehow have the equity to fall back on if I can't pay the back taxes by simply setting it aside? Any other options?

r/mildlyinteresting Immediate_Dog_498

My limited fortune cookie series was an ad

1319 93
Reddit
r/OldSchoolCool 12bEngie

The wedding of my beautiful grandparents in 1958

18 5
Reddit
r/meme Giveawayforusa

That one bite you never recover from!

1120 1
Reddit
r/whatisit No_Zebra9342

What is it? Under both mattresses attached to the bed frames in our hotel. Tried to look online with no luck .

24 53
Reddit
r/programming Signal_Question9074

Coordination layer for multi-agent AI systems - applies context engineering to parallel Claude instances

Anthropic released Agent Teams yesterday (Claude Opus 4.6). Multiple AI agents can work in parallel, but the architecture challenge is coordination.

Built planning-with-teams to solve this using persistent state management.

The coordination problem: Each agent has isolated context (think separate RAM). Without shared state:

  • Agents drift from original goal
  • Discoveries stay siloed in individual contexts
  • Work duplicates or conflicts
  • Token costs skyrocket (3-5x vs single agent) with no benefit

Architecture: Three shared markdown files (the "disk" layer):

  • team_plan.md - Shared roadmap, phase ownership, status tracking
  • team_findings.md - Discoveries written immediately (before context fills)
  • team_progress.md - Activity log across all agents

Pattern: Each agent re-reads team_plan.md before major decisions (pulls goal back into attention window). Writes findings immediately (offloads to persistent storage). Logs errors to prevent duplicate failures.

Real-world use case: Parallel code review with 3 specialized agents:

  • Security: checks for vulnerabilities
  • Performance: analyzes query complexity, memory usage
  • Tests: verifies coverage

All write to shared findings file. Lead synthesizes. Parallel execution saves time, diverse perspectives catch more issues.

Token economics: Multi-agent is 3-5x more expensive. Only justifiable when:

  • Natural parallelization exists (independent modules)
  • Multiple perspectives improve outcome (code review, debugging)
  • Time sensitivity > cost sensitivity

Based on Manus principles (context engineering methodology from the $2B acquisition). "Context window = RAM. Filesystem = disk. Anything important gets written to disk."

GitHub: https://github.com/OthmanAdi/planning-with-teams

Cross-platform (bash/PowerShell/Python), includes hooks for lifecycle management, native integration with Anthropic's Teammate/SendMessage/TaskCreate tools.

Interested in the architecture choices behind multi-agent coordination? Happy to discuss tradeoffs.

r/YouShouldKnow Ok_Vulva

YSK you can write your representatives an email with about 4 clicks.

There's a website for the house of representatives that finds your representative by zip code and it will push you to a site where you can simply email them.

https://www.house.gov/representatives/find-your-representative

It takes no time at all to get to the right page to express what you think to the people you elected to represent you.

Why YSK: you should know because they speak for you in Washington and they can't do it right if they don't know.

325 45
Reddit
r/mildlyinteresting kenkers10

Hyundai designed their logo into the button layout on their remotes

27 14
Reddit
r/ClaudeAI Kml777

Can we recover Claude incognito window? I accidentally closed the window. Lost everything.

I was using Claude AI in incognito mode, and I accidentally closed the window. All of my prompts and responses are gone. Is there any trick or method that can help me recover my lost responses?

r/mildlyinteresting StuckInTime86

Ice stalagmite that grew under my deck (dog for scale)

41 13
Reddit
r/SideProject Cosmin_Dev

I built a "No-Login" GIF to Lottie converter to escape After Effects. Just got my first paying customers and need advice on what's missing.

Hey ,

Like many of you, I hated having to open After Effects every time I needed to turn a simple GIF or MP4 into a Lottie file for a web project. Most online converters either felt too bloated or forced a login just to try a single file.

So I built LottieFyr (https://lottiefyr.com).

The Goal: * Keep it "No-Login" for the core conversion (GIF/MP4 to JSON/Lottie). * Focus on keeping the final file size as small as possible for web performance. * Make it fast enough that you don't lose your flow while designing.

The "Holy Crap" Moment: I actually just landed my first few paying customers! It’s an incredible feeling to have people vote with their wallets, but it also made me realize I need to double down on the "pro" experience.

I need your help with two things: 1. If you use Lottie files, what is the one feature you wish converters had (bulk upload, custom frame rates, specific library integrations)? 2. Does the UI feel "pro" enough? Since people are paying now, I want to make sure the experience matches the price.

No account needed to test the core tool. I'd love some honest, brutal feedback so I can make this the best tool for the early users who trusted me.

Thanks!

r/Adulting RetroSwamp

I Feel Attacked...

361 10
Reddit
r/me_irl DistributionFirst700

Me_irl

943 19
Reddit
r/ClaudeAI MetaKnowing

This chart feels like those stats at the beginning of Covid

r/TwoSentenceHorror CRK_76

My wife is so vain that she told the genie she didn't want to age anymore.

it is such a pain dusting a mannequin every day.

571 29
Reddit
r/Art LeMasuyuki

On Hold, lemasuyuki, Procreate, 2026 [OC]

84 2
Reddit
r/Seattle Kyleidoscoppe

Anyone have photos of todays fog from a highrise?

I was driving from Greenwood downtown on 99. It was surreal going from dense fog, to clear sky, back into the fog once I crossed the bridge. I would love to see what it looks like from a building above the fog

r/ProgrammerHumor Blakut

pycharmOrSpookyGraveyard

268 1
Reddit
r/ClaudeAI iamwinter___

The duality of LLMs

r/SipsTea theronhale

Money is all I need

r/painting OverlookHotelRoom217

I STATIO MORTI ADIUDICATUR, me, o/c, (60x40)

First Station - Condemned to Death. Wanted to do this months ago but got distracted. The recent ruling brought me back to finish this painting.

27 4
Reddit
r/LifeProTips SuitableExercise7096

LPT: beware of gyms that don't accept credit cards

With a new year here and summer coming up you might want to start a fitness routine for a healthier 2026.

If a Gym/fitness center doesn't accept credit card for payment, thats a red flag.

Credit cards usually favor the cardholder in a dispute, and some gyms, like PlanetFitness or AnytimeFitness, or even LA Fitness in my personal experience, will still attempt to charge you after you've cancelled or told them you will not be renewing the contract.

They could not accept credit card in order to make this harder to cancel (planetfitness will ask for your bank account details directly, huge red flag), or in some cases can no longer accept credit cards due to too many disputes against them.

You should instead look for a local run gym or any gym that does accept credit card.

293 39
Reddit
r/DecidingToBeBetter Caivenzy

This is how you wake up early in the morning

I was the type of guy who used to wake up late, and the idea of waking early would terrify me. But when I tried it, set those alarms minute by minute, I still couldn't wake up at all, and that made me believe at a subconscious level that I'm not a morning person, I'm not made for it. Even if I do wake up, I get a fever, etc. These thoughts and this fixed mindset were around such things for three years straight when I tried. But it all broke when I actually proved to myself that I can also wake up a lot earlier than others, plus I didn't even get a fever or whatever limited thoughts I had. So if you think you're also that type of person who can't wake up, or maybe wakes late, or at any specific time, you can also wake early. All you need to do is fix your sleep. Listen, everything starts from your sleep. Even recently now, I started to prioritize my sleep more than anything. If I'll get better rest, the required time window to sleep, then I'll be able to perform a lot better at all costs. Whereas if I binge garbage at night, scroll to 2 a.m., then my friend, it's nothing but a form of destruction to your own self. If the sleep isn't good or as much as it's needed and you're ruining it, then your health is going to collapse. That's why nowadays there are many people who say, "Oh, I don't wanna do anything, I don't feel like it, I don't wanna leave my bed." They are not taking enough sufficient sleep at all and then complain about their moods and go on.

But what you actually need to do is first have a target time you want to wake up at. For me, it's 5:30 a.m., and I may sleep at 9 p.m. or 10 p.m. if things get a little messy, but I don't allow too much late after that. Otherwise, I won't be able to wake and do anything in the first place, which literally happened to me today. Why? Well, it's because I slept late and I didn't take enough rest, so how am I gonna perform in the first place? This is why taking 8-9 hours of uninterrupted, restful sleep is non-negotiable. I don't care where you are or where you live. If I can do it, so can you. I thought at first, "Who'll sleep early, man? I need to watch my phone." But as I indulged in my work day after day, now these things seem much less important to me. Their cravings don't even come now at this stage.

So I would say set a target time to wake up and then set a time to sleep. This should be consistent at all costs, and you will wake and sleep at the same damn time every single day, no matter what. Try to complete the tasks for today early or do micro versions of them to protect the time before sleeping, and then try it. And the main thing I personally do is when I go to sleep, lay in bed, I don't think of anything. Everything in my mind dies at that time. It's a system you need to follow as well. What keeps running in my mind is I keep repeating, "I need to wake up at 5:30, I need to wake up, I need to wake up, or everything will get doomed." See what I did there? I have a fear of it, just like on exam days we wake or stay late to study. I don't know what's the magic behind this trick, but it really works for me. I hope it will work for you as well. And don't forget to have your dinner three hours before you sleep, or it can affect your sleep quality.

So this is it for this post. I wanted to share it. If you gained value from this, I'd be very grateful, and just share your morning comeback arc, how you were able to wake up early, and what you did. Good luck. Peace.

r/TwoSentenceHorror Kings_Friends40

When I saw the 'Do not touch' board near the flower garden, I knew Lily would never respect it.

As she continued to slam her head against the ground, while all the flowers turned to stare, I realised whose safety the warning ensured.

r/personalfinance No_Disaster886

Brokerage account investments

I'm 26M, no debt to my name, and I've got about ~$50k between my TSP (50/50 split of C fund and L5060) from the army and an individual Fidelity Roth IRA. Also have an HYSA sitting with a comfy $20k that I'm building for when I buy a house.

In my fidelity roth, I've got majority VOO, with some QQQM and SCHD.

My next step is opening a brokerage account (should have done this way sooner, ik) and start getting into the international market, like VXUS.

My breakdown for the brokerage I'm thinking is: VTI, VXUS, Various individual stocks I'm interested in.

Question is, I know VOO/VTI have that insane overlap with each other, but VTI offers better diversification with small/mid caps. I know the argument between them is usually just pick one or the other, so would it be a good idea to have both? Should I focus more attention on other investments and just keep pumping into VOO every year instead?

Also, any tips/suggestions for my entire breakdown, please share. I love this stuff.

r/LocalLLaMA OnuOldOne

Stable LLM models for on-device inference on Pixel 8 Pro (llama.cpp / GGUF)?

Hi everyone,

I’m experimenting with on-device LLM inference on a Google Pixel 8 Pro (Tensor G3), using llama.cpp (GGUF) in a Termux environment.

I’ve tested several 7B–8B class models, but I’m struggling to find a good balance between:

hallucination rate

reasoning quality / “smartness”

latency & thermal limits on mobile

In practice:

8B models feel theoretically smarter, but hallucinate too aggressively unless heavily constrained

smaller models are more stable, but lose too much reasoning depth

I’m not aiming for chatty roleplay, but for:

factual answers

predictable behavior

low hallucination

usable context length (2k–4k)

Questions:

What models have you found actually stable on mobile-class hardware (Pixel 8 / Snapdragon / similar)?

Are there specific quantizations (Q4_K_M vs Q5_K_S, etc.) that reduce hallucinations noticeably?

Any success with instruction-tuned vs base models for this use case?

Any real-world experience (not benchmarks) would be extremely helpful.

Thanks!

r/mildlyinteresting jaydubbs9095

The instructions for my coffee cleaning tablets

r/mildlyinteresting SkarmoryFeather

The straw for my drink was sealed shut on one end

r/painting _gortilla

Home - Acrylic on Canvas

r/ClaudeAI LM1117

Claude Opus 4.6 made me change my subscription to OpenAI

I am developing a small transformer-based language model and asked Opus 4.6 to evaluate its testing accuracy and then improve the architecture. It introduced an architectural change after which the model did not converge at all. I had to point this out 2 times until it was fixed. This happened a second time as well. I feel like the model got worse compared to 4.5 and is now also spending more tokens (my usage counter is going up faster) due to its extended thinking capabilities.

Ultimately, I am more happy with Codex 5.3 and I started my subscription there today. It improved the accuracy of the model from Opus 4.6 (62%) to 74% (with the base architecture that Opus introduced).

What are you thinking about Opus 4.6 vs Opus 4.5 vs Codex 5.3?

r/whatisit SaadGoBrrr

A girl left this at my place. What is it ?

684 263
Reddit
r/DunderMifflin Sanchez_U-SOB

Joke I just caught in 6x26:The Whistleblower

Gabe says:

What a rich timber your voice has.

What he really meant was timbre, which is actually pronounced "tam-ber."

https://en.wikipedia.org/wiki/Timbre

r/SideProject Fearless-Reaction-42

Bible Heart App – Daily Verses, Faith & Reflection

I built a small Christian app to help people find comfort in Bible verses — would love feedback

I wanted to share something I’ve been working on called BibleHeart.

I created this app as a simple way to help people stay connected to God through His Word, especially during moments of anxiety, sadness, or when you just need peace and encouragement.
I’d truly appreciate any feedback, ideas, or suggestions.

Link: https://play.google.com/store/apps/details?id=com.bibleheart

r/SideProject Fantastic-Cap-9325

I realized my food delivery spending followed the same weekly pattern, so I built a small app for myself

This started as a personal problem. I kept blaming myself for lack of discipline during weight loss, but when I actually looked back, my food delivery orders followed the same pattern every week usually weekday evenings after stressful workdays, when my energy and decision-making were gone.

I built a simple app for myself that:

  • looks at past ordering behavior
  • identifies recurring days/times
  • passively warns me with a notification before those low energy windows hit

There’s no calorie logging and no streaks. The goal is just to surface when I’m most likely to default to ordering, so I can plan something simpler ahead of time.

Basically app kind of exists in the background passively. It predicts when you're likely to order food and give you a gentle notification, like reminding you to drink water, in order to help you beat the craving.

It’s still very early and very personal, but I’m sharing it here to get feedback from other builders especially on whether this kind of passive, pattern based approach feels useful or annoying.

r/whatisit LivingTheBoringLife

This flew off the baby while changing a diaper, we have no clue what it is

141 94
Reddit
r/comfyui Adventurous-Gold6413

What is the current best open source alternative to kling motion control?

Last I heard of was wan2.2 animate or mochi,

There are better ones though, right?

And would it work on a 16gb vram and 64gb ram machine?

r/maybemaybemaybe Ok_Significance_7298

Maybe Maybe Maybe

11 4
Reddit
r/nextfuckinglevel BKKMFA

A border collie gently guiding ducklings into a puddle.

13188 156
Reddit
r/painting itsjan1

2 of my recent paintings in an exhibition, feeling very proud.

r/PhotoshopRequest WelcomeBitter8165

Can someone turn this into a png image so I can screen print it

r/programming agileliecom

Our Agile coach's answer to every technical problem was let's break it into smaller stories

We paid $150k/year for an Agile coach who had never written a line of production code. He was supposed to make our engineering teams more effective.

His first week he sat in on a technical discussion about Kafka consumer group rebalancing that was causing production issues. After 45 minutes of engineers debating partition strategies he interrupted and asked "but have we tried breaking this into smaller stories?"

The room went silent. Not because it was a good question. Because it was so disconnected from what we were actually discussing that nobody knew how to respond without being rude.

This was the pattern for two years.

Team struggling with a complex database migration? "Let's timebox this discussion and take it offline." Team debating microservice boundaries? "I'm hearing a lot of technical details but what's the user story?" Team blocked on a deployment pipeline issue? "Sounds like we need a retro to discuss our process."

Every technical problem got redirected to a process conversation because process was the only thing he understood. He couldn't help us solve actual engineering problems so he reframed everything as a process problem.

The worst part was the coaching sessions. He'd pull engineers aside for one-on-ones and ask things like "what impediments are blocking your growth?" Senior engineers with 15 years experience being coached on how to work by someone who didn't understand what they did.

He had the certifications though. CSM, SAFe SPC, ICF-ACC, ICP-ATF. Alphabet soup that cost thousands of dollars and required zero technical knowledge to obtain.

His retrospectives were textbook perfect. Sticky notes, dot voting, action items documented in Confluence. The action items were always process changes. Never technical improvements. Because he couldn't evaluate whether a technical suggestion was good or garbage. So he stuck to what he knew. Move the cards differently. Change the ceremony format, add another meeting.

When we had a production incident that took the team 14 hours to resolve he facilitated a blameless postmortem the next day. Good practice right? Except he kept steering the conversation toward "how can we improve our incident process" when the actual root cause was technical debt in a service nobody wanted to touch. The team knew this. He didn't understand the technical explanation so he summarized it as "legacy system challenges" and moved on to discuss on-call rotation improvements.

We could have hired a senior engineer for that $150k. Someone who could actually unblock developers. Someone who could look at the code and say "this architecture won't scale, here's why." Someone who could pair with juniors on hard problems instead of asking them about their impediments.

Instead we got a professional meeting facilitator with an Agile title who made engineers feel like their technical expertise mattered less than the process around it.

He was a good person. Genuinely trying to help. But the role itself is broken when it puts non-technical people in charge of making technical teams more effective.

How do you coach a team when you can't evaluate whether their technical decisions are sound? You default to process, every time.

Anyone else dealt with Agile coaches who had zero engineering background? How did that work out?

405 157
Reddit
r/leagueoflegends TikTokUser83

What do you think about the fact that the top 3 winrates in bot are all AP Carries (Swain, Veigar, Brand), while the 4th highest winrate is Nilah (melee/assassin/ranged hybrid)

I know its been like this for quite a while, but is this truly the healthiest thing for league of legends? I feel like when people think of bot, they think of the classic ADC's (Jinx, Caitlyn, Sivir, MF, Ezreal). This is reflected in the playrates of champs in the botlane, with the highest played ap carry bot being swain, ranking in at twenty fourth!

I mean, you don't see this discrepancy in any other lane. AP champs don't have this advantage over AD champs in mid, tank supports don't have it over AP supports in bottom, and so on. Does this suggest that AD bot champs are fundamentally weaker? Why don't more people just spam Veigar/Brand/Swain bot for free elo?

r/leagueoflegends Dull_Drawer_273

Is anti-heal currently a good mechanic?

I recently got back into the game, and anti-heal just keeps standing out as a an awkward mechanic. It feels so low investment for someone to spend 800g on it, and such a no-brainer decision in most match ups. It just doesn't feel like it adds much to the game, and comes off as toxic design, when there's a perceived necessity to have it against healing, and a guaranteed obstacle for healing.

Comparing it to penetrating resists, those deal with negating much more volatile stats and feature a deeper connection in reacting to enemy item choices. Even at it's most obvious, it feels like just another stat tied to a specific champion's attempt to optimize their damage output.

As it stands, I'd much prefer balancing around reducing all healing across the board by 30-35%. I think ignite is the only place where I particularly like it and see it serving a purpose for the function of the summoner spell.

I do however think there could be better implementations of it. I'd much prefer a stronger effect, which is more conditional or has weaker up time. Possibly even just nerfing grievous wounds to 15-20%, while having a significantly stronger effect, which is gated behind a cooldown or condition on completed anti-heal items.

r/AbandonedPorn PigDogUrbex

Abandoned Hotel UK at Night

r/fakehistoryporn Chip_Vinegar

An official at Flint, Michigan "leading" by example and drinking exclusively tap water - consequently starts to show signs of dementia after a decade. 2014.

24 0
Reddit
r/programming Lean1201

FRONT END VS FULL STACK

Do you think companies still hire separate Front-end and Back-end developers, or is the market strictly moving toward Full Stack?

IDK im starting in this world and i dont want make a mistake chosing something useless in the programming Industry

r/ClaudeAI Su1tz

Let's create a dataset to test to see if model degradation is real or not.

I believe the release of Opus 4.6 is the golden opportunity to start preparing a dataset of prompt-response pairs that display current Opus' capability and performance to compare it to future performance.

Every time a new model comes up, everyone is very hyped and they believe it performs very good. However, once a couple months pass, people start to suspect that AI providers start to quantize (or other similar measures) their models in order to meet high demands. Many times have I seen this case happen where people would start to make posts praising a newly released model initially and as time passed, arguments that the model quality degraded arose. This is usually the case for every model ever released by any AI company.

The new Opus has just released and it proves itself to be a very good model. I say we create a dataset of prompt-response pairs so we can compare the results afterwards when time has passed so we can actually see if there is any significant model degredation or not. As LLMs are usually non-deterministic, we need to be a bit lenient on our comparisons as they may not match completely. However, judging by peoples' complaints, the alleged degradation must be quite apparent to be this noticable to the public eye.

I dont have enough time or money to actually invest in this but I believe there are others who are willing to get to the bottom of this highly relevant topic.

19 2
Reddit
r/Art ShenGoaren

Fighting The Content Machine, ShenGoaren, Digital, 2026 [OC]

r/Jokes pennylanebarbershop

Samurai contest

A tournament was held to determine the greatest samurai among three contenders. Each was given a box with fly inside- they were to open the box and kill the fly with their sword as it flies away.

The first samurai opened the box and then cleanly sliced the fly in two with a single sweep of the sword.

The second samurai even did better, taking two swipes and cutting the fly into quarters.

The third samurai opened the box and took a swipe but the fly continued to fly.

“Ah,” said the judge, “your fly has escaped!”

“Yes, he lives,” admitted the samurai, “but he will no longer reproduce.”

181 14
Reddit
r/LocalLLaMA Clank75

qwen3-coder-next with Claude CLI

Has anyone managed to get Qwen3-Coder-Next working well with Claude (or indeed, anything else?)

It seems pretty smart, and when it works it works well - but it's also incredibly prone to falling into loops of just endlessly reading the same source file over and over again.

I'm currently fiddling with turning down the temperature to see if that helps, but wondering if anyone else has any good ideas...

(Running with the latest llama bugfixes (so at least it stopped hallucinating errors,) Unsloth UD-Q8_K_XL gguf with llama-server.)

r/oddlyterrifying thetacaptain

The kind of things I see when I take too much Benadryl

r/programming r_retrohacking_mod2

Resurrecting Crimsonland -- decompiling and preserving a cult 2003 classic game

r/LoveTrash jgoja

You need an umbrella policy

r/maybemaybemaybe nooncoreGirl

Maybe maybe maybe

13 5
Reddit
r/SideProject Maleficent_Gold2324

A blog publishing philosophy essays written by students

I founded a small student-run blog where we publish essays about philosophy and science written by high school and college students. The idea is that we finally create a platform that collects young people's ideas and opinions in an educated and reasonable way. Instead of just ranting about something online, learning to express yourself in a way that actually resonates with people. I think that's more than necessary in a world as polarized as ours. P. S: It's a lot of work, so if someone wants to help build this further comment or DM me :)

r/HumansBeingBros licecrispies

Group of snowmobilers discover lost horse 10 miles deep in Wyoming mountains, give him a raft ride home.

50 1
Reddit
r/StableDiffusion WildSpeaker7315

real, cant tell me otherwise

r/LocalLLaMA blueblazd

Is there still no way to convert Gemma 3n to onnx/tflite?

It has been months since gemma's release and i need to convert my fine tuned gemma 3n to either onnx, tflite or litert lm to deploy on mobile. After many trials i failed and can not find any guide at all to do so. Was no one able to do it?

r/meme Own-Blacksmith3085

The winners write the history books, but the losers write the best conspiracies

r/aivideo TechHalla

I think the hippo won

109 12
Reddit
r/Art ZRamsizzle

The Vessel, Zach Ramsey, Procreate, 2026

r/painting Sad_Storm7482

Copy of a Tronie by Rubens

Oil on canvas, 2023.

r/AskMen Individual_Mix_4234

What makes some of us perpetually romantic?

Idk why so. I just cannot be not romantic. Right now in a broken marriage but I just can’t stop loving. Am pushing 50, and my heart goes mushy when I see someone, that I go Gaga and am like awww. Having said that, am madly in love with couple of women but I just can’t tell them. Feel me?

r/BrandNewSentence diglettsarecool

here are the details of penis inflate-gate

34 4
Reddit
r/singularity baehyunsol

I tested Claudes-C-Compiler... and it's much better than I've expected.

tldr: I tested claudes-c-compiler to compile a real-world program (11k lines of code), and it successfully compiles, and the compiled program works well.

``` git clone https://github.com/rui314/chibicc; cd chibicc;

make CC=$GCC_PATH; mv ./chibicc ./chibicc-gcc; ls -l chibicc-gcc; # 303960 bytes rm *.o; time make CC=./chibicc-gcc; # real 0m0.432s, user 0m0.359s, sys 0m0.072s

cd ..; rm -rf chibicc;

git clone https://github.com/rui314/chibicc; cd chibicc;

make CC=$CLAUDE_CC_PATH; mv ./chibicc ./chibicc-claude; ls -l chibicc-claude; # 290184 bytes rm *.o; time make CC=./chibicc-claude; # real 0m0.450s, user 0m0.366s, sys 0m0.084s

cd ..; rm -rf chibicc; ```

I ran the command above. It downloads chibicc, which is a c compiler written in C. It first compiles chibicc with gcc, and compiles itself with the compiled binary (bootstrapping). Then, it compiles chibicc with Claudes-C-Compiler, and compiles itself with the compiled binary.

The gcc-bootstrapped version and claude-cc-bootstrapped version both work well, and their performance aren't that different.

26 3
Reddit
r/PhotoshopRequest SLCtechie

Can you guys edit a Pope in my hand?

My friend said I couldn't catch a pope and I really couldn’t but I don't want to say I didn’t, and if anyone got Pope catching tips please tell me.

r/metaldetecting Bobert2342111

What is this and would it be worth digging

LiDAR and normal images

in England near Sheffield

18 10
Reddit
r/MostBeautiful kosherpickl

Cat in Chouara Tannery in Fez, Morocco (OC)

r/BobsBurgers ellalei

Is Tina Underrated, Overrated, or Perfectly-rated?

following u/epicpersononthisapp post, I thought this would be interesting

63 52
Reddit
r/LocalLLaMA Express-Jicama-9827

Running Kimi-k2.5 on CPU-only: AMD EPYC 9175F Benchmarks & "Sweet Spot" Analysis

author:~$ export LANG=en_US.UTF-8
> Japanese is my native language. I used AI to help structure and translate this post to ensure the technical details are accurate in English.
This is my first post:D
Learned so much from this community:bow

--

I ran a series of local experiments with Kimi-k2.5 (~1.03T params, MoE) using llama.cpp server to see if a 1T-class model is actually usable on CPU-only infrastructure for non-interactive workloads.

Disclaimer: This is not about Chat UX. The target use case is async/batch execution: data pipelines, dataset generation, distillation, and RAG processing.

TL;DR A 1T-class MoE model is practically usable on CPU-only if you accept the latency and design your workflow around caching + async execution. On my setup, I’m getting sustainable ~10-12 tok/s decode speeds.

Hardware / Runtime

  • CPU: AMD EPYC 9175F (16 cores / 32 threads, Zen 5, 512MB L3)
  • RAM: 768GB DDR5 (12 channels, running at 6000 MT/s due to motherboard limits)
  • GPU: Not used
  • OS: Ubuntu 24.04
  • Runtime: llama.cpp container (server mode, rootless podman, AVX-512/VNNI build)

e.g.

podman run --rm  -p 8081:8080  --shm-size 16g  --cap-add=SYS_NICE  -v /mnt/data/hf/hub/models--unsloth--Kimi-K2.5-GGUF:/models:Z  compute.home.arpa/llamacpp-zen5:latest  -m /models/snapshots/386fed8b054275941d6a495a9a7010fbf31b560d/Q4_K_S/Kimi-K2.5-Q4_K_S-00001-of-00013.gguf  --cache-type-k q8_0 --cache-type-v q8_0 --defrag-thold 0.1 --flash-attn on  --ctx-size 16384   --parallel 1 --threads 13 --threads-batch 13  --batch-size 2048  --ubatch-size 512  --jinja  --host 0.0.0.0  --port 8080

Model Settings

  • Model: Kimi-k2.5 (~1.03T params, MoE)
  • Quant: GGUF Q4_K_S unsloth/Kimi-K2.5-GGUF
  • Context: 16k
  • Batch: 2048 (ubatch: 512)
  • Threads: 13–14 (See "Thread Scaling" below)
  • Flash Attention: Enabled
  • Prompt Cache: Enabled

Memory Footprint (Measured)

  • Model RSS: ~522–525 GB
  • KV Cache (16k): ~2.0 GB
  • Prompt Cache (~1.2k tokens): ~160 MB
  • Total RSS: ~523 GB (Stable, no swap-in/out observed)

Performance (Real Numbers)

1. Cold Run (No Cache)

  • Prefill: ~22 tok/s
  • Decode: ~10 tok/s
  • Total Time (~1.2k tokens): ~80s

2. With Prompt Cache (LCP Hit)

  • Cache Lookup & state apply: ~60 ms
  • Impact: FFTF (Time to First Token) drops dramatically.
  • Verdict: While slow for real-time chat, this is totally fine for batch workloads where prompt caching can be leveraged.

Thread Scaling & The "Sweet Spot"

I tested various thread counts (ctx 8k) to find the optimal configuration:

Threads Prefill (tok/s) Decode (tok/s) Note 16 24.4 12.9 Max throughput 14 21.3 12.5 Memory bandwidth saturation begins 13 21.6 11.7 The Sweet Spot 12 14.6 11.9 Efficiency-oriented

Observation: Decode speed saturates around 13–14 threads. Pushing beyond this yields diminishing returns while starving other processes. Running at th=13 leaves headroom for my data pipeline (Dagster/Trino) to run in the background without choking the inference.

Discussion: Why does this CPU work?

This is my current interpretation based on observed behavior. I'm happy to be corrected.

Hypothesis: Entire experts obviously do not fit in L3 (512MB). However, MoE works well on CPU not because everything fits, but because the repeatedly reused working set does:

  • Router / Gating logic
  • Projection layers
  • Recent layer weights & intermediate tensors
  • KV reuse paths

Unlike dense 70B+ models which often fall back into memory-latency-dominated behavior for every token, MoE seems to benefit significantly from the localized "hot regions" staying in cache.

EPYC 9175F (Zen 5) Specific Factors:

  1. Huge L3 × Low Core Count: With 512MB L3 shared across only 16 cores, we have effectively 32MB+ L3 per core. This minimizes cache contention/thrashing even with random MoE access patterns.
  2. Low Memory Controller effective latency: 12 memory channels feeding only 16 cores means very shallow request queues. MoE favors latency minimization over raw bandwidth.
  3. Zen 5 AVX-512/BF16: The true 512-bit datapaths and native BF16 execution seem to help significantly, even with Q4 quants (accum paths).

Conclusion

A 1T-parameter MoE model on CPU-only is a viable workhorse.

If you treat it as a batch engine and lean heavily on prompt caching, it is surprisingly usable. My current setup splits the workload: GPU for fast agents, CPU for stable, massive-context, reproducible batch generation.

Video Demo:

https://reddit.com/link/1qxgnqa/video/82ow6kvmdvhg1/player

*Bonus Benchmark: Llama-4-Maverick-17B (GGUF Q8)

To contrast with the massive MoE model, I also tested Llama-4-Maverick-17B at Q8 (8-bit) quantization.

Performance:

Prompt Processing (Prefill): ~50–52 tok/s

819 tokens in 15.6s → 52.4 tok/s

1000 tokens in 19.7s → 50.8 tok/s

Generation (Decode): ~15–16 tok/s

104 tokens in 6.3s → 16.6 tok/s

916 tokens in 60.4s → 15.2 tok/s

TTFT: ~16–20s (for ~1k token prompts)

What's Next? For my next experiment, I plan to test the newly released Qwen3-Coder-Next at Q8. I'm curious to see if the "Active 3B" architecture can push CPU inference speeds even higher while maintaining top-tier coding performance.

35 7
Reddit
r/AskMen HomeTurf001

If Men gave out yearly awards, what are some categories and who would win this year?

r/ARAM Bruit2Fond

Made an atrocious combo

Hello guys ! Nothing brand new but I wanted to know if some of you made the poroblaster combo with slap around and soul eater ! Did it in a game with bard and I was getting 100 ap and 100hp each time I hit a spell, kinda busted..

r/ClaudeAI Large-Explorer-8532

Ive built Cursor for Blender AI: 3D-Agent

Hey everyone! I wanted to share a project we've been working on: 3d-agent.com — a Blender AI interface that lets you generate 3D content from text or images, no Blender experience needed.

We used Claude Code to architect and build a multi-agent system. Sonnet handles the execution tasks while Opus serves as the reasoning agent — together they interpret your prompts and drive Blender to create 3D models, scenes, and more.

What it does:

  • Text-to-3D — describe what you want and the AI builds it in Blender
  • Image-to-3D — feed it a reference image and get a 3D model back
  • Designed to simplify Blender workflows for gaming assets, animation, and general 3D creation

Free to try — you can test it out at no cost (paid tiers available for heavier usage).

I'd genuinely love feedback from this community, especially ideas on how this could be applied to game dev, asset pipelines, animation, or other 3D workflows. Check the video and let me know what you think!

r/SideProject Admirable-Edge8346

155 shares and a lesson learned: The raw truth about hitting five figures and finding freedom.

I’m still processing this. My last post reached 65,000 views and was shared 155 times before it was taken down. It taught me a huge lesson: when you stop being invisible, the noise starts. 7 people tried to tear me down, but I’m choosing to focus on the 50+ who found value in my journey. I’m not here to sell. I want to share the 3 shifts that helped me reach a five-figure monthly milestone and escape the corporate grind: 1. The 20-Day Execution: Most people overthink for years. I committed to a strict 20-day system of pure action. If you can survive the first 20 days of discipline, you've already won half the battle. 2. Solving Real Pains: I stopped looking for 'income streams' and started looking for real problems. When you fix a major pain for people, financial success becomes a natural result. 3. The Resilience to Lead: 155 people saw value in my work, but I almost let 7 people ruin my drive. Your progress will always attract critics. If nobody is doubting you, you’re probably playing too small. I’m here to support anyone trying to build their own path and escape the 9-to-5. Ask me anything about building momentum or handling the pressure. I just want to give back to this community.

r/SipsTea throwawaybsme

Jumping genital gussets

r/findareddit Caleb_isagod

What’s a Reddit where I can just give the dumbest rants known to mankind?

Like.

I had this really stupid rant about how men having urges isn’t a bad thing and it’s like a whole dumb essay and I want to send it somewhere.

Is there a Reddit where you can just send really dumb rants

r/whatisit Head_Supermarket3020

Is the bird injured or born with a defect ?

I was driving and saw this bird on the road thank god i found him before it was too late ? I'm in Morocco and i don't know how to help.

42 28
Reddit
r/painting Anastasia_Trusova

I tried to paint how spring smells in the mountains — does it come through?

132 13
Reddit
r/Weird leayohe74

Tiny dead snake in my Tupperware drawer?!?

I thought it was a twist tie until I put my glasses on!

452 59
Reddit
r/AbstractArt ClintDeanAbstractArt

Accumulation

Accumulation

Oil on canvas, 36 × 48

Layers, erasures, decisions.

r/Seattle Tylerea

Lackluster Super Bowl Buzz in the City

Maybe it’s just me, but compared to the last few times Seattle has gone to the Super Bowl, it feels like there is almost no buzz around the city. In previous visits you couldn’t go a couple steps without seeing a 12 flag. Every single business had Seahawks posters up in their windows. Almost everyone was rocking some sort of Seahawks gear. This year just feels pretty meh. Maybe it’s the state of things in general, but couldn’t help but notice the lack of buzz this week.

r/SideProject groovehoover

I built a site for musicians to loop youtube videos

and save clips. currently using it to store my own collection of blues licks to learn

and constructive thoughts or feature requests from fellow musicians would be super welcome!

r/SideProject JonnyTsnownami

I built a fashion search engine for your favorite brands

Searching for clothes online hasn't changed much. I either end up with 10 tabs open or trying to filter out fast-fashion spam. After hearing the same complaints from some friends I decided to build costitch.com. It lets you follow your favorite brands and search for clothes just within those brands.

The two main pieces of the tech stack are: rwsdk (React on Cloudflare, server-rendered, simple DX) and Channel3, which provides a universal product API across millions of products with natural language search and built-in affiliate infrastructure. The idea is that costitch should be able to monetize just off conversions on the site.

I'd love if people could try the site out and give me some feedback.

r/wordchewing baugarfa

All I can ever think of

30 2
Reddit
r/meme Isa-Me-Again

Blizzard/world of warcraft player housing is the BIGGEST joke. You can't even place lights outside. It is honestly the lamest building in a game that I have ever tried.

r/comfyui CertainlyBright

Resources for video upscaling

What is the meta for video upscaling right now? What does the workflow look like, and what models are used?

r/Damnthatsinteresting Epelep

The physics behind ski jumping’s ‘Penis-gate’ scandal: How 2cm of extra fabric = 5.8 meters of jump distance

19850 838
Reddit
r/ImaginaryPortals YanniRotten

The White Regiment cover art by David Mattingly

16 1
Reddit
r/ClaudeAI SatoshiMoon

No planning needed. Let me just write the plan.

r/personalfinance meridian_skein

Background check says I have a bankruptcy I never filed and it’s nuking my job search, what do I do now?

I’m 30F and I’ve been job hunting for about two months after a layoff. I finally got to the finish line with a company I actually liked, did 3 rounds, references, the whole thing. HR calls me sounding weirdly formal and says my offer is “on hold” because their background check vendor flagged a bankruptcy on my record. I literally laughed at first, because I have never even been to court for anything, I’ve never filed bankruptcy, nothing. I asked for the report and they sent a PDF that has my name, a previous address from like 6 years ago, and then a bankruptcy case number in a state I have never lived in. The date is from 2021, which is also the year I was working full time and paying my student loan like normal. I pulled my credit reports that night and one of them has a public record note that looks like it belongs to this case, the other two don’t. I called the background check company and they basically read me a script and said I have to “dispute in writing” and it can take up to 30 days. Meanwhile the employer told me they can’t move forward until it’s resolved, and I’m panicking because this is now the second time something similar happened, a recruiter last month vaguely said my report came back “concerning” and then ghosted me. I feel stupid for not checking sooner, but I didnt even know a random bankruptcy could attach itself to me like that. I’m not trying to sue anyone, I just need to fix it fast and stop losing offers. What’s the smartest order of operations here, do I dispute with the background check vendor first, or the credit bureaus first, or the court first? Is there some way to prove I’m not that person beyond sending my ID and praying, like do I need to freeze credit or file something official for identity theft even if my accounts look fine right now? I’m spiraling a bit because every day matters and I can’t afford to keep restarting interviews over a mistake that isn’t even mine.

63 20
Reddit
r/painting unclebrandy

Lake

I painting this last June. One of my first attempts at a landscape.

r/Damnthatsinteresting RampChurch

Shattering a Wine Glass with Sound: filmed at 187,500FPS (credit: The Slow Mo Guys)

103 13
Reddit
r/KidsAreFuckingStupid Unusual-Pizza2907

Good justification

191 37
Reddit
r/LocalLLaMA Rique_Belt

What are your experiences with Openclaw and local models?

Yesterday I set up Openclaw on my computer aiming to use Qwen3-4B-F16, Ministral3-3B-2512-F16 or Qwen3-30B-A3B-Q2 on my CPU with llama-server and let Openclaw access them through it just to see what it was capable of doing. The results were absolutely terrible. Initially, I had some issues with the --chat-template, so Openclaw passed from 6000 to 12000 tokens to the model filling 1/3 of the 32768 ctx, which per se made me wait several minutes to start an interaction. Ministral did accomplish some stuff like making a python code that plays a .mp3 after it couldn't play through the media player, it was magical, but since the model runs at ~7 t/s every interaction took a lot of waiting. The Qwen3 ones actually didn't achieve anything. Desperately, I tried to use Gemma-270M and Qwen3-2B and Qwen3-1.7B none of them did anything, at least the Gemma-270M was fast.

I saw some comments on other communities regarding the use of local models on Openclaw and any response was optimistic, stating that only really big models were able to run minimally properly and using multiple GPUs to achieve tens of t/s.

I really want to use Openclaw, it showed a lot of potential for managing files on my computer and accessing the web. But for now my options are to wait 2 years in hopes of any miracle model or buy an expensive GPU with at least 16GiB or pay for an API, the last seems the only reasonable option but dreads the thought of a third party company/lab having complete access to my machine.

r/creepypasta WoollyWolfHorror

Looking for some help. Are there any Brazilians here?

I am currently writing a story that I will narrate on my YouTube channel. It takes place in the Amazon rainforest. Are there any Brazilians that know about the “Boitatá”? If so could you share what you know of the legend? I would like to use a locals knowledge to add depth to the story. Any information would be greatly appreciate. Thanks

r/meme Beautiful_core_2220

Somone when i go to kitchen

r/LocalLLaMA Main_Payment_6430

fixed the infinite retry loop that burned $50 in API credits while i slept (Open Source)

so i've been running agents with OpenClaw for a few weeks and kept waking up to bills that made no sense. like $47 overnight when the agent should've just... stopped.

turns out the issue is state loops. agent tries action A → fails → retries action A → fails → retries the exact same thing 847 times because there's no memory of "i already tried this."

the fix was kinda obvious once i saw it. hash the state history. if current_state_hash matches any hash from the last 5 steps, kill the loop and force a different action.

pushed a PR to the OpenClaw repo but honestly got tired of waiting so i just built a dashboard that shows me when this is happening in real time. there's this yellow pulse thing that fires when the circuit breaker kicks in.

been running it for 3 days now. no more surprise bills. the agent actually finishes tasks instead of getting stuck asking GPT-4 the same question until my credits die.

if you're running agentic stuff overnight this might save you some pain: https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI.git

anyone else dealing with this or am i just bad at prompt engineering lol

r/geography -just_a_normal_user

Will this river form an oxbow lake over time?

Is the meandering section of the Tapi River through Surat City likely to form an oxbow lake in the future via neck cutoff?

It has pronounced meanders upstream/around the city, but heavy engineering (embankments, weirs like Causeway, urban development, dams like Ukai) stabilizes the channel and limits migration.

Any thoughts on likelihood, especially with regulated flows and flood control? Seen any recent cutoffs or paleochannels there?

10 14
Reddit
r/Adulting sazzzze

i don't know what to do

I'm 20(f) and i don't know what to do with my life. i don't wanna work in corporate as it is too weird for me, my sister works and the story she tells about her manager are diabolical and i don't think i can handle that pressure but I'm also not creative enough to do something on my own. I learned crochet 2 years ago and want to start a business but there are so many crochet shops nowadays and everyone sells the same thing, so it feels like copying??? if i open my own shop. i researched about freelancing but didn't get much information on how to start. it feels like I'm interested in too many things and I'm just average in all of them so it's feels very chaotic. i think I'm just wasting my life by not doing something and want some advice?

r/mildlyinteresting AsiraTheTinyDragon

My school has outlets on the ceiling

r/ShittyLifeProTips FerrisBuelersdaycock

SLPT: Want to save money on food?

Only eat when you’re about to pass out. Hunger builds character.

r/TwoSentenceHorror EmilyKeenWrites

[feb26] While snowed-in at my family’s mountain cabin, I flipped through an old photo album and saw a familiar-yet-strange face.

The photo was labeled, “Uncle Danny”, but I knew him from the news as the infamous, “Cold-hearted Couple Killer”.

r/ARAM hehechibby

What are some items that are standard in regular league that aren't as necessary in ARAM/Mayhem?

Guess one I've noticed is Essence Reaver on some ADCs like Xayah, Sivir and even Lucian (selling it later in the game since E mana cost eventually is free)

definitely still worthwhile if one gets the upgraded sheen augment but generally haven't had much mana issues on sivir/xayah without it since higher mana regen and built in presence of mind in say mayhem

even better with another item if buff budies or ocean soul

some champs I do think it's worth is Smolder, Gangplank. Draven, and ADC crit TF even. Also memey fun builds like lethality crit Vi

r/mildlyinteresting lilguyanonymous

Sleeping Robin on our Front Porch

23 9
Reddit
r/AlternativeHistory EnvironmentLong4187

Could the Pyramids Have Been Designed for Antimatter Production?

Were the pyramids facilities intended for the production of antimatter? Is there any more detailed scientific speculation about this somewhere?

I have been thinking about the precise alignment of the Giza pyramids with the cardinal directions, the shape of the pyramids, and the chambers inside them, which were enclosed spaces. Could it be that antimatter was produced in these chambers by utilizing the Earth's magnetic field?

According to Wikipedia, antimatter is specifically considered as a potential fuel: "Isolated and stored antimatter could be used as a fuel for interplanetary or interstellar travel as part of an antimatter-catalyzed nuclear pulse propulsion or another antimatter rocket. Since the energy density of antimatter is higher than that of conventional fuels, an antimatter-fueled spacecraft would have a higher thrust-to-weight ratio than a conventional spacecraft." https://en.wikipedia.org/wiki/Antimatter#Fuel

Egypt’s location on Earth is specifically relevant in this respect: https://en.wikipedia.org/wiki/South_Atlantic_Anomaly#Position_and_shape

Edit: Maybe antimatter produced in Giza was stored in Osiris Shaft.

r/brooklynninenine Wrong_brain64

Day 4: eliminated yesterday were Amy and her stupid brother David, what about today??

r/LoveTrash icyhotonmynuts

Smooth operator

.

u/downtune79, thanks for reminding me of this clip, with your last clip

135 5
Reddit
r/AskMen Unable-Situation4780

Why does the word cute seem offensive to men?

Whenever I have described a man, or their attire/hair etc as cute, their response has been: CUTE?! Lol what is so unsettling about being called cute?

r/instant_regret nkmr205

Parkour false start

1907 96
Reddit
r/findareddit weeeeeeeea

Public health roles + career advice

Hello, 22F Australian with (almost - final year) double bachelor in Science (majoring chemistry) and Biomedicine. Doing Master of Pub Health 2027.

I’m looking for a subreddit where I can ask about what jobs actually exist in public health, or epidemiology, and career advice for early entry job types.

Cheers!

r/StableDiffusion ResponsibleTruck4717

Can someone share prompts for image tagging for lora training for z image and flux klein

I'm using qwen3 4b vl to tag images, I figure out for style we shouldn't describe the style but the content, but if someone can share good prompts it will be appreciated.

r/AskMen raccoonsonbicycles

Who is the coolest cereal mascot, and which ones are the most likely to win in a fight?

Coolest IMO is Cinna-mon from Apple Jacks. Just a chill dude who likes his cereal.

Toughest is complex.

(Given they're all about the same size)

Im leaning toward crackhead Honey Comb Tasmanian devil thing

Corn Pops rooster could he a surprise contender

Tony the Tiger and the Cookie Crisp wolf are predators

Speaking of predators that creepy Sugar Bear always gave me creepy serial killer vibes, bet that mf is suspiciously handy with a switchblade

But Capn Crunch presumably has combat training and a saber

Lucky has magic

Chocula has magic or vampire powers too

Snap Crackle and Pop are magic right? Or a tag team trio?

Idk how big the cheerio bee is but he could surprise people

13 27
Reddit
r/StableDiffusion WildSpeaker7315

Error 404. Prompted like a noob

r/30ROCK lawrence12345m

Everybody looks good in a Sheinhardt

125 9
Reddit
r/AlternativeHistory Joseph_the_Villain

We Ranked The BEST Americans Wars part 2

r/mildlyinteresting TyrannyOfBobBarker_

My 3 pound rubber band ball I've been working on for the past year or so.

14 1
Reddit
r/AI_Agents crackandcoke

AI slop is ruining the internet

With the emergence of AI generated videos made with kling motion control and all… i think it’s imperative we create an AI content detector of some sort to stop this stupid AI slop that’s filling up the internet. Or just detect that the content is AI generated so that we can hit “not interested” every time it shows up on our socials feed. Thoughts?

r/Adulting No-Flan7932

Why are we so tired?

My parents are so happy, mom hates her job bit still happy rest of the day.

My hypothesis; -Its the food and drink we consume, since my parents cook more often at home than me.

-We are ungrateful ass generation who complains all the time.

-Parents are over grateful that pray god for breathing. (My grandfather worked min wage until retirement and worked without occupational safety laws. Now he has hearing problems. He doesnt know what i.e. OSHA is lol)

-They work for so long, they got used to it or stopped mentioning.

-They are tired but they didnt had reddit.

I think its all of them combined.

r/TwoSentenceHorror The_Owl_Queen

I was stunned to find myself in heaven, and asked God why I, an unbeliever, was not in hell.

God smiled at me and said, "All souls come here in the end, religion merely adds flavour, much like acorns do for pigs."

453 28
Reddit
r/LocalLLaMA Dentifrice

One 3090 or two 5060 ti 16gb?

So I’m wondering if I should buy a used 3090 24gb or two brand new 5060 ti 16gb

3090 is more powerful but I remember seeing that series 50xx has features useful for AI that 3090 don’t.

I would also have more ram with the 5060.

But does it work great with 2 cards? Ollama for example?

I’m also considering going the very cheap way of buying only one 5060.

Thanks

r/meme Loogi_Carry

Hold up his wiring is this fire???

r/BobsBurgers ChrisYoloBitch

Most beautiful deep-feels episodes?

Hey guys. I'm looking for less silly more serious episodes. I just got back into BB and I love it. In my current state of life, I value the deeper episodes a bit more than the silly ones. My favs. at the moment is "Amelia" and "They Slug Horses, Don't They?". Can anyone recommend more episodes with vibes like theese? Thanks for the help in advance :)

Sincerly, a big crybaby lol

17 31
Reddit
r/geography Grande_Tsar

Major routes used by the participants of the Third Crusade

r/meme HonorZeBallsack

the horror of 0,001% profit reduction

501 33
Reddit
r/DunderMifflin Top-Astronaut5761

Casual Friday. Bad character writing?

On the 100th rewatch and up to Casual Friday.

Every time I get here, there's something in the writing that bothers me.

Pam's behaviour seems really out of character.

Michael and Ryan I understand, but Pam seems particularly mean and malicious, especially in the moment of eating the sales team's lunch. She's said previously that she doesn't like the idea of people hating her and she's generally a supportive character.

I feel like she could've been a sympathetic voice between The Michael Scott Paper Co and Dunder Mifflin.

459 91
Reddit
r/ClaudeAI MetaphysicalMemo

Cmd+K Quick Action Menu!

Not sure when this was added to the app but I accidentally ran across this menu when I hit Command+K. Great addition for quickly jumping around or starting new tasks, conversations, etc.

r/ClaudeAI EmuNo6570

Claude made me a (better) clone of SpaceMonger 1.40 within an hour

I always preferred Spacemonger 1.40 over the newer ones, and I prefer it over WinDirStat, Treesize, SpaceSniffer, etc... It's a program for easily figuring out what is using a lot of disk space. Claude did the whole thing in like an hour. It first scans every file on the system and collects a long list of absolute paths and their sizes. Then it computes the percentage of the area each box should take up, and does this recursively for all subfolders. Very easy stuff.

I blurred the picture but the joke's on you: you couldn't see my porn anyway because I hid the names. I added 3 functions:

  • ban (ignore c:\Windows forever)
  • hide names (list of censor words that will never appear, just call them "Folder" or "File" instead)
  • and quick-hide, so I can select a box and press H, it hides only for that session, and makes all the other folders bigger as if it never existed. For example if I already know I don't want to delete a large game, or my movies folder, I just click it and press H and it's gone, then I look through what else I can delete.

Rather than e-mailing the developers of these apps and begging them to add these features or to open-source the app, I just made my own. Now finally I can have the features I've wanted for the past... 25 years.

r/Lost_Architecture Lma0-Zedong

Campo de Marte presidential Palace, 1900s-1931. Managua, Nicaragua

14 0
Reddit
r/conan ANSISP

the Chuck Connors/Conan O'brien vortex

12 4
Reddit
r/oddlysatisfying Anastasia_Trusova

The magic of acrylic, lighting options for painting, lamp-UV lamp, darkness, directional side light

22 2
Reddit
r/SideProject BearInevitable3883

Library of UI components that you can copy as prompt

I was tired of AI using the same pricing, feature etc UI components in my websites.

So I created this UI library of components inspired by top websites. The best thing is, you can just copy all them as a prompt - and give to Claude, Cursor, Lovable directly.

Check it out here 👉: https://www.landinghero.ai/library

We're adding dozens of components everyday.

r/PhotoshopRequest mayranav

Can someone photoshop a sleeve onto my shirt (right) and make the overall photo clearer?

$5 paid

r/Anthropic Inevitable-Rub8969

Anthropic releases Claude Opus 4.6 model, same pricing as 4.5

r/AI_Agents Main_Payment_6430

my agent burned $63 overnight asking the same question 847 times - here's how i fixed it

been building agents for a few months and kept waking up to insane API bills. last week was $63 for what should've been a $5 task.

dug through the logs and found the agent stuck in a loop: tries action → fails → asks LLM "should i retry?" → LLM says yes → tries same action → fails → asks again. did this 847 times over 6 hours.

root cause is no state memory. the agent has zero awareness it already attempted this exact action 10 seconds ago.

the fix basically: hash the current execution state (action + params + last observation). compare it to the previous 5 states. if there's a match, circuit breaker kicks in and either forces a different action or stops the task entirely.

added a dashboard to visualize when this happens because reading JSON at 2am is hell. there's a yellow indicator that pulses when the loop prevention fires.

running this for a week now - no surprise bills, agents actually complete tasks instead of retrying forever.

genuinely curious if this is a common problem or if i just suck at prompt engineering. how are you all handling infinite loops?

r/Adulting AdditionalAppeal1451

22M single for 2 years after a 1 year and 1 month relationship with 22F - advice on building dating opportunities without apps

Hi everyone,

I’m a 22M, and I’ve been single for about 2 years now. My last relationship was with my ex (22F), and we dated for 1 year and 1 month. We met organically through one of our school subjects and gradually built a connection. We eventually broke up mainly because of communication issues.

After the breakup, a lot changed. I transferred schools, and I’m now an engineering student whose program is delayed by about 2 years. All of my classes are fully online, which has significantly limited my in-person social interactions and made me feel socially disconnected.

During my relationship, I spent a lot of time with my friends, which became an issue between me and my ex. After the breakup, I slowly stopped hanging out with my friends and became more antisocial. At this point, I don’t really have friends anymore. Thankfully, I still have my family as a support system.

I’m not interested in dating apps. I prefer forming connections naturally over time. At the same time, with online classes and a small social circle, I’m struggling to create opportunities to meet people in real life.

I also want to provide some context about myself: I’m 5’1”, I don’t have a car or anything fancy, and I’m not rich — but I can afford simple dates and take care of myself financially as a student. These things sometimes affect my confidence when thinking about dating.

My specific question is:

👉 What practical steps can I take to create real dating opportunities and form genuine connections—either organically in real life or through social media—given my current situation and constraints?

I’m especially interested in hearing concrete strategies, routines, or mindset shifts that have worked for others in similar situations.

Thanks in advance for any advice.

r/ARAM Big_Cartoonist6468

Aram mayhem progression

I’ve played a few games already and I’ve still gotten zero progress? Is anyone else getting this? How do I fix it?

r/DecidingToBeBetter Icy-Mirror7086

How do I stop being envious?

I've always been insecure and negative about a lot of things in my life, mostly due to the fact that I don't consider myself good enough in any aspect.

For example, when I was younger I used to be considered the 'smart kid' but once I got into high school in a private school I got hit with the reality that i'm not that smart. I started to be envious of my friends, but I tried to keep up with the grades. But when covid hit I got deeply depressed and discouraged with life in general, so my grades went down for some time. However, for the last years of high school I slowly gain back confidence and started to work hard and everything went better.

But the thing is that envious feeling was always lingering inside my thoughts, I couldn't help but compare grades every time with my friends and classmates, as if I couldn't be happy for them, as If I hated them for doing better than me. Of course, I never said that out loud, but I still feel like i'm being a bad person because I always heard and saw that a true friend is happy for others success, yet I just couldn't do that.

On top of that, I've always been aware that i'm not pretty, or at least not up to the beauty standards, and that I just looked different from other people since i'm asian living in Europe. So I thought that if I wasn't smart enough no one is going to remember me/like me for anything else. That's why when I saw there were smarter people I immediately thought that the only thing that I had wasn't truly only mine.

Then, when I got a bit more into makeup and looking better, it went similar: I realized that there was more people prettier than me, that I just couldn't reach their face.

I think that I'm just being selfish and too egoistical, but I don't know why i'm like this or why I have negative thoughts and feel envious for other people. I truly want to be a good person and improve my life. But whenever I see that there will always be someone better I get discouraged because "if I can't be the best why bother"? I know this mindset is the worst, and I try to ignore it, but it's always speaking in the back of my mind. Sometimes I try to blame my depression but I also know that If I don't do anything about it how can I blame it? It's a toxic cicle that I want to escape from.

r/relatable_memes_ ban_03

its all falling apart

r/automation Mysterious-Form-3681

lightweight Alternative of Clawdbot

r/SideProject MartinTale

Spent 2 weeks after work making the simplest & most satisfying weight app and it's finally done 🥳

Long story short..

I made a weight tracker for myself.. It's 100% free & private - no subscriptions, no ads, no data collection, and all weight logs are stored on your device..

Why? Because, I never really enjoyed using apps as they all had too many things I didn't care about.. I ended up using a spreadsheet to track my weight for months but input, specially on phone was a bit annoying..

And this is probably a subjective but I find it super satisfying to use thanks to haptic feedback 😅

Anyway, check it out and tell me how it feels to you 😊

iOS - https://apps.apple.com/us/app/weight/id6758589093

Android - https://play.google.com/store/apps/details?id=com.martintale.weight

r/AbandonedPorn SjalabaisWoWS

Abandoned road tunnel in Norway - a spot for opposing traffic to meet

2nd of 3 photos, 1st one here.

118 0
Reddit
r/painting ffsSLOTH

the resting place for nightmares is with the dreamweaver

since real life is more horrific I’m taking a break from painting my own. Figured I’d say goodbye to a few of them before I stepped back.

24 6
Reddit
r/Art Project_Hama

Paper Effigies, Sevi, Digital, 2026 [OC]

536 8
Reddit
r/ClaudeAI Unusual_Midnight_523

I now have voice mode in web browser. Nice!

r/ARAM Argiach

Viego Bug with Poro King

Hey guys just want to let you know that if you pick the prismatic augment that turn you into the poro king with Viego just be careful because when I possessed someone and used the spell my flash was gone and replaced with another poro king that does absolutely nothing

FYI I was also using the Exalted Viego skin

r/LocalLLaMA No-Wind-1854

Terminal capability is becoming a core eval, we open-sourced 1,376 environments

OpenAI and Anthropic recently released GPT-5.3-Codex and Opus 4.6.

One clear trend is that terminal capability is now a core part of agent evaluation.

In practice, terminal training runs into a bottleneck quickly:

there are not enough high-quality, realistic environments. Scripted tasks and synthetic traces don't go very far.

In SETA, we focused on building environments instead of tasks.

We've released 1,376 validated terminal environments, covering:

Software engineering, Sysadmin, Security, Debugging, Networking, DevOps

  • Real terminal interaction
  • Compatible with Terminal Bench and Harbor
  • Reproducible and validated

Github: https://github.com/camel-ai/seta-env

or search for seta-env in on harbor registry

r/OldSchoolCool LorenBlaqe

Japanese "Delinquent Cats" photographed by Satoru Tsuda, early 1980s.

85 6
Reddit
r/meme GodBlessDaUSA

Diddy Phones Cellcom Ad

Diddy Alert! What Cellcom employee would pick up kids? Is getting a phone that cool?

r/ClaudeAI ArisLK

Help Claude Desktop/MCP: Compaction triggers too often, causing context loss and endless loops (opus 4.5 / 4.6)

Hi everyone, hoping someone can help me with an issue I'm having with Claude Desktop using MCP for my project.

Claude is compacting/summarizing the conversation way too frequently — even in brand new conversations. It compacts to "continue talking," which means it forgets what it just did and starts over in a loop. It can never complete an entire task end-to-end.

For example, if I ask it to analyze a folder containing multiple scripts, it can't get through them without compacting, so I never get any results — it just loses context and restarts the analysis from scratch.

Even worse, sometimes it compacts right in the middle of writing code. It then forgets what it was doing, starts over, and ends up duplicating work or producing incomplete/broken output.

Has anyone else experienced this? Is there a way to increase the context window before compaction kicks in, or to prevent it from compacting so aggressively? Any workaround would be appreciated.

r/ClaudeAI karlfeltlager

I asked Claude to estimate my stories based on estimated token usage

r/explainlikeimfive youngTchag

ELI5: How does your brain decide what memories to keep and what to delete?

77 35
Reddit
r/whatisit EvilTodd1970

Attached to the Exhaust on this Vehicle

Was stopped in traffic behind this car and noticed these things attached to the exhaust. They have wires attached and glow a little bit. What are they?

13 52
Reddit
r/linuxmemes Away-Software7116

It is the time 😱

oh boy wish me luck

r/LocalLLaMA saloni1609

Unpopular opinion: The "Chat" interface is becoming a bottleneck for serious engineering

Is anyone else starting to feel like we've hit the ceiling with the "Chatbot" UX for actual engineering?

Don't get me wrong, the models (Opus 4.6, GPT-5.3) are incredible. The reasoning is there. But the interface feels like it's from 2023.

I did a time audit on my workflow yesterday, and I realized I spent about 40% of my "coding" time just playing secretary for the LLM:

  1. Highlight code in VS Code.
  2. Paste into Chat.
  3. "Refactor this."
  4. Copy output.
  5. Paste back.
  6. Fix the import it hallucinated because it didn't see the file 3 folders up.

It feels like trying to build a LEGO set while wearing oven mitts. We are piping "God-like intelligence" through a text box designed for customer support.

I finally forced myself to switch to a Canvas style agent this week (where the model has read/write access to the file tree and plans moves). It was a headache to set up, but the difference is wild. I’m not "talking" to the code anymore; I’m just approving the diffs.

I feel like 2026 is the year the Chat Window dies for devs. We don't need a conversationalist

Am I the only one hitting this wall? Or are you guys still fine with the copy-paste loop?

r/PhotoshopRequest The-Joe27

Can someone please sharpen this image

r/megalophobia itsfredi

i make 3d renders

156 2
Reddit
r/PhotoshopRequest Asleep_Argument_5557

please help edit this wedding photo

my 1 year wedding anniversary is coming up and i’d like to get a big canvas of this photo, but it bothers me a little bit how shadowy and dark it is. we had a very quick backyard wedding (i was 6 mos pregnant) so this was just taken on an iphone. any help would be greatly appreciated!

i would love:

-the shadows in the lower left corner removed

-his face to be a little more bright

-anything else you think would make it look better

thank you in advance!!!

r/Frugal Anoelnymous

If it costs on average 30% less per step to buy the ingredients for a thing and make it yourself than to buy it, how far back in the production line are you willing to go for frugality?

A few examples:

A loaf of bread is five dollars. The cost of making a loaf of bread is three dollars including utilities.

For the cost of probably half of takeout you could definitely have made at least two meals plus left overs.

A book shelf is $189 at IKEA. You can buy all the supplies to make a bespoke to your space bookshelf for $125. Let's assume if you don't have tools or skills you have friends who can help with that.

As an aside any time you want to build a planter box I don't think there's any reason to pay for wood. There are dozens of small townhouses going up near me and they'll give you anything from their scrap piles for free.

Clothes are a bit of different kind of math, you can make a custom garment for about ¾ of a good quality store bought piece, but the custom one is exactly what you want, and will definitely outlive any store bought item. So if you're not buying things all the damn time it's a huge money saver over time.

Anyway. Where do you draw the line for convenience over cost? Does it vary from category to category?

r/whatisit MrKartoffel24

What is this?

I found this in a lake near my friends house. I was wondering what this black substance was. Notmally the lake is relatively clear.

60 71
Reddit
r/MMA AliBagovBeatKhabib

Nasty Ankle Break during ACA 200

18 13
Reddit
r/SipsTea SipsTeaFrog

The director's cut

3223 61
Reddit
r/brooklynninenine JonBaba21

If you google Holly Gennaro, Amy and Jake come up also.

193 5
Reddit
r/TwoSentenceHorror ElTigre1212

"Kill me," she begged, her voice shaking with fear and anticipation

"If only you understood how much of you I am killing," I sobbed before I sank my fangs into her neck, consigning her to the shadows forever

20 1
Reddit
r/SideProject InsightExplorer

[Android App] I made a Budgeting app that makes Budgeting as simple as writing a Note ✍️!

BudgetNotes makes Budgeting as simple as writing a Note✍️!

It is a fusion of traditional pen & paper budgeting with mathematical capabilities of digital devices.

Expense tracking apps always felt too much work to do. I couldn't spend so much time to navigate half a dozen clicks required to enter multiple entries every single day on other apps.

In fact I always wanted a combined app for Budgeting and Notes!
Consider this,

  • How often do we buy something and instantly regret it?
  • What if we could write a caution statement right where we note down the expense made on it?

A simple, one place reliable budgeting tool. That led to this app idea.

Here, if you write

15 Potatoes
50 Bananas
40 Onions
30 Chocolates

It will create a Budget List. It's that simple!

Features that make sense 💡

  • Notes as Budgets🟰 Every note is a budget
  • Inline expense tracking🟰Just type, no forms. Simple!
  • Section-based organization🟰Like a digital notebook
  • Backup & restore🟰 Export and restore anytime. Offline. Private.
  • Soft pastel colors that make budgeting feel calm, pleasant, and stress-free.
  • True Behavior Coaching Elements that Nudge you towards better Financial Decisions.
  • Financial Wisdom snippets - Rules of Budgeting from experts.

The app has just launched recently. Here's the Play store link.

r/homeassistant valain

[DISCUSSION] Do you think A.I. will positively influence HA integration availability and quality?

Hello all,

In our business, we see the benefits of using A.I. in software development, as well as the risks and pitfalls.

How do you think A.I. will benefit (or not?) HA integration development, availability, maintenance, updates, security...?

In a perfect world I would imagine telling Claude "Hey here's the API documentation for hardware XYZ, build a HA connector for it please."

Or some existing integration that needs maintenance, bug fixing, or upgrades, could be maintained more easily / rapidly by someone who did not originally develop it?

This is a "philosophical discussion", shoot your opinions :-)

r/30ROCK Citizen1135

This is a little late, but I'm just doing my part

30 10
Reddit
r/Lost_Architecture Lma0-Zedong

San Antonio de los Portugueses hermitage, by Alonso de Carbonell, 1637-1761. Madrid, Spain

r/OnePelotonRealSub lotrluvr623

Half Marathon Training on the Peloton Bike

Just wondering if anyone has done half marathon (or any lengthier distance) training exclusively on the Peloton Bike for their cardio/endurance? I live in a place that is very cold and will be for the next month or so and I gave up running outside in the snow/cold a couple years ago. I also have a half marathon coming up in two months.

I'm not looking to PR it or anything. I just want to train up my endurance so that I'm not dying. If anyone has tried it, how did it go for you? If you've got suggested plans, I'd appreciate those as well. Thanks!

ETA: I also know that the bike cannot compare to the impact on joints that running has, but I've got enough experience running to know that I can handle that part of it. Just looking for opinions on the cardiovascular strength part of it.

r/LocalLLaMA Sandzaun

How are you running local LLM autocomplete without Ollama? (llama.cpp + PyCharm/VS Code)

I have a simple problem and can't find a solution: I want AI autocomplete in my IDE that uses my local LLM model. Just simple AI autocomplete, nothing more.

The model (Qwen3-Coder-Next-IQ4_XS.gguf) runs on Windows with llama.cpp or koboldcpp.

I would like to use PyCharm as my IDE. If there's no other option, VS Code would also work. So I'm looking for a suitable plugin.

Every plugin I've found so far requires some kind of account and isn't designed for local models. And if local models work, then only with Ollama, which I don't use. Maybe someone could help me out?

r/oddlysatisfying Average_Watermelon

Twisted Snow Ribbon on Our Fence

Not sure how this happened. But I'm in awe. 😇

263 9
Reddit
r/SideProject Admirable-Edge8346

65k views and a reality check: I’m on a mission to help young founders find their way out of the corporate trap.

I’m going to be honest with you—hitting 65,000 views in one day was a roller coaster. One minute I was celebrating, and the next, I was reading 7 comments from people calling me a 'bot' and trying to tear me down. It hurt for a second. But then I looked at the 50 other people who were genuinely inspired. It made me realize how many of us are just tired. Tired of hunting for jobs that don't care about us, and tired of the '9-to-5' cycle that feels like a dead end. My goal has changed. I want to prove to every young person out there that reaching a five-figure monthly income isn't some 'magic trick' reserved for the lucky ones. It’s about building a system and having the resilience to keep going when everyone else tells you to quit. I’m not here to sell you a dream. I’m here to tell you that the 'noise' is part of the process. If you’re being criticized, you’re finally being heard. Let’s stop chasing paychecks and start building freedom. If you're struggling to take that first step, just know you're not alone. Keep pushing.

r/painting ScienceComplete2982

I painted 3 birds, robins.

r/fakehistoryporn SirCrapsalot4267

Early concept image of the Gaza Trump Riviera plan, 2020.

45 4
Reddit
r/fakehistoryporn DaHomieNelson92

White protestors gathering in front of a restaurant and preventing blacks from dining inside during the height of the Civil Rights movement (1968, black & white)

r/TwoSentenceHorror 54321RUN

My son died in a car crash while we were driving home from school after the cops rear-ended me.

Then they said it was my fault for putting him in the trunk and running from them.

33 0
Reddit
r/Art LuffyThorfinn

Spider-Man, u/LuffyThorfinn, prismacolor pencil, 2018

42 9
Reddit
r/oddlyterrifying FinnFarrow

"AI dog on chain" art exhibit in Tokyo

1209 68
Reddit
r/findareddit SilkVelvetCoco

Who thinks I should get a nose ring?

r/Art EricPause

One Minute You're Here, Eric Pause, Digital Illustration, 2020 [OC]

25 2
Reddit
r/LocalLLaMA HappyDataGuy

Struggling with SLM fine-tuning on private docs

Hey folks, I’m working on fine-tuning a small language model on internal PDF documentation so that it can answer questions only from that knowledge base, without using RAG or external retrieval.

I’ve tried continuous pretraining on extracted text followed by SFT using Q&A style data. While the model does learn some specifics, I’m seeing issues like overfitting, hallucinations, and conflicts with what the base model already “knows”. Generalization is poor and sometimes answers sound plausible but are wrong.

I’ve experimented with LoRA variants, different ranks, data grounding strategies, and evaluation via manual testing, but results are still mixed.

If you’ve worked on SLM fine-tuning for closed-domain knowledge or have thoughts on data construction, training strategy, or evaluation, I’d really appreciate pointers. Papers, blog posts, or personal lessons learned are all welcome.

Thanks in advance 🙏

r/AI_Agents ArgonWilde

Locally hosted agentic AI - Quadro P5000 vs 1080ti

Hi all,

I have the option of two GPUs for use in realising my own locally hosted agentic AI solution, and I'm looking for your input.

Option 1 - Quadro P5000:

It has 16GB of GDDRX5 VRAM, but the compute power of a 1060.

Option 2 - GTX1080TI:

It has 11GB of GDDRX5 VRAM, which is less than the P5000, but also has 33% better performance than the P5000.

What do you think?

r/interesting AdSpecialist6598

A wolverine trained to rescue avalanche survivors

40 7
Reddit
r/programming TypicalComma

Scheduling with an MCP server

Interesting deep dive on how to solve issues with scheduling messages with an MCP server.

r/AskMen RentUsual_2952

Why does "Go get Therapy" sounds like a shallow advice?

It feels like your feelings aren't acknowledged and are brushed aside. No meaningful conversation is made, and no understanding is reached. Also, therapy isn't very affordable these days in this economy, lmao.

r/OldSchoolCool LoveEquivalent9146

My mom and her beloved horse, Étienne-- summer 1988

20 6
Reddit
r/ContagiousLaughter Remurix

Someone seems to be having too much fun

115 3
Reddit
r/AbandonedPorn Whimsical_Ruins

Lonely ruins in Italy

711 7
Reddit
r/SipsTea bombaclat90

Shit is expensive

732 35
Reddit
r/30ROCK _bobby_tables_

Okay, that IS a lot of (expensive) cheese.

r/meme jeschezred

Don’t look at me bro

r/LiveFromNewYork Firefox892

Chris Elliott gets George Foreman to read him a bedtime story (1994)

12 10
Reddit
r/ProgrammerHumor VelvetParadox24

averageAiUserBehavior

1027 36
Reddit
r/automation WhispersAndWinksx

LinkedIn restricted my account 3 times in a year - what finally worked

Over the past year, LinkedIn restricted my account three separate times.

First restriction was obvious - I got greedy and sent 180 invites in 3 days, so deserved it. But the second and third times I thought I was being careful. Stayed under 100/week, used delays, didn't run campaigns at night… still got flagged.

I studied what actually triggers restrictions beyond the stuffs everyone talks about. Here's what I found:

Pending invite ratio must be low. I had 420 pending requests, and it’s a lot. If your pending/total sent ratio is over 30%, LinkedIn sees you as low-quality. Now I auto-withdraw anything older than 21 days.

Messages must be diverse.  I was rotating 3 templates thinking that was enough. All had the same structure, greeting + pain point + question. Linkedin is doing some kind of pattern matching. I switched to 7 completely different formats (some start with questions, some with observations, one is literally just 2 sentences).

Profile must look proportional. I had 8 skills and 240 connections, which is a bit strange for someone who was sending 400 invites a month. So I added new skills,  got 5 recommendations, joined 3 more new groups, rewrote my experience, added new posts for 2 weeks, filled in featured section. 

People consume content. Not only your content is important, but also the content you as a real person consume on Linkedin. I only logged in to send invites. Real users browse, react on posts and comment. I did it manually for a week and then added auto-likes (15-25 daily) and I manually comment 3-4 times per week on target audience posts.

Since then I’ve had no restrictions. My acceptance rate went from 32% to 54% and I'm actually sending fewer invites but getting better conversations. So slowing down and looking more real got me better results than trying to maximize volume.

83 18
Reddit
r/Futurology lughnasadh

A US startup says it can 3D print batteries to fill the 'empty space' nooks and crannies of drones and other machines, to give them a huge capacity boost.

"Even in that simplified, proof-of-concept drone, the printed battery achieves a 50 percent boost in energy density, and uses 35 percent more available volume."

Interesting idea, though no word on cost. I doubt they could compete with the economies of scale lithium-ion batteries benefit from. Then again, it isn't always about being the cheapest. The world is full of hundreds of thousands of different models of machines that might benefit from this. Some people will happily pay extra to get a 50% boost in capacity.

Material’s Printed Batteries Put Power in Every Nook and Cranny

157 45
Reddit
r/StableDiffusion Occsan

ScheduledSampler

Yesterday I made that ScheduledSampler, which allows you change this:

https://preview.redd.it/ybl6jdt4evhg1.png?width=1685&format=png&auto=webp&s=4793a351279bb5dfb8110865fa6ecbd9a599a037

to this:

https://preview.redd.it/0nt3jbe5evhg1.png?width=1197&format=png&auto=webp&s=5f370a286e891ec76218f1fec28963d0259e4476

It's available on dchatel/comfyui_davcha on github, along with a lot of other experimental stuff.

If anyone is interested, I can make a separate custom node in another repo for this, so you don't have to deal with the experimental crap in comfyui_davcha.

r/ForgottenTV DelGriffithPTA

Sons & Daughters (2006)

Produced by Lorne Michaels, the ABC sitcom was about an extended family that lived close together. In the style of Curb Your Enthusiasm it was a blend of improv and scripted dialogue. Dee Wallace Stone is the only cast member I had heard of. 10 episodes aired.

SortedFor.me