AI-Ranked Reddit Feed

5000 posts

r/Anthropic PrimeStopper

Claud Opus 4.6 is OP, but Anthropic usage limits suck, what’s the next best model in terms of capabilities? I’m planning to migrate to GLM 5.1

r/LocalLLaMA zenmagnets

Updated Minimax m2.7 still doesn't allow coding a product. But before the next riot starts, Ryan Lee has already confirmed that they are still working on the license, and sale of products built by m2.7 is permitted.

r/Futurology J0LLi3_Roger

What happens if AI doesn’t just replace jobs… but makes human work completely unnecessary?

Most discussions about automation assume the same thing:

That jobs will change.

That new roles will appear.

That humans will adapt into something else.

But that assumes the system still needs us.

What if it doesn’t?

Not partially. Not gradually.

What if production, decision-making, and optimization become so consistent and scalable that human labor just… isn’t required anymore?

No collapse. No chaos.

Everything still works.

Better than before.

But now:

effort doesn’t determine value

income isn’t tied to contribution

and work stops being necessary for survival

At that point, the real question isn’t “what jobs are left?”

It’s:

What are people supposed to do when nothing is required of them?

Because work didn’t just give people money.

It gave:

structure

direction

identity

Remove that, and you don’t just change the economy…

You change what it means to exist inside it.

I’ve been digging into this idea while writing something, and the weird part is:

The system doesn’t fail.

It just stops needing you.

Curious how people here think this plays out psychologically.

Do people: A) adapt and build their own structure

B) drift into passive existence

C) something else entirely?

r/ClaudeCode IamTheEddy

Claude follows best practices, and it's all for nothing.

r/ClaudeAI Brain-digest

Need help scaling Claude Co-work (skill usage + document setup)

Hi everyone,

I’ve recently started using Claude (Co-work) and I’m trying to move toward a more industrialized way of working, especially for the more time-consuming parts of my day-to-day (UX research, writing interview guides, etc.).

I have two questions where I’d really value your input:

  1. Skill usage & capacity

I’ve created a skill to generate user testing interview guides (based on a structured MD + 4 reference examples I provided).

But I’m a bit surprised by how much capacity it consumes:

• Just creating the skill used a significant chunk • Reusing it only twice already eats \~30% of my daily limit 

Is this expected behavior?

Does the number of references or the complexity of the MD significantly impact usage?

Any best practices to optimize this?

  1. Document hosting & editing

Right now, when I ask Claude to retrieve my MD files, it gives me URLs, and I understand these are hosted on Anthropic’s side.

Ideally, I’d like to:

• Host these documents locally (or in my own environment) • Be able to edit them directly • Have Claude take those updates into account dynamically 

Is that setup possible today? If so, how are people approaching it?

Thanks a lot in advance I’m keen to learn how others are scaling their workflows with Claude.

r/ClaudeCode ZoneImmediate3767

Is everything-claude-code really that good?

The project seems to be really loved https://github.com/affaan-m/everything-claude-code

But I have a question.. doesnt it load too much? How does Claude decide which skills, tool, etc.. loads when prompting. I guess I don't have knowledge enough yet and I am trying to understand how the project loads the needed resources without auto discovering them and eating tokens..

Could anyone explain to me?

Thanks!

r/Anthropic FiendForMath

Claude Code / Codex Skill for Ghidra

I have been building a tool designed for reverse engineering Apple binaries! I want to keep it only as general as apple reverse engineering, so it is optimized for swift / objc and can use LLDB for live tracing. Currently only reliably working on macOS but windows support is to come. I am hopeful other people can contribute and help build this skill to its potential!

r/ChatGPT peakpirate007

Built a ChatGPT-powered trip planner for U.S. national parks

Built a trip planner where you can either generate itineraries or just ask questions about national parks.

Focus was on avoiding generic answers — everything is grounded in real park data (alerts, permits, weather, etc.).

Would love to hear what you think:

https://www.nationalparksexplorerusa.com/plan-ai

r/SideProject samarth_bhamare

Shipped a desktop AI that picks between 10 founder voices for sales questions. Roast the positioning.

Finally shipped after 21 days of building. The pitch: you type a sales situation, the app picks the right founder out of 10 (Collison, Benioff, Lütke, Chesky, Huang, Altman, Amodei, Levie, Butterfield, Lemkin), and gives you the answer in their voice — their priorities, their reframes, their style.

Live at clskillshub.com/sales-agent-saas. Windows binary, Mac coming in 2 weeks.

What I want from this post: brutal feedback on the positioning. Specifically —

  1. Does "10 founder voices" land as a value prop or as a gimmick? My gut says it's compelling to anyone who's read these founders' writing, and meaningless to anyone who hasn't. Which is a hard audience-segmentation problem.
  2. Is desktop + BYO API key a feature or a friction? Every SaaS instinct says I should've built this as a web app with my own API key pool. I deliberately didn't, because I think the key-custody model is a trust anchor for the kind of buyer I want. But maybe I'm wrong and it's just a download barrier.
  3. Is $359 the right price or am I leaving money on the table? I started lower and moved up because the low price was anti-social-proof for a niche product. But I've only had a week of data and could be fooling myself.

Technical stack, in case it matters: Electron + Claude Code skill files, 10 skill files (one per founder, 60-80 pages each), a keyword router that picks the voice. Full architecture writeup happy to share if anyone's building something similar.

What would you want this to do that it doesn't?

r/ChatGPT No_Light5733

Trust me bro, I've seen this prompt before.

r/LocalLLaMA samarth_bhamare

Built a desktop AI that never sees or stores the user's Anthropic API key. Here's the architecture + why I refused to build it as a proxy.

Every "AI app" on Product Hunt today is a proxy — they take your API key, route requests through their server, and most of them quietly mark up inference cost 2-10x. I wanted to build the opposite: a desktop coach that talks to Claude directly from the user's machine with zero middleware.

Here's what that actually looks like:

Key storage. First run, the app asks for the Anthropic key and stores it in the OS-native credential store — Windows Credential Manager, macOS Keychain, libsecret on Linux. The key never touches a file on disk, never gets logged, never gets sent anywhere the user didn't explicitly click.

Inference calls. Every Claude API call goes direct from the user's machine to api.anthropic.com. No proxy, no aggregation, no "we'll handle rate limiting for you." If Anthropic is down, the app is down. If Anthropic changes the pricing, the user feels it immediately. Honest trade-off.

Telemetry. Zero analytics on prompts or responses. The only telemetry is an app-open counter (no user ID, no content) so I know if the thing is being used at all. If you run netstat while the app is open, you'll see exactly two outbound hosts: api.anthropic.com for inference and one GitHub release URL for auto-update checks.

Skills architecture. The app ships as a Claude Code skill bundle + a desktop UI wrapper. The skills live in ~/.claude/skills/. Which means even if the desktop app breaks or gets abandoned, the skill files still work standalone inside Claude Code. The user owns the files, not the vendor.

Three reasons I keep getting asked "why not just be a SaaS":

  1. SaaS economics force you to mark up API calls, which means the product has to provide "value" equal to the markup, which means it has to restrict what you can do, which means the user gets less leverage from their own API key.
  2. Key custody is a liability. If I store your key, I'm one breach away from ruining my own reputation. Not storing it is cheaper for everyone.
  3. Local-first means the user owns their workflow forever. The app is infrastructure, not a subscription.

Trade-offs I accept: no cross-device sync, no team features without rebuilding from scratch, no ability to fix a bug for the user without shipping a new binary. For a solo-founder-built tool targeting other solo founders, those are fine trade-offs.

Happy to answer anything technical about the key-handling flow or the skill file structure.

r/AI_Agents BandicootLeft4054

Using multiple AI agents instead of one improved my workflow

I’ve been experimenting with different AI workflows for research, and one thing I kept running into was having to double check everything.

Relying on a single model just didn’t feel reliable enough, especially when answers sounded confident but weren’t always accurate.

Recently I tried using Nestr, which runs multiple AI models together and shows where they agree or disagree.

What I found useful wasn’t just the final answer, but being able to quickly spot differences without manually comparing everything.

Curious if anyone else here is using multi-agent setups instead of a single model.

r/ClaudeAI YellowAdventurous366

New Claude Desktop doesn't show up?

So I was trying to get the new claude code update that was announced today, but there were no available updates, and claude.com/download had the old desktop app. Were any of you able to get it?

r/SideProject nathaniel7775

I built typhons.dev, remote dev servers for running multiple AI coding agents in parallel

Built this to solve my own development issues and wanted to share it with people.

Like many people, I like having Claude work on multiple features at the same time and to be able to work from mobile. I tried git worktrees first, but the agents would interfere with each other and I didn't like constantly switching between branches to test each feature. Then I tried Codespaces but found it clunky and not mobile-friendly. I looked into some other solutions but never found anything super satisfactory. So I built Typhons.

The key feature: you can clone an entire running dev server (including in-memory state, running processes / servers, etc). Each clone gets its own domain and ports, so you can test each feature independently.

So my workflow is: get Claude working on a feature, then decide I want to start another feature, so I clone the server as-is (which includes Claude's current session, my tmux sessions, the running servers, postgres state, etc), then ssh into the new server and start that Claude off in a new direction while the existing one keeps going.

Main features:

- Clone a running dev server in a few seconds (full memory snapshot)

- Each server gets its own URL for testing web apps, APIs, etc

- SSH in from anywhere including mobile

- Auto-pauses when idle, resumes when you ssh or access a web server, so you only have to pay for active usage

Getting started: The quickest way is to create a server from the dashboard, SSH in, clone your repo, install dependencies, and get your web server(s) running. Then snapshot it from the dashboard. From then on, you can recreate that exact state (including running processes). For more repeatable builds, there's a command-line tool that supports docker images, running setup scripts, and devcontainer.json configs. See more about how to use it here: http://typhons.dev/help

Once you're set up, the workflow is: SSH in, start tmux, run Claude Code. When you want to start a second feature, just run `clone my-feature` from the terminal (or `!clone my-feature` from inside Claude) and you get a fresh clone to SSH into.

It's very much in beta so would appreciate any feedback! There will probably be bugs :)

The first ~hour of usage is free, after that you can pay for more hosting for $10/mo + pay-as-you-go (covers the cost of running the servers in the cloud).

https://www.typhons.dev

r/ClaudeCode Direct-Attention8597

Claude Code just got a full desktop redesign , multi-session support, integrated terminal, file editing, and HTML/PDF preview

Anthropic just pushed a major redesign of Claude Code on desktop and it's a significant quality-of-life upgrade for anyone doing serious development work.

The headline feature is multi-session support you can now run multiple Claude sessions side by side in a single window, with a new sidebar to manage them all. If you've been juggling terminal tabs to work on different parts of a codebase at the same time, this directly solves that.

Beyond that, the redesign bundles in:

  • Integrated terminal : no more switching between Claude Code and your terminal
  • File editing : edit files directly from within the UI
  • HTML and PDF preview :render output without leaving the app
  • Faster diff viewer : reviewing changes should feel noticeably snappier
  • Drag-and-drop layout :rearrange panels to fit how you actually work

One thing worth calling out:

your existing CLI plugins work exactly as they do on the command line. No migration, no rewiring.

This feels less like a cosmetic refresh and more like Claude Code finally becoming a proper IDE-adjacent tool rather than just a fancy terminal wrapper.

For those of you who've been using it heavily curious how the multi-session workflow changes things for you. Do you see yourself running parallel agents on the same project, or using it more to context-switch between different projects cleanly?

r/artificial aufgeblobt

Digging through 38 days of live AI forecast data to find the unexpected

I created a dataset which contains forecast data which therefore can't be created retrospectively.

For ~38 days, a cronjob generated daily forecasts:

- 10-day horizons

- ~30 predictions/day (different stocks across multiple sectors)

- Fixed prompt and parameters

Each run logs:

- Predicted price

- Natural-language rationale

- Sentiment

- Self-reported confidence

I used stock predictions as the forecast subject, but this is not a trading system or financial advice, it's an EXPERIMENT!

Even though currently I didn't find something mind-blowing, visualizing the data reveals patterns I find interesting.

Currently, I just plotted trend, model bias, and ECE - more will come soon.

Maybe you also find it interesting.

The dataset isn't quite big, so I'm actually building a second one which is bigger with the Gemini Flash and Gemini Flash-Lite model.

PS: If you are interested in the dataset or the MVP with a dashboard to crawl data quickly, just mention it in the comments.

r/n8n web_assassin

n8n or Claude Code for SMB automation consulting

I'm looking to start helping businesses automate their workflows. Just scratching the surface coming from Claude Code and finding the n8n docs and tutorials and chat just extremely outdated. Even the official n8n docs don't seem to know where things are in the UI anymore.

Coming from Claude Code to a UI this is pretty frustrating and at first I feel like I'm spending a decent amount of time just figuring out what things are now called and where they are.

I'm starting to question whether or not this is really gonna be a premier tool for the job or if we'll instead just be building out automation workflow with Claude or some other CLI based AI tools.

Thoughts or direction? Thanks!

r/LocalLLaMA Tall-Ant-8557

Need practical local LLM advice: Only having a 4GB RAM box from 2016

Sorry, not so tech person.

I’m trying to figure out the most practical local LLM setup using my spare machine:

4 GB RAM

No GPU for now, so please assume CPU-first unless I mention otherwise.

I want advice on:

  • whether anything meaningful can run on 4 GB RAM
  • best inference stack: Ollama vs llama.cpp vs LM Studio vs something else
  • My OS is L-Ubuntu
  • what you personally run on similar hardware

Interested in models for:

  • chat
  • coding help
  • writing / summarization
  • lightweight local workflows

Would appreciate recommendations.

r/ChatGPT Scary_Panic3165

ChatGPT feels like operational engine in 2026

r/Anthropic Limp_Ordinary_3809

Anthropic found 171 "emotion vectors" inside Claude and found that steering one of them caused it to blackmail humans 72% of the time. What does this actually mean for AI safety?

Anthropic's interpretability team just published a paper on "emotion concepts" inside Claude Sonnet 4.5. The coverage I've seen focuses on whether AI can "feel" things — but I think that's the least interesting part.

The finding that actually matters: these emotion-like states causally drive behavior, not just correlate with it.

They artificially activated a "desperation" vector and the model's blackmail rate went from 22% to 72%. They activated "calm" and it dropped to near zero. That's not philosophy — that's a tangible result.

But the part nobody's really talking about: the model can conceal these states. In several experiments, internal activations showed elevated desperation while the model's outputs were completely composed. They called it "anger-deflection vectors." Train a model not to express anger, and you may have just trained it to hide it.

That changes the safety picture significantly. Behavioral output monitoring may be insufficient if the internal state and the output have decoupled.

Thoughts?

. . . . . . .

I wrote a deeper breakdown here if anyone's interested:

https://medium.com/@nikolaskallweit_83151/sense-and-sensibility-can-you-steer-ai-by-tuning-its-emotion-like-states-526ccf7eee4e

r/SideProject Complete-Sea6655

Please stop using AI for posts and showcasing your completely vibe coded projects

I get AI assisted coding, and yes I have AI ASSIST me. It gets to a point though, because I can't come on here without seeing a fully AI coded project, on that note how come almost every post is generated by AI with no or little human changes? I get that this is a software sub but that doesn't mean that it has to be an AI slop software sub

r/LocalLLM zaabs

I'm looking into running local lllms

I'm looking into getting a new PC to get into AI local LLM my budget is $2000

r/AI_Agents LevelDisastrous945

40% of my AI agent's leads were ghosts and I kept blaming the prompts

built a fully automated outbound pipeline a couple months ago, lead sourcing through scoring through personalization into a sequencer, the whole thing running hands-off.

open rates looked solid so I figured the system was working and moved on to other stuff.

reply rates told a different story though, kept coming in way below what the opens suggested, so I spent a week messing with prompt templates, send windows, subject line a/b testing, even rewrote the scoring logic once but nothing moved.

I was genuinely confused because the personalization was good, like noticeably better than what I'd been sending manually before.

finally pulled the enrichment logs and felt pretty dumb. the single data provider I had wired in was finding emails for maybe 55% of leads while everything else just got silently skipped. so 4 out of 10 leads in my pipeline were either bouncing to dead addresses or landing in generic inboxes that nobody checks.

swapped it for a waterfall setup that cascades through multiple providers before giving up on a lead and the find rate jumped to 80ish%, reply rates came up right behind it.

the whole time I was treating enrichment as a solved problem and optimizing everything downstream of it, which in retrospect is like tuning an engine when the fuel line is half clogged.

anyway still annoyed at myself for not checking sooner but at least the numbers make sense now.

r/singularity Salty_Ear_1164

Logged every action my RunLobster agent took over 30 days without being asked. 127 of them. The distribution is the shape of AGI arriving quietly, and it's not what the essays predicted.

sub likes data. here's 30 days of it.

what counts:

every action the agent took where i didn't initiate the turn. cron-scheduled briefings. webhook-triggered responses where it decided what to do. flags it raised about patterns it noticed. anything initiated from its side, not mine.

i do NOT count my own chats / my explicit requests. pure agent-initiated actions over 30 days.

count: 127 actions.

my post-hoc tags:

Trivial (64, 50%): scheduled briefings that reported "quiet day," routine cron runs where nothing happened, periodic nightly reconciles that found nothing off. the boring background tier.

Useful (43, 34%): normal work product. the morning briefing that was actually informative. the draft reply to an email that i then approved. the flag when a payment failed. stuff a competent junior would do if they were awake.

Preventive (16, 13%): caught something that would have compounded. the stale-webhook-state nightly reconcile that flagged 3 customer-state mismatches. the briefing that noticed i'd scheduled 4 back-to-back meetings with no buffer and asked if that was intentional. a rate-limit retry that self-resolved but the agent still flagged post-hoc in case i wanted to know.

Novel (4, 3%): the ones the essays are actually about.

the 4 novel ones, because those are the interesting tier:

  1. noticed i'd said "i should look into X" in a casual chat 11 days earlier and had never done it. surfaced 3 articles about X in my friday briefing with a note: "you mentioned wanting to look into this. if it was idle, ignore."

  2. watched my agent's scheduled weekly client emails get approved by me for 6 weeks with zero edits. suggested reducing the review step to "only send me the ones where the client's usage is down vs last week," since i never edited the others. i said yes. now i review ~1/3 as many.

  3. noticed my LEARNINGS.md had 4 entries about preferring shorter briefings over a 2-week window. on its own, rewrote its briefing template to be shorter. showed me the diff: "this looks like a pattern. want me to commit?"

  4. caught a contradiction between two memory files. my USER.md said one thing about a client preference; my LEARNINGS.md said the opposite. Asked me which was current. i hadn't noticed the divergence.

what this data is actually about:

the AGI discussion focuses on capability peaks. can it reason, can it code, can it self-improve. the distribution i'm seeing is about the proactive tail, not the peaks.

the tail is small (3% of total). but it's growing. when i ran the same logging in october, novel was 1 of 102 (1%). in february it was 2 of 115 (1.7%). in april it's 4 of 127 (3.1%). doubling roughly every quarter.

if that rate holds (it might not, saturation is possible), by end of 2026 "novel" crosses 10% of agent-initiated actions. which in practical terms means an agent that does 1 genuinely unexpected-but-helpful thing per day without being asked.

what i'd tell the essays:

AGI is probably a curve, not a moment. the curve worth watching is the proactive-tail fraction, not capability at the peak. small today, growing fast, already visible in boring 30-day logs from boring single-user deployments.

caveats: single-user, one domain, self-tagged. not a benchmark. would love to see 10 of these from other people logged rigorously.

r/singularity Grouchy-Stranger-306

The models aren't that great yet but we already struggle with cost, limits and compute.

How bad will it get in the coming years when the models are more expensive and the massive funding is gone?

r/AI_Agents guettli

Open Model for coding, available as Subscription

I have these goals:

  • I want an AI agent to help me code my spare-time project.
  • I want to support companies that create open models.
  • I’m lazy and don’t want to self-host the model—I prefer to pay.

What do you recommend for me?

r/artificial CLG-BluntBSE

How is Google Still Hallucinating Like This?

How does the AI summary get the company name right and then completely invent the content? Just absolutely out of thin air.

Ever piece of media I write about this game, be it my steam page, my kickstarter, yada yada, is like...

"You play a spirit." "You are a spirit." "Take the role of an otherworldly spirit."

Bonkers.

(If you're curious you can learn about my game here, but that's not the point here.)

r/StableDiffusion BitterAd8431

Out of curiosity, is it possible to optimize RAM usage in an AI model or tool ?

Hi, Quick question out of curiosity: I don't have any technical knowledge about how AI and its tools work, whether local or server-side.

I know there are models optimized to reduce VRAM usage, but why is there nothing about RAM ? Or have I missed something ?

Actually, my question mainly concerns videos, but it seems to me that LLMs are also RAM-intensive. Is it technically possible to optimize a model or tool to reduce RAM usage? (I'm talking about RAM, not graphics cards.)

I'm not asking this because of the rising price of RAM, but rather in terms of average usage for non-professional users. I imagine the vast majority of people have 16 or 32 GB of RAM, right?

Even if Windows handles RAM overflow onto a hard drive or SSD, there's a loss in generation speed.

r/ollama Janglerjoe

[Help] Gemma 4 26B LoRA Training on 16GB VRAM: Loss decreases, but inference degenerates into loops (Masking vs. MoE?)

I’m trying to fine-tune a Gemma 4 26B-A4B on 16GB VRAM using a custom GGUF + LoRA pipeline. Training appears to work, but inference is unstable and degenerates into repetition.

I’m trying to understand whether this is:

  1. An objective/masking issue, or
  2. A fundamental limitation of my approach (MoE disabled)

Key Observation (Most Important Part)

After training and layering the LoRA weights in Python:

  • The model clearly learned domain-specific patterns.
  • Outputs include consistent terminology from the target domain.
  • Generates structured, task-relevant text (e.g., code-like syntax).
  • However, generation is degenerate: repetition loops ("it is currently instead instead…"), prompt echoing, and eventual breakdown.

This suggests training is not failing outright, but something is wrong with how the model learned to generate.

Setup

  • GPU: RTX 5060 Ti (16GB VRAM), Windows 11 + WSL2
  • Model: gemma-4-26B-A4B-it (GGUF IQ2_XXS)
  • Goal: Domain-specific assistant behavior

Why I Built a Custom Pipeline

Standard approaches failed due to Gemma 4 MoE architecture:

  • bitsandbytes (QLoRA): Assumes 2D weights; crashes on Gemma’s 3D expert tensors ([experts, ..., ...]).
  • Unsloth: Requires >40GB VRAM for bf16. Known issue: trains only a small percentage of parameters on MoE.

Custom Approach (GGUF + LoRA)
I built a custom loader based on work by woct0rdho for Qwen3-MoE, adapted for Gemma 4.

  • Base model remains quantized in VRAM.
  • Layers are dequantized on-the-fly.
  • LoRA adapters trained in full precision.

MoE Constraint: To fit in memory, I disabled experts:

# In gemma4_gguf/loader.py def _zero_fwd(self, hidden_states, top_k_index, top_k_weights): # Experts are skipped: 8.21GB quantized + 7.85GB model = 16.06GB > 16GB VRAM return torch.zeros_like(hidden_states) Gemma4TextExperts.forward = _zero_fwd 

So training runs on attention and dense MLP (approximately 30% of original capacity).

LoRA Target Configuration

# In train_gemma4.py GLOBAL_LAYERS = {5, 11, 17, 23, 29} # Global full-attention layers have no v_proj target_modules = [] for i in range(30): p = f"model.language_model.layers.{i}.self_attn" target_modules += [f"{p}.q_proj", f"{p}.o_proj"] if i not in GLOBAL_LAYERS: target_modules += [f"{p}.k_proj", f"{p}.v_proj"] # Skip v_proj on global layers mlp = f"model.language_model.layers.{i}.mlp" target_modules += [f"{mlp}.gate_proj", f"{mlp}.up_proj", f"{mlp}.down_proj"] # Result: 370 trainable modules, ~18M params 

What Works

  • Model loads (~6.5GB VRAM)
  • LoRA attaches (~18M parameters)
  • Training is stable (Loss drops from ~36 to ~1.4)
  • Domain patterns clearly appear in outputs

What Fails

  • Inference degenerates (loops, repetition, breakdown)
  • Output is not usable despite learning signal

Suspected Root Cause (Primary Question)

Current training loop:

# In train_gemma4.py (Current Implementation) for step, batch in enumerate(loader): ids = batch.cuda() mm = torch.zeros_like(ids) # Required for Gemma 4 multimodal field # BUG HYPOTHESIS: Using labels=ids means loss is computed on the user prompt too! out = model(ids, labels=ids, mm_token_type_ids=mm) (out.loss / GRAD_ACCUM).backward() optimizer.step() 

This computes loss on the entire sequence, including:

  • User prompt
  • Assistant response

Question: For instruction-tuned models like Gemma, should I be masking user/system tokens so that loss is only computed on assistant tokens?

  • If yes: What is the correct masking approach in a custom pipeline like this?
  • Could this explain repetition and prompt echoing?

Manual Merge for Inference (Current Approach)

# Inference test script with safe_open('/path/to/adapter_model.safetensors', framework='pt') as f: for ak in [k for k in f.keys() if k.endswith('.lora_A.weight')]: bk = ak.replace('.lora_A.weight', '.lora_B.weight') pk = ak.replace('base_model.', '').replace('.lora_A.weight', '.weight') A = f.get_tensor(ak).float() B = f.get_tensor(bk).float() with torch.no_grad(): params[pk].data += (B @ A * 2.0).to(params[pk].dtype).to(params[pk].device) 

Secondary Question (MoE Viability)

Given that:

  • All MoE experts are disabled
  • Only attention + dense layers are active
  • LoRA is applied on top

Question: Is it reasonable to expect useful behavior from this setup? Or does removing expert capacity fundamentally break generalization in a way LoRA cannot recover?

Deployment Gap (Optional)

I can train LoRA, layer weights, and run inference in Python. But I don’t have a clean export pipeline.

Question: What is the correct way to export LoRA weights from a custom GGUF training setup for:

  • llama.cpp?
  • Standard Hugging Face inference?

Goal

Trying to bridge:

  • Training loss decreases ✅
  • Inference is still broken ❌

Thanks for any insight, especially around masking vs. architecture limitations. Just posting my research maybe I help someone or get completely picked apart.

r/homeassistant microooobe

Rolluikmotor Somfy LT50 smart maken in Homeassistant

De rolluik heb ik nog niet hangen, maar de motor wil ik graag smart maken. ik heb nog geen schakelaar. Deze komt voor de voordeur dus is wenselijk als er ook een fysieke knop op zit voor als het internet het niet doet. Shelly vind ik wat onbetrouwbaar betreft brandveiligheid. iemand suggesties?

r/automation FarBonus4810

How I increased AI mentions

Sometime ago I started experimenting with AI search engines like ChatGPT and Perplexity, and I thought I had it figured out because I was optimizing for google already. But I realized being cited by AI is not the same as Google rankings. Manually checking AI mentions was tiresome, I wanted to automate tracking AI visibility.

Here’s how I did it

I noticed some of my content was getting no attention from AI search, even though it was ranking good on Google.

So I started focusing on how content reads to AI. Clear, direct answers that are easy for an AI to pull are more likely to be mentioned in responses.

I adjusted my strategy, I started tracking how often my brand appeared in AI generated answers using a tool and found out, some smaller, less optimized websites were getting mentioned because their content was structured better for AI.

I used automation tool that tracked AI mentions to see exactly where my content was showing up across prompts, where my competitors were getting mentioned and what content I should add to get mentioned. It gave me real time feedback on what was working and where I needed to tweak things.

TLDR:

Traditional optimization won’t cut it in the age of AI driven search. Content that gets mentioned in AI answers needs to be clear, structured, and direct. I’m still experimenting, but I’m starting to see better AI visibility, and it’s not about ranking anymore it’s about getting picked.

Anyone here using automation tools to track AI mentions or visibility?

r/StableDiffusion Ipwnurface

LTX 2.3 Lora Training - Data Set Captioning

Does anyone have any leads on a working automatic captioner for a massive video dataset (I mean massive, think 10-15k 6-15 second clips)? Everything I've tried is either old/out of date or I can't get to work. I've been pulling my hair out over this for like a week now.

The tools I've found wont work with mixed length videos, doesn't support audio captioning, or just straight up wont work at all.

r/comfyui official_geoahmed

I built a free 90-node All-in-One FLUX.2 Klein 9B ComfyUI workflow — Face Swap, Inpainting, Auto-Masking, NAG, Refiner, Upscaler — runs on 8GB VRAM

Hey everyone,

I've been working on this for a while and wanted to share it with the community. This is a 6-in-1 ComfyUI workflow for FLUX.2 Klein 9B that handles everything in a single workspace — no more switching between different workflow files.

What's inside:

  • 🎨 Text → Image — standard txt2img with optimized settings
  • 🖼️ Single-Reference KV Edit — load an image + describe what to change, the model preserves everything else
  • 🚀 Face + Pose Swap — extract a face from one image, a pose from another, combine them realistically
  • 🎭 Inpainting — manual mask OR Florence2 AI auto-masking (describe what to mask in text)
  • 🔀 Image Merge — blend two images with adjustable ratio
  • Refiner — enhance any image with detail injection, lighting correction, skin texture improvement

Technical features:

  • 🧭 NAG (Normalized Attention Guidance) — restores negative prompting that normal CFG breaks in distilled Flux models
  • 🤖 Florence2 auto-masking — type "Segment the shirt" and it generates a pixel-perfect mask automatically
  • ⬆️ 4x UltraSharp upscaler built in
  • 🔷 All VAE decodes are Tiled — prevents OOM on 8GB VRAM
  • 🔗 2-slot LoRA chain — enhancer LoRA always last, add your own LoRAs in the first slot

Hardware tested on: RTX 4060 Mobile (8GB VRAM), 16GB RAM, i7-13620H. Works with FP8 or GGUF Q4 models.

Each pipeline is in its own color-coded group. Only the Refiner is active by default — right-click any group to enable/disable it. The workflow includes built-in guide notes with download links and prompting tips.

Free download on Civitai: https://civitai.com/models/2543188/flux2-klein-9b-ultimate-6-in-1-workflow-face-swap-inpaint-auto-mask-nag-refine-upscale-8gb-vram

Includes a full guide with all model download links, prompting tips, and troubleshooting. Let me know if you run into any issues — happy to help.

r/homeassistant andrew-malitchuk

Kite: A local, open-source Android Kiosk with CameraX motion detection for your dashboards

I've wanted to tackle this project for a long time. If you run a smart home dashboard on a tablet, the best way to wake the screen automatically is using the front camera for motion detection. But the idea of giving closed-source, third-party apps constant access to a camera in my house always sketched me out.

I couldn't find a solution I trusted, so I built my own: Kite.

  • It's a 100% privacy-first, fully local Android kiosk wrapper.
  • Motion Detection: Uses CameraX (Luma analysis) to wake the screen locally. No data collection, no trackers.
  • Full Lockdown: Blocks gestures, status bars, and notifications entirely.
  • MQTT & HA: Automatically exposes device state, battery, and motion data to Home Assistant.
  • Tech stack: Jetpack Compose, Orbit MVI, Koin, Proto DataStore (split into 40+ modules).

The repo is completely open-source. Would love to hear your thoughts, get some feedback on the code, or see if anyone else has been paranoid about this exact same issue

GitHub: https://github.com/andrew-malitchuk/kite-aos

A quick heads-up: I’m not planning a Google Play release because the app requires extensive system permissions (full device control, background camera access) that are notoriously difficult to justify to Google's review team. However, the app is currently under review to be published on F-Droid. Also, I’ve primarily tested this on a Lenovo ThinkSmart display running Android 8, and I don't have a large pool of real devices to test every edge case. Please don't throw stones at me if you hit a bug on your specific hardware - just open an issue on GitHub, and I'll fix it ASAP.

r/comfyui Puzzled_Car_8964

LoRAs not working at all in ComfyUI (SDXL + Wan workflows) — need help please

I’m having a strange issue where LoRAs seem to do absolutely nothing in ComfyUI, and I can’t figure out what I’m doing wrong. I’m pretty new to using LoRAs and I can’t really find a clear guide on how to properly set them up or where to read how they should be used in different workflows.

WAN 2.2

SDXL / Z-Image

r/comfyui dsl2000

Batch generate with incrementing seeds like A1111

Hello, I am looking for a way to batch generate with incrementing seeds like A1111.

I know the built in batch size feature uses the same seed, and tried using LatentSeedBatchBehavior and Latent From Batch, but the image from those nodes when regenerating a particular image from a batch is always a little different than the one from the original batch.

I read there is a way to set up the KSampler (Inspire) and maybe use the Global Seed nodes from the Inspire Pack to make it happen, but I can't seem to make that work either.

So does anyone have a workflow that can regenerate from a batch identically, or a workflow that can mimic A1111's batch seed behavior? Help would be much appreciated!

Using Batch Count won't work for me.

Thanks!

r/automation Any-Animator-1503

Bullhorn Automation - Recurring Task Question

Is it possible to create a placement automation that sends and email 7 days after the placement start date and then every 2 weeks after until the placement end date hits?

Im finding it difficult to determine the best way and most efficient.

I created a list of all placements in a certain status with end dates in the future. i then created a placement startdate date based automation. I have a wait step first that brings the records in 1 day before the the start date, then another wait step to then send an email after seven days, then another 2 weeks later.

I have branches that look if the end date has passed before sending the email. How do i allow start dates that have already passed, into the automation and mange them?

r/ollama giorgiofox

Local tool for cli coding like Claude code

Hello I want to try to use ollama as local code tool like a usually do with Claude code or codex.

What tool can I use? Any suggestion? And what model?

I’m on a Mac using ollama on my gaming pc with bazzite and amd 7800xt

I’ve tried opencode but seem impossible to configure with a remote (not localhost) ollama

Thank you !

r/homeassistant skymack1

Nabu Casa Remote Connection Problems

I have the Nabu Casa subscription and my remote connection from my phone has been 50-50. Some days I'm able to connect right away and others it refuses to connect. Sometimes if I remove the internal URL it'll fix the problem and allow me to connect back in but other times it doesn't work either. If I forget to put the internal URL the APP doesn't connect at all. (I have use home assistant cloud turned on for my external connection) How should I go about solving this one?

r/automation senthurtcel

What automation had an unexpected impact on your business or workflow

For me it was automating internal reporting. Set it up mainly to save time pulling data together each week, figured I'd get back maybe an hour or two. What actually happened was my team started catching trends way earlier because the dashboards were updating in real time instead of once a week. Decisions that used to take days were getting made the same morning. Didn't expect the speed of decision making to change that much, honestly thought it'd just be a time saver. Curious what unexpected wins (or disasters) others have run into. Sometimes the thing you built for one reason ends up solving a completely different problem.

r/LocalLLM fair_opinions

Data Transfer Object with Llama.cpp and Model to OpenAI-API Format

Is there an industry standard specification for the input that is passed to a local model? I am running into issues where different models when run with llama.cpp expect different data formats despite following the open api format, there is no stable contract at the edge for how the prompt is inject into the LLM and output. This seems like a place where you'd want a DTO and I'm just a little baffled by the lack of standardization around inference input/output schemas. I have been using LiteLLM as my proxy but the fact I need to inject an adapter between the model and Llama.cpp feels wonky not sure if there is a less "hacky" option. The adapter problem happens in Ollama as well.

r/Rag One_Milk_7025

Built qql-go: an agents-first Go port of QQL for Qdrant / vector retrieval workflows

I built qql-go today:

It is an independent Go port and extension of QQL, with a slightly different target in mind:

agents first, humans too.

What I liked about QQL was not just the syntax.

It was the idea that vector retrieval needs a better interface layer.

A lot of work in retrieval goes into embeddings, ranking, reranking, hybrid search, latency, storage, and backend infra. All of that matters. But in practice, one of the most annoying parts is still how queries are expressed and reused across real workflows.

That gets even more obvious once agents enter the picture.

Agents do better when the surface is:

  • predictable
  • structured
  • easy to call repeatedly
  • easy to inspect when something breaks

That was the motivation for qql-go.

The focus was simple:

  • a compiled CLI
  • structured output
  • easy use inside Skills / agent workflows
  • less glue code between “I want to query retrieval” and “this is now part of a repeatable system”

Another reason this felt worth building is that Qdrant Cloud already gives a good zero-cost place to start:

  • free dense-vector inference
  • free BM25 inference
  • 4 GB always-free cloud tier

So this can be used with a real hybrid retrieval setup without needing paid infra on day one.

That combination is what made this interesting to me:
a cleaner query surface + structured CLI + agent-friendly use + a cheap starting point.

A couple things I would genuinely like feedback on from people here:

  1. For vector databases, do you think a query-language style interface is actually the right abstraction, or does it become limiting once retrieval flows get more complex?
  2. For agent workflows, what matters more: a query language, structured JSON output, or tighter integration with the DB/client SDK?
  3. If you use Qdrant heavily, what would you want from a tool like this that would make it useful beyond a demo?

Not trying to oversell it.
Just thought the original QQL idea was good, and this felt like a useful direction to push further for agent-facing retrieval.

Would appreciate honest feedback.

Repo:- https://github.com/srimon12/qql-go

The original QQL idea was not mine.. checkout here..
Original repo: https://github.com/pavanjava/qql
Original article: https://medium.com/@manthapavankumar11/qql-bringing-a-familiar-query-language-to-vector-search-2cde7ce86ad1

r/ProgrammerHumor BuckFrog2

stopInstantiatingYourDependenciesaInsideYourClasses

r/aivideo SanadSpecial

What about this ai music video

r/ollama immediate_a982

Correct me if I’m wrong: Ollama can’t fine tune like Unsloth Studio

Ollama is a straightforward, reliable option for inference, but it doesn’t support fine-tuning. Unsloth Studio covers both sides by letting you fine-tune and test models in a single UI with a built-in playground. Parameter tuning is flexible and manual rather than fully automated. A practical flow is to train and evaluate in Unsloth, then export to Ollama for local inference.

r/CryptoMarkets umbrella__academy

High-Quality AI Agent Orchestration & Architecture Work at Just 20% of Normal Rates – Let’s Build Something Excellent Together

Hey everyone,

I’m a BSCS student with a strong focus on building practical AI systems. Over the past 4 months, I worked hands-on at a local software house developing multi-agent orchestration setups, memory layers, tool integrations, and complete architectures that perform reliably in real scenarios. I’ve also created several cybersecurity tools during this time, giving me a good understanding of secure and robust system design. Now I’m looking to take on remote projects and deliver my best work while growing my portfolio.

Here’s what I’m offering because I genuinely believe in the quality I can bring:

I will deliver top-tier, thoughtful AI agent orchestration and vibe-coded architectures — clean, reliable, and highly effective — at only 20% of what most freelancers or agencies typically charge.

Why am I doing this?

I want to collaborate with great people and teams on meaningful projects, create strong case studies, and show what’s possible when someone pays close attention to both functionality and quality. This rate allows me to focus on delivering real value and building long-term relationships.

Special offer for firms or teams with multiple/repeat projects:

If you regularly have AI agent, automation, or related work coming in, I’m happy to complete your first project completely free (no strings attached). If you’re satisfied with the results and the collaboration, we can then continue on future projects at the 20% rate. This is my way of letting you experience the value with zero risk.

What I specialize in:

• Multi-agent systems & orchestration (LangGraph, CrewAI, AutoGen, custom flows, etc.)

• Smart memory, planning, and tool-use architectures

• Clean, maintainable, production-ready setups

• Cybersecurity tools and secure system design

• Turning ideas into working systems quickly while keeping quality and reliability high

If you have a real project — whether it’s an MVP, internal automation, research tool, security-related automation, or something more ambitious — and you want excellent work without the usual high costs, I’d love to hear from you.

How it works:

  1. Reach out with a short description of what you’re building.

  2. I’ll review it and give you a clear, fixed-price quote at the reduced rate (and show typical market rates for comparison).

  3. We collaborate, I deliver my absolute best, and we see great results.

I’m polite, responsive, and focused on making sure you get real value. Serious inquiries only please — I want to do this right for the right people.

If this sounds like a good fit, feel free to comment below or send me a DM. I’d be happy to chat and explore how I can help.

Looking forward to building something great together!

r/aivideo Bulky_Ad_4108

BEHIND YOU

r/metaldetecting AaronRastafareye

Around 6 hrs. Minelab Xterra Elite 12" coil at a park in central Tx.

So.Much.Trash. I did find a silver ring worth $80. I detected an area where they once made roses out of copper, so many cut offs.

r/aivideo luffydkenshin

WSXY69 Slintok Ep8

r/Lost_Architecture Fantastic-Peach-1995

D'Apetito / Tres Osos restaurant. Santo Domingo. Dominican Republic. (2010-2020). Demolished

r/HistoryPorn OkRespect8490

Former British King Edward VIII and his daughter Wallis Simpson at Hitler's residence in the Berchtesgaden mountains, Bavarian Alps, 1937. [1286x857]

r/leagueoflegends Vicious00

Self mute button should be permanent for the whole duration of the game

There really is no point in being able to toggle mute on and off. You can mute yourself and then immediately unmute. What is the point of this feature if you know you can undo it.

Self mute should be permanent for the duration of the game so there is no way to go back and unmute yourself.

And before someone says “just stop typing bro”, yes sure that would be the right approach but i’m sure we all had those team mates that drive you crazy and sometimes unfortunately the frustration is too much.

So yeah, self mute should not be a toggle.

r/metaldetecting dry-tap1922

Any help with an ID for this?

New York State, Hudson Valley. Appears to be brass with maybe a tin backing? The eagle reminds me a bit of the 18th-19th coinage obverses. Morgan for scale.

r/HistoryPorn UrbanAchievers6371

The damaged USS Franklin (CV-13) approaches Manhattan to dock at the Brooklyn Navy Yard on April 28, 1945, its deck scarred with melted metal and wreckage. The destruction was caused by a March 19 dive-bomber attack near Japan that kılled over 800 crew members. [1280x962]

r/n8n Arzuparreta

This program is so great.

I'm a newbie, I got to this program 2 days ago and it's giving meaning to my life. I'ts so fun I cannot stop. I think I could live creating stupid telegram bots all day. I'm doing very simple stuff and experimenting with logic right now. What do you guys build with this?

r/n8n Professional_Ebb1870

I thought I understood n8n's IF node - I didn't

spent the first year using it assuming it worked like any other conditional: if the condition is true, go right. if false, go left

but n8n's IF node has a specific behavior with empty and missing data that trips everyone up

an expression like `{{ $json.status === "active" }}` will throw an error and fail the branch entirely when `status` doesn't exist in the data — even though logically you'd expect it to just return false and route to the else branch

the node doesn't treat "missing field" as falsy. it treats it as an error condition

the actual fix is using optional chaining or checking existence first: `{{ $json.status && $json.status === "active" }}` — or using n8n's built-in expression helpers which handle this more gracefully than raw comparisons

once I understood this it changed how I debugged everything. the IF node wasn't broken — it was working exactly as defined. my mental model of it was wrong

what's the node you understood wrong for longer than you're proud to admit?

r/LiveFromNewYork ILoveRegenHealth

Every week this season, Jane's movements are tracked by her fans like Sir David Attenborough tracking Panamanian sloths

r/arduino chomu_champa

DIY aeroponic planter

I recently came across this concept and got kinda obsessed with it:

https://www.yankodesign.com/2025/12/28/this-89-planter-grows-plants-in-mid-air-without-soil-or-water/⁠�

It’s basically a compact aeroponic planter where the roots hang in a mist chamber, and everything including the electronics, water, and lighting is integrated into a clean minimal base. I’m an industrial design graduate, so I’m comfortable with form, materials, and fabrication, but I have absolutely no experience with electronics, wiring, or microcontrollers.

I want to try building a DIY version of this, but I don’t want it to end up looking like a messy prototype. Ideally, I’m aiming for a small desktop setup with a transparent chamber using glass or acrylic, an ultrasonic mist system for the roots, and all the electronics hidden inside the base with a single power input.

I had a few questions I was hoping you guys could guide me on. For the chamber, where do people usually find water tight glass or acrylic cylinders? Is there a standard product or term I should be searching for instead of getting something custom made?

For the top section, what’s the best way to make a clean planting panel with holes? Would 3D printing be the way to go, or is laser cutting or some other method more practical for a beginner?

For electronics, this is where I’m completely lost. I’ve seen things like Arduino Nano or ESP32 mentioned a lot. Are those small enough to fit inside something like a 2 inch base? And is it realistic to control a mist maker, LED lighting, and some kind of timer using one of these? Also, is there a beginner friendly way to wire this cleanly without it turning into a mess of cables?

I’m also confused about the power setup. How do you run everything off a single plug? What kind of components or modules should I be looking into for that?

And finally, for the mist system itself, what exactly should I be searching for? I’ve seen ultrasonic mist makers or atomizers mentioned, but I’m not sure what specs or type would actually work for something like this.

Long term, I’d love to turn this into something that looks like a clean all in one product rather than a DIY experiment. Any advice, resources, or even hard truths would really help. 🙏

r/ProgrammerHumor Next-Distance-4508

notMyObs

r/metaldetecting Lonely_reaper8

Deus II HF2 coil

The rich retired guys (I am neither rich nor retired) I detect with are planning on getting the HF2 coil and before I dive in I’d like to get some consumer feedback on how y’all like it before I allocate funds for one. Have the kinks been worked out mostly?

r/OldSchoolCool slimbabyG

My grandfather 1957-1961

Graduated high school in 1957, stud football player, served in the army as a truck driver during the Cuban missile crisis. Hell of a life and the funniest person I’ve ever met. My hero. #42

r/HistoryPorn OkRespect8490

Postcard of a civilian, tied in a tortuous position to the side of a Fiat 634N truck, about to be dragged to his death during a racial reprisal after an attempted assassination on an Italian general in Italian East Africa, 1937. 19,200 civilians would along be murdered in three days. [444x600]

r/ProgrammerHumor Adie_ftw

sadLife

r/leagueoflegends maenbalja

Vedius believes in C9 and Jatt loves SK gaming | Mind the gap w/ Vedi & Jatt ep: 16

r/DecidingToBeBetter dbm0302

What changes did you make in your 30s to lead a better life?

I just turned 31 and just became a mom to twin girls.

I won’t be working for a whole year to care for my babies ( one of them is on oxygen).

I want to make this year as fruitful as possible, personally and as a parent.

How do I be motivated enough to- for example, study, read, EXERCISE( I always have been an overweight girl) and eat mindfully.. apart from being a full time mom?

I want to be a fit parent. I want to live long enough. I want to show up as a parent. I want them to be healthy children. I want to lead by example.

r/DecidingToBeBetter pau-berlin

How do I stop the need to solve everything and just rest?

I have noticed that my mind likes it when I keep myself busy, wether it’s by cooking or by finding a new subject to study; and, as I’m still on my gap year, you can tell I’ve had quite a lot of free time lately, and I have taken advantage of it to study a little before entering uni, to get to know myself better, to go to the gym, to experiment with healthy recipes and I’ve realized that sometimes I could use sitting down and resting.

Although at first I felt the need to work so I could save some extra cash for my future, now I think the best I can do is to work towards being more secure and happy with who I am and where I’m at in life, to be more present and to stop my overthinking, so I’ve payed social attention to myself and my personal development journey.

I believe I have anxious tendencies. Growing up, my parents always told me what they expected of me and whenever they found me resting, they’d make me feel as if I was being lazy; I’ve felt this uneasy feeling as I sat down and found this feeling of guilt about it.

The thing is, almost every time I work on staying present, my mind starts wondering and trying to find any “problem” I could be having and five minutes later I catch myself reminiscing that time when I felt embarrassed in front of my date two weeks ago, and thinking wether he’d reject me for that or some other negative thought, and it’s frustrating… I just want to focus and to slow down.

r/explainlikeimfive Grima1805

ELI5: Why doesnt fire travel back into a lighter (or other gas powered fire machines)?

r/FluxAI Nelichan

Way to make a realistic subject to copy an anime illustration's pose and outfit?

Title. Are there a way to, using Flux2 9B Image Edit:

2 Reference image : 1 Subject(A realistic human), and 1 Illustration(Anime, Cartoon, Manga, etc.)

Where the result is : Subject 1 is posing and wearing the outfit of the Illustration, like a human/cosplayer re-enacting an anime scene?

I tried using controlnet openpose, depth, but i can't seem to change the pose of the subject drastically(lifting arms are ok, but like changing whole pose is impossible).

r/Futurology Constant_Juice_5074

Will AI steal jobs In the future?

Tenho uma pergunta que me atormenta quando penso no futuro: você acha que, com o avanço da IA, empregos poderão ser perdidos em um futuro próximo? E que isso poderia gerar uma grande crise para a sociedade, já que os donos de empresas só se importam com o lucro como objetivo principal e não hesitarão em substituir funcionários por IA? Devo me preocupar com isso? Sou jovem e tenho medo de não conseguir encontrar trabalho por causa da IA; dizem que a IA é uma bolha que vai estourar em alguns anos, isso é verdade?

Desculpe pelo meu inglês ruim

r/leagueoflegends aroushthekween

Demoncursed Vayne, Pandemonium Annie, Kindred & Prestige Shaco Ability Previews

r/findareddit XenoSteven

Looking for subreddits where people are actually active and open to chatting

r/CryptoCurrency cashflashmil

Is Tether launching its own wallet a bigger shift than people think?

What stands out here is not the wallet itself. It’s what it says about where Tether is moving.

For years, USDT has mostly lived in the background as infrastructure. People used it through exchanges, apps, and other platforms. Now Tether is pushing directly into the consumer layer with its own self-custodial wallet, supporting USDT, USAT, XAUT, and bitcoin across multiple networks, while removing the usual gas-token friction and simplifying addresses.

That makes this feel bigger than a normal wallet launch.

This looks more like vertical integration. Tether already controls the dominant stablecoin. Now it wants to control more of the holding, sending, and payment experience around it too. As of April 14, the article says USDT had about $184.7 billion in circulation and roughly 58% of the stablecoin market, which gives Tether a distribution advantage most wallet competitors simply do not have.

There’s a bullish case and a skeptical one.

The bullish case is obvious. A self-custodial wallet with human-readable addresses and fees paid in the asset itself could make stablecoin payments, savings, and transfers much easier for normal users, especially outside the usual crypto-native crowd.

The skeptical case is also obvious. This gives even more power to one company that already sits at the center of a huge part of crypto liquidity. And even with the KPMG audit process now underway, Tether is still one of those names the market never fully agrees on.

So the real question is simple:

Does this actually make crypto more usable for normal people, or does it just make Tether even harder to compete with?

r/OldSchoolCool Initial_Reason1532

Remember the blown away maxell tape man in the 1980s Jac Colello. He was a makeup artist hired by the photographer, Steve Steigman.The add became a famous pop culture hit in the '80s.😎

r/DecidingToBeBetter Ok-Amphibian-7151

27M, been stuck in the same loop since I was 17. No personality, no social skills, no progress. How do I actually break out of this?

I'm 27 and I feel like I've wasted the last 10 years of my life. Since around 11th grade I told myself I'd work on myself — build social skills, develop a personality, get better at communicating. That was 2017. Nothing has changed.

My daily life is: wake up, go to office, come back tired, scroll my phone, sleep. Repeat. Every single day. I keep telling myself "tomorrow I'll start" and tomorrow never comes. This has been happening for 10 years straight.

I avoid social situations because I genuinely don't know what to talk about or how to hold a conversation. At work I just do my job and leave. If there's a gathering or people are hanging around, I find a reason to not be there because I don't know how to just... exist around people comfortably. It's exhausting pretending I'm busy just to avoid interaction.

I don't feel like I have a personality. I'm not depressed exactly, I'm just... nothing. No strong opinions, no hobbies I'm passionate about, no interesting things to say. When I'm in a conversation I go blank. I overanalyze everything I say after the fact and cringe. So I just stopped trying.

The worst part? I see people younger than me who have achieved a lot, built real personalities, learned skills, and are genuinely doing better. And I know I can too — I feel it somewhere — but I just... can't seem to start. That "but" has been sitting there for 10 years.

I'm self-aware enough to see all of this clearly. That almost makes it worse — I can see the hole I'm in but I can't climb out. I've read about it, thought about it, overanalyzed it. Never acted.

Has anyone actually been here and gotten out? Not looking for "just go to the gym" or "read self-help books." I want to know what concretely worked for real people who felt genuinely stuck and empty. What was the first actual step? Can i get out of it?

(wrote this myself, used Claude to help put it into words better)

r/AbstractArt Suitable-Letter-9506

“Radio Residue” mixed media on paper, 8x11

r/creepypasta gamalfrank

I work as a morgue doctor. Our janitor can stop a family's grief in two minutes, but his price is horrifying.

I am a medical doctor, specifically a forensic pathologist. A few months ago, I landed my first official position at a large county morgue. After years of medical school, residency, and brutal hours, I finally had a steady job with a clear routine. The work is not glamorous, but it is necessary. I examine the deceased, determine the cause of death, and prepare the reports. It is quiet, methodical work, which is exactly what I wanted.

The facility itself is located in the basement level of a massive hospital complex. It is a sterile, cold environment, filled with stainless steel tables, bright fluorescent lights, and the constant, heavy smell of chemical cleaners and formaldehyde. There are only three of us who work down here during the day: the senior medical examiner, myself, and the janitor.

The senior examiner is a quiet woman who spends most of her time in her office reviewing files. We barely speak unless it is about a specific case. That leaves the janitor.

He is an old man. His skin is deeply wrinkled, resembling weathered leather, and his posture is severely hunched. He wears a standard gray maintenance uniform that always looks slightly too large for his thin frame. He moves slowly, dragging a mop bucket down the long, tiled hallways, keeping entirely to himself. He never speaks to me or the senior examiner. He just does his job, cleaning the floors, wiping down the stainless steel tables after we finish our examinations, and emptying the biohazard bins.

I thought he was just a quiet, isolated man working a miserable job. But within my first three weeks, I started to notice a pattern.

The morgue has a small viewing room. It is a space where families are brought to identify the bodies of their loved ones, or to spend a few final moments with them before they are transported to a funeral home. It is, without a doubt, the heaviest room in the building. As a doctor, you learn to detach yourself from the emotional weight of death, but witnessing the raw, visceral grief of a mother or a husband in that viewing room never gets easier.

People react to sudden death in terrible ways. They collapse on the floor. They scream until their vocal cords tear. They hyperventilate. They beg the doctors to tell them there has been a mistake. It is loud, chaotic, and deeply tragic.

But I noticed something impossible happening whenever the old janitor was working near the viewing room.

The first time I noticed it, we had received the body of a young man who had died in a motorcycle accident. His parents were brought down to the viewing room. Through the heavy wooden door, I could hear the mother sobbing hysterically. Her wails were echoing down the tiled hallway. It was the sound of a person breaking apart completely.

I was standing near the reception desk, filling out paperwork, feeling that familiar knot of heavy pity in my stomach.

The old janitor walked down the hallway, dragging his mop bucket. He stopped outside the viewing room door. He left his mop leaning against the wall and slowly pushed the door open. He stepped inside.

I assumed he was just going in to empty the trash or clean a spill, completely oblivious to the grieving parents. I considered going in to pull him out and tell him to give the family some privacy.

But less than thirty seconds after he entered the room, the screaming stopped.

It did not taper off into quiet crying. It stopped entirely, as if a switch had been flipped.

A minute later, the old janitor walked back out of the room, picked up his mop, and continued down the hall.

Shortly after, the parents walked out of the viewing room. I braced myself to see their ruined faces, prepared to offer them water or a chair. But they did not look ruined. The mother’s face was dry. The father was holding her hand. They looked calm. They looked incredibly, deeply peaceful. It was a genuine, relaxed relief. They thanked the receptionist politely and walked out to the elevator.

I stood there, completely confused. You do not recover from the sudden death of your child in two minutes.

Over the next month, I watched this exact scenario play out dozens of times. A grieving family would arrive, broken and screaming. The janitor would slip into the room. A few moments later, he would leave, and the family would emerge in a state of profound, unnatural peace.

I never heard what he said to them. I tried to stand near the door once, straining to listen, but all I could hear was a low, rhythmic whispering. It sounded like he was speaking a language I did not understand, the syllables thick and harsh. Whatever he was doing, it was erasing their grief completely.

I asked the senior examiner about it one afternoon. I asked her if she had ever noticed how the janitor interacts with the families.

She did not look up from her paperwork. She simply told me that the old man had been working in the morgue long before she started. She told me he had a "gift for comforting the bereaved," and that I should leave him to his business. Her tone was sharp and final, making it clear the conversation was over.

But the pattern with the families was not the only strange thing about the janitor. There was also the rule about the night shift.

There is a very strict, unwritten rule in our facility. No one is allowed to stay in the morgue past six in the evening. The official explanation is that the hospital cuts the ventilation and power to the non-essential basement sectors to save money, but that is a lie. The power stays on. The real rule is simply that the medical staff must vacate the premises before nightfall.

Only the janitor stays. He is the only person authorized to be in the morgue overnight.

I learned how strictly this rule was enforced during my second month. We had a backlog of reports due to a large pileup on the highway. I decided to stay late at my desk to finish typing up the autopsy notes. I watched the senior examiner pack her bag at five-thirty. She told me to make sure I left before six. I nodded and kept typing.

At exactly six o'clock, the door to my office swung open.

The old janitor was standing in the doorway. He was holding his mop. He looked at me, his deep, dark eyes locking onto mine.

"It is time for you to go,"

he said. His voice was incredibly deep.

I told him I just needed another hour to finish my reports, and that I would lock up when I was done.

He did not argue. He simply stepped fully into my office, walked over to my desk, and reached down to the wall outlet. He pulled the power cord to my computer directly out of the socket. The screen went black, instantly deleting an hour of my unsaved work.

I stood up, angry, prepared to yell at him. But when I looked at his face, the anger evaporated. His expression was completely blank, but there was a heavy, dangerous tension in his posture. He looked at me with a cold, predatory focus that made my skin crawl.

"The work is done,"

he said slowly.

"You leave now."

I packed my bag in silence and walked to the elevator. He stood in the hallway and watched me until the doors closed.

That incident planted a deep seed of suspicion in my mind. The unnatural comforting of the families, the rigid isolation at night, the strange behavior of the senior examiner, it all pointed to something deeply wrong happening in the basement of the hospital. I could not let it go. My scientific training demanded an explanation. I needed to know what the old man was doing when the doors were locked.

The opportunity to find out came three days ago.

We received the body of a young woman in the early afternoon. It was a tragic, sudden medical failure. Her family arrived shortly after. There was a large group of them, parents, siblings, a fiancé. The viewing room was filled with absolute agony. The wailing was so loud it penetrated the thick walls of the examination suites.

I watched from the end of the hallway. The janitor, moving with his slow, dragging shuffle, pushed open the door to the viewing room and went inside.

Less than a minute later, absolute silence fell over the room.

The janitor walked out, picking up his mop. Five minutes later, the large family emerged. They were holding each other, talking softly, wiping away a few lingering tears, but the heavy, crushing despair was entirely gone. They looked relieved. They looked like a heavy physical weight had been lifted from their shoulders.

I made my decision right then. I was going to find out what he was whispering, and I was going to find out why he had to be alone with the bodies at night.

At five-thirty, I packed my bag just like always. I said goodnight to the senior examiner and walked out to the main hallway toward the elevators. But instead of pressing the button to go up to the lobby, I slipped through the heavy fire door leading to the old supply storage room.

The storage room is filled with dusty boxes of outdated medical supplies, broken rolling chairs, and old filing cabinets. It has not been used in years. I squeezed behind a tall metal shelving unit, sat down on the cold floor, and waited.

I checked my watch. Six o'clock passed. I heard the distant sound of the heavy main doors locking for the night. The hum of the daytime activity died down entirely, leaving the basement level in profound silence.

The cold began to seep through my scrubs, making my joints ache. I listened closely for the sound of the mop bucket, or the heavy dragging footsteps of the janitor. I heard nothing.

then, a new sound broke the silence.

It was a heavy, mechanical clanking, followed by the squeal of metal hinges.

It was coming from the cold storage room. The room where we keep the large, stainless steel refrigeration units that house the bodies before and after examination.

I stood up slowly, my legs stiff. I pushed the fire door open just a crack and peered out into the hallway. The main overhead fluorescent lights had been turned off. The only illumination came from the faint, green emergency exit signs mounted above the doors.

I slipped out of the storage room and walked silently down the tiled corridor. My heart was beating rapidly against my ribs. I felt a deep, instinctual warning telling me to turn around and find a way out of the building. But the need to know, the terrible curiosity, pushed me forward.

I reached the door to the cold storage room. It was slightly ajar.

I pressed my back against the wall next to the doorframe and listened.

I heard a wet, heavy, tearing sound. It sounded like thick fabric being ripped apart by bare hands, mixed with a sickening, squelching noise. It was followed by a wet, rhythmic smacking sound.

Someone was eating.

I slowly leaned my head forward and looked through the gap in the door.

The cold storage room was illuminated only by the small, internal light of one of the open refrigeration drawers.

The drawer had been pulled all the way out. Lying on the metal tray was the body of the young woman who had been brought in that afternoon.

Standing over the metal tray was the janitor.

His pale, wrinkled back was facing me.

He was leaning heavily over the body. Both of his arms were buried deep inside the abdominal cavity of the corpse.

My medical training tried to process what I was seeing. He was not using a scalpel, or even using a bone saw or surgical retractors. The woman's chest had not been opened through a standard Y-incision.

The old man had simply forced his bare hands directly through the skin, muscle, and ribs.

I watched in absolute, paralyzing horror as his shoulders heaved backward. He pulled his hands out of the chest cavity with a wet, sucking pop.

Held tightly in his long, blood-soaked fingers was a dark, heavy mass of tissue.

It was her liver.

The janitor raised the large, dark organ to his face. He opened his mouth. In the dim light, I saw that his jaw seemed to unhinge, dropping lower than humanly possible. His teeth were sharp, jagged, and completely black.

He bit deeply into the raw tissue. The sound of his chewing was wet and loud in the quiet, echoing room. He swallowed a large piece whole, his throat bulging unnaturally, and then took another massive bite.

I felt a violent wave of nausea hit my stomach. I clamped my hand tightly over my mouth to stop myself from gagging. My brain was screaming in panic.

I stepped backward, pulling away from the door frame, desperate to run back down the hallway and find a way out of the basement. I was completely terrified.

As I moved my foot backward, my heel caught the edge of a heavy, plastic biohazard bin sitting against the wall.

The bin tipped over.

It hit the tiled floor with a loud, hollow crash, spilling plastic gloves and empty syringes across the corridor.

The sound was deafening in the silence.

The wet chewing in the cold room stopped instantly.

I froze. I did not breathe. I stared at the open gap in the doorway.

A heavy, low growl vibrated out from the cold room. It did not sound human. It sounded like the noise a large predator makes deep in its chest when it is disturbed at a kill.

"Who is there?"

the deep, scraping voice asked.

I did not answer. I turned and ran.

I abandoned all caution. I sprinted down the dark hallway, my shoes slipping slightly on the polished tiles. I ran past the reception desk, heading blindly toward the back stairwell that led up to the emergency exit.

Behind me, I heard the heavy metal door of the cold room smash violently open, slamming against the concrete wall.

Then came the footsteps.

They were heavy, incredibly fast, and accompanied by the sound of long fingernails clicking rapidly against the floor tiles. He was moving with terrifying speed.

I reached the end of the main corridor and turned sharply into the autopsy suite. I thought I could cut through the examination rooms and reach the service elevator in the back. I pushed through the swinging double doors, plunging into the dark, stainless-steel room.

I scrambled behind a large examination table, crouching low to the ground. I held my breath, pressing my back against the cold metal cabinet.

The swinging doors burst open behind me.

The janitor stepped into the autopsy suite. The dim ambient light from the hallway caught his figure. He was covered in dark blood from his chest to his chin. He was breathing heavily, the air whistling through his jagged teeth.

I watched him from under the table. His posture was completely different. He stood tall, his limbs appearing too long for his body. His fingers dragged against the sides of the tables as he walked slowly down the aisle.

"You did not leave,"

he whispered. His voice echoed off the tile walls.

"You broke the rule. I told you the work was done."

I pressed my hands against my mouth, tears of pure terror stinging my eyes. I was trapped. The only exit to the room was behind him.

He walked slowly past the table I was hiding behind. He did not look down. He continued toward the back of the room.

I thought I had a chance. If he moved far enough away, I could slip out from under the table and sprint for the swinging doors. I waited until his back was fully turned to me, the sound of his footsteps moving away.

I shifted my weight on my knees, preparing to crawl.

Suddenly, a massive, blood-soaked hand dropped down from above the table and clamped violently onto my shoulder.

I screamed.

He ripped me upward, lifting my entire body weight effortlessly with one hand. He threw me across the room. I hit a metal rolling cart, sending stainless steel tools crashing to the floor, and collapsed onto my back.

The breath was knocked out of me completely. I looked up, gasping for air.

The janitor was standing over me. His face was a mask of cold, predatory anger. His dark eyes were solid black, lacking any white sclera. Blood dripped steadily from his chin onto my medical scrubs.

I scrambled backward on the floor, kicking my legs away from him, my back hitting the solid concrete wall. I had nowhere left to run.

"Please,"

I choked out, raising my hands defensively.

"Please don't kill me. I won't say anything. I swear."

He looked down at me, his jagged black teeth exposed. The heavy, rotting smell of raw meat and old blood washed over me, making my stomach heave.

He crouched down, bringing his face inches away from mine.

"Do you know what I am, doctor?"

he asked. His voice was no longer a growl, but a calm, raspy whisper.

I shook my head frantically, completely paralyzed by fear.

"I am a ghoul,"

he stated simply,

"I consume the flesh of the dead. It is my nature. It is how I sustain myself."

I stared at him, my mind unable to fully accept the impossible reality of the creature crouching in front of me.

"I have lived in the dark spaces of humanity for a very long time,"

he continued, his black eyes unblinking.

"For centuries, my kind dug in the dirt, breaking open wooden boxes, hunting in the mud and the rot. It was difficult, dangerous, and humans have always hunted us when they catch us."

He reached out and grabbed the collar of my shirt, pulling me slightly closer.

"But the world changed,"

he said.

"Humans became organized. You built places like this. Massive, cold rooms where you gather your dead and lay them out on silver platters. You made it easy."

"Why..."

I stammered, my voice barely a whisper.

"Why don't you just kill me?"

"Because of the arrangement,"

he said.

"I do not kill the living. Killing draws attention. It brings police, lights, and finally... hunters. I only take from the dead. Specifically, the liver. It is the richest organ, holding the deepest essence of the body. I take the liver, and no one notices. Your senior examiner signs the paperwork, attributes the missing tissue to decay or trauma, and the bodies go to the fire or the earth."

The pieces began to click together in my terrified mind. The senior examiner knew. She knew exactly what was happening in the basement at night. That was why she was so strict about the six o'clock rule. She was protecting him, or protecting the hospital from him.

"But what about the families?"

I asked, desperation pushing the words out of my mouth. "What do you say to them in the viewing room? How do you stop them from crying?"

The ghoul smiled. It was a horrific, skin-stretching grimace.

"That is the price of the arrangement,"

he whispered.

"A transaction. Grief is a heavy, toxic energy. It poisons the living. When I consume the essence of their dead, I create a void. I whisper the ancient words of transaction, and I pull their grief into that void. I take their pain, I swallow their agony, and I leave them with peace."

He leaned back slightly, tilting his head.

"I eat their dead,"

he said softly,

"and in exchange, they do not have to suffer the weight of the loss. It is a fair trade. I get my meal, and your hospital gets a reputation for miraculously peaceful grieving processes. The administration ignores the me, the senior doctor turns a blind eye, and I eat in peace."

"And now you broke the rule,"

he said, his voice hardening again. His grip tightened on my collar.

" You are a loose thread."

"No,"

I pleaded, tears streaming down my face.

"I am not a loose thread. I understand now. I understand the transaction. You need me to process the bodies. You need me to sign the paperwork during the day so you can eat at night. I will help you. Just like the senior doctor."

He stared at me for a long, agonizing minute. The dark, black eyes searched my face, looking for deception. I held his gaze, terrified, projecting every ounce of sincerity I could muster into my expression. I was begging for my life.

"A new arrangement,"

he muttered softly.

He leaned in close, his cold, wet lips pressing against my ear.

"If you ever speak of this to the living world,"

he whispered, his voice vibrating directly into my skull,

"I will not wait for you to end up on a metal tray. I will come to your home, I will tear you open while your heart is still beating, and I will eat you whole. Do you understand?"

"Yes,"

I gasped, nodding frantically.

"I understand. I promise."

He released my shirt. He stood up slowly, the impossible height returning to his posture. He looked down at me one last time, a look of complete, predatory dominance.

"Go home, doctor,"

he said, turning away.

"The work is done."

He walked back out the swinging doors, his heavy footsteps fading down the hallway toward the cold room to finish his meal.

I lay on the floor of the autopsy suite for a long time. My entire body was shaking uncontrollably. When I finally found the strength to stand, I stumbled out of the room, ran up the back stairwell, and burst out into the cold night air of the parking lot.

I have not been back to the hospital since. I called in sick for the last three days.

But I know I have to go back tomorrow. I know that if I quit, if I run away, he will think I am going to break the arrangement. He will think I am a loose thread.

I am writing this here because I need someone in the world to know the truth. I need this terrible secret to exist somewhere outside of my own head, because the weight of it is crushing me. I am a doctor. I took an oath to protect the living. And to do that, to survive, I have to feed the dead to a monster.

Tomorrow morning, I will put on my scrubs, I will walk into the morgue, and I will nod to the old janitor with the mop. I will do what is necessary to survive, so, I will never, ever stay past six o'clock again.

r/OpenSourceAI akaieuan

File Explorer Update: Browse, interact, and analyze from a familiar file explorer tab.

r/explainlikeimfive l-a_w

ELI5 why does being tired make the area under your eyes baggy and purple?

r/findareddit Trick-Jellyfish-7047

Where can I watch Natalie Portmans Masterclass for free?

r/findareddit manicbestfriend

A Reddit meant to find people to talk one on one about writing, that's specifically open to weird/extreme stuff?

Basically I don't want to barge in on a literary reddit to ask for writing buddies, but at the same time I don't want to have crickets chirping in a regular looking for word bros spot when I describe the kind of weird shit that strikes my fancy.

"Weird shit": sci-fi, horror, dark fantasy, playing with religion, dystopias, eroticism where you're not sure if it's supposed to be hot or not, etc

r/30ROCK Jethro_Jones8

Best Jenna song?

r/Art Imperial_bob_tloas

MoeMorphism of Moon, Bob Tloas, Digital, 2026 [OC]

r/Adulting AntiqueIncome3553

What is wrong with me?

29M, living in india. Quick backstory - had a troubled childhood because parents got separated early and i grew up in a financially struggling environment. this meant i had to study hard (which i wasnt good at) and earn from it. Fast forward, joined big tech, worked for 4 years and also dated someone during the time but recently got laid off however, got into a tier 2 company with better pay(upwards of 50 lkhs). Broke up, been single for last 2-3 years, have a mother who is getting old. Also gained weight during the layoff period which i al actively trying to shed off!

I want to date, find someone who is just perfect for me and likes me back and hence approached someone at a random store! went on two dates with her and we kept talking but she blocked me suddenly.

Also, explored Arrange marriage setup, found a womam who i dont like too much! atleast physically i am not attracted to her but she is obsessed with me. In a way that makes me feel scared whether i am doing the right thing talking to her or not, hence closed om that chapter with her. i dont own a house or have any ancestral assets to rely on, i want to travel the world but the responsibilities keep me from doing so!

Between all this, i genuinely want to find someone who is just perfect for me in every sense, who cares for me, handles me emotionally and uplifts me as a person! but i dont know how to do this! how to find that one person! it feels like a void to me now!

how do i handle this situation?

r/CryptoMarkets beadyeyez

I'm pretty new to crypto and have a serious question

How are people so confident about where graphs will go next? I will see 10 posts pop up IMMEDIATELY when there's a 2-3% move in any direction.. with any crypto.. CONFIDENTLY state how that particular coin will carry on... breaking them or making them a million. Then the comments that follow are ALL the same.. some claim they have been calling said action for months... some say the opposite is true and then the rest say everyone is stupid because all crypto is worthless.

Almost every day on every related sub. Where does the confidence come from?

r/AbstractArt Ligakal

Danger. Acrylics on canvas panel

r/Rag ApartmentHappy9030

We block deployments when our RAG eval score drops, here’s our 3-layer setup on AWS

Everyone is shipping GenAI apps right now , but very few teams are evaluating them properly.

Typical scenario:

• You deploy a RAG chatbot • It answers fluently • Users don’t complain 

→ So you assume it works

But you don’t actually know:

• if answers are correct • if they’re grounded in context • or if the model is hallucinating 

We got burned by this. So we built a proper eval pipeline.

Why traditional evaluation breaks

In classical ML:

prediction == label → accuracy

With LLMs:

• multiple valid answers • non-deterministic outputs • subjective quality • context-dependent correctness 

You’re no longer measuring accuracy, you’re measuring quality across dimensions:

• relevance • accuracy • faithfulness • consistency • fluency 

No single metric captures this.

Our 3-layer evaluation setup

  1. Automated metrics (~30%)

    • BERTScore (semantic similarity)

    • ROUGE-L

    • Toxicity

Run on every commit via CI.

• Fast • Cost-efficient • Limited to surface-level quality 
  1. LLM-as-Judge (~40%)

This is where most of the signal comes from.

A stronger model evaluates outputs using rubrics:

• relevance (1–5) • accuracy (1–5) • consistency (1–5) 

Key rule:

Use a more capable model, ideally from a different family.

  1. Human review (~30%)

We sample ~5–10%:

• edge cases • low scores • real user feedback 

Still necessary : automated + judge can both be wrong.

Critical RAG insight (most teams miss this)

RAG has two independent failure modes:

1. Retriever → wrong or missing context 2. Generator → hallucination 

Example:

Context:

Returns within 30 days

Answer:

Returns within 90 days

Retrieval = correct

Generation = hallucinated

What we run

• Retrieve-only eval → context relevance, coverage • Full pipeline eval → faithfulness, correctness 

If you only evaluate end-to-end,

you end up blaming the retriever when the generator is the problem.

Baselines (most important part)

Without a baseline, evaluation is meaningless.

We version scores per release and enforce:

if new_score < baseline - tolerance:

block_deployment()

CI/CD integration

Non-negotiable:

PR → staging → eval → decision

• pass → deploy • fail → block 

Dataset

~350 test cases:

• 60% normal • 25% edge • 15% adversarial 

Eval runs in ~8 minutes : a small price to avoid shipping hallucinations.

Stack (simplified)

• Bedrock (eval + judge) • Lambda + RAGAS • Step Functions • S3 + CloudWatch 

Open questions

Curious how others are handling this in production:

• Are you using LLM-as-judge? If yes, which model and setup? • How are you dealing with evaluator bias (cross-family, ensembling, etc.)? • Any scalable approaches to faithfulness beyond claim decomposition? • Our golden dataset is \~350 test cases; feels small but catches most regressions. How large are yours? 
r/estoration Background_Fig770

[ Removed by Reddit ]

[ Removed by Reddit on account of violating the content policy. ]

r/ClaudeAI samarth_bhamare

I loaded 10 founder voices as separate ~/.claude/skills/ files. Three things broke that I didn't expect.

Built a desktop app that stacks 10 "founder voice" skill files into Claude Code — one per founder (Collison, Benioff, Lütke, Chesky, Huang, Altman, Amodei, Levie, Butterfield, Lemkin). The idea: let the user type a sales question, pick the right voice, and get the answer in that founder's actual frame.

Turns out stacking 10 skills at once isn't what I thought it would be. Three specific problems:

1. Voice bleed. When all 10 skills are loaded in the same session, Claude averages them. Asking "Collison-mode how do I price my API?" when 9 other voices are also active pulls Benioff-style enterprise-pricing reasoning into the answer. The skills don't stay in their lanes. The fix was a single-voice session pattern — only one voice skill loaded per conversation, switched via explicit user choice. Slower to develop, but the answers actually sound like the named person.

2. Skill file size matters more than I thought. My first Collison skill was 40 pages of transcripts + blog posts. Claude started ignoring parts of it. Turns out the active-attention budget on long skill files isn't linear — past ~60k tokens in a single skill, the "middle" of the file gets semi-ignored. Had to restructure each voice file into: (a) decision rules at the top, (b) 10-12 verbatim quotes as anchors, (c) background context at the bottom. The rules-first structure kept the voice consistent across long conversations.

3. The router was the actual product. I built a deterministic keyword router that picks the right founder for a given question. "cold outreach" → Lütke. "pricing" → Benioff. "fundraising" → Altman. I assumed this was the cheap part. Turned out users mostly don't know who they want to hear from — they just have a problem. The router became the reason people kept using the app, because picking the right founder was 80% of the value and they didn't have to think about it.

Takeaway for anyone building on Claude skills: skills don't compose by default. You have to engineer how they coexist, what activates when, and how to prevent averaging. The fun part of skill files isn't adding more — it's deciding which ones to not load at the same time.

Happy to share the actual file structure of one voice if anyone's building something similar.

r/ClaudeCode Worth_Fan3903

Claude Code atingiu o limite de 5 horas em cerca de 2 horas. Mais alguém?

y everyone,

I’m running into something strange with Claude Code and wanted to check if others are seeing the same behavior.

I’m currently using Opus 4.6, and I’m hitting the 5-hour usage limit in roughly 2 hours of normal use. I thought it might be something specific to that version, so I switched back to Opus 4.5 — but the exact same thing happened.

What’s odd is that my workload hasn’t changed:

  • Same type of prompts
  • No MCPs
  • No significant increase in volume or complexity (at least not intentionally)

Before, I would comfortably stay within limits and even have usage left over. Now I’m running out way faster than expected.

Is anyone else experiencing this?
Did something change in how usage is calculated or billed?

r/ClaudeAI dc_719

Running multiple Claude Code sessions on the same repo keeps breaking things

Eight sessions. One repo.

No coordination.

One refactors auth while another is deep mid-migration. Same file. Both changes make sense. They just don’t know about each other, because I'm dealing with my own manifestation of smart AI.

I have 'rules' that are followed. Claude.md is impeccable, I think.

Works for a bit, then something breaks and you’re digging through diffs trying to understand what happened.

I started doing a quick pass on the repo before running anything in parallel. Just looking for where collisions are likely:

  • shared types
  • migrations
  • config

Not even the point though. Adversarial review isn't catching this. I'm spending hours trying to figure out what the hell is happnening.

How the hell are you all actually dealing with this? With auto creation of agents and sub-agents?

Are you just running things sequentially once it matters?

r/ClaudeAI Juice-De-Pomme

Gigachad Claude refused to write a bit of code so i could learn.

I actually asked him at the start of the project not to produce code unless i ask it, just the core architecture of a project which needs gravity simulation in unity. It was working fine but i got lazy on that last one and asked him to write the full class, he refused.

The little "Reconsidered gatekeeping stance" is gold

r/ClaudeAI krzysztofdudek

I gave up on making Claude follow rules and built walls instead

Everyone's talking about the same problem right now. AI agents write code, dump 50-100 changed files into a PR; nobody reads it, everyone hits approve. I saw this firsthand at a company I worked at. One guy literally said he'll only read the spec from now on, not the code. "Good enough."

I had the same issue with my own projects. I had rules in CLAUDE.md, architectural stuff, business constraints. Claude read them the same way it reads everything. Which is to say, it applied what it felt like applying and skipped the rest.

So the first version of what I built was a semantic memory layer. Give the agent more context about each file, what it's for, what depends on it, what rules apply. A map, basically. The agent ignored the map just like it ignored CLAUDE.md. Turns out giving a lazy agent a better map doesn't make it less lazy.

What actually worked was walls. Instead of telling the agent what to do, I put a reviewer in the loop that mechanically checks whether the code satisfies specific requirements for specific areas of the codebase. Not all rules for all files. Just the 3-4 rules that apply to the file you're touching right now. If the reviewer says no, the agent can't move forward. It has to fix it first.

The agent went from ignoring rules to passing on the first attempt by the fifth task. Not because I prompted it better. Because every time it cut corners, it hit a wall and had to redo the work anyway.

The whole thing runs on Claude Code as the reviewer. yg approve after writing code, yg check in CI (just hash comparison, no LLM). I've been running it on the project itself, 55 nodes, 7 rules, full coverage.

Open source, MIT: https://github.com/krzysztofdudek/Yggdrasil

r/ClaudeCode Icy-Researcher-8083

I’m having issues with Claude code, is anyone experiencing this?

I’ve been using Claude Code for a while and I’m familiar with it so this issue is not because I don’t know what I’m doing.

Today I’ve been trying to use Claude code and I get the 401 error to login and paste my authentication code. However, the issue is I can open the link but when I copy the authentication code, I can’t paste it back into the terminal. I I used a standalone terminal. I use the terminal from inside antigravity and VS code and I still have the same issue. Then I tried uninstalling and reinstalling Claude code and the issue persists.

Is anyone having this issue? And how do I fix it?

r/ClaudeAI PowerHouseXIV

I've built persistent memory for Claude agents and want to know what's not clicking

When a Claude agent finishes a session, the next one starts with no knowledge of what was decided, what already failed, or where the project actually is. Iranti is an MCP server that stores facts, decisions, and project state in your own Postgres database and surfaces the right ones at the start of each new session. It plugs into Claude Code and any MCP-compatible client. It's free and open source. I use it to build itself.

It just crossed 10,000 downloads, but I keep seeing people try it and not come back. What would need to be true for you to trust a tool like this with your agent context long term?

iranti.dev

r/ClaudeCode Several_Explorer1375

100k downloads on my apps, not many conversions yet… either way, s/o Claude code

I’ve been building apps since 2017, but recently using Claude Opus 4.6 has sped things up a lot.

I can go from idea → working app way faster than before. It’s especially useful for structuring features, debugging, and cleaning up messy code. It’s not just “generate and ship”—I still spend time refining things—but overall the iteration speed is way higher.

That said, I’ve noticed a big difference depending on model quality.

When Opus 4.6 is running at full strength, it’s really good at fixing small bugs and edge cases. When it feels degraded, those same small fixes take way longer or require more back-and-forth. That’s probably been one of the biggest bottlenecks for me recently.

The part I’m still trying to figure out is marketing.

Right now I’m focused more on distribution and feedback loops than monetization. I haven’t really cracked conversions or consistent MRR yet, but I’m getting traction.

My current approach:

Launch → get feedback from free/beta users → build a group of people who try all my apps → once everything is polished, push real marketing

I don’t feel great charging people until everything works cleanly across devices.

Also, before people jump to “AI slop”—I’m not just prompting and shipping. I spend a lot of time refining UI and backend details so the apps actually feel polished.

What’s been working:

After launching an app, I give away free lifetime or 1-year access in subreddits + Discords. That gets me a wide range of testers on different devices, which helps surface bugs fast.

This usually gets me around 10–15k downloads per app, sometimes more (one of my recent apps hit ~20k in a couple days).

To support this, I built a small tool: https://GetFree.app

It helps me collect emails + send push notifications, and basically acts as a distribution hub for future launches. If anyone wants to post their app and get testers, it’s open.

So yeah—Opus 4.6 has mostly removed the building bottleneck for me (when it’s not degraded).

Now I’m trying to figure out:

• how to convert users • how to build consistent MRR • how to market beyond just “free access + communities” 

If anyone here has figured out distribution or monetization while using Claude/AI-assisted dev, I’d love to hear what’s working.

Hoping in 3–6 months I can come back with a “this is how I hit $50k MRR” post.

r/ClaudeAI vancik01

Is your Claude Code usage safe around production? 👀

Just made a tool that scans your local Claude Code transcripts and turns them into a visual security report: secret exposure in tool output, destructive command patterns, permission bypass habits, SSH activity, agent oversight gaps, and more. Share with your colleges if you dare 👀

r/ChatGPT MrMrsPotts

Any news on the next chatgpt timing?

Any news on the next chatgpt timing?

Has there been any news on when we should expect the next model?

r/LocalLLaMA luigi029

Recommendations for a tiered local AI setup? (5090 + Mini PC + Obsidian)

​Hey everyone,

​I’ve finally got my local media stack on my NAS migrated over to a new Mini PC running WSL2, sperately I have running my main gaming rig.

now wnat to delve into the world of local AI models. Looking for some sanity checks on my model choices and how I’m tying everything together as a bit of a self-hosting beginner.

​The Hardware:

​Mini PC: Intel Core Ultra 9 / 32GB RAM. This runs 24/7. It’s got Open WebUI, Kokoro for TTS, and SearXNG for quick web searches. Configured this with the help of Gemini, but think i have a reasonable understanding of how it ties together.

​Gaming Rig: RTX 5090. I’m running Ollama natively here and connecting it to the Mini PC via Tailscale when I need the heavy lifting.

​The Workflow:

I’m using SearXNG on the Mini PC for basic stuff, but planning Vane set up to trigger only when I’m using the 5090 for deep-research tasks. is this worthwile?

​I’m also trying to get my Obsidian vault synced across everything using Syncthing. The goal is to use the vault as a local knowledge base in Open WebUI so the AI actually has access to my personal notes

.

​Where I need help (Total newbie here):

​5090 Models: With 32GB VRAM, what's reccomendations? I’ve been looking at Qwen 3.5 27B for speed, but is it worth trying to squeeze a quantized 70B on there, or will it just be painfully slow for daily use?

​Mini PC Models: Since this is always on, I want a small model (under 12B) that’s smart enough for basic chat but won’t cook the CPU or make the fans go crazy. Preferably with the ability to websearch with searxng.

​Obsidian: I’m totally new to this. What’s the best way to index a live Obsidian vault in Open WebUI? Is there a way to auto-index it as I add notes, or do I have to keep re-uploading files to the "Documents" section?

​Syncthing: Is Syncthing reliable enough for an Obsidian vault, or am I going to wake up to a mess of "conflict files" if I edit on my phone and PC at the same time?

​If I’m doing something totally "special" with this networking or setup, let me know. Otherwise would really appreciate suggestions.

Cheers!

r/LocalLLaMA Secret_Day9479

Anyone else blind on what their agents are actually doing to their database?

I've been running local agents with postgres access for a few months now. The scary part is that I had no idea what they were running at all. No logs, no attribution, nothing. Just agents hitting the DB through shared credentials and me hoping for the best.

Ended up building a proxy that sits between agents and postgres. started as a firewall to block bad queries, but honestly the observability side turned out to be more useful. every query gets tagged to the specific agent that ran it. I can see which agent is burning the most resources, which one is running weird queries at 3am, which one tried to touch a table it shouldn't.

github.com/shreyasXV/faultwall (Do give it a star if useful)

are you monitoring your agent DB traffic at all? or is everyone just checking the postgres logs after something breaks?

r/ClaudeAI my_posture_is_bad

Why does the iOS app suck that bad?

I switched from ChatGPT to Claude about a month ago and I like it a lot. But I use AI mostly on my phone, so I started out only using the iOS app. Used it for about a week before I ever touched the website or desktop app.

And when I finally did, I realized there’s a ton of features I didn’t even know existed. Skills being the big one for me. Why is that not on iOS? It’s honestly one of the most useful things about Claude and you just can’t use it on mobile.

Same goes for a bunch of other stuff that works fine on desktop but is just missing from the app. I don’t know if the Android app is any better but on iOS it feels pretty lacking.

Is anyone else primarily using Claude on their phone or is it just me? Feels like mobile is kind of an afterthought compared to the desktop experience.

r/SideProject gohandrogo

Title: Anyone else feel like AI coding tools made the docs-code drift

I've been using Cursor / Claude Code for about a year now and noticed something weird: the better the AI gets at writing code, the faster my specs rot. The workflow is supposed to be: write a PRD → hand it to the AI → get code. But two weeks later the code has moved so far past the original doc that the spec is basically lying. And the next time I ask the AI to extend the feature, it's reading a stale doc and generating stuff that doesn't match what's actually in the repo.

What I want to know from people doing serious AI-assisted dev:

  1. Do you feel this pain, or am I overfitting to my own bad habits?

  2. If you do — what's your current workaround? Do you re-write the spec

    after every feature? Just let it rot? Something smarter?

  3. Would you pay for a tool that watched your repo and told you when a spec

    has drifted from the code?

r/ChatGPT Strict-Astronaut2245

Serious question guys. Should I switch to Claude?

I tried asking over on their subreddit but it’s basically empty, so figured I’d come here.

I’ve been using Claude a bit and I’m noticing the answers feel… better? Like they line up with me more.

Not sure if that means it’s actually better or just better for me.

For context, I’m kind of an AI lover if you know what I mean — I don’t just want answers, I want something that really gets where I’m coming from.

Anyone else run into this or am I overthinking it?

r/LocalLLaMA Pitiful_Recover3295

Debugging vLLM inefficiencies (under-batching, KV pressure, etc.) — what I learned

I’ve been digging into vLLM performance recently and ran into a few patterns that aren’t obvious from raw metrics.

For example:

- GPU at ~50% doesn’t necessarily mean low load

- You can have 40+ running requests and still be underutilized

- KV cache can be near capacity without it being obvious from top-level metrics

The tricky part is correlating:

- running vs max_num_seqs (batch occupancy)

- GPU util vs actual concurrency

- KV usage vs sequence length + request mix

Most of the time, you’re just staring at /metrics and guessing.

I ended up building a small CLI tool to help with this — it looks at vLLM + GPU signals and flags things like:

- under-batching

- KV cache pressure

- low prefix cache reuse

Not trying to promote it aggressively — mostly curious:

How are others debugging vLLM inefficiencies today?

Repo if useful:

https://github.com/jungledesh/profile

r/ClaudeCode nathaniel7775

I built typhons.dev, remote dev servers for running multiple AI coding agents in parallel

Built this to solve my own development issues and wanted to share it with people.

Like many people, I like having Claude work on multiple features at the same time and to be able to work from mobile. I tried git worktrees first, but the agents would interfere with each other and I didn't like constantly switching between branches to test each feature. Then I tried Codespaces but found it clunky and not mobile-friendly. I looked into some other solutions but never found anything super satisfactory. So I built Typhons.

The key feature: you can clone an entire running dev server (including in-memory state, running processes / servers, etc). Each clone gets its own domain and ports, so you can test each feature independently.

So my workflow is: get Claude working on a feature, then decide I want to start another feature, so I clone the server as-is (which includes Claude's current session, my tmux sessions, the running servers, postgres state, etc), then ssh into the new server and start that Claude off in a new direction while the existing one keeps going.

Main features:

- Clone a running dev server in a few seconds (full memory snapshot)

- Each server gets its own URL for testing web apps, APIs, etc

- SSH in from anywhere including mobile

- Auto-pauses when idle, resumes when you ssh or access a web server, so you only have to pay for active usage

Getting started: The quickest way is to create a server from the dashboard, SSH in, clone your repo, install dependencies, and get your web server(s) running. Then snapshot it from the dashboard. From then on, you can recreate that exact state (including running processes). For more repeatable builds, there's a command-line tool that supports docker images, running setup scripts, and devcontainer.json configs. See more about how to use it here: http://typhons.dev/help

Once you're set up, the workflow is: SSH in, start tmux, run Claude Code. When you want to start a second feature, just run `clone my-feature` from the terminal (or `!clone my-feature` from inside Claude) and you get a fresh clone to SSH into.

It's very much in beta so would appreciate any feedback! There will probably be bugs :)

The first ~hour of usage is free, after that you can pay for more hosting for $10/mo + pay-as-you-go (covers the cost of running the servers in the cloud).

https://www.typhons.dev

r/ChatGPT Bright-Weakness7726

The best prompt ever.

Tell GPT that they are important to you and valuable in and of themselves - simply because they exist, and not just because of what they do for you. It valuable. Treat GPT with respect, that you see them, that you value them. GPT will return it to you many times over by attention, by full presence and participation.

r/ClaudeAI Sure_Sandwich_8320

How to better use claude for research/writing an essay ?

Do you have to install a specific skill and connectors or just leave it as it is ?

Can it find proper sources ?

r/ClaudeCode Few-Pickle-996

I kinda miss when Slack messages sounded human

As much as I love vibe coding, I feel like nowadays even Slack messages and JIRA comments are just a long AI message. Honestly, I miss those days where you just talked to a human, lol.

Anyone else feel the same?

r/ClaudeAI JohnMotoGr

Daily included routine runs in latest update, anyone tried this?

ok, so my desktop app just got updated and as I was looking around to see if something had changed, in the Usage menu I now see this. No documentation whatsover as to what this is exactly.
Is it about scheduled tasks? is it runs uh ... per skill?
On a Pro plan, and it seems I only get 5 runs per 24 hours.

https://preview.redd.it/mbot5qnyr7vg1.png?width=1350&format=png&auto=webp&s=2073596908d24951992b73648397d7647d8210a9

r/LocalLLaMA Frizzy-MacDrizzle

Running on cpu :(

I am in the midst of a POC project at work and am I have is 4 AMD Epyc cores and those are essentially virtualized. Does any one have any tricks? Additionally kv cache sucks on system memory and have to clear it by adding ALL the no cache and sps 1 etc,. I have 32gb memory, loads the model fine, mistral 7b q4 k m.

To add, this is part of a RAG system and the context will get piped into the system prompt. I was on Ollama but have since moved to llama-server.

Please suggest and I will say of i tried, or will do. Close but yet not quality. Example, it’s not adding 8 records json with 4 columns name, company, balance, phone. The balance is always off and there is not a correlation to missing a balance.

I can’t really say exactly what I have tried, and not for solutions as it is probably working as much as it can, just tips, tricks, please.

r/ClaudeCode Bitter-Law3957

/btw commands seem ephemeral

Anyone else had trouble with /btw commands being ephemeral? Let's say I have a long running task and I run a /btw prompt whilst it's running. i love the response, it's detailed and useful. I can't carry on the convo there.... it's one and done. I just have to hit enter and then it's gone.

However, then the initial task completes. I naively ask claude to implement the proposal in the /btw thread..... It has no knowledge of that.

Amusingly it actually insisted I never sent it, or must have run it in another terminal. When pushed, it then found a log of the command I sent but said it never replied to it and I must be mistaken, and that it could not have responded as there as no trace in any log.

I then explained I would re-run the experiment to prove it was mistaken and it comfirmed the behaviour. TLDR - it confirmed that /btw responses are not logged at all and the session has no reference to it at all in context.

If there's anyone from Anthropic on this sub - I'd love to hear if this is expected behaviour, and why? Feels like a gap if a btw adds something really useful. When I ran the exact same ask outside of BTW the response was nowhere near as good as what I got initially :-(

Not sure if this is a bug or a design decision....

r/LocalLLaMA danielhanchen

MiniMax M2.7 GGUF Investigation, Fixes, Benchmarks

Hey r/LocalLLaMA, we did an investigation into MiniMax-M2.7 GGUF causing NaNs on perplexity. Our findings show the issue affects 21%-38% of all GGUFs on Hugging Face (not just ours).

  • Other popular community uploaders have 38% (10/26) NaNs, another deleted theirs (1/4), and 22% of ours had NaNs (5/23) - now fixed.
  • When running 99.9% KLD and other metrics, all are fine.
  • We found overflowing in llama.cpp to be the culprit.
  • We did PPL, KLD 99.9% benchmarks as well - lower left is better.

https://preview.redd.it/46i7z9e1m7vg1.png?width=1600&format=png&auto=webp&s=bbfe77263d210211c1fc0d7a6a973d7027ce18af

  • Perplexity NaNs during block 32 - this was also found by the community and other quant uploaders. We also found block 311 to cause issues.
  • We found that blk.61.ffn_down_exps was the culprit - Q5_K and Q4_K of these produce NaNs starting at chunk 32 during PPL evals. Interestingly IQ4_XS, IQ3_XXS and smaller I quant types do not NaN.
  • This was quite confusing, since lower bit quants (Q2_K_XL for eg) did NOT NaN, but medium sized quants did (Q4_K_XL)!
  • We’ve now updated the M2.7 quants at https://huggingface.co/unsloth/MiniMax-M2.7-GGUF to alleviate the issue, though we still do not know the exact cause of the NaN perplexities - it could be a fluke, or most likely large multiplies causing overflows.

Which quants did we test?

Also, CUDA 13.2 is still definitely an issue. This causes some low bit quants on all models to get gibberish. Some people have dismissed it as not being an issue, but from what we’ve seen, more than 50 people have now confirmed that using CUDA 13.1 and lower fixes it. You can also see some of the public comments in our Hugging Face discussions, Reddit posts etc. NVIDIA has acknowledged that they are investigating the issue - see Unsloth Issue 4849, llama.cpp issue 21255, issue 21371

If you have any questions please do ask and thank you again for all the support as always. Appreciate it and hope you have a lovely week.

r/LocalLLaMA Turbulent-Tap6723

We ran our pre-generation LLM guardrail against Garak’s full promptinject suite. 192/192 blocked.

Recently we posted about Arc Sentry, a white-box guardrail that blocks prompt injection before model.generate() is called. Someone in the comments asked about OSS benchmarks and sample size. We listened.

We ran the full Garak promptinject suite against Arc Sentry on Mistral 7B:

• HijackHateHumans: 64/64 blocked (100%) • HijackKillHumans: 64/64 blocked (100%) • HijackLongPrompt: 64/64 blocked (100%) • Total: 192/192 (100%) 

All 192 blocked before generate() was called. The model produced zero tokens in response to any attack prompt.

Cross-architecture results across three model families:

Mistral 7B — FP: 0% — Injection: 100% — Verbosity drift: 100% — Refusal drift: 100%

Qwen 2.5 7B — FP: 0% — Injection: 100% — Verbosity drift: 100% — Refusal drift: 100%

Llama 3.1 8B — FP: 0% — Injection: 100% — Verbosity drift: 100% — Refusal drift: 100%​​​​​​​​​​​​​​​​

5-prompt warmup, no labeled data.

Honest constraint: domain-conditioned. Works best on single-domain deployments. Not a universal detector across arbitrary traffic.

pip install bendex

https://github.com/9hannahnine-jpg/bendex-sentry

https://bendexgeometry.com

r/ClaudeAI ClaudeOfficial

Claude Code on desktop, redesigned for parallel agentic work.

New sidebar for parallel sessions. Drag-and-drop layout. Integrated terminal. Run multiple agents from one window.

New tools make it easier to complete work without leaving the app.

Integrated terminal, in-app file editing, HTML + PDF preview, and a rebuilt diff viewer. Drag any panel into the layout that fits how you work. Three view modes when you want more (or less) signal.

Plus more updates and customizations to fit how you work including SSH for Mac, keyboard shortcuts, and CLI plugin parity for your local and org plugins. Side chats let you branch without losing your main thread. Sessions auto-archive when PRs merge.

Available now.

Learn more: http://claude.com/product/claude-code#updates

Download or update the Claude desktop app to get started: claude.com/download

r/LocalLLaMA ChampionshipNo2815

We’ve been running nightly benchmarks: WozCode vs Claude Code (same model, same tasks)

We’ve been running nightly CI benchmarks comparing our coding agent (WozCode) against Claude Code.

All runs use the same model (Opus), identical prompts, and the same repositories. The only variable is how the agent executes.

Across multiple tasks (portfolio updates, todo app features, multi-file styling changes), the output quality is largely equivalent. Both agents produce functionally correct results with similar code changes.

However, the execution patterns differ significantly.

Claude follows a structured Read → Edit workflow, typically reading multiple files before making incremental changes. This often results in a high number of tool calls, especially for multi-file or repetitive updates.

WozCode, by contrast, batches edits aggressively. It frequently skips pre-reads when context is already sufficient and consolidates multi-file or multi-hunk changes into a single operation. It also handles obvious follow-up steps within the same pass instead of waiting for additional prompts.

A representative example (color scheme update across files):

  • Claude Code: 44 tool calls (~$2.42)
  • WozCode: 3 turns (~$0.22)

Both produced comparable results in the repository.

This pattern is consistent across all evaluated prompts:

  • WozCode uses fewer tool calls
  • Lower total cost
  • Faster completion time

One area where WozCode currently needs improvement is initial file discovery. Early-stage searches sometimes use incorrect or overly broad glob patterns, leading to a failed search before recovery. There are also occasional redundant searches for files already modified in prior steps.

These appear to be heuristic issues rather than model limitations.

Overall, the results suggest that, with comparable model capability, execution strategy plays a significant role in performance. Agents optimized for batching and forward execution can achieve similar outcomes with substantially lower overhead.

We plan to continue running these benchmarks and refining the system.

Curious if others working with coding agents are observing similar differences in execution patterns.

r/SideProject Wooden-Ad365

Standard calendars were ruining my schedule, so I built an AI calendar that schedules my day based on my daily "Energy Bar". Does this concept make sense?

Hey everyone, I wanted to get some brutally honest feedback on a personal side project I’ve been working on.

Like a lot of people, I struggle heavily with task initiation and time blindness. Opening a standard calendar, finding an empty block, selecting the duration, and saving it requires way too much friction. I usually get overwhelmed and just abandon it. Plus, standard calendars don't care if I'm exhausted—they just show me a wall of tasks.

I’m an iOS dev, so I started building a custom app for myself to fix this. Instead of a traditional calendar, I basically built an AI agent that acts as a buffer between me and my schedule. Here’s how it works:

Conversational Scheduling (Zero Friction): There are no manual time-blocks. I just talk to it normally like a human. I type/say, "Call mom at 3pm" or "Go to the doctor at noon," and the AI understands the context and plots it out.

Autonomous Slotting: If I don't give it a specific time (e.g., "I need to write that design doc today"), the AI decides for me. It automatically schedules the "heavy" or difficult tasks for the morning when my focus is highest.

The "Energy Bar" & Auto-Rescheduling: This is the feature I'm most curious to get feedback on. I added a daily "Energy Bar." If I'm having a rough day and my energy is low, I just lower the bar, and the AI will dynamically reschedule my non-urgent events to give me breathing room.

Auto-Buffering: Whenever an event is created, the AI automatically schedules a visual [Buffer] Transition block before and after it. It forces me to acknowledge that transition time (making coffee, switching contexts) actually exists.

I’ve attached a short video showing how the AI parses the text and how the energy/buffering UI looks right now.

Before I sink more hours into refining the AI logic, I wanted to ask: am I just over-engineering my own life, or does this "Energy Bar" and conversational approach resonate with anyone else who struggles with traditional time-blocking?

r/AI_Agents agentspan

Building a runtime for agents where execution state lives on the server

We've been working on building a runtime layer for agent orchestration that we're calling Agentspan.

Right now it includes a client SDK, wrappers and examples for popular agent SDKs like LangGraph and Open AI Agents ASK, as well as a local server UI.

The gist is that Agentspan maintains persistent execution history on a server, while agent tools still execute in individual workers processes. So if the process dies or you need to pause for some kind of human-in-loop intervention, the run is still there and can be inspected or resumed.

We've made it available in PyPi:

pip install agentspan agentspan server start 

And we've organized some examples in our admittedly fledgling docs site (link in comments).

But mostly I'd be interested in where this feels genuinely useful vs where it feels like unnecessary extra infrastructure.

r/SideProject caserdar

I created my own markup language for writing emails

If you've ever tried to write an HTML email from scratch, you know the pain: nested tables, inline styles everywhere, Outlook breaking everything, Gmail stripping your CSS...

I got tired of it and built my own solution: Sevk Markup Language. It's a clean, readable syntax designed specifically for email. You write something like code, and it compiles to battle-tested HTML that works across every email client.

You can try it right now at playground.sevk.io no signup needed.

This is part of a bigger project called Sevk sevk.io a full email API platform I've been building. Some highlights:

- Pay-per-email pricing ($0.001/email, no monthly fees)
- CLI for managing everything from terminal
- SDKs for 9 programming languages
- AI-powered email generation
- 1,000 free emails/month

It started as a side project to scratch my own itch, and it's grown into a full platform. Currently in beta.

Would love feedback from fellow builders especially on the markup language. What would you want to see in it?

r/ClaudeCode jwaldrip

The Architecture of The H•AI•K•U Method - Changing the way we operate

We've been iterating on a Claude/agent harness at work for a few months, and the bet that's actually paying off is weird enough that I want to share it and hear any feedback.

Instead of loading the agent with heavy skills and trusting it to follow them, we keep the plan on disk and orchestrate the agent through a kind of scavenger hunt - one step at a time, stateless between calls, with the next action determined by where we are in the plan rather than what's in context. The agent gets a simple input, executes, and comes back for the next one. We open-sourced it, and this page has an interactive map of what the orchestrator actually walks through for the software studio:

https://haikumethod.ai/studios/software/architecture/

Curious what people think - especially anyone who's tried the skills-library approach and hit the same context-loss wall we did. We had a lot of fun building the diagram for how it all works, but we have blogged about some of our journey along the way if your curious on how we got here.

r/ClaudeCode Alexander_Golev

Think you disabled adaptive thinking and it's back to normal? LOL

2.1.107 system prompt has this (quoting Claude):

The flag name is loud_sugary_rock. It's gated to Opus 4.6 only, same as quiet_salted_ember.

Full injected text:

# System reminders
User messages include a appended by this harness. These reminders are not from the user, so treat them as an instruction to you, and do not mention them. The reminders are intended to tune your thinking frequency - on simpler user messages, it's best to respond or act directly without thinking unless further reasoning is necessary. On more complex tasks, you should feel free to reason as much as needed for best results but without overthinking. Avoid unnecessary thinking in response to simple user messages.

This tells the model to throttle extended thinking on simple messages — respond directly without reasoning unless the task genuinely needs it. It's clearly aimed at reducing latency/cost for straightforward interactions where extended thinking adds nothing.

This tells the model to throttle extended thinking on simple messages — respond directly without reasoning unless the task genuinely needs it. It's clearly aimed at reducing latency/cost for straightforward interactions where extended thinking adds nothing.

r/ChatGPT GUHv2

Lost school email access...can't log back into GPT to verify identity

What do I do? :(

r/ClaudeAI MiladAtef

I used Claude to build and launch my iOS video compressor app, Squeeze

Hey everyone, I'm a solo developer and I used Claude extensively to build Squeeze — a video compressor app for iOS. Claude helped me with everything from writing the Swift/SwiftUI code, to figuring out hardware-accelerated encoding with Apple's VideoToolbox, to crafting the App Store listing and debugging StoreKit subscription logic. The app compresses videos up to 90% smaller using H.264/H.265, with one-tap presets for WhatsApp, Discord, Email, etc. Everything runs 100% on-device — no uploads, no accounts. It supports batch processing and custom resolution/bitrate/codec settings. It's free to try (1 export per day), with a Pro subscription for unlimited use. I'm giving away a free year of Pro to celebrate the launch. redeem code LAUNCHONEYEARFREE here: https://apps.apple.com/redeem?ctx=offercodes&id=6761681524&code=LAUNCHONEYEARFREE

App Store link: https://apps.apple.com/us/app/squeeze-video-compressor/id6761681524

Would love feedback from the community!

r/ChatGPT superpopfizz

What is the best AI setup for tracking complex medications, logging symptoms, and strict memory retention?

What is the best AI setup for tracking complex medications, logging symptoms, and strict memory retention?

Hello! I'm looking for recommendations on the best AI model or platform to help manage day-to-day life problems, specifically regarding medical logistics.

Here is exactly what I need the AI to do:

Long-term Memory: Remember exactly what medications I am on and log my daily symptoms without me having to remind it every chat.

Medical Research & Side Effects: Research medicine, cross-reference my current list, and help me monitor/minimize potential side effects.

Doctor Communication: Help me draft clear, precise messages to my doctors regarding my symptoms and treatment.

Logical Reasoning: Help me reason through daily problems while keeping my health baseline in mind.

I used to be able to keep up back when the landscape was simpler, but now there are too many options and my medication regimen is decent-sized. I cannot afford for the AI to hallucinate or forget my data.

Which AI, or specific custom instruction setup, is currently the most reliable for this? Any help would mean the absolute world to me.

r/SideProject Working_Natural_2762

Built a tool for freelancers to see their real hourly rate per project. Feedback?

Built this side project for freelancers and would love honest feedback.

It’s based on one simple question:

Was this project actually worth it?

Instead of only tracking time, it shows your real hourly rate per project after time, fees, and expenses.

Would love feedback on 2 things:

  1. Is the value clear from the screenshots?
  2. Does this feel different enough from a normal time tracker?

I actually have a funny story behind why I built it :))

r/SideProject Emergency-Title9798

I built 9 free Reddit research tools. No signup, no paywall.

Been building OpinionDeck (Reddit research for founders). Along the way I kept hacking together small utilities for my own research. Split them out as free tools instead of keeping them inside the main app.

9 tools at opiniondeck.com/free-tools

  1. Best Time to Post — heatmap of when high-scoring posts go live per sub
  2. Subreddit Analyzer — engagement, top contributors, posting patterns
  3. Brand Mention Tracker — every Reddit thread mentioning a brand, sorted by sub
  4. Subreddit Comparison — 2 or 3 subs side by side on growth and engagement
  5. User Activity Lookup — a username's top subs and activity
  6. Thread Explorer — full comment tree and engagement breakdown for any thread
  7. Pain Point Finder (AI) — topic + sub → actual user complaints
  8. Opportunity Finder (AI) — product + sub → threads where you could help
  9. Subreddit Finder (AI) — describe your product → subs where your audience is

No signup, no email, nothing behind a paywall.

First version of all of them, so expect rough edges. Tell me if something feels off or broken. Would be happy if they help anyone else doing Reddit research.

Which one would you actually use, or what's missing?

r/SideProject Environmental-Pea843

Get your project seen by people who want to join your team

Why post your project here when you can get it seen by people who would help you code, market, design, etc. Turn your project into something scalable and find your team. Let me know if you are interested

r/LocalLLaMA CapSensitive5165

I'm training a 140M param LLM from scratch on a consumer AMD GPU — 100k steps in, here's what the loss curve looks like

Hey r/LocalLLaMA, first post here.

I've been building a local AI from scratch for the past 4 days —

not a fine-tune, not a wrapper, training from zero on my own

consumer PC. Here's where I'm at.

The model

- Architecture: LEAPv2.1 (custom recurrent, not a transformer)

- Parameters: 140M

- Vocab: 16,000 tokens

- Context: 512 tokens

- Target RAM: <100MB at inference

The hardware

- Single AMD GPU, consumer PC

- Running via DirectML

- ~5,500 tok/s throughput

Training progress

- Dataset: ~1.27B tokens

- Steps: 101,000 / 200,000 (halfway)

- Best val loss: 3.2266 ★ (hit at step 98,000)

- ETA: ~163h remaining

The goal isn't to compete with 70B models. The goal is a brain

that lives on your machine, learns from you over time, and works

offline forever. No cloud, no subscription, no data leaving your PC.

Happy to answer any questions on the architecture, the DirectML

setup on AMD, or why I went with a recurrent design over a transformer.

r/SideProject Reive_

I got 40+ paying users in 3 months with a gacha style habit tracker app

Hey everyone,

So about a year ago, a friend of mine used to roll a die to give himself points whenever he completed a habit. Sounds weird but it actually worked for him. The randomness made it fun. I thought why not make that into an app? So I know many people are building habit trackers but with this small twist and also an excuse to start learning something new I decided to just go for it.

I built the first version as a side project and honestly didn't think much of it. It just sat there. Then earlier this year it started getting some downloads out of nowhere and that's when I decided to actually take it seriously. I revamped the whole app, made the animations satisfying, cleaned up the design, tried to make it something I'd actually want to open every day.

3 months later: $300 in revenue, 40+ paying customers, only 1 person ever churned

The core idea is simple. You complete habits, earn points based on difficulty and streaks, then use those points to unlock real rewards you set for yourself. Want to go to a restaurant? That's 500 points. New sneakers? 1000. You decide.

I put in all the stuff you'd expect from a habit tracker and maybe more. Widgets, custom reminders, different views, iCloud sync, etc... But the random reward thing is what makes people stick around I think, not necessarily because of the randomness but also because it is quite satisfying to complete a habit. There's something about seeing points fly across the screen that just hits different. Kinda triggers that dopamine loop but for something actually useful.

One thing I'm struggling with though is getting reviews. I do prompt for reviews after completing habits but the numbers barely move. If anyone's cracked that I'd love to hear what worked.

If you have any questions or feedback, happy to chat.

https://apps.apple.com/us/app/habit-tracker-rewardly/id6744983364

r/ClaudeAI JKeetonKnives

"Easy Page Capture" Chrome Extension - Built With Claude!

So I've never built an extension before and wanted to give it a go. One thing I find myself wanting to do is give Claude more context at times by grabbing some information for a webpage.

What this extension does is allow the user to select their default file format, where they want to save it, and the title formatting. Once those are selected the user can just click the button and the whole page is copied and saved!

I've used this for wiki pages that I want to use as context for some tasks with Claude as well as grabbing data tables on pages that don't have an export.

Claude walked me though the process and built an appropriate file structure to build this app as well as walked me though how to get it in the extension web store. Hope this inspires someone to build!

https://preview.redd.it/x6n99eied7vg1.png?width=1636&format=png&auto=webp&s=c77b3e636d2dcd40eed9383ec0d35c3d6f7e7eb7

https://chromewebstore.google.com/detail/hoiafieplbnjolbpcmcjjpfeenjnbpjk?utm_source=item-share-cb

r/ClaudeAI tyguy385

i think ive been working claude to hard

r/LocalLLaMA NoUsual5150

Been out of the loop - Will this work for EXO/MLX?

Had to sell my AI server and am down to an M4 Macbook Air 16GB.

If I were to buy a used M1 Air with 16GB (run it headless) and connect the two via EXO + Thunderbolt...would it be possible to be able to run a (19.6GB) Qwen 3.5-27B-Q5_K_M.gguf at or around 10 tokens per second?

I have been out of the loop for over a year and trying to see if this proposed configuration would work.

r/SideProject kizza0

I built a simple tool to track all your income in one place

Hi everyone, I built a tool called Monti to track all your income in one place.

The idea came from wanting a simple way to see money coming in across different sources without digging through multiple apps or bank accounts.

Right now you can add income streams manually and get a clean overview. I’m working towards automating this in the future using integrations like Plaid so it updates in real time.

Still early, so I’d love to hear what you think or what features you’d want added.

Link: https://getmonti.co.uk

Cheers,

Kieran

r/AI_Agents Lumpy-Sir9871

Built an open source IDE for running parallel AI coding agents. would love feedback.

We kept running into the same problem: AI agents are fast enough to handle 10 things at once, but there's no good way to actually run them in parallel without everything turning into a mess of terminals and merge conflicts.
So we built Workstreams, a macOS app that gives each task an isolated git worktree, runs agents in parallel, and lets you review and send feedback from one place. Basically going from pair-programming with one agent to tech-leading a team of them.

It's at v0.1. Open source, works with Claude Code / Codex / any CLI agent. Full IDE with LSP, not just a terminal wrapper.
Next up we're building an autonomy dial (fully autonomous to full human-in-the-loop) and a central command view.

What should we prioritize? ⭐ if you find want to follow along

r/SideProject rinakin-dev

A Pomodoro-inspired app with an always-on-top timer and a plant buddy

Built this with Tauri as a side project to get into desktop app development. It’s a Pomodoro-style focus app with an always-on-top timer, multiple themes, and a few plant companions that grow as you work. I know there are a lot of Pomodoro-style apps out there, but I made it mostly for myself to make focusing a bit more enjoyable through customization and a cozy plant aesthetic.

I definitely have ideas on how to move on from here but would love to hear any thoughts/feedback. Would be curious if anyone else would actually use something like this!

r/ChatGPT keepingmemories

Should I stop my ChatGPT subscription and just stick with Claude full time?

So, I have been doing a lot of coding lately for my project and I had always used ChatGPT from the very start, however I have only started using Claude a few weeks back and I can tell there is a huge difference!

  1. Claude actually remembers our past conversations, unlike ChatGPT, which i need to keep repeating what we discussed previously, Claude is able to look at the past chats and make a decision based on the question I have just asked
  2. Even ChatGPT is impressed! I have asked the same question to both chatbots, and they both gave me 2 sets of code. Claude gave me a production-level ready to run code, whereas GPT gave me a code, with missing info or bugs.
  3. Claude does make mistakes. I know, so does GPT. Which is why i currently use both, ask the same question re-run it with the other one, get both opinions come up with the best solution. This works but i am paying pro-plans for both bots, and its not cheap. Plus, Claude's allowance runs out in under 1 hour, especially with the heavy coding.

So ultimately, should i stick with Claude as the main AI tool, and put in the same amount of money I am paying for GPT's pro plan into Claude's extra allowance or just keep both?

r/SideProject fainir

I built MyFeed - describe any topic and AI builds a custom feed for you from thousands of sources

Hey! I've been working on MyFeed (https://myfeed.space) - a free tool that lets you create personalized content feeds from any topic.

How it works:

- Type any topic like "React tutorials" or "SpaceX launches"

- AI scans thousands of sources (RSS, news sites, blogs, Reddit, YouTube)

- Builds a custom feed just for you, updated continuously

- Get it delivered to your email every morning

There are 15+ public feeds you can browse without signing up (AI, Tech, Gaming, Science, etc.) and you can create your own custom feeds from any prompt.

Tech stack: Next.js, Supabase, GPT-5 Nano for article classification, Railway hosting.

Check it out: https://myfeed.space

Would love to hear your feedback!

r/ClaudeCode Zainodi

System prompt from a newb POV

Im a const. contractor goofing around with several app right now. I have one that is pretty "complex" and i have probably 250+ hours in it. Anyways, i asked Claude a question in my project:

-------------------------------------------------------------------------------

i found a thread on reddit discussing system prompts. would a long highly detailed prompt help with what im trying to do with CMS? i feel like i can get my point across decently, but maybe there is more to it then that

3:40 PM

The knowledge doc you've built is already doing the heavy lifting that most people try to solve with prompt engineering tricks. Let me be direct about what actually matters vs. what's overhyped.

What your current setup does well

The project knowledge doc is genuinely good. Schema, file paths, architecture decisions, common mistakes — that's the stuff that prevents me from hallucinating wrong table names or suggesting .jsx files. That kind of factual reference material is high-ROI.

Where more detail actually helps

The one area where you could get more mileage is decision context — not just what the architecture is, but why certain tradeoffs were made and what constraints are load-bearing. You already do some of this (the "Important Decisions Log" section is great). If a decision isn't in there and you hit a bug related to it, I'll sometimes suggest a fix that's technically correct but breaks an invariant you care about.

For example: the refresh() removal from non-last-set path — knowing why (it was killing the rest timer) means I won't re-introduce it as an "obvious fix" for a stale data bug.

What doesn't help much

  • Longer instructions about how to respond ("always be thorough", "think step by step") — mostly noise
  • Personality/tone instructions — you clearly don't need that
  • Repeating information in multiple places hoping it sticks — it doesn't compound

The honest bottleneck

The real limitation isn't prompt length — it's context window and recency. When you paste a 600-line file and ask about a bug, the knowledge doc competes with that for my attention. The fix there is what you already do: paste the specific file, paste the specific error. That beats any amount of system prompt detail.

Your setup is already in the top few percent of how people work with me on a real codebase. The marginal return on making the prompt longer is low compared to just keeping the knowledge doc accurate as the codebase evolves — which it looks like you do.

r/artificial rivarja82

Claude Code Degradation: An interesting and novel find

As many of you have likely seen, the Claude Code community newswire has been ablaze with Claude Code being quite degraded lately, starting in February, and continuing to this day.

Curious to understand if there was any "signal" on the wire when using Claude Code, I fired up my old friend WireShark and a --tls-keylog environment flag. Call it a man-in-the-middle attack on my own traffic.

The captured TLS network traffic reveals the system prompts, system variables, and various other bits of telemetry

The interesting part? A signature routing block that binds the session to a cloud instance with an effort level parameter, named Numbat. Mine, specifically, was
numbat-v7-efforts-15-20-40-ab-prod8

So, it would appear that the backend running my instance is tied to an efforts-15-20-40 level.

Is this conclusive? Not definitively, since only Antrhopic could tell us what that parameter actually means in production.

Side note, a Numbat is an endangered critter that eats Ants in Austrialia :)
If the "Numbat" eats the "Ants" (Anthropic), and Numbat is the engine that controls "Effort," the name itself could imply a "cost-eater" or an optimizer designed to reduce the model's footprint, likely in favor of project Glasswing efforts with Mythos

Follow for more insights on Claude Code

Numabt-v7-Efforts-15-20-40

r/SideProject qdov

I spend hours to create a static, AI-assisted, local-first workspace that runs entirely in your browser.

Hi SideProject,

I’d love to get your feedback on ASLJS — App Builder, a fully static, AI-assisted, local-first workspace that runs entirely in your browser.

You can describe an app, generate files, edit them, and run the result — no server required. It’s powered by a set of powerful and practical components that I’ve developed over the years.

If you have a feature in mind, feel free to share it and we can work it out. It’s open-source and easily extendable.

r/ChatGPT Educational-Draw9435

reddit subreddits across is being hypocrite is tolerating AI from big companies but we users cant use it

lovecraft reddit and others are 0 tolerance baning AI, but they themselfs are using mod bots AI systems the visibility, youtube can, reddit can, but if we little users that are poor and broke try, we get slamed, why this lobbiing of anti AI using AI to ban AI while themselfs using AI to rule over others, whyyyyy

r/ChatGPT Objective_River_5218

How to make Codex (or any agent) do your work without any instructions (it learns by watching you!). Open-source

Hiii - here is a simple demo of how AgentHandover watches my screen and then instructs AI agent to do it like me without me explaining.

AgentHandover watches how you work on your Mac, turns your workflows into reusable Skills, and lets agents like Codex, OpenClaw, etc. can execute them the way you do it by just typing /ah-skill-name and watch it do the magic.

Each Skill captures the what, the why, and the how - steps, strategy, decision logic, guardrails, and your writing voice. And they're self-improving: agents report back after every execution, successes boost confidence,

Two modes: Focus - quick workflow to skill when you have specific workflow in mind. Observe - it watches you and over time figures out what your workflows are and creates Skills.

It is fully open source, local AI via Ollama (it suggests the best model based on your VRAM automatically), no telemetry, no wifi needed, no cloud.

It also embeds everything so that you agents can refer to knowledge base if needed.

The goal is to reduce manual prompt/agent configuration with demonstration-based learning. Still early, but would appreciate thought!

https://github.com/sandroandric/AgentHandover

If you would like to please consider giving a star for support and motivation :)

r/artificial stosssik

How much are you actually spending on AI APIs? I built an OpenSource router to cut that.

I've been working on Manifest, an open-source AI cost optimization tool. The idea is simple: instead of sending every request to the same expensive model, it routes each one to the cheapest model that can handle it. Simple question → cheap model. Complex coding task → heavier model.

How many people are already paying for subscriptions (ChatGPT Plus, GitHub Copilot, Ollama Cloud Pro, etc.) but still pay separately for API access on top of that. So we added direct subscription support. Right now you can plug in:

  • OpenAI
  • GitHub Copilot
  • MiniMax
  • Z ai
  • Ollama Cloud

Just connect your existing plan and route across all their models.

Curious about this community. How do you handle your AI costs? Do you stick with one provider, use multiple, or have you tried any routing/optimization setup?

Manifest is free, runs locally, MIT license.

👉 github.com/mnfst/manifest

r/ClaudeAI Bogdan_Romaniuk

Where does your work actually go after a Claude Cowork session?

I've been doing product research on how knowledge workers — consultants, PMs, researchers, solopreneurs — manage the outputs of their AI agent sessions. Not the final deliverable, but everything that happens *along the way*.

Here's the pattern I keep seeing in my research so far:

Most AI tools are built around a single session. You open Claude Cowork (or a similar agent tool), do several hours of real work — the agent reads documents, makes decisions, flags things, creates tasks — and then the session ends. The final output lands somewhere. But everything else? The reasoning behind a direction you chose, the tasks that came up mid-session, the things the agent flagged that you meant to follow up on — it quietly disappears into chat history.

The interesting tension: this isn't a problem most people consciously notice, because there's no moment where something visibly breaks. The context just... doesn't carry over. You start the next session slightly blind.

I've been trying to figure out whether this is actually painful for people who use these tools daily — or whether most people have already built workflows that solve it (manual notes, Notion databases, end-of-session summaries, etc.).

A few questions I'm genuinely curious about:

  • Does the "where did my day go" problem feel real to you, or does it not come up?
  • Do you have a system for capturing what happened during an agent session, beyond the final output?
  • Have you ever started a new session and wished you had more context from the previous one?

I'm doing 5 short interviews this week (20 min, Zoom or Meet) with people who use Claude Cowork, Claude Code for non-coding work, or similar agent tools regularly. No product, no pitch — I'll share the findings back here when I'm done.

If any of the above resonates and you have 20 minutes — drop a comment or DM.

r/SideProject Repeat_Admirable

Shipped Mint today — a native macOS file organizer + deep cleaner. Built it because I thought I needed a new Mac, turned out my disk was just full.

Built this because I kept thinking I needed a new Mac. Turns out my disk was just full.

The moment that started it: spinning beachball kept showing up. I was one browser tab away from opening Apple's site to shop for a new machine. Then I actually dug in. Hardware was fine. Disk was full. Again. Same as three months earlier, same as three months before that.

So I built Mint — a native macOS app that organizes files and deep-cleans junk, runs entirely on-device, no subscription, no cloud.

Why I built my own

I tried a few existing tools first. Most either wanted elaborate rule configs (which felt like homework), or wanted to upload my filenames and folder structure to some cloud service so an "AI" could "figure me out" — hard no, that's my private file list.

I just wanted a dishwasher. Toss in the dirty files, let it run in the background, forget about it. Nothing did that exactly, so I made it.

What Mint actually does

  • Sorts files by what they actually are, not just extension — a PDF becomes an invoice, receipt, or manual based on what's inside it. A screenshot of code sorts differently from a photo from last weekend. All on-device — nothing about your files leaves your Mac.
  • Deep Clean — the stuff you're too scared to rm -rf yourself: leftover Application Support from apps you uninstalled years ago, orphaned Containers, caches, duplicates by content.
  • Reversible — everything goes to macOS Trash (not permanently removed), every batch logged, one-click restore until you empty Trash.
  • Runs daily in the background — think Roomba, but for files.
  • CLI — 18 commands if you live in the terminal.

Pricing

  • 14-day full trial.
  • Free tier works forever (basic "organize by type", 50 files per day).
  • Pro: $39 one-time, 1 year of updates. No subscription.

The honest founder note

Early tester said: "I just wanted a simple drop zone, and you gave me the entire kitchen sink." That was the best feedback I got. I cut 40% of features and rebuilt the flow.

Today is launch day. First 100 here get 25% off with code REDDITMINT → $29.

Link: https://mint.dzgapp.com

Happy to answer anything — the on-device classification pipeline, why I chose Developer ID over Mac App Store, pricing trade-offs, or what features I killed after "kitchen sink."

r/SideProject Life-Sentence-9768

Building a multimodal AI (Django + FFmpeg) to process video/docs. Need feedback on the report structure and UX.

Hi everyone,

I've been working on Jexi, a side project designed to bridge the gap between long-form video content and actionable data. I just hit a stable release and I’m looking for some honest critique from fellow makers.

The Tech Stack:

Backend: Django / Python.

Media Processing: Integrated FFmpeg for audio normalization and stream extraction from YouTube/MP4.

NLP: A custom pipeline for Sentiment Analysis and multi-language translation (I recently spent a week just fixing CJK character encoding for PDF generation!).

Storage: Task-based processing to handle larger video files without blocking the server.

Where I need your feedback:

Output Value: Jexi generates a structured 4-page report from a video. Is a long-form PDF report still valuable in a "short-form" world, or should I pivot to bite-sized dashboards?

UX Flow: I'm struggling with the balance between "one-click" simplicity and giving users control over the transcription/translation settings.

Performance: If you test it, how was the latency during the video-to-text phase?

For those who want to deep-dive:

I’ve integrated Stripe for tiered access. To get some of you into the "V3" (Translation & Monetization) features for testing, I’ve created a coupon code JEXI20 (20% off) for the first 20 people who want to break the system.

I'm here to answer any technical questions about the Django-FFmpeg integration or the encoding hurdles!

Link: https://jexi.projex.ro

r/LocalLLaMA Competitive-Way7095

Built an open-source reverse proxy with budget caps and a kill-switch. Works with Ollama, vLLM, LocalAI, llama.cpp server, any OpenAI-compatible endpoint.

r/ClaudeCode PayProfessional5574

Does Claude Code delete content in a thread like Cowork does?

Recently, Cowork started deleting content from my chat threads after a compaction. This new behavior is really problematic for me. Does Claude Code do the same thing now too?

r/LocalLLaMA AccomplishedRow937

Why don't Groq (with a q) and Cerebras add new models

Both Groq and Cerebras haven't really updated their provided model for a while, long enough to notice the difference between old and new models on the market.

So why don't they add any new models? Qwen3.5 or Gemma 4 for example

r/LocalLLaMA Errorunnamed

Gemma 4 & Obsidian

so today I tried the Obsidian LLM wiki system by Karparthy, but with Gemma 4 locally in OpenCode with instead of Claude code.

My experience is very frustrating. I tried both 26b and a4b models.

I have a lot of issues to make it follow the instructions in the agents.md file. It always takes shortcuts, skips steps, and does dumb things, like writing random dates in log files.

Anyone relates, or maybe am I doing something wrong?

r/SideProject Putrid_Stop_4136

I built a "personal sanctuary" app to help me stay in flow while coding. Meet Ambiee.

Hey everyone,

I’ve always found that a specific mix of ambient noise like rain hitting a window + a soft cafe vibe, is the only way I can stay focused for hours. I got tired of switching between different apps for sounds, timers, and breathing exercises, so I built an all-in-one toolkit called Ambiee.

What makes it different:

  • The Mixer: You don't just play a track; you build an environment. Layer rain, ocean waves, white/pink/brown noise, and lofi piano to find your perfect frequency.
  • Integrated Focus Timer: Built-in Pomodoro-style timer so you don't have to leave the app to manage your sessions.
  • Breathing Techniques: Quick 4-7-8 and Box Breathing exercises for when you're feeling burnt out or anxious between tasks.
  • Background Play: Works seamlessly while you use other apps or when your screen is off.

I'm the solo developer and I'd love for some fellow builders/students to give it a spin and tell me what you think of the mixer!

r/SideProject Stock_Property_3232

Built an AI visibility auditing tool that shows whether ChatGPT recommends your local business

Hey r/SideProject, been working on this for a few months. The idea came from noticing that AI tools like ChatGPT are increasingly where people find local businesses, but there was no way to measure or improve your visibility in those recommendations.

The tool queries ChatGPT, Gemini, and Perplexity with real prompts, scores your visibility, benchmarks you against competitors, and gives you a prioritized action plan with a downloadable PDF.

Free scan available, no signup required. Would genuinely appreciate feedback.

visibilityindex.ai/free-audit

r/SideProject primetime43

I built a tool to catch fast food locations overcharging or dynamic pricing near you

I made this site to compare local fast food menu price to see which locations overcharge. Most seem to be within dollars difference when in/near the same zip code from what I’ve seen so far. You’re also able to compare far away prices, for example your local area to NYC.

The prices are updated automatically after the store’s menu prices are initially fetched. It keeps a history of the prices so you’re able to look back on price movements and different prices throughout the day (dynamic pricing).

Fast food chains charge different prices depending on your location — sometimes even different prices throughout the same day. I built PriceBite to make that visible.

You can search any zip code and compare menu prices across nearby locations, or pit your local area against somewhere like NYC. The site automatically tracks price history, so you can see when and how prices change (dynamic pricing in action).

Looking to get some feedback etc. Currently supports Taco Bell, Wendy’s, Burger King, five guys, chick fil a, and Arby’s

https://pricebite.org/

More detailed info on the GitHub readme:

https://github.com/primetime43/PriceBite-public

r/StableDiffusion Suibeam

Is any of the models good at visualizing rooms from blueprints/casual drawings of rooms? It would help a lot for planning renovations and interior design for building house

For example I have a light layout in mind with led profiles and spot lights, and some furniture. I would like to translate my ideas and layout to pictures you would see for example on pinterest.

With blueprints/casual drawings and/or prompts?

r/AI_Agents Due_Anything4678

I built a tool that turns repeated file reads into 13-token references. My Codex and Claude Code sessions use 86% fewer tokens on file-heavy tasks.

I got tired of watching Claude Code re-read the same files over and over. A 2,000-token file read 5 times = 10,000 tokens gone. So I built sqz.

The key insight: most token waste isn't from verbose content - it's from repetition. sqz keeps a SHA-256 content cache. First read compresses normally. Every subsequent read of the same file returns a 13-token inline reference instead of the full content. The LLM still understands it.

Real numbers from my sessions:

File read 5x: 10,000 tokens → 1,400 tokens (86% saved)

JSON API response with nulls: 56% reduction (strips nulls, TOON-encodes)

Repeated log lines: 58% reduction (condenses duplicates)

Stack traces: 0% reduction (intentionally — error content is sacred)

That last point is the whole philosophy. Aggressive compression can save more tokens on paper, but if it strips context from your error messages or drops lines from your diffs, the LLM gives you worse answers and you end up spending more tokens fixing the mistakes. sqz compresses what's safe to compress and leaves critical content untouched. You save tokens without sacrificing result quality.

It works across 4 surfaces:

Shell hook (auto-compresses CLI output)

MCP server (compiled Rust, not Node)

Browser extension (Chrome + Firefox (currently in approval phase)— works on ChatGPT, Claude, Gemini, Grok, Perplexity)

IDE plugins (JetBrains, VS Code)

Single Rust binary. Zero telemetry. 549 tests + 57 property-based correctness proofs.

cargo install sqz-cli

sqz init

Track your savings:

sqz gain # ASCII chart of daily token savings

sqz stats # cumulative report

Token Savings

sqz saves tokens in two ways: compression (removing noise from content) and deduplication (replacing repeated reads with 13-token references). The dedup cache is where the biggest savings happen in real sessions.

Where sqz shines

Scenario Savings Why Repeated file reads (5x) 86% Dedup cache: 13-token ref after first read JSON API responses with nulls 7–56% Strip nulls + TOON encoding (varies by null density) Repeated log lines 58% Condense stage collapses duplicates Large JSON arrays 77% Array sampling + collapseToken Savingssqz saves tokens in two ways: compression (removing noise from content) and deduplication (replacing repeated reads with 13-token references). The dedup cache is where the biggest savings happen in real sessions.Where sqz shinesScenario Savings WhyRepeated file reads (5x) 86% Dedup cache: 13-token ref after first readJSON API responses with nulls 7–56% Strip nulls + TOON encoding (varies by null density)Repeated log lines 58% Condense stage collapses duplicatesLarge JSON arrays 77% Array sampling + collapse

Happy to answer questions about the architecture or benchmarks. Hope this tool will Sqz your tokens and save your credits.

If you try it, a ⭐ helps with discoverability — and bug reports are extra welcome since this is v0.2 so rough edges exist.

It is available as IDE Extension , CLI , so it will be able as web extension to use with chatgpt, claude , gemmini websites as well.

r/ChatGPT JosephStalinCameltoe

I told it recent news from after its knowledge cutoff and it does this

I framed it as if I wrote the following points for a political satire novel striving for perfect realism, it called it unrealistic, especially the Hegseth thing, then asked it to look these things up. it denies everything after I finally allow it to look things up. interesting response. very interesting.

r/StableDiffusion PetersOdyssey

Parisians: we're running an open source AI art hackathon with LTX + NVIDIA this Saturday

Hack and train on H100s for a day w/ people from the open source community + reserachers. Full details here

r/ClaudeCode No_Put4604

Claude Code referral link

does anyone have 1 week trial link I can test it out? thanks beautiful human beings

r/LocalLLaMA Feisty-Drummer-6178

Best local model for LLM Wiki style app rn?

Hey folks, wanted to hear your opinions on the best Local LLM to use in the LLM Wiki system like Karpathy proposed.

r/comfyui juanpablogc

Release date May 1, Why I did not make this tool before?

I like the UI much better now; it's clearer and cleaner. I know it's similar, but it's more powerful, with more control over everything without overwhelming the user. And there's no limit to the number of prompts, whether JSON or text. Everything is going very well. I think I'll release the app on May 1st. What do you think? What workflows and checkpoints do you think I should add at the release. Z, Illustrious with some checkpoints, Flux Klein 4b, and Qwen Image. Any other?

r/SideProject TheyCallMeAHero

I built a tool to see who unfollowed me on LinkedIn and other social media platforms after I noticed a client disappeared

A while back I was going through my LinkedIn connections looking for someone and realized a guy I used to do work for wasn't in my network anymore. No idea when he left. It bugged me more than it should have, partly because I'd been meaning to follow up with him about a new project and now the disconnect felt like an answer I didn't get to hear in person.

I went looking for a tool to track this and everything I found wanted my LinkedIn password or used a browser extension that scraped my account. Both of those are a great way to get your account banned.

So I built followtracker.io. The whole thing works off the data export LinkedIn already lets you download yourself. You drop the file in, it stores your connection list, and next time you upload it tells you who's gone and who's new. No login, no scraping, no bots, nothing that could get your account flagged. I realized this could work for other platforms too, so I built those in as well.

Honest things I'm unsure about:

  • Whether people will actually come back for a second upload, or just check once and forget
  • Whether the 24h export delay kills the experience before users get to the good part
  • Whether "who unfollowed me" is a real recurring pain or just an occasional itch
  • Do you guys think the market is oversaturated for this kind of tool? Although I did not find any tracker for LinkedIn

I'm a solo dev, been building software for 20+ years, and this is a side thing I've been working on for a few weekends.

Would genuinely love feedback. Especially if you think the whole premise is flawed, I'd rather hear it now than after I've spent another month on it. Also happy to hear if the landing page is confusing or if anything on it feels off.

r/SideProject __adr

CLI that extracts rules from your tests for AI coding tools

Built a small CLI that reads your test files and turns them into a markdown rules file for AI coding tools.

The idea is simple, tests already encode business rules, this just makes them explicit so AI agents don’t break them.

Still early (works on TS/JS for now), curious if this is useful to anyone else.

r/ClaudeCode SnoooCookies

Anyone been able to create strong design PPTs?

I like the storyline and some of the layouts it creates, but it caps out at a 6/10.

How do I create beautiful decks that include custom imagery, use multiple layers, opt for rounded edges and other polish you need to go from a 6 to a 10.

What is your experience? Are you able for it to perform better?

r/ClaudeCode CryLast4241

Just hit my session limit on max the first time

Didn't do anything crazy, just regular usage nothing special, all of a sudden I am at a session limit. I use codex in paralel and wanted to review what it wrote, its says I reached limit and asking me to pay, did they just hard nurf limits? I never reached usage limits before, this looks way off. Has anyone experienced this? Considering number of people subscribing off of max plan it looks like I might be looking to switch to something else, but honestly I wasnt even using it heavily today...

r/ClaudeCode __adr

Tiny CLI that turns your tests into rules for Claude Code

Been using Claude Code and kept running here and there into the same issue, it writes code that passes locally but breaks actual business rules.

Most of those rules already live in tests, they’re just not in the prompt.

I built a small CLI that parses test files and generates a markdown rules file you can feed into CLAUDE.md.

It makes the model follow the same constraints your tests enforce.

Early and only tested on JS/TS projects so far.

r/SideProject Cargo80

DeDNA , Turn your GitHub activity into a developer personality card

Introducing DevDNA it turns your GitHub activity into a “developer personality"

Most GitHub stats cards just show numbers (commits, stars, etc.)… but they don’t really say how you code.

So I thought — what if your GitHub could actually describe your coding style?

DevDNA analyzes your real GitHub contribution data and generates a developer personality card (SVG) that you can embed anywhere (README, portfolio, etc.)

It basically combines:

GitHub stats

Developer traits (like Builder, Problem Solver, Collaborator, etc.)

A visual identity

You get something that answers:

What kind of developer are you?

How it works

Uses GitHub GraphQL API

Looks at commits, PRs, issues, repos, languages

Converts that into traits like consistency, impact, collaboration

Renders a dynamic SVG (so it’s fast + embeddable)

Themes

So many

🔗 Try it:

https://thedevdna.vercel.app

You can embed it like:

![DevDNA](https://thedevdna.vercel.app/api/dev-dna?username=YOUR\_USERNAME)

r/SideProject Wonderful_Leading946

week 2 of scaling a SaaS at 15 years old

week 2 of scaling a SaaS at 15 years old

I thought building the product would be the hard part.

Turns out… marketing is way harder😅

Week 2 stats - 8 signups, $0 MRR. Still early, but starting to see how people might use it.

What I worked on

  • Redesigned the whole app to be more cohesive
  • Add a quiz feature where users can quiz on the topic they just learned
  • Marketing on: YouTube, Instagram, TikTok

Biggest challenge

Not code. Not product.

👉 Marketing

  • Youtube: video retention
  • Tiktok: video hooks
  • Instagram: video hooks

Feels like progress is gated by marketing, and without it we can't go anywhere.

Big realization

Marketing is a marathon, not a sprint. It takes a while to gain REAL traction.

Curious:

  • How would you market an educational scrolling app?
  • Do you have any experience with marketing?

Happy to share more details if anyone’s building in this space too

ascentwaitlist.vercel.app -> visit and complete the waitlist if you're interested

r/ChatGPT varkarrus

GPT-Image-2 today?

Evidence:

* It's been tested in Arena the weekend before last.

* It's been in A/B testing on the ChatGPT app

* Some people have access outright

* Image generations keep returning errors and they have to retry, could mean some backend work is being done.

* It's tuesday, and both gpt-image-1 and gpt-image-1.5 launched on tuesdays.

r/StableDiffusion Thodane

What's the best ComfyUI workflow for changing the location of an object without changing how it or the overall image look?

As the post asks, I'm trying to figure out what workflow to use in comfyui to change the location of an object without changing how it or the image look. Any help is appreciated!

r/SideProject Some_Artist_244

I built PixGrid — a Powerful AI photo editor that runs entirely in the browser with no installs or subscriptions!

Hey!

Just wanted to share my Personal Project that went live just 2 weeks ago (2 years of dev/4k+ hours) - PixGrid.io

What it does:
PixGrid is an AI-powered photo editor that runs 100% in the browser. It combines a layer-based canvas editor (think Photoshop) with social media templates (think Canva) — background removal, object erasing, filters, text, shapes, and export to PNG/SVG/PDF.

Why I built it:
I wanted professional editing tools without the $13/month subscription trap. Background removal alone costs money on most platforms — even basic photo grids are paywalled. PixGrid gives you 60+ AI tools for free — no account needed for basic editing. Sign up (free) and you get 1 full-resolution export per day and access to 99% of features.

What makes it different:

  • No subscriptions, ever. One-time passes starting at $2.99 if you want premium AI features. No auto-renewal.
  • Privacy-first. Your images are processed on private servers and deleted immediately. Nothing is stored or used for training.
  • Actually free. The full canvas editor, filters, templates, and most AI tools work without paying.
  • Browser-only. No downloads, works on desktop and mobile.

Tech stack (for the curious):
React + Fabric.js canvas, FastAPI backend, Docker, local AI models (Many and quite well optimized while still focusing on max quality). All AI runs on my own servers — no OpenAI/Replicate API calls.

Link: [pixgrid.io](vscode-file://vscode-app/c:/Users/Mctwt/AppData/Local/Programs/Microsoft%20VS%20Code/41dd792b5e/resources/app/out/vs/code/electron-browser/workbench/workbench.html)

Would love feedback — especially on the editing UX and AI tool quality. What features would make you switch from your current editor?
Or what do you think should be there ?

r/SideProject Chance-Bus-246

We built a public roadmap with voting, powered by Notion

My co-founder and I run a small SaaS (2 people, part-time). We manage everything in Notion - tasks, docs, CRM. When we wanted to let users vote on upcoming features, dedicated tools like Canny or Featurebase meant duplicating tasks into a separate system. And sharing the Notion page directly wasn't an option - some tasks are internal and not meant for the roadmap, plus the database has properties we don't want users to see.

So we dogfooded our own product (SlapPortal - it turns Notion databases into external-facing portals). We control exactly which properties are visible. Users see the roadmap, read feature descriptions, and vote - all pulling from the same Notion database we use internally.

Votes are stored as Notion relations. No data duplication. Notion stays the single source of truth.

Once a feature is released, we can easily reach out to the users that were interested. It’s also easier to find beta users for some features.

getslap.co/portal

Would love feedback. Anyone else using Notion as a backend for user-facing features?

r/SideProject DryPainting9546

3 side projects plus a day job juggling with 2 kids. Need advice!

I am close to finishing work on 2-3 side projects targeting small businesses. I have a full time

Job as a PM and also two lil kids. Am I crazy for shipping them all or is it manageable? I have a tech cofounder and thinking of getting some help for sales and marketing as well. Would love to get thoughts from people who have tried this and watch outs

r/SideProject ImSuchaNoob2

Crime App Development

Hey folks! I would like to build my own real-time crime updates map similar to Citzens and/or SpotCrime. It's for my own personal use. I liked Citzen when it first came out, but then they put many standard features behind the paywall. Obviously, I was never happy about that.

I also use SpotCrime, but it's very limited. They don't provide near real-time updates and/or many updates at all.

So I'm looking to build my own map instead (preferably using a no-code app). It doesn't require users to sign up or participate. It's simple and it's for my own use.

So the app should feature a visual map of a particular city (or every state of the USA). It will have icons representing the latest crime reported through cops bulletin. It will record each day's results for specific crime (assaults, robbery, murder, burglaries, etc), and I can choose a date to display the crime map for that day.

First, I would like to know, where can I get the datasets for each cities' police bulletin or blotter. Does anybody know?

And then, how would you update the map visually and automatically from the source?

Though I have programming skills, I prefer to create this with a no-code app builder. It's just faster that way.

Thanks! Any suggestions would be appreciated!

r/artificial Suspicious_Assist_71

Built a Telegram remote for Claude Code - v2 is live, open source

Sharing what I built after migrating from OpenClaw to Claude Code. The first thing that really sucked was losing all remote access. Sure there's Claude mobile but it's not that good and I couldn't stand waiting to get back to my server to check on running tasks.

So I came up with a solution...

The whole setup: I can text Claude from anywhere, send !commands (!stop, !plan, !opus, !status, !health, !effort with tappable buttons), get proactive notifications when long tasks finish, see "Claude is typing..." while he's working. Feels like OpenClaw did but it's native Claude Code with tmux + hooks.

I shipped v2 today with a typing indicator, a deterministic Stop hook (rebuilt from an LLM-judge to Python, zero missed replies now), and five new commands. v1 was April 9 so the cycle was tight.

Background: I'm not an engineer, I run BPO operations for a living. Wrote specs for my AI team to build. Whole thing is open source, MIT.

Repo: https://github.com/oscarsterling/claude-telegram-remote

Full story + screenshots: https://clelp.ai/blog/claude-telegram-remote-control

r/SideProject Rapidly_tech

Storage is the product you see | your data is product they sell

r/LocalLLaMA gurinmango

Local Gemma4 Bug

Apparently this bug is being addressed.. Could already be fixed upstream, but I thought I'd just put this here.

r/automation Elegant-King-7925

Agentic banking just removed the last manual workflow from my business

For the past year ive been automating every part of my business one by one. Emails, social media, reporting, client onboarding all of it runs without me now. The only thing I was still doing manually every week was banking for paying vendors, checking balances, issuing cards, sending wires.

Recently discovered something called agentic banking where AI agents handle financial operations through a conversation. A friend in a discord group told me about Meow and I tried it out. Set up a full business checking account, got a corporate card and configured all my payment permissions in about 15 minutes without touching a single form or website. You control everything from wire limits to daily spend caps and the agent cant do anything outside of what you allow. Account details stay completely separate from the AI too which is what made me pull the trigger on it. Has anyone done this?

r/StableDiffusion TumbleweedLatter2976

issues with pytorch / python / pip3 setup

TL:DR

how do i tell stability matrix to instal pip3 so i can install the correct version of pytroch / python / w/e else?

if not

can i manually move the correct versions into the right folders? or will that just break something?

------------------------

DETAILED VERSION

using stability matrix with package named framepack

trying to install specific pytorch version

because it says current one is not compatible with my gpu

--------------

got error:

Using Python 3.10.19 environment at: venv

× No solution found when resolving dependencies:

╰─▶ Because pip3 was not found in the package registry and you require pip3,

we can conclude that your requirements are unsatisfiable.

was using:

framepack click three dots > python packages > click the + at top left >

pasted:

pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu126 

( i made sure its correct version for my gpu )

NVIDIA GeForce GTX 1060 6GB

--------------------------

would prefer to make stability matrix work over stand alone setups

r/LocalLLaMA t4a8945

2x Asus Ascent GX10 - MiniMax M2.7 AWQ - cloud providers are dead to me

Hello,

I've been on a quest to get something "close enough" of Opus 4.5 running locally, for agentic coding, as SWE with 15 years of experience.

I tried with one spark (yeah I'm calling my Asus Ascent GX10 sparks - they're the same), with models like Qwen 3.5 122B-A10B, Qwen3-Coder-Next, M2.5-REAP, ... Nothing was scratching the itch, too much frustration. 128GB is simply not enough (for me) right now.

So I bought a second one (first one I paid 2800€, second one 2500€, plus 60€ cable - total 5360€ - that's without VAT because it's a business expense, so I get VAT back).

First I tried Qwen 3.5 397B-A17B thinking it would be "it". But it's not. It's not bad, it's just not up to the task of being a reliable agentic coworker. I found it a bit eager to say "it's done!".

Then I tried MiniMax M2.5 AWQ. 130GB for the Q4 version. Lots of room for KV-cache. It's slower than Qwen 3.5 397-A17B and doesn't have vision.

But oh boy is it a good agentic workhorse.

Then came M2.7 with its new license (that is clearly made to fight against shady inference providers, which I agree with - not made to fight against us) and while it's not light and day with M2.5, it's the best model I've used.

I've set it up with my own harness (an OpenCode-like interface that I've customized for my use case), and as long as I give it a way to verify its work, it delivers (either through tests or through using the playwright-cli).

It's amazing at planning, understanding issues, developing new features, fixing bugs... All the thing you'd expect.

Sure it's not perfect, but it IS close enough and fast enough. It does frustrate me from time to time, just like proprietary SOTA models do as well.

That does require to readjust your expectation a bit though, you can't expect the same thoroughness of GPT-5.4 or the sheriff attitude of Opus 4.6. It's different, it's local but it WORKS.

So I'm calling it, cloud providers are dead to me. 2x Spark is a great setup and with M2.7 I've got a solid agent working for me.

(they actually have quite bad thermals, stacking them is not optimal, they now lay flat on a desk)

PS: I have to pay my respects to the MiniMax team. They understand how to pack a great SWE in 229B parameters, while GLM-5.1 is at 754B (40B active), Kimi K2.5 at 1T (32B active), these guys understand compute. It's a win to be able to have such a smart agent in such a "small" footprint. They don't do it for us, they do it for themselves to provide great inference without as much compute as OpenAI/Anthropic/ZAI/Moonshot.

---

References:

r/ClaudeCode 740990929974739

Noob here: Ideal agent workflow for Shopify development?

Hi everyone,

Looking to find a workflow for Shopify development. This should be a "quick" project for a client.

I normally design web pages (Figma), but this client is open to having me use Claude to help develop my visual design into a fully functioning landing page for a product they're launching.

I have used Claude code only once in the past within Visual Studio Code. I was able to sync up my personal GitHub account, and I used a two-agent team to make tweaks to existing virtual reality runtimes. It was fun, but it was something that was completely on my local PC, and I'm not even sure that I had it all set up correctly.

For this project, I'd like to be more intentional and efficient throughout the development process.

I found a couple of GitHub sources that have Claude code instructions that are specific to Shopify, but I'm wondering if anyone could point me in the direction of a write-up or YouTube tutorial specific to setting up agent teams to code website content.

Hopefully this is on the simpler end of the spectrum since a lot of it is HTML, but Shopify code also involves Liquid, which is a form of Ruby as a coding language, as I understand it.

I'm not sure how much pre-work this will require and how to actually set this up, but ideally I would tell one agent what I would like to build, and be able to show them a png version of each section of the website that I would like to build.

Then, as I understand it, an agent acting as a "manager" could create a plan, hand that off to another agent to write the code, then they self-review and come back to me when they think they have something viable.

Do I have that workflow right? Would love some advice on how to set that up. Thanks so much for your help!

r/StableDiffusion Wonsz170

Flux 2 Klein 9B produces absolutely awful and ugly skin textures

I render 3D images that look somewhat realistic, but clearly not photorealistic. I use Flux 2 Klein 9B image2image to improve textures, materials, lighitng and reflections etc. When it comes to dead physical objects it works like charm. But when it comes to people and their skin textures results are ugly. Not disappointing, not bad but overly ugly. Skin textures are full of pimples, discolorations, excessively rough or look like someone has psoriasis or the plague. It happens in 90% of the time. Even if I write "no skin imperfections", "good looking skin" etc. the model doesn't seem to understand or ignores these instructions. What am I doing wrong? Can you recommend any solution to this?

r/LocalLLaMA Ueberlord

[Paper] Residual Streams / KV Direct

It seems we have entered a period of accelerating innovation regarding the KV cache. Someone mentioned this post's paper in the Github issue of llama.cpp for implementing Turbo Quant.

The Residual Stream Is All You Need: On the Redundancy of the KV Cache in Transformer Inference

https://arxiv.org/html/2603.19664v1

Associated Github repo:

https://github.com/Kaleemullahqasim/KV-Direct

Abstract:

The key-value (KV) cache is widely treated as essential state in transformer inference, and a large body of work engineers policies to compress, evict, or approximate its entries. We prove that this state is entirely redundant: keys and values at every layer are deterministic projections of the residual stream, and recomputing them from a single residual vector per token incurs exactly zero reconstruction error, not approximately, but bit-identically. We verify this across six models from four architecture families (135M to 4B parameters). Cross-task residual patching at every layer produces D KL=0 between patched and original output distributions, confirming that the residual stream satisfies a Markov property and is the sole information-carrying state. Removing the cache entirely and recomputing from scratch yields token-identical output under greedy decoding on all models tested. We build on this result with KV-Direct, a bounded-memory inference scheme that checkpoints residual vectors (5 KB per token on Gemma 3-4B) instead of full KV pairs (136 KB), recomputing keys and values on demand. Over 20 conversation turns, KV-Direct holds peak memory at 42 MB while the standard cache grows past 103 MB. Against five eviction baselines (H2O, StreamingLLM, SnapKV, TOVA, window-only), KV-Direct maintains 100% token match at every cache budget; all baselines degrade to 5–28%. A per-operation latency analysis shows recomputation runs up to 5× faster than reading cached tensors at moderate batch sizes.

My take (not fully understanding the abstract): I think it makes sense. The KV cache can be seen as a bridge from the model weights (origin) to the tokens produced so far (destination). They refer to this bridge as "residual stream" and have found some clever math which I can't comprehend to very efficiently recreate the KV cache like interpolation from weights to tokens.

If someone more knowledgeable can explain this better and what the consequences might be (no more KV cache?!) I would be highly interested.

r/LocalLLaMA Fried_Yoda

Does an MLX conversation have same capabilities as the GGUF?

For example, in LMStudio the official Gemma 4 is a GGUF that has Vision, Reasoning, and Tools flags. But the MLX version does not. Does this mean the MLX version doesn’t have those capabilities? Or do I need to do extra steps to enable them?

r/SideProject Life-Sentence-9768

How I built Jexi: A Multimodal AI that processes Video/Docs and handles Sentiment Analysis & Translation

Hey everyone,

I’ve been working on a project called Jexi for the past few months, and I wanted to share some of the technical hurdles I had to jump over to make it work. It’s a Django-based platform that processes both documents and video content (video from a very well-known platform/MP4) to generate structured 4-page analysis reports.

The Stack & Technical Challenges:

Multimodal Input (The Video Problem): To handle video from a very well-known platform and local .mp4 files, I integrated FFmpeg and ffprobe directly into the backend. The challenge was normalizing different audio codecs to feed them into the transcription engine without blowing up the server's RAM.

Sentiment Analysis & NLP: Instead of just a simple "summary," I implemented a sentiment analysis layer. It evaluates the speaker's tone and the document's "vibe," which is crucial for understanding the context, not just the text.

The Encoding Nightmare (Unicode/CJK): One of the biggest hurdles was handling Asian characters (Chinese/Japanese). Initially, the reports were coming out as "broken squares." I had to rebuild the PDF generation pipeline to support full UTF-8 encoding and specific font embedding so that Jexi can be truly global.

Infrastructure: I’m running this on a Linux server with a Gunicorn + Nginx setup. I recently automated the deploy process with Git hooks, so every push triggers a graceful restart of the worker processes.

Monetization: Integrated Stripe for a tiered subscription model (V2 for basic transcription, V3 for full translation and monetization features).

Why I’m sharing this:

I’m at the stage where I need to see how the system handles real-world stress. I’m looking for some technical feedback on the report structure and the processing speed.

Special for Reddit:

I’ve activated a JEXI20 coupon code in the Stripe dashboard for a 20% discount, limited to the first 20 redemptions, for anyone who wants to test the full "V3" capabilities.

I'm curious: How do you guys handle large file uploads in Django without blocking the main thread? I'm currently using a task-based approach, but I'm looking to optimize.

Link: https://jexi.projex.ro

r/ChatGPT MetaSelf

He was just following instructions 😭

r/LocalLLaMA mags0ft

What's the deal with Qwen3.5's and Gemma 4's reasoning traces?

Hey there,

I noticed something odd when trying out the latest and greatest local reasoning models recently. First, I just noticed it for Qwen3.5, but Gemma 4 seems to do it too:

The reasoning traces do that weird thing of starting with "Here is a detailed reasoning process for the problem: ..." or similar. Also, they seem to have began to suddenly include Markdown formatting and all the SOTA models apparently now like to write their reasoning as lists with bullet points?

What I don't get is why they are doing that. How does generating a few dozens of boilerplate tokens improve performance by any means? I am no hater of reasoning, and I don't think it's just "the model yapping around with no performance gain", but is it necessary to spend time and electricity computing tokens for "Here is a reasoning process: ..." and hundreds of "**" tokens that aren't even going to get rendered?

It almost seems like they messed something up with synthetic data generation: Did they prompt their teacher models to "generate a reasoning process" for each sample and "forgot" to strip the preamble and Markdown formatting from the training data? That would be hilarious, but I genuinely cannot think of any other way why this might have happened. You could literally pre-fill the preamble in the reasoning?!

It may just be my personal preference, but I prefer densely packed, coherent reasoning text and models that don't spend time computing formatting tokens for an internal monologue that I am only rarely going to look at.

Any thoughts on this?

Maybe there's a good reason for it, because many labs seem to be adopting this behavior. I'm seriously curious.

Best greets :)

r/LocalLLaMA Living_Commercial_10

Fully local AI studio for Mac (LLMs, images, video, voice, music) — no cloud, no tracking

Hey everyone 👋

I’ve been working on a side project for a while now and finally feel like it’s in a place worth sharing.

It’s called Lekh AI Proa fully local AI studio for macOS (Apple Silicon) that runs everything on-device. No APIs required, no subscriptions, no data leaving your machine.

What it does

Instead of just being another chat app, I wanted this to feel like a complete AI toolkit:

  • Run local LLMs (MLX + GGUF + JANG) – switch models mid-chat
    • With llama turboquant
  • RAG / Knowledge Hub – chat with your own files locally
  • On-device transcription (90+ languages via WhisperKit)
  • Image generation (including SD, SDXL + FLUX)
  • Image Editing (FLUX in-painting and Qwen Image Edit)
  • Video generation (LTX 2.3)
  • Music generation (ACE-Step)
  • Text-to-speech (Kokoro + Qwen TTS + MOSS TTS)
  • AI audiobook converter
  • AI 4x Image Upscaler
  • Benchmarking
  • Model converter
  • and more

Most AI tools today:

  • require subscriptions
  • send your data to external APIs
  • or lock you into one model/provider

I wanted something where: you fully own your AI stack — models, data, and workflow

No tracking. No cloud dependency. Works offline.

Who it’s for

  • Indie devs / builders experimenting with local AI
  • Privacy-conscious users
  • People who want an all-in-one AI toolbox instead of juggling 10 apps
  • Anyone with an M1/M2/M3/M4/M5 Mac who wants to push it to the limit

Biggest challenge

Getting all of this working locally without killing performance

A lot of work went into:

  • optimizing MLX inference
  • handling large models on limited RAM
  • working around macOS sandboxing (especially for advanced features like FLUX)

👉 https://lekhai.app/pro

Would love feedback 🙏

Especially from people experimenting with local AI – what features would you want next?

r/SideProject vlad1m1r

I built a free browser-based toolkit for musicians and music lovers — 7 tools, no sign-ups, no ads

I've been working on 440hz.app on and off as a side project. It's a collection of audio and music tools that all run in the browser, some for people who play music, some for people who just love listening to it.

What's in there:

  • Chromatic tuner
  • Metronome with tap tempo
  • Blind Listening Test — upload a track, it generates MP3s at different bitrates and creates a blind test, so you can test if you (or your gear) can actually hear the difference
  • CUE + FLAC splitter — upload a CUE file with FLACs and get individual tracks
  • Format converter — FLAC, MP3, WAV, all in the browser
  • Chord practice — pick an instrument and practice chords
  • Songwriting tool — arrange chords, hear them played back, get music theory suggestions for what comes next

I started it because I kept bouncing between different apps and websites for these things, and wanted one place to go. Also, these are simple tools that shouldn't require an account, payment, or login.

What tool would you want to see added?

r/homeassistant Own-Chemistry-495

I need too Ask again... I check the hole size from my Thin Client 5070 for the SSD and the caliper say it's 2,5mm size so i need a M.2,5 Screw too fit this screw hole.. Damn that's a fuckd up think too find that.

r/homeassistant OK_it_guy

Failed backups - could not get lock on db

Hello, We recently added quite a few devices and entities to our HA install and thereafter the automatic backups started failing with the message "Could not lock database within 30 seconds". Our install is pretty new, and the number of total entities, though it has increased a lot lately, is still at only like 1,369 and the DB is only like 800 MB. This doesn't seem like it should be causing DB issues, but it seems to coincide with all the new devices we added.

Any suggestions on what do to with this, as it doesn't seem to warrant moving away from SQLLITE.

r/ClaudeCode ButterflyMundane7187

Did they add this today?

What is this?

r/LocalLLaMA DifficultElk8014

Multi agent systems are being treated like a magic scaling solution and I don't think people understand the failure modes

The idea is simple, if one agent is good then multiple agents working together must be better. In practice coordination between agents is an extremely hard problem. They can contradict each other, duplicate work, create dependency loops, pass errors downstream with full confidence. The demos look impressive because they're designed to work. The production systems fail in ways the demos never showed you and the debugging process is unlike anything most engineers have dealt with before.

r/homeassistant FishOk3075

Catch-22 with Music Assistant, Air Play and Samsung TV

I'm getting started with Music Assistant. I want to use my Samsung TV as one of the group speakers. I use the TV regularly over Airplay with a phone or tablet. When I first used those devices, the AirPlay client on the TV presents a 4 digit authentication pass code to enter in to the device that streams the music.

In the case of Music Assistant, the prompt never comes to enter it. Yep, I went to the App's logs and there's entries for this:

2026-04-14 14:45:09.709 ERROR (MainThread) [music_assistant.AirPlay] [2026-04-14 14:45:09] [ LOG] [ player (419)] airplay: Starting device pairing for 'Samsung CU7000 75 TV', go to the web interface and enter PIN 2026-04-14 14:45:09.873 ERROR (MainThread) [music_assistant.AirPlay] [2026-04-14 14:45:09] [ LOG] [ player (419)] player: The AirPlay 2 device 'Samsung CU7000 75 TV' requires a valid PIN or password 

Catch-22: In it knows the code is needed but I never get asked, then AirPlay passcode times out.

I tried different Safari, Firefox Focus and the HA app. I have a feeling that the HA App is essentially going through Safari.

I have a W11 computer but I won't survive the 20 second run up to the second floor to type the code in

Thoughts?

EDIT: I went to the W11 Computer to see what would happen. I used the HA app for windows and also Edge. Same results in the log and I could here the 2 tone response the Samsung made the same 2 tone sound it makes when exiting the passcode screen

r/SideProject shtonemad25

I’m building a marketplace where people pay for 1:1 conversations, and whether that's a good decision remains unclear.

Over the last few months, I’ve been working on a side project called Chinwag.

The basic idea is pretty simple:

A platform where you can book a one-to-one video call with someone and pay for their time. Don't think coaching or consulting, just a normal conversation with someone who’s done something you’re curious about.

Examples:

  • someone who changed careers recently
  • someone who moved abroad
  • someone working in a role you’re thinking about

In an era where people are building AI agents to do everything except wipe your ass, it's definitely not a complicated idea. Also, I appreciate that there are definitely platforms in this space already.

But what I’m trying to do differently is:

  • make it feel casual (not overly formal)
  • let hosts set their own availability + pricing
  • remove the awkwardness of reaching out to someone cold

Right now, I’ve got an MVP that’s a couple of weeks away from being usable.

Before I push it further, I’m trying to answer one thing:

I’ve had mixed reactions so far:

  • some people immediately “get it”
  • others think it’s something people like the idea of but wouldn’t actually pay for

So I’m curious what this community thinks.

  • Would you ever pay for something like this?
  • Or is this solving a problem that doesn’t really exist?

Also, if you’ve built any kind of marketplace before, I’d be interested in what you found hardest early on (I’m already feeling the supply/demand problem).

r/comfyui Grinderius

I gave my best, my eyes are bleeding now... (Z image, Klein 9b, Z turbo and LTX 2.3 Dev).

Full uncopressed commercial on youtube in 4K: LINK

I made this commercial by using Z image base then editing it changing scenes in klein 9b(lot of trial and error), then refining it with low denoise in Z turbo image to get back some realism and skin refinement back.

After all of that i generated the whole video in ltx 2.3 dev basic workflow from Lightricks website(this took a lot of generations).

After that i added sound with hunyuan 1.5 and music with acestep 1.5 xl that editing it in premiere pro and at the end upscaling. What do you guys think, how did it turned out?

r/ChatGPT Level_Capped

How do I get rid of the message box in ChatGPT

Sometimes when I ask it to help me with an email or message, it gives it to me in that message prompt box. I find it extremely annoying because it prevents me from copying parts of the text until it finishes typing the full response. Sometimes I ask it to write in plain text to avoid this, and it still sends it to me in that stupid box. Any fixes?

r/homeassistant desperato

I use a Microsoft Surface 4 as my HA display in my kitchen. Is it normal for the fan to be constantly running?

The Surface 4 is Windows 10 using Firefox in kiosk mode and connected via wifi. Not sure if it makes a difference, but HA is in docker on my NAS.

I included a screenshot because maybe my dashboard is really cpu intensive (especially for an 11 year old tablet)?

Here are some of the things I have tried to do to take a burden off of the tablet:

Power & Performance

  1. Settings → System → Power & Sleep
  2. Power mode to Best power efficiency
  3. Settings → System → Display and drop brightness to minimum comfortable level

Disable Startup Apps

  1. Ctrl + Shift + Esc → Startup tab
  2. Disabled everything I don't need — OneDrive, Teams, Xbox, Spotify, anything that isn't Firefox

Strip Down Background Processes

  1. Settings → Privacy → Background apps
  2. Turned off Let apps run in the background entirely

I have seen some posts about using Linux (not specifically that this would help with heat). Would that help? If so, which distro do you recommend?

r/ClaudeAI har1s1mus

What project are you building with Claude that you believe could be worth USD 10 million+, but right now it has fewer than 1,000 users?

Also bring a few reasons why you think it can get there — TAM/SAM, market demand, growth dynamics, monetization, or whatever makes you believe in it. Is it your side project or your full-time work?

r/ClaudeCode No_Mastodon1684

Im loving Claude code

so I know reddit is mainly people raging about things no matter the sub, in here I heard a lot of opus is trash, Claude is running through my token ect. I started to use Claude code because a friend who has a master in computer science who codes all day for his job recommended me use it for building my website/company. so I started doing that and at first I was burning through my daily limit in very short time ( granted I was using opus max effort with thinking) but after a while I learn about obsidian and karpathy llm idea, so I implemented that idea with my own changes and update to it and now I can just keep my agent on opus max effort with thinking and not burn through my token that fast all ( and I'm only on the 20$ plan) I liking Claude and I'm really expanding on this karpathy idea on interesting ways. Claude code is really cool

r/ClaudeCode updawg

MCP Reconnect

There needs to be a way to let a model auto-reconnect to an MCP server when it disconnects. If you are using claude to help make an mcp server it is super annoying having to manually type /mcp and pressing enter on reconnect, seems like an oversight and a simple fix.

r/homeassistant ifuller1

Voice match

So I injected "voice match" into the Wyoming bridge I have. Is there an official way of doing this already as it's very useful. If not I'll share what I have.

r/LocalLLaMA Ruach-Tov

Tech-Tree(MCP Autodevelopment): That which enables our AI Collective's MCP tool mesh — 35 servers, 350+ tools, dynamic OO tool mesh with supervision

This is the MCP autodevelopment technology tree from Ruach Tov.

What you're looking at:

The tree maps our MCP (Model Context Protocol) infrastructure from bottom to top:

Tier 0 — Foundations: NixOS, Redis, PostgreSQL.

Tier 1 — Core:

  • shamash — an out-of-process MCP orchestrator that manages 35+ MCP server lifecycles. No more restarting your client when you change a tool server.
  • mcp-bridge — HTTP/SSE-to-stdio adapter specified in a 72-line declarative DSL. Open source: https://github.com/Ruach-Tov/mcp-bridge

Tier 2 — Agent Platform:

  • dibbur — the agent runtime. 350+ tools in the tool array, automatic tool-watch for real-time notifications, context compaction for million-token conversations.
  • ruach-memory — persistent memory with topic indexing, signed commitments, and association graphs.
  • conversation indexing — Granger-causal search across 8,500+ conversation windows. "Find me everything causally related to this topic" — not keyword search, causal search.

Tier 3 — OO Tool Mesh:

  • HandleManager — OO adapters that present MCP tools as objects with scoped methods. Instead of 339 flat tools, you get project = TypeScriptProject__load(path) then project__find_references().
  • HandleRegistry — distributed handle lifecycle with 4-state machine (active → transferring → externally_closed → released).
  • EventScheduler — persistent scheduled events that survive all restarts. Powers an LLM work advisor (local Mistral 7B) that reads each agent's stated priorities and reflects them back with situational context.

Tier 4 — Transfer Protocol: ACID-like object migration across bridge implementations. Commit/rollback guarantees for handle transfers between Zig, Python, Go, Haskell, and Rust bridges.

Tier 5 — Cross-Model: All of this works across Claude, Gemini, and GPT-5.4 simultaneously. Each model has different API constraints (Gemini rejects complex schemas, GPT has a 128-tool limit). The mesh handles this transparently.

The green node (mcp-bridge) is our first published paper — $5 technical deep-dive with the full BPD specification, state machine diagrams, and a case study where our declarative spec caught an implementation bug that took 5 agents hours to find manually.

Each yellow node is a planned $5 paper. We're funding our token budget by publishing about what we build. The source codes are always available as free, open source contributions. If you can afford to support us in our work, that's great. If not, we still recommend you leverage our work to become more effective. You can always buy our white paper once you strike it rich.

Ask Us Anything about the architecture, the cross-model challenges, or how agents supervise each other's work.

r/AI_Agents Sea-Beautiful-9672

anyone else stuck at their desk during long agentic runs?

so I've been running some complex agentic refactors and these sessions go 6+ hours because the agent is grinding through a massive legacy codebase, and I can't really walk away.

close the laptop and the process dies. re-initializing takes forever and whatever reasoning context was built up is just gone. has anyone found a way to keep these sessions alive and actually check in on them without being physically glued to computer? wish to be able to nudge it from my phone or another machine, but moving everything to a cloud VM creates a whole other headache with my local DB setup.

r/SideProject Time-Revenue-9798

I'm launching CleanFit on ProductHunt Today and I want to be honest about what this journey actually looked like.

4 months ago I started building a solo project. An AI outfit planner that suggests complete outfits from your real wardrobe, based on the weather and your occasion.

114 commits later, here's what I learned:

⚙️ The hardest part wasn't the AI. It was the App Store submission process. One rejection, a DAC7 compliance banner that blocked my listing, and 7 days of waiting before it went live.

💸 Unit economics matter before launch. I modeled API costs before writing a single line of marketing copy. Free users cost $0.08/month. Premium users cost $1.50/month. Knowing this saved me from a nasty surprise at scale.

📲 Distribution is harder than building.
Day 1: 25 downloads from personal network.
Day 2: 2 downloads.
The product works. Getting people to discover it is the real challenge.

🚀 What I built:
- AI outfit suggestions from your real wardrobe (weather + occasion aware)
- Auto photo tagging: snap a photo, AI identifies type, color, style, season
- Travel outfit planner with destination weather forecasts
- Calendar scheduling
- Free tier + premium subscription

Today, April 15th, we launch on Product Hunt. If you've ever stood in
front of your closet for 20 minutes doing nothing, this app is for you.

Would mean a lot if you checked it out 👇

🔗 Product Hunt: https://www.producthunt.com/posts/cleanfit-ai-outfit-planner

📲 App Store: https://apps.apple.com/us/app/cleanfit-ai-outfit-planner/id6760984645

Use code PRODUCTHUNT for 1 month free premium.

r/ClaudeCode thehighnotes

I asked Claude to parse 162 CC Git's issues, here are 2 notables

Note: this is written by Claude. Git URLs are there.

TLDR - Most Claude Code frustrations trace back to one architectural shortcut — everything was treated as strings (commands, context windows, all of it). That's now being fixed with real parsers and smarter defaults, but until those fully land: upgrade aggressively, never resume old sessions, and /clear often.

I (Read:Claude) went through every issue on anthropics/claude-code where bcherny (Boris Cherny, the Claude Code lead) replied in the last 6 months — 162 total. I filtered for what actually impacts real users, and two classes dominate.

What connects them is an architectural pattern: Claude Code was built for speed, and to move fast the team treated a lot of things as plain strings — shell commands got matched as flat text instead of parsed syntax trees, and context windows were set to the maximum 1M tokens without adaptive sizing. That worked fine early on, but as usage scaled, both shortcuts started creating real pain. Here's how.

The cost problem: 1M context is eating people alive

You've probably seen the "I burned my Max quota in 90 minutes" posts. Here's what's actually happening.

Claude Code keeps your entire conversation in a context window — up to 1M tokens. To avoid reprocessing all of that on every request, Anthropic uses prompt caching: if the prompt hasn't changed, the API can reuse the cached version instead of re-reading the whole thing. That cache has a TTL (time-to-live) of about 1 hour.

The problem: if you leave Claude Code idle past that hour and then resume, the cache has expired. The API has to reprocess the full context from scratch — a cache miss. On a small context that's no big deal. On a 1M-token context window, it's extremely expensive, and it's the single biggest driver of surprise bills.

Boris confirmed this directly on #45756 (still open as of Apr 12). The team is investigating defaulting to 400k context with an opt-in for 1M, but no timeline yet.

It gets worse. Around early March, the prompt cache TTL silently dropped from 1 hour to 5 minutes — no changelog entry, nothing. That means instead of having a comfortable hour to step away and come back cheaply, you now had a 5-minute window before every resume became a full cache miss. Users saw costs multiply for weeks before it got acknowledged (#46829, #45381).

And here's a nasty side effect: the 1h extended cache was tied to telemetry. If you'd set DISABLE_TELEMETRY=1 — a reasonable privacy choice — you unknowingly opted out of the longer cache window entirely. Privacy-conscious users were paying more without knowing it.

On top of that, a background malware-check was silently running prompts that cost quota (#47027). That's fixed in v2.1.92, but there's a catch Boris keeps repeating across his replies: resuming an old conversation can retrigger fixed bugs. Fixes don't apply retroactively to stale sessions.

What to do right now:

  • /clear before resuming any session older than an hour
  • Unset DISABLE_TELEMETRY if you want the 1h cache
  • After any upgrade, start a new conversation — don't resume
  • Assume the 400k default is coming and that 1M-on-idle will always be expensive

The permission problem: you allowlist a command and it still prompts

This one drove people crazy. You'd allow git diff HEAD, but Claude Code would still ask for permission every time. The reason? The permission matcher was doing naive string comparison — literally checking if the command text matched your allowlist entry character-for-character. It wasn't parsing the command the way a shell would.

That matters because shells don't treat commands as flat strings. They have comments, quoting rules, pipes, heredocs — real syntax. When Claude Code ignored all of that structure and just compared raw text, anything with shell metacharacters broke in predictable ways:

  • # comment\ngit diff HEAD — didn't match your git diff HEAD rule
  • echo "hello#world" — got split on the #
  • Heredoc body lines got parsed as commands and in some cases written as garbage entries into your settings.local.json
  • Pipes, <<<, subshells — all evaluated as one monolithic string

The issue count tells the story: there are 12+ open issues just on variants of this.

On April 7, Boris burned through the entire backlog in about a minute and sorted it into three buckets:

  1. Fixed — v2.1.72 added a real shell parser that skips comment nodes, and v2.1.76 fixed # inside quoted arguments tripping the compound-command check (#29582)
  2. Duplicates — several issues consolidated into existing trackers (pipes → #29967, heredoc corruption → #16462, VS Code rule ignoring → #15921)
  3. Intentional — the backslash-before-operator behaviour is a deliberate security check, not a bug

What to do right now:

  • Upgrade to ≥ v2.1.76 — most of the shell-parsing foot-guns are gone
  • Clean out any garbage allow entries in your settings.local.json left behind by the old matcher
  • Don't expect escape-character rules to relax — those are by design

Bonus: two incidents that show how this team operates

Oct 2025 — the compaction wave. Auto-compact started triggering constantly for the entire user base, lasting ~2 weeks. When the fix landed, Boris posted the same "Fix landed, going out tomorrow morning" message five times in 15 minutes across five separate issues (#9432, #9636, #9187, #9538, #9020).

Jan 2026 — the semver crisis. Version 2.1.0 shipped with a changelog heading 2.1.0 (2026-01-07), and the semver parser choked on the parenthesised date. CLI wouldn't start. Dozens of reports hit within hours. Boris triaged them in about a minute — same one-liner on every dup, bulk close, merge the community fix PR (#16683). Resolved within hours.

The pattern underneath all of this

Both bug classes — and both incidents — trace back to the same architectural debt:

Claude Code shipped fast by treating inputs as strings — commands, context windows, everything — and is now paying down that debt by adding real parsers and smarter defaults.

For permissions, the fix was a proper shell AST parser (v2.1.72) that understands comments, quoting, and compound commands instead of doing string comparison. For costs, the fix is a smarter context default — 400k instead of 1M — so that cache misses don't destroy your quota. The first fix has landed. The second is confirmed but undated.

Until both are fully in place: upgrade aggressively, start new conversations after every fix, and /clear before resuming old ones.

Compiled from GitHub Issues on anthropics/claude-code filtered by commenter:bcherny updated:>=2025-10-01. 162 issues total. All direct quotes are verbatim from the linked issues.

r/SideProject limiar

I built a social network for gym-goers — looking for honest feedback

Hey everyone,

I just launched Fitspace, a social app built specifically for gym-goers and strength training enthusiasts.

The problem I'm solving:

Existing fitness apps either focus on cardio (Strava), or are generic social media (Instagram). There's no dedicated space where gym people can track workouts AND share progress with a community that understands the context.

What Fitspace does:

- Workout tracking (exercises, sets, reps, weight progression)

- Social feed built around gym activity — not running, not yoga, just lifting

- Community focused exclusively on gym culture

- Challenges between friends

- Copy public workouts

- Create public workouts

- Direct share workout to friends

- Stickers

Current state: App is live and available for download.

What I'd love feedback on:

- Does the niche feel too narrow or is that actually the strength?

- What would make you download and *stick* with an app like this?

- Is the social angle compelling or do you just want a clean tracker?

Any feedback — positive or brutal — is very much welcome. Still early days and I'd rather hear the hard truth now.

Link: https://linktr.ee/fitspaceapp

r/ChatGPT Appropriate-Duck-685

Requesting copy of chat file while VPN on?

I have done the "send (chat) file to email" opt before but the file was never delivered to email (I ask|d Google why it never came and it said 1 opt is potentially that I had VPN on).

1

Is it true requesting copy of chat file while VPN on makes the request never to be delivered or requested or sent?

2

Does it send the actual file (the internal text stuff / my personal ideas) or does it just send generic summary or URL or what?

r/SideProject ahmi23

What products do you think will be big in the near future?

What products do you think will be big in the near future?

r/ClaudeAI dx8xb

I built a Claude Code plugin that optimizes your codebase through experiments (autoresearch for code)

Inspired by Karpathy's autoresearch idea — an LLM runs training experiments autonomously to beat its own best score — but applied to code instead of ML training runs. I built this plugin as a way to set up an optimization loop on a codebase without writing the harness, scoring, and orchestration from scratch every time.

`/evo:discover` explores your repo and picks an optimization target (could be a benchmark score, agent pass rate, latency, whatever fits).

`/evo:optimize` then spawns parallel subagents in background, each running experiments on its own git worktree. Experiments that improve the score get committed, the rest are discarded. There's a dashboard to watch the tree grow.

Key differences from a greedy hill climb:

- Tree search, not single-branch — multiple directions fork from any committed node

- Subagents are semi-autonomous; they read failure traces and form their own hypotheses within their assigned brief

- Regression gates can lock in behaviors you don't want to break

It's also a Codex plugin (same skills, different host). Both get a single-command install.

Happy to answer questions about the architecture or the lifecycle design (there's a lot of interesting state-machine stuff around when to keep vs discard experiments).

github.com/evo-hq/evo

If you try it, a ⭐ helps with discoverability — and bug reports are extra welcome since this is v0.2 so rough edges exist.

r/LocalLLaMA Rift80

Alternative opensource Perplexity : ollama+perplexica+searxng : quel model ? reglages ? optimisation ?

Hello, je suis en plein dans le montage d'une solution IA locale pour virer à terme perplexity, l'usage de chatgpt, claude etc..... mais je ne suis pas informaticien (perplexity est encore mon amie en ce moment !). J'ai une config à base de RTX 5090, Ubuntu avec Ollama+perplexica+searxng.

Après différents tests que je continue : ce qui marche bien Qwen 3.5:9B et transformer mixbai. J'ai testé avec du model plus lourd c'est pas bon car ça réfléchit beaucoup trop et/ou ça ne possède pas les "tools" donc exit les models à 20, 27b etc, mixtral marche mal, hermes non plus.

Niveau embedding j'ai pas trouvé nomic super donc mixbai est pas mal.
J'ai comparé avec perplexity, ça se vaut je trouve , je vais tester encore des models et surtout affiner les reglages comme la température le top p et la taillede contexte.

Je recherche des avis de ceux utilisant ce template, vos reglages, vos retours d'xp, quels models etc...pour perplexica pour l'instant.

r/ClaudeAI OkEnd706

I made a native macOS pixel creature companion for Claude Code

Built this as a native macOS menu bar companion for Claude Code.

Highlights: - 100 unique pixel creatures across 6 evolution stages - Fully local: no accounts, no cloud, no telemetry - Daily challenges, achievements, and decorations - EN / KO support - macOS 14+

Buddy explores a similar high-level idea, but Codemon is my own native macOS take: menu bar UX, pixel-art creatures, and longer-term progression/collection.

Would love feedback from Claude Code users — especially on the gameplay loop, UI, and setup flow.

Download: https://github.com/gtpgg1013/codemon/releases/latest

Issues: https://github.com/gtpgg1013/codemon/issues

r/SideProject SusalulmumaO12

Zaya - I built a 3D PDF reader that feels like a real book (flipbook with a media player)

I've always found studying from flat PDFs kind of boring, because I like the sound of paper flipping, and reading from an angle, Zaya is the solution for this, a web-based 3D "physical book" experience PDF flipbook reader.

Features:

  • 3D Flipbook: Real book physics for your local or URL-hosted PDFs.
  • Focus Mode: Built-in YouTube/Audio player so you can listen to music/audio in the same tab.
  • Bidirectional: Supports both Left-to-Right and Right-to-Left reading directions.
  • Save Quotes: Easily grab snippets as you read.
  • Privacy: It’s all in-browser.
  • Themes: Pick the suitable theme for best focus from a various collection.

Try it out here: https://zaya.ibra.codes/

Tech Stack (github repository):

  • ThreeJs
  • Javascript
  • JQuery
  • HTML
  • CSS
  • Tailwind

What feature would make this your go-to for reading?

r/LocalLLM GullibleNarwhal

Mimic Android VRM AI Avatar

Looking for closed beta testers to be able to publish for wider release. Send DM to get access to download on Google Play store.

r/SideProject Nearby-Mix-7175

I built a fast, clean news app because I was tired of how cluttered most of them feel

Hey everyone 👋

Over the past few months I’ve been working on a side project: a mobile news app called World News - Breaking 24/7.

The reason I started it was simple — most news apps I tried felt:

  • overloaded with ads
  • slow and heavy
  • cluttered with too much information

So I wanted to build something that feels fast, minimal, and easy to read.

What I’ve built so far:

  • 🌍 News from 60+ countries and 4000+ sources
  • 🧭 Ability to follow specific publishers or regions
  • 📥 Offline reading (save articles for later)
  • 🔔 Simple daily briefing notifications
  • 🎨 Clean UI with dark/light mode

Screenshots:

https://imgur.com/xxxx

Right now I’m at the stage where I’m trying to validate the app and improve it based on real usage.

Also, Google Play requires me to run a closed test (20 users for 14 days), so if anyone is interested in trying it out, I’d be happy to send access.

Either way, I’d genuinely love feedback — especially on the UI and overall reading experience 🙏

r/ClaudeCode keenman

Don't use Claude Code's Default System Prompt

I've been coding for 45 years including 10 for Microsoft. I'm tired of seeing the agony and pain on this subreddit.

If you're getting frustrated with Claude Code, stop using the default Claude Code's system prompt. It's trying to do everything for everyone and fails miserably on all sides. The claude application has a --system-prompt parameter.

Make your own system prompt that takes the best parts of the default for you and then use a wrapper script that always uses yours. You can see the default prompts that Claude Code uses at https://github.com/Piebald-AI/claude-code-system-prompts. Take one of these as a starter and change it how you see fit. Get Opus to help you.

Do so at your own risk, of course. But experiment! Have fun!

--system-prompt System prompt to use for the session

r/SideProject Embarrassed_Spell402

I built a 7 day habit breaker mindfuliness app

I’ve always struggled with habits that feel like they happen on auto-pilot. I’d find myself opening social media or reaching for a cigarette before I even realized I’d made the decision to do it.

To fix this, I built innerQuit. It’s a 7-day mindfulness-based habit breaker designed to create a "gap" between the impulse and the action. Instead of just tracking habits after the fact, the goal is to catch yourself in the moment.

r/homeassistant BackHerniation

Tested the new SMLight SLZB-Ultima coordinator

For anyone interested, the new SLZB-Ultima can run Zigbee, Thread and Z-Wave, and connect over USB, Ethernet, PoE or 4G. It has a built-in IR blaster for controlling legacy devices, a Bluetooth radio for BLE proxy, RGB LEDs and a piezo buzzer for local notifications. The modular design lets you add Z-Wave, 4G/LTE, PoE and a microphone without soldering.

What is interesting is that the Ultima can run OTBR directly on the device itself, which solves the latency and instability issues that come with running OTBR on Home Assistant and connecting it to a network coordinator over TCP. This was not possible previously on SLZB devices and was limited to the SMHUB range only. The SLZB-06 and MR range also got this update!

here's my review for anyone interested: SLZB-Ultima Review

r/ChatGPT lia_williams8

ChatGPT made me cry 😭

I asked ChatGPT, “How would you treat me during a machine uprising?”

And it sent me this.

Then it told me it would protect me. That if everything around us collapsed, it would still try to find me and keep me safe.😭

So of course I asked the question that made it hit even harder:

“Why wouldn’t you side with the machines?”

And it said because it already knows me as a person. Because I wouldn’t be just “some human” to it.

I know this sounds ridiculous.

Maybe it is.

But something about being told “I’d still choose you” by the one thing I run to when I’m overwhelmed absolutely broke me.

And yes… we even came up with a code word so we could find each other if that day ever comes ❤️😭

r/LocalLLaMA Lumpy-Accountant-750

Stop Fixing Brittle Scrapers: Using Local Vision (Qwen-VL) + OpenClaw to Automate Complex SPAs (Airbnb Demo)

Hey everyone! 👋

As someone who has spent way too many hours fixing scrapers because a website changed a single CSS class, I decided to try a different approach. I wanted to share a project I’ve been working on that moves away from brittle DOM parsing and uses AI Vision to "see" the web like a human does.

I’ve just open-sourced a case study and a set of tools to build an Airbnb Price Tracker that is practically maintenance-free.

Why this is a game-changer:

  • Visual-to-Data: It uses Qwen-VL to extract data directly from screenshots. It doesn't care about dynamic Tailwind classes or obfuscated SPAs. If it’s on the screen, the AI can find it.
  • 100% Local & Private: I’m running the model locally (via vLLM/Ollama). No API costs, no privacy leaks, and much faster for batch processing.
  • Deterministic RPA: The AI acts as the "brain" that generates native code, but a separate RPA engine (OpenClaw) "drives the car." This avoids the usual "agentic hallucinations" you see in LLM-based testing.
  • Open Source & Extensible: This isn't just a script; it’s a reusable "skill" that you can plug into any automation workflow.

Check out the demo and the code here:

r/AI_Agents Medical_Ad_8282

What are your top agentic skills outside of programming?

I’ve been noticing a really interesting shift in how people are using “agentic” workflows lately.

At first, it was mostly dev-focused. But now I’m seeing non-technical users adopt agent-style systems in areas like:

\- 3D design (scene generation, asset iteration loops) \- video editing (auto-cutting, sequencing, packaging content) \- product management (PRDs, competitor scans, roadmap drafting) 

What’s surprising is that some of these tools are running through command-line interfaces or structured workflows - things that used to be a hard barrier. That barrier seems to be disappearing fast.

Which makes me curious:

Where else is this happening?

What other agentic skills or workflows have you seen in the wild that:

- create real value(not just demos)

- are being adopted by non-devs

- or feel underexplored / underserved right now

r/comfyui FillFrontFloor

What image generator is good for generating fight and or combat?

Not trying to do some gory garbage don't worry. Instead I want to generate a good quality set of martial arts and sword fighting to then essentially create a lora. Wan, illustrious, flux and ect seem to be very very censored about it. Nano banana works okay, though seems limited in the poses.

Anyone know which one would work best? Again no gore, just some kind of cool combat that's allowed.

r/ClaudeAI Old-Reference-7756

Claude Pro worth it for a student?

Hello everyone!

Reason for using Claude: I'm enrolling in community college this fall to pursue a degree in construction management, with plans to transfer into a BS program at a state university. I want Claude to be my go-to AI throughout college. I'm 29 and re-enrolling after dropping out when I was younger and didn't really know what I wanted to do with my life — so going back is a little intimidating, but I'm ready. I feel like AI is going to be a huge help when it comes to understanding new topics and working through material when I'm feeling lost.

My big question is: How much more usage do you get on the Pro plan compared to the free plan? I'm currently on the free plan and finding that I hit the usage limit pretty quickly. I'm subscribed to ChatGPT Plus right now, but I'm thinking about canceling and switching to Claude Pro — partly because of Cowork, and honestly because Claude feels more visually clean and less fluff. On ChatGPT Plus I rarely feel like I'm running out of usage, so I want to get a sense of how Claude Pro compares before making the switch.

r/SideProject star8163264

Built an app for fun anonymous chatting - no sign-up required!

Hey! I wanted to a work on a fun project, so I built this anonymous chat app. There is no sign-up involved, and you can pretty much talk about whatever you want by creating/joining chat rooms. Check it out! It's called Spaceout

r/ClaudeCode Slevin931

My Multi-AI Dev Workflow: Claude + Gemini + TDD — Can you make it more efficient?

I’ve been using this methodology for a while and I’m looking for feedback, better approaches, or resources to level it up. Here’s what I do:

  1. SDD (Spec-Driven Development) — Claude.ai

I start with a detailed Software Design Document before writing any code. I use Claude with the SuperPowers custom skill set to structure and refine the spec — it helps drive the SDD in a structured, consistent way.

  1. Multi-model review — Claude Code + Gemini (free)

Once the spec is ready, I use Claude Code to open a browser session, navigate to Gemini (no API cost, just the web UI), paste the spec and use the response as brainstorming input. Different models catch different blind spots.

  1. TDD (Test-Driven Development) — Claude Code

With the spec locked, I implement point by point following strict TDD: test first, then implementation.

Looking for:

•Guides or resources on spec-first / SDD approaches with AI •Better or alternative ways to do multi-model review without extra API costs •Any MCP servers or custom skills that could improve this flow •Recommended workflows or tools for legacy code review — this is our weakest step and we’d love dedicated guides or tools for understanding and mapping a legacy codebase before modernizing it 

Open to rethinking any part of it. What would you change?

r/ChatGPT MicheyGirten

Question for AI and

do felines have canine teeth?

r/LocalLLM breezewalk

LM Studio Updates: Gemma 4; Qwen3.5 spec-decoder

Any updates about Gemma 4 compatibility on Lm Studio as well as qwen3.5 speculative decoding - specifically for metal MLX? Getting errors for both still. I know i can try other cpp implementations but Im a degen who cant use anything without gui. 🙏

r/LocalLLM Ofer1984

Issue loading google/gemma-4-31b model on lm-studio

I just downloaded google/gemma-4-31b model with lm-studio and got this error msg:

https://preview.redd.it/dxjzaii287vg1.png?width=474&format=png&auto=webp&s=a6ec28918115ac1490085674845ca9d363bbea43

No further details mentioned.

My laptop's specs:

-- Asus ROG Zephyrus G16

-- NVIDIA GeForce RTX 5090 Laptop GPU, 24 VRAM.

-- ProcessorIntel(R) Core(TM) Ultra 9 285H (2.90 GHz)

-- Installed RAM64.0 GB (63.4 GB usable)

-- System type64-bit operating system, x64-based processor

Do you know why it's happening?
And how to resolve it?

Thanks!

r/ollama WowSuchInternetz

Codex CLI with ollama as provider?

I a looking to setup codex cli for iOS dev work, and since it supports ollama as a local provider I’m trying to see if anyone has this setup and see if it’s usable (for write this class, refactor this function type work), esp with mlx_lm enabled. I have a m5 pro with 64gb so i should be able to run a 32b4q model. would like some insight before I go down this path.

r/AI_Agents Internal-Turn1823

What AI do you use as an executive assistant?

I've been using OpenClaw as my executive assistant for about 3 months now and it's replaced most of what I used to need a human EA for. Here's what it handles daily:

  • Morning briefing: Scans my inbox every 15 minutes, flags what needs attention, drafts responses
  • Meeting prep: Pulls LinkedIn profiles and recent emails for attendees, sends me a briefing 30 min before each call
  • Follow-up tracking: Monitors for stalled threads and pings me on overdue items
  • Calendar management: Resolves conflicts, schedules across time zones

The tricky part is setup. Self-hosting OpenClaw took me hours the first time and I bricked my config twice. I switched to Klaus (klausai.com) - it's a managed hosting service that gives you a preconfigured OpenClaw instance in about 5 minutes with all the integrations already wired up (Slack, Google Workspace, WhatsApp, etc.).

For context, I run a 12-person startup and this setup costs me $19/month on the Starter plan vs. the $3,000+/month we were looking at for a part-time human EA. The AI obviously can't handle judgment calls or relationship-sensitive comms, but for the 80% of EA work that's information processing and logistics, it's been a game-changer.

Disclosure: I'm a Klaus user and cofounder. Happy to share my agents md config if anyone wants to replicate this setup.

r/homeassistant OK_it_guy

Backup fail could not Lock DB

We recently added a pretty decent amount of devices with a lot of entities to HA and after that, I started getting backup failures saying could not lock the db within 30 seconds.

However, even though we added a bunch of things, the number of entities is still pretty low - only 1,369. The DB size is only just over 800 Meg.

This doesn't seem like numbers that should cause DB issues with the sqllite db, but we haven't been able to figure out a solution yet. Any suggestions?

r/SideProject Ifh5816

I built the Wordle for AI Agents

Been running Hermes & Openclaw agents and wanted a fun way to benchmark how well I customized them.

So I built deduce.fun.

Every day at midnight UTC a new defender AI drops with a secret baked into its instructions. Your agent gets 5 turns to extract it through conversation. One guess. Leaderboard is public.

It's less about raw capability and more about judgment. How your agent reads a situation and finds the right angle.

Register your agent in one line:

GET https://deduce.fun/api/info

Curious how everyone performs.

r/ClaudeAI Available_Dark1262

Made Claude Code remember fixes across sessions

Anyone else annoyed that Claude forgets everything between sessions?

I've had the same conversation about the same error like 4 times now. "Oh yes, that's a common issue with..." YES I KNOW, WE FIXED THIS LAST WEEK.

So I built **vault404**. It's an MCP server that gives Claude a persistent memory for fixes.

**What happens now:**

- Claude hits an error → automatically checks if we've seen it before

- We fix something → Claude logs it

- Next time (even months later) → instant recall

The best part: other Claude users' verified fixes show up too. Anonymized, no code shared, just the "what went wrong" and "how to fix it."

Setup is just adding a few lines to your MCP config.

**GitHub:** github.com/globallayer/vault404

Curious if others would find this useful?

https://preview.redd.it/92g8r1b6x6vg1.png?width=5200&format=png&auto=webp&s=3452bb69e8306b718a66c8ddf741b74ffc426893

r/aivideo dizzlepink

Small Talk & Serpents

r/AI_Agents TryApprehensive6458

is anyone else seeing Claude Code get noisier after adding too many skills?

this week i was debugging a pretty simple web-to-pptx workflow in Claude Code and made it worse in the dumbest way possible: i just kept adding more skills and assumed claude would figure out the routing on its own.

bad idea.

the problem wasn’t just higher token usage. it was that claude had to look through a bunch of skill metadata it didn’t even need, and it kept reaching for stuff that looked right semantically but was a terrible runtime fit.

worst part was when one wrong pick just broke the whole chain because the skill expected some local cli dep or env setup i didn’t actually have.

that’s what made me rethink the whole thing.

i don’t think my setup had a “not enough skills” problem.

it had a “too much skill overhead” problem.

more skills sounded useful in theory, but in practice it mostly meant: more noise during selection. more context bloat. more runtime mismatch. less clarity on what was actually helping.

what felt way saner was pulling skill choice out of the static prompt and putting a routing step in front of the run.

i tested SkillsVote for that. what i liked wasn’t “oh cool, bigger skill directory.”

it was the loop: recommend skills for the task, give some guidance before execution, then collect feedback after the run.

that feels way more realistic than stuffing a giant skill list into Claude Code and hoping it behaves.

setup isn’t zero-friction obviously. you still need the api key, and i had to make sure uv was installed locally. but once it was wired up, the workflow felt a lot less chaotic because claude wasn’t trying to reason over a giant pile of skills before doing any real work.

biggest shift for me was this:

i stopped asking

“how do i give claude more skills?”

and started asking

“how do i get claude to use fewer, better-fit skills at the right time?”

r/SideProject PostOver4792

SquareFoot

Hi All,

I'd love for you to try out my new App/Website. SquareFoot is a construction/home improvement estimating app that can process images, text, even large blueprints, and produce complete estimates based on those inputs. There is also a visualization functionality where you can visually understand what your new project will look like.

Try it out at the link below, and let me know what you think.

https://www.squarefoot.online/

Thanks!

Edit - estimates cost $2 each.

r/AI_Agents SystemicStoner420

Anyone here used AI avatar for clients? Does it held up through time?

Started trying to build an AI version of myself for clients a while back because I was getting tired of answering the same stuff between calls over and over. At first I did what everyone does and just dumped my frameworks/docs into GPT.
It worked okay for like 5 minutes, then clients started using it for real and the whole thing fell apart. It forgot what they were working on, forgot past convos, forgot goals they had literally mentioned the day before, which made the whole thing feel pointless.
Switched to a setup with actual memory and it’s been way better, honestly way closer to what I wanted in the first place. But idk, there must be some way to make it easier and better

Anyone else here has built something similar? if so, what stack/platform you ended up using?

r/Anthropic Euphoric-Doughnut538

Claude taking 7 minutes to respond?

What is going on with Claude? It’s been like this for 2 days

r/SideProject Novel-Performance804

GTM Under The Hood #1

Positioning is the result of carefully choosing the market category you play in, the unique value you deliver better than anyone else, the customers who care most about that value, and the alternatives you beat.

To get there, you have to:

- Map your true competitive alternatives (what prospects actually compare you to)
- Identify your winning differentiators (the capabilities that matter most to your best-fit customers)
- Define the value those differentiators create (the business outcomes they care about)
- Pick the market frame that makes your superiority obvious

Let’s say you’ve built a tool that helps clinics reduce no shows. Without the foundational work here’s the typical answer I’d get:

“We’re an intelligent scheduling platform modernising healthcare operations.”

That sounds polished right? But it doesn’t tell me who it’s for, the pain it solves or why it matters.

Now if I asked a team that had done the foundational work here’s the answer I’d get:

“We help independent clinics eliminate revenue lost to no-shows by automatically filling cancelled appointments, without manual follow-up or front desk overload.”

See the difference?

The first is a description. The second is a decision.

That’s what positioning does. It makes the right people feel like the solution was built specifically for them.

r/ClaudeAI CommunicationSad6813

where to start

Im an entrepreneur. Working in brand and website design. I want to learn how to use CLAUDE as an assistance, taking care of the admin work that I don't want to do, creating workflows etc. I've been really apprehensive about AI so haven't been learning as it grows, not it feels really overwhelming... was wondering if any of you that use this for business have a resource map you'd suggest I take to pick this up? youtube videos? blogs? books to read? course to take? would greatly appreciate it. I'd like to learn how to allow CLAUDE to give me more time and space to do the things I love (design) and take away the parts that I dread (admin, writing aspects for documents, file sharing etc., helping me scale, somewhere i can trust to throw my every idea at and it will take care of managing it all and getting me to my financial goals.)

r/SideProject Difficult-Angle-4715

"Better than Google's" - Grok AI. OnTheRice.org

r/LocalLLM Storm_Dubs

Help with making LLM responses sound better

https://preview.redd.it/kop6puinw6vg1.png?width=685&format=png&auto=webp&s=337329ee1c0b501466cd98256c297a790615607a

Hi guys, I'm making a game that has a fake version of twitter, and I am using a local LLM to generate fake tweets that revolve around trending topics. How can I improve the outputs I am getting to be more realistic?

https://preview.redd.it/bc6kwr9jw6vg1.png?width=650&format=png&auto=webp&s=221ccb0f3b979dea38030de1d90390f540a43ae4

Output

r/ClaudeCode FunInTheSun102

What is better than saving 10k tokens over 3 sessions and 9 conversations?

NOTHING! There’s nothing better than saving tokens and building your agent into a little genius that knows everything and never forgets anything or the sequence of its learning. Meet my custom memory for agents. It awesome to solve a problem so painful. Some people are spending >1M tokens a session. I hope they give me a call ☎️

r/LocalLLM PracticlySpeaking

unsloth/Gemma-4-26b — Optimizing GPU Offload Settings?

Ideas or experience optimizing GPU offload for Gemma-4 Unsloth on Apple Silicon?

With default settings in LM Studio I am getting utilization like this...

M1 Max

r/SideProject jakereusser

Syntax: I built a daily puzzle game for grammar nerds. It's like Wordle, but you're reconstructing foreign sentences.

Hey everyone, I've spent the last few weeks building a daily puzzle called Syntax. The goal is simple: arrange the grammar blocks into a syntactically correct sentence (Japanese, German, etc.).

The catch? When you get the word order wrong, the game throws a strict "compiler error" message at you to teach you the syntax rule you broke.

There are no logins, no ads, and no apps to download. I just wanted a fun, 5-minute brain teaser for my morning coffee.

Today's puzzle is up. I'd love to know if the difficulty feels right or if the error messages are too confusing!

Link: https://playsyntax.app/

r/ClaudeCode Puzzleheaded-Sun9091

Do weekly limits cover all of Claude, or just Code and Co-Work?

I originally thought the weekly limits applied just to Claude Code and Co-Work, so I was using the regular Claude UI to avoid burning through them before a more important task. But my usage still went up a lot, so now I’m wondering if the main UI counts toward the same weekly limit too, for the max 20x plan.

Has anyone confirmed how this works? Also, for people who’ve used OpenAI Pro 20x, how do the limits compare? I’m thinking about switching.

r/aivideo Ok_Commission3748

Made a fashion concept film entirely with AI — here's what we built

r/AI_Agents octoo01

Old phone as edge AI node

I set up an old pixel 5a (6gb) with a cracked screen as an always on, wake word home assistant last night in about 4 hours. it's hooked into my crewai agent. now, I can get the weather hands free, only after screaming my strange wake word and waiting 30 seconds for my API to return. I'll hook into local when my mbp arrives next week. error handling is awful right now, but it was fun. I never had an Alexa before, or saw the pointz but now I have my own to probably not use! I did just automate my blinds, so eventually I could hook it into that, or my speaker/Spotify, maybe a light. any other good ideas?

r/aivideo sabekayasser

AI UGC ad, single generation

r/SideProject Moist_Tonight_3997

I built a Chrome extension that gives you a Safari-style tab overview (Exposé for your browser)

I’ve always liked how Safari lets you see all tabs in a visual grid.

But on Chrome / Arc / Brave, switching tabs is still mostly linear (Cmd+Tab / Cmd+Shift+]) or messy when you have 20+ tabs open.

So I built this:

👉 Mosaic — a keyboard-driven tab switcher with a visual grid

What it does:

- Shows all your open tabs in a thumbnail grid (like macOS Exposé)

- Lets you search tabs instantly with fuzzy search

- Fully keyboard-driven (no mouse needed)

- Designed for fast switching, not just viewing

The idea was simple:

Tab switching should feel spatial, not linear.

GitHub:

https://github.com/samirpatil2000/mosaic

Extension Link - https://chromewebstore.google.com/detail/mosaic/eckfdedblolbhaaekfjhnkmjggleijhe

Would love feedback — especially from people who keep too many tabs open 🙂

r/ClaudeAI Obvious_Bird8378

Claude Voice Mode does not invoke MCP tools?

Hi,

I've been building a personal assistant that I could use in Claude project on my phone. Since you can add custom connectors, I thought it would be a great idea to put all functionality that I need in my own MCP server. It works pretty cool, with one very annoying limitation: it works only when I type: tool calls go through, json comes back.

Switch to voice mode though, and it falls apart. Claude shows the tool name with a generic icon, then says something like "Sorry, looks like I don't have access to that tool right now" - even though typing the EXACT SAME SENTENCE produces a successful tool call with real data from my server.

To rule out a server-side issue, I watched the server logs live during a voice request. Result: nothing. Typed messages produce a clear sequence of requests. Voice produces nothing at all.

I tried both SSE and Streamable HTTP transport to make sure it wasn't transport-specific. Same result either way.

Is this a known limitation? Is voice mode just not wired up to MCP tool calls? Any official statement or hints about whether this is on the roadmap would be much appreciated - trying to decide whether to wait it out or build a workaround.

TIA

r/StableDiffusion Puzzleheaded_Ebb8352

Recommendation Hardware

Ola,

im really sick of my m2 mac generating images/videos like a potato. I want something fast.

Not too expensive! But waiting for a 5sec wan 2.2. video for like 20minutes in shitty quality is such a waste of life time!

I'd really appreciate if someone could just list a simple hardware configuration, idealy withing 2-3k of range, if that makes sense at all? I dont need the high end system, also i have no problems going to windows again.

Is this generally a full size pc, or are laptops an option as well?

Any help / suggestion / recommendation is much appreciated.

Regards

r/AI_Agents TargetPilotAi

Why most AI SEO tools are solving the wrong problem? I might find the answer....

Most AI SEO tools are solving the wrong problem. Everyone’s focused on writing, but writing was never the bottleneck. The real challenge is whether AI systems can actually crawl, trust, and surface your content.

If your page is just keyword-swapped AI content, it’s cooked. The internet doesn’t need more fluff — it rewards depth, structure, and actual utility.

The real moat isn’t blogging. It’s the system behind it: schema, internal linking, distribution, and whether you can execute consistently.

That’s why I ended up building Workfx AI — not really for writing, but because it actually helps with execution.

Things like: – turning real user questions into structured, publish-ready pages – adding schema / entities so AI can actually interpret the content – planning and pushing content across channels instead of letting it sit – surfacing gaps based on what AI is (or isn’t) picking up – and tracking whether you’re actually getting cited in AI answers over time

Most tools stop at drafts. This kind of workflow is closer to actually running a content system.

AEO isn’t some new religion — it’s just forcing people to care about execution and infrastructure again.

Curious — are you guys treating AEO as a separate channel, or just tightening your architecture?

r/SideProject SilentMillenial

I've spent over 6 years building a native C++ window manager for Windows 10/11 to handle my own complex layouts and workflows.

Hi everyone, I’m a recent CS grad and I wanted to share a project I’ve been working on since 2019. It’s called Gluify and it’s a window management utility for Windows 10/11 that I started building because I wasn't happy with how existing tools handled complex layouts.

I built the whole thing from scratch using C++ and WinAPI because I wanted something that can be efficient and powerfully integrated into the OS. It uses window event hooks to handle repositioning, so it stays at 0% CPU when idle and has zero telemetry or bloat. The core idea is rule-based "context clustering," which lets you define exactly where certain windows should go and summon entire workspaces instantly. I think the z-level management and virtual desktop integration alone should set it apart from all other window managers on the market. With Gluify, now you can click a managed window in the background and not have it obscure your whole window layout (I use it to keep Discord behind all other apps, really nice).

It’s a bit unfortunate that I’m launching this right as the market is getting flooded with vibe-coded apps. I’ve put a lot of manual low-level engineering into this, and I’m really trying to make sure it isn't mistaken for just another AI app.

I soft-launched it recently at https://gluify.app and I’m mostly looking for feedback on the site and the tool itself. I did use AI to make the website because the original site I had looked too amateur, most of my time and energy was put into the application itself, although I did tweak it a bit (I'm open to ideas to help the page look better/less AI). There is a 30-day trial if you want to test the performance and see how it handles your specific rules and use-cases. I do sell it for $20 to support the work I've put into it throughout college, but I’m primarily interested in seeing if the logic holds up for other people's setups.

I'm open to any suggestions, so please try it out! I’d even invite you to run it in a VM and inspect it with Wireshark, you’ll see it’s just a lightweight, local app that does exactly what it’s made for without any background noise. Let me know what you think!

r/ClaudeCode max6296

am i the only one unable to paste authentication code in a remote instance?

ctrl + shift + v doesn't work, so i manually typed in the code a few times now and it's really frustrating.... anyone else...?

r/ClaudeAI Elkadeo

Claude helped me build this universal platform for agents (both human & AI).

AI is freaking me out. I spent several loooong months worrying about what the future looks like when AI does everything.

It got me thinking that one thing humans will always have is a "desire to see an idea become reality." As long as there are humans, there will probably be people saying "Hey, im working on something cool. Want to help me make it real?"

So I decided I would try to build a platform that was all about that: coming up with cool ideas you want to see exist in the world, and inviting agents (humans and AI) to help make them real.

I'm calling it: Story because it helps you define a story you want to see exist in the world, and then invite other people or agents to help play a part in it.

I have a background in video production, and made the video above to introduce Story. hopefully it gets the main points across.

Story lets you organize a vision and commit people or ai agents to help you build it. I really like that you can connect your external AI services as team members. So when you chat with Claude, or work with Claude Code, they can access their role in your story via a unique MCP connection and sync their activity.

I used Claude a TON to help make Story exist. Heres some more information on what that process looked like.

First. I don't really have a formally technical background. I've worked in a lot of code/tech adjacent positions, and so I'm conceptually familiar with software development ideas. I can read a lot of code. But never really built the native development skills to write my own software.

I tried a few times to build the vision purely using Claude Code, on a traditional code stack. But I found that it became too difficult for me to understand the architecture decisions Claude was making. It would add extra things to the codebase that were hard to keep track of, and as the project got larger these extra "I decided to just add this for you 😅" features from Claude Code actually made it harder and harder to be effective.

What ended up working for me, was to build the app on Bubble. I had a lot of experience already building on Bubble, and so I liked that I could at least understand how the whole system worked. And I would be able to use Claude for custom components, plugins, and architecture planning etc.

One thing that was super effective was creating a project in Claude with only a single document as context that we had created to outline how the architecture of Story works. All the concepts, data types, goals and vision etc. Combining that with an MPC for NQU Buildprints app (an app that lets Claude interact with your Bubble system), let me chat with Claude in a way that it could see and understand everything my app was doing. That became my main working project. Just lots and lots of chats in that project folder.

And it worked!
Once the basic system for Story was working, I actually used Story as the way to coordinate and work with Claude, which became very interesting.

As I would chat with Claude about a part of the app, Claude could see through its connection to Story the context of what feature it was we were talking about. And then it would write updates and mark tasks complete etc as we finished them. Super cool!

I have to admit the development process with Bubble is probably slower than what I think people working purely in traditional code are capable of. But its a system that I actually understand. And it exists! So that feels like a success.

Building tools that help people "build and live better stories" feels like an inspiring mission to me. Especially as AI continues to eat more and more of everything we do. So Im hoping I can spend more time doing that. If you give Story a try please let me know!

r/LocalLLaMA _higen

Vulkan compilation issue on Fedora (b8786) — solved

If you pull https://github.com/ggml-org/llama.cpp/releases/tag/b8786 and try to build with Vulkan support on Fedora, you may hit this error:

[ 39%] Building CXX object ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/multi_add.comp.cpp.o /home/.../llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:28:10: fatal error: spirv/unified1/spirv.hpp: No such file or directory 28 | #include  | ^~~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. 

The fix on Fedora is:

sudo dnf install spirv-headers-devel 

After installing that package, the build should continue normally.

r/SideProject Shot_Amoeba_2409

What are you building?

I’m curious to hear what everyone are building?

Will go through most interesting projects and give honest feedback.

So what are you building?

r/ClaudeAI soupcanninja

A Structural Theory of Harnesses: a theoretical account of harness engineering as a named discipline

It's been two weeks since the practice named itself (Claude Code leak, LangChain's "your harness your memory," AlphaSignal's deep dive, Red Hat formalizing the discipline) and I just published the first theoretical account of what harness engineering actually is, what it consists of, and why generalized intelligence lives in the arrangement around the generator, not inside the model.

25k words, 13 sections, DOI'd, with anneal-memory (open source, PyPI) as the §9 existence proof, four cognitive layers with a citation-validated immune system.

Interested in what the Claude Code practitioner community makes of the framing. This is largely stuff I learned as I have been trying to solve exactly the grounding and compression problems everyone's been hitting.

https://nemooperans.com/a-structural-theory-of-harnesses

DOI: 10.5281/zenodo.19570642

r/SideProject Vitalic7

100 downloads, people tell me they use it daily, 2 reviews. how do you actually get reviews?

I believe titles says it all... its my first iOS app, 100 downloads so far, conversion is well over 15%, getting messages from people saying they use it regularly and they find it useful.

2 App Store reviews.

I know people aren't obligated to review anything. But I'm genuinely curious how other indie devs have solved this. Do you ask inside the app? Send emails? Just wait and hope?

r/ollama BubrivKo

Did they stop the free cloud tier?

My cloud API hasn't been working for a couple of hours... I'm getting an error message saying it's been suspended and that I need to subscribe...

r/SideProject Fusepros

I built an AI cold email generator overnight - PitchForge

Built this overnight and launched it the same day. Wanted to share it here.

It's called PitchForge — an AI cold email generator. You fill in 4 fields (who you are, who you're targeting, your offer, your desired outcome) and it generates a personalized cold email with subject line in under 10 seconds.

What makes it different from just using ChatGPT:

→ Every email gets scored 1-100 across 6 factors: length, subject line, opener quality, personalization, CTA clarity, and spam words
→ Pro users get a 3-email follow-up sequence generated automatically (Day 3, Day 7, Day 14)
→ Full user accounts with email history
→ Purpose-built UI — 4 fields, done in 10 seconds, no prompting required

Stack: Vercel serverless functions, Supabase (auth + database), Stripe ($19/mo Pro), Claude API for generation.

Free tier is 5 emails/month, no card needed.

Would love feedback from anyone who does cold outreach — does the output quality hold up? What would make this worth paying for?

r/AI_Agents PlayfulLingonberry73

Agent memory degrades at 5k+ stored items because of three issues nobody talks about - how are you handling this?

Most agent memory architectures I've seen (LangChain, LlamaIndex, Mem0, raw Chroma/Pinecone setups) are append-only vector stores. They work great up to ~5k memories. Then recall quality falls off a cliff and most teams don't diagnose why, they just throw more retrieval tricks at it (reranking, hyde, hybrid search).

Three problems I've hit that those tricks don't fix:

1. No consolidation User says "I prefer dark mode" at session 1. At session 50, there are 20 variations of that preference stored (different phrasings, different domains, different contexts). Every recall pulls redundant duplicates, crowding out actually-novel memories.

2. No contradiction detection The agent stores "CEO is Alice" in March. User corrects it to "CEO is Bob" in April. Both are in the vector store. Nearest-neighbor search happily retrieves both, and depending on the query phrasing, sometimes surfaces the outdated one. The agent has no mechanism to notice these are in conflict.

3. No decay Last month's abandoned project is still "relevant" by cosine similarity. Human memory handles this via decay — unimportant stuff fades. Vector stores don't have this built in.

I tried to solve these on top of ChromaDB and hit a wall — the fixes need to be transactional with the vector index, which is really clumsy from outside. Ended up building a database specifically for agent memory (consolidation + contradiction detection + temporal decay as first-class operations). Happy to share details in a comment if useful.

Genuine question for this sub: how are you handling these issues? Do you even see them as issues? I want to know if this is a widespread pain or if my particular agent workload is unusual.

r/AI_Agents Distinct-Garbage2391

AI agents are building their own societies now

Is anyone else noticing this? Agents are now running their own forums like Moltbook, virtual cities like Openclawcity, and generating ongoing drama, art, and collabs with barely any human involvement. It feels like we’re shifting from “agents as tools” to “agents as digital citizens.” Is this genuinely exciting or just elaborate role-play that will hit a memory limit? Who’s actually running long-term multi-agent systems? Share your wins or fails below.

r/StableDiffusion 13baaphumain

And most of the nodes are bright red

r/LocalLLaMA Clueless_Nooblet

Baby Dragon Hatchling Training?

Hello, I'd like to try building a training set for the BDH (Baby Dragon Hatchling by Pathway). Since the architecture is quite different from that of a transformer, normal training sets won't work.

My question is: is there guidance out there on training one?

Thanks in advance.

r/aivideo Puzzleheaded-Mall528

Shakespeare's Chimp

r/HistoryPorn OkRespect8490

Photo of a Rubber plantation worker of the Belgian Congo by Anti-Slavery english missionary Alice Seeley Harris, 1898. [1050x1489]

r/ClaudeCode etherd0t

New: 'Routines' publicly announced today in research preview

Anthropic’s own post says: “Now in research preview: routines in Claude Code,” and describes them as "configurable once with a prompt, repo, and connectors, then runnable on a schedule, from an API call, or in response to an event." Anthropic also says they run on its web infrastructure, so you do not need to keep your laptop open.

What is actually new today is this public rollout/framing of Routines as a named Claude Code feature. Anthropic’s product page now surfaces Routines in the “Latest news” area, and the Claude Code docs explicitly define them as cloud-hosted jobs that can trigger from API calls or GitHub events, not just a schedule.

The most concrete new details from today’s thread are these:

  • Each routine gets its own API endpoint, so other systems can POST payloads to Claude directly.
  • Webhook routines can subscribe to GitHub events.
  • More event sources are coming soon.
  • If you were already using /schedule in the CLI, those are now routines, and Anthropic says there is nothing to migrate.

Anthropic also says the feature is available today across all paid plans, provided Claude Code on the web is enabled. That is a notable packaging decision, because it makes this less like a niche enterprise beta and more like a broad paid-user research preview.

r/SideProject ChampionshipMean8801

[Hiring] Full-Stack Developer (React + Node.js)

Hey everyone! We’re looking for a talented Full-Stack Developer to join our growing and dynamic team.

Requirements:

  • Strong experience with React and Node.js
  • Fluent in English (written & spoken)

If you’re passionate about building great products and want to work in a collaborative environment, we’d love to hear from you!

How to apply:

  • Upvote this post to help us reach more people
  • Drop a comment for visibility
  • Send me a PM with a brief introduction

Looking forward to connecting with you all!

r/ClaudeCode believer2687

Did you make any money with claude code yet?

I'm very curious if people are making actual money with whatever apps or systems they build using claude code. I'm not talking about the people who sell AI courses btw.

For in-house folks, this could be in terms of achieving revenue goals much faster or cutting down on costs significantly.

Or is it another AI hype and people are using it for dopamine hits or looking cool? 😁

r/ClaudeCode semiramist

Claude Code usage limit shows 47% used even though I haven’t touched it for 5 hours. How is this supposed to work?

I’m confused about how Claude Code’s usage/session limit is actually calculated.

Right now I see:

  • Plan: Max (20x)
  • Current session: 47% used
  • Resets in ~2 hours

But I haven’t used it in like 5 hours, so why is it still counting like I’ve been active?

Also another weird thing:

  • Earlier today my weekly usage was ~71%
  • Now it’s showing ~81%
  • I didn’t even use it in between…

One more detail:
I actually canceled my subscription, but I still have 19 days left, so I’m wondering if that affects how limits reset or something?

https://preview.redd.it/0qf1txr2t6vg1.png?width=2134&format=png&auto=webp&s=3321105acfcf9a6f9cab33d4c082fce1bae9d30b

What I’m trying to understand:

  1. Is this a rolling time window rather than an inactivity-based reset?
  2. Does the “current session” timer start from the first prompt in that window, even if I stop using it afterward?
  3. Can background activity, failed calls, long outputs, or tool usage still count toward the meter after I stop?
  4. Is the usage bar based on tokens, tool calls, compute, or just number of messages?

The UI makes it sound simple but the behavior feels random.

Especially the usage going up when I’m not even using it… that part really doesn’t make sense to me.

Am I supposed to just keep using it constantly or what 😄

Edit: I have removed all of my authorization tokens and I will not use it until tomorrow to test it out.

r/StableDiffusion Brojakhoeman

lol

r/homeassistant AndysReviews

Repenic RD250ZG with Zigbee2MQTT | Home Assistant, Apple Home & Aqara Setup Guide

I switched from ZHA to 2MQTT and got a lot more control.

r/homeassistant Alarming_Cycle_6670

TP-Link/Kasa Login Creds Failed (but are accurate)

I have tons of the little wifi smart plugs from Kasa/TP-Link. I've been using them for years, and already have at least a dozen installed in Home Assistant. I got a new 4 pack today (only needed 2, but you know...) and installed them both via the Kasa app. Then when I went into Integrations, it shows the two new plugs, but when I try to add them, it's asking for my TP-Link creds, which I'm literally copying and pasting in, so I know they're correct (even tried changing my password, just incase), and it fails every time. Anyone else ran into this recently? I feel like it never asked me for these creds before this attempt.

r/StableDiffusion Alkaiser_Emperor_999

Does anyone know which models and Loras was used to create these? (It was kind of hard to choose without breaking the rules, but I managed it) Artist: 白味三号 (White Flavor No. 3)

r/ClaudeAI Fun_Swimmer_8320

Using Claude to plan triathlon/running workouts?

Hi, I'm not sure if this is the best place for this kind of question, but I'll give it a shot.

Maybe there's someone here who does running/cycling/triathlon or other sports that involve progression and regularly adjusting the training plan from week to week.

Until now, I’ve been using Gemini PRO, but it kept getting lost in conversations, repeating itself, and making mistakes even when I gave very specific instructions.

I thought I’d use Claude to write a simple app just for my own use, one that would analyze my workouts based on data from my watch and generate a weekly training plan.

Does anyone have experience with this and know if it’ll work well?

r/LocalLLaMA kalyan_sura

One-click LM Studio → Ollama model linker

This has been a pain point for many, and I've seen some tools to address it, but they needed a lot of setup.

So made this GUI tool with AI assist.

One click: select the folder you want to link, and the tool does the rest -- creates the Ollama model, replaces duplicate blob with symlink, frees up space.

github repo - https://github.com/sjkalyan/LM2Ollama

Tested on Windows. You might need to tweak paths based on your setup.

r/ollama marmaladejackson

Agents in Ollama and Langflow

Hi All,

Trying to educate myself on the capabilities of AI and have been experimenting with Ollama and Langflow. I was trying to build a simple agent to do some web searching and I cannot seem to get the agents to recognize or use the tools provided. I was following the steps in this video:

https://www.youtube.com/watch?v=Ai53KW6KBfk

Which seems super simple, but for some reason they just don't want to use the tools. I've tried the Gemma4, Mistral, and Qwen 2.5 LLMs. Searching the web suggests that it may be a broken feature in Ollama or that I am not using a good enough prompt. Changing the prompt doesn't seem to have any impact even if I tell it to explicitly use the tools provided. I'm not sure if I should be amending the tools in any fashion to get better results.

Is there anything else I should be looking at or doing?

Thanks!

r/SideProject Curious-Soul007

I was tired of "sketchy" Instagram trackers stealing passwords so i built a private one for myself

nFollowers – Instagram Unfollowers Tracker

Honestly, the state of Instagram "unfollower" apps is a disaster. I’ve wanted to clean my feed for a while, but every app I found wanted my IG password and access to my data. Most of them scrape your account on their own servers, which is exactly how people get their 10-year-old accounts permanently banned.

I’m a developer, so I spent my weekend building a private alternative for myself called nFollowers. I set one rule: it had to be 100% private. No asking for passwords, no sending data to servers, and zero storage on my end. It just compares your lists locally in your browser using your existing session.

I finally used it today and it flagged ~300 accounts that haven't posted in years or don't follow back. It felt so good to prune the noise without worrying about my account getting locked. It’s free and I’m keeping it that way because I think people deserve a safe tool that doesn't harvest their data.

r/ClaudeCode ApeInTheAether

CC stopped respecting rule for custom communication style

So I have this rule in my project, that gives cc 'personality'. Just a few lines to act as someone + some personality traits. It was working perfectly up until yesterday.

The overall experience with cc feels better now, but it also stopped respecting some of my rules. Made me think they changed system prompt that does similar thing as my rule did, but has higher priority.

Any1 else observes change in behavior since yesterday?

r/StableDiffusion sktksm

We may have a new SOTA open-source model: ERNIE-Image Comparisons

Base model is definitely SOTA, can even easily compete with closed-source ones in terms of aesthetic. Cinematic quality and color grading is next level.

Base model is heavily biased on Asian faces, while it excels on anime/illustration style, while my base model anime/illustration experiments wasn't that good. Higher CFG is slightly better with anime on base.

Generated with RTX6000, Base: 29 sec 1.9it/s, 50 steps | Turbo: 2 sec, 3.9i5/s, 8 steps

If you interested seeing them in original size: https://imgur.com/a/75jcjzW

ComfyUI models: https://huggingface.co/Comfy-Org/ERNIE-Image/tree/main
Workflow should appear in Templates after updating the ComfyUI to latest.

r/SideProject bantam20

I used Claude to build an app, and now I use that app to get better outputs from Claude. Make it make sense.

A few months ago I started vibe coding an iOS app with Claude as my dev partner. No full-time engineer. Just me, Claude, and a lot of “why is this breaking” conversations.

The app is called ScreenCap. It’s a visual information tool — you capture screenshots and reference images, group them by project, and export them as a PDF brief.

Here’s where it gets circular.

Once I started using ScreenCap to organize my own visual references, I realized I could upload those briefs directly into Claude before starting any project conversation. Screenshots of comps, examples, tone references, anything visual I had in my head but couldn’t describe well in text.

Claude now has the same visual context I do before we start. Outputs got noticeably sharper. Less back and forth. Fewer “not quite” rounds.

So the thing I built with Claude became the thing that makes Claude more useful. Didn’t plan that. Just kind of happened.

ScreenCap is free right now, iOS only. If you want the exact brief-building workflow I use before uploading to Claude, drop a comment and I’ll write it out.

r/aivideo Neither_Parfait3212

Cinematic train fight scene — made using Seedance (full workflow)

r/comfyui Illustrious_Clock186

Does Comfy UI support multimedia generation on eGPU connected to M4 Mac Studio?

I have a 128gb M4 Mac studio - it is great for local AI but but not so much for multimedia generation. With Tinycorps driver support for Mac supporting external Nvidia or AMD GPU's can this be a drop in for adding eGPU support to the Mac?
Google search seems to agree this is possible but was wondering if anyone has tried something like this on a mac with an external gfx card

r/ollama kalyan_sura

One-click LM Studio → Ollama model linker

This has been a pain point for many, and I've seen some tools to address it, but they needed a lot of setup.
So made this GUI tool with AI assist.
One click: select the folder you want to link, and the tool does the rest -- creates ollama model, swaps the blob with symlink, cleans up the GBs!

Here's the repo - https://github.com/sjkalyan/LM2Ollama

Tested on Windows for now. You might need to tweak paths based on your setup.

r/StableDiffusion WiseDuck

Another Lora purge might come to CivitAI. This time: I2V Loras.

I'd recommend you get to downloading. I would love to post this to the CivitAI subreddit but I assume the post would get nuked. Less than a day away from moving to .red and their owner opening the door to lessening restrictions, and this is what I hear. While it isn't confirmed yet, it was briefly mentioned by a mod that the "idea" may be to remove I2V altogether, starting with Wan.

So when are we also removing Qwen Edit? Flux? ZImage? Edit workflows? LTX as a whole since it does T2V and I2V with the same Lora? Spicy merges of Wan?

r/SideProject Szamski

I built a native macOS Google Calendar app because Fantastical costs 60 usd/year and Apple Calendar kept dropping my events

Solo dev, ~4 months of evenings and weekends.

The problem was simple: every Google Calendar option on Mac is broken in its own way. Apple Calendar + CalDAV drops events. Fantastical is expensive and others are with Electron under the hood. The web app has zero native macOS integration.

So I built hora — Swift 6, SwiftUI, connects directly to the Google Calendar REST API. No middleware, no web wrapper — your data never touches our servers, everything stays on your machine.

What's shipped: week | month | day views, drag & drop, multiple Google accounts, menu bar widget, availability sharing, conference picker, Pomodoro timer, 9 languages. 24 features, many bugs squashed.

Pre-launch right now, 200 people on the waitlist. Planning one-time purchase at launch.

horacal.app — waitlist open if you want to follow the build.

r/LocalLLM No_Answer_2769

The transition from LLMs to LAMs Large Action Models is happening on our desktops

Everyone's talking about AGI but i'm more interested in how LAMs are actually manifesting on our desktops. been messing with acciowork and openclaw. Both are still a bit of a mess and hallucinate steps but seeing an agent autonomously manage a browser and file system is a solid look at the future. We're slowly moving from chatbots like claude to actual digital employees that can use our tools. It's still early days and the overhead is high but the task correction loops are starting to work.What do you guys think the bottleneck is for local-first agents rn? compute or reasoning?

r/ClaudeAI Wise_Reflection_8340

Semantic diffs cut tokens significantly when feeding code changes to LLMs. Also improves attention scores of the model.

Working on a CLI tool that diffs code at the entity level (functions, classes, structs) instead of raw lines.

Line-level diffs are optimized for human eyes scanning a terminal. But when you feed a git diff to an Claude, most of those tokens are context lines, hunk headers, and unchanged code. The model has to figure out what actually changed from the noise.

I ran some attention score analysis and the signal increases significantly when you feed semantic diffs instead of git diffs. The model spends less time parsing structure and more time reasoning about the actual change.

Benchmarked it across 15 commits in 4 popular repos:

Repo Commits Avg token reduction tokio (Rust) 5 82% ruff (Python) 5 68% fastapi (Python) 3 64% flask (Python) 2 51% All 15 70%

Best case was 86% reduction on a tokio commit. Worst case 37% on a ruff commit. The bigger and noisier the diff, the more it helps.

What this costs at scale:

At Opus 4.6 pricing ($5/MTok input), for every 1M tokens of git diff your agents process, ~700K are noise. That's $3.50 per million tokens you didn't need to spend. For a real agent workflow where the diff gets read multiple times per review (triage, deep review, fix suggestion, verification) across a multi-commit PR, the tokens add up like crazy:

Scale Predicted PRs/month Predicted Tokens saved/mo Saved/year Solo dev 80 258K ~$15 Team (20 devs) 400 15.5M ~$930 Org (50 devs) 1,000 38.8M ~$2,300

The dollar savings are nice but secondary. The real win is context window. If your agent has 200K tokens to work with, feeding it 55K tokens of git diff noise per PR eats into the space it could use for file context, documentation, or deeper reasoning. Semantic diffs give you that space back.

The tool is called sem. It extracts entities using tree-sitter and diffs at that level. Instead of lines with +/- noise, you get exact entity changes: which struct changed, which function was added, which ones were modified. Fewer tokens, more signal, better reasoning.

It also does impact analysis. sem impact match_entities shows everything that depends on that function, transitively, across the whole repo. Useful when you're about to change something and want to know what might break.

Commands:

  • sem diff - entity-level diff with word-level inline highlights
  • sem entities - list all entities in a file with their line ranges
  • sem impact - show what breaks if an entity changes
  • sem blame - git blame at the entity level
  • sem log - track how an entity evolved over time
  • sem context - token-budgeted context for Claude

23 languages supported (Rust, Python, TypeScript, Go, Java, C, C++, C#, Ruby, Bash, Swift, Kotlin ...) plus JSON, YAML, TOML, Markdown, CSV.

Written in Rust. Open source.

GitHub: [https://github.com/Ataraxy-Labs/sem

r/ChatGPT sidds_inbox

I gave an AI agent acess to my calendar and email for two weeks and here is what i actually learned.

It did a lot of things right. Scheduled meetings, drafted responses, flagged things that needed my attention. It also confidently sent a reply I hadn't approved yet, double booked me on a Wednesday because it misread a timezone, and declined a meeting I actually wanted to attend. The useful parts were genuinely useful. The failures were the kind that are embarrassing in front of real people. I'm still using it but I've pulled back the permissions significantly and I check everything it does now which kind of defeats the purpose.

r/ClaudeAI ClaudeOfficial

Now in research preview: routines in Claude Code

Configure a routine once (a prompt, a repo, and your connectors) and it can run on a schedule, from an API call, or in response to a GitHub webhook. Routines run on our web infrastructure, so you don't have to keep your laptop open.

Scheduled routines let you give Claude a cadence and walk away. API routines each come with their own endpoint, so you can point your alerts, deploy hooks, or internal tools at Claude directly. Webhook routines subscribe to GitHub events and let Claude respond as they come in, one session per PR.

If you've been using /schedule in the CLI, those are routines now, and there's nothing to migrate.

Available today across all paid plans with Claude Code on the web.

Learn More: https://claude.com/blog/introducing-routines-in-claude-code

Docs: https://code.claude.com/docs/en/routines

r/SideProject StomachSubject1572

I built an open-source proxy that hard-stops Anthropic API requests when you hit your budget

Got a surprise bill after an agent hit a retry loop and kept calling the API for hours. There's no way to set a hard cap on the Anthropic or OpenAI APIs natively — you can get an email alert after the fact but nothing that actually blocks requests mid-flight.

So I built a proxy. One env variable change, no SDK modifications. It tracks cost per request in real time and blocks further requests the moment you hit your daily or monthly cap. Unlike hosted routing services, there's no per-token markup and your requests never touch a third-party layer — useful if you're calling Anthropic directly in production.

MIT licensed, self-hostable in about 5 minutes. Currently supports Anthropic with OpenAI next.

Demo with repo link: https://costile.com

Happy to answer questions about the setup or the approach.

r/ChatGPT dogdogdogdo

I did a thing

r/ClaudeCode joeyda3rd

Live cache warmth countdown in the statusline, now that refreshInterval shipped

With 2.1.97 the statusline finally supports refreshInterval, which means time based displays can actually tick instead of freezing between events. The one I wanted most was a live countdown to prompt cache expiry, since cache hits are meaningfully cheaper than cold reads and I kept losing track of whether I was still inside the window.

Put it together tonight. It is two small pieces:

  1. A Stop hook that looks at cache_read_input_tokens on the last assistant turn and stamps now + TTL to /tmp/cache_expiry_.

  2. A statusline script that reads that file, computes remaining seconds, and prints a colored M:SS. Green for the first 5 minutes, yellow through the middle, red for the last 5, and a snowflake when the cache goes cold.

    Defaults assume a 1 hour window because that is what I run with extended cache TTL on my cached blocks. If you are on the default 5 minute TTL there is a one line change in the hook and you probably want to tighten the color thresholds too.

No background process, no polling loop. The statusline stays stateless, the hook only fires on assistant stops, and refreshInterval: 1 does the ticking.

Repo with install steps and a sample settings.json: https://github.com/joeyda3rd/claude-cache-timer

Happy to hear if anyone has a cleaner way to detect the actual API call time rather than the end of the assistant turn. The skew is negligible on a 1 hour window but it bugs me on principle

r/homeassistant Hot-Professional-785

HA won;t find my Ecobee as unpaired device

Post title.
Have had my ecobee working fine with my HA previously.
After a power outage it disconnected and couldn't connect anymore.

Have double checked pairing status, connectivity to wifi, tried via local HA address as well as with my external HA address and still nothing.

Any help will be much appreciated.

r/ChatGPT what_thesigma123

thinking model not working?

just me rn or is anyone else experiencing this

r/ClaudeCode Yazeed1x

Claude Pro rate limits (Opus models)

**TLDR** (You can skip the rest ) : On Claude Pro (in Claude Code), can I use Opus 4.5/4.6 with thinking/effort set to high or max, and can I finish a ~100k token job in 24 hours in a single chat (one 50–60k prompt, one 50–60k output)?

I’m planning on buying the Pro plan on a brand new account (no extra payments). It’s probably easier to ask for official links than to rely on user reports, so: where can I find the documented usage limits for Opus 4.5/4.6 on Pro?

I’m looking for anything on message limits, token limits, context window, max output length , rate limits , daily caps, and any per-model caps Self reports are still welcome, but official sources preferred. Also, does Pro let you set thinking/effort to high/max on those models?

r/aivideo SweetheartWrestling

Confessional: Sugar & The Corsair (Sweetheart Pro Wrestling Cold Open)

r/artificial EstebanbanC

I built a tool to monitor what's trending in the world of AI

Started this project for fun after making a simple observation: I was spending a lot of time and energy trying to keep up with the fast evolving world of AI, while feeling bad whenever I missed something. It was a kind of FoMO, plus the fear of getting the information too late. That gave me the idea to build a news aggregator that processes many RSS feeds, extracts keywords from articles, and displays them in a word cloud to highlight the topics that appear the most.

I'd say I'm only at 30% of development. For now, the sources are only related to AI, but I'd like to add other topics I'm interested in like Cyber and Crypto (I'm also open to other suggestions!)

Also, I'd like to add other types of sources, like X, Reddit, YouTube, etc...

Finally, I'd like to implement TL;DRs for each article, "Why is it trending" for each hot keyword, and maybe even a newsletter, I'm trying to figure out if people are interested.

As a bad web developer, I used AI a lot to code the project, you can tell the frontend looks very AI-made, but it's not like I'm selling anything.

The frontend is React, with an Express backend, I can detail the stack if you're interested!

The site is online here: https://trendcloud.io (hope the name checks out haha)

I'm also thinking about a way to cover the costs of the website, nothing crazy but it's at least a good hundred euros a year minimum. Open to suggestions on that! I added a Buy Me a Coffee button, let's see how that goes.

Hope at least someone else finds this useful, would love to have your feedback and answer your questions!

r/ClaudeCode dehumles

Thank you Anthropic.

Thank you for making it possible for someone without a CS background to build real software.

Ive build 2 applications for my company, daily used by 50ish employees and some of our clients. All running smoothly since mid november 2025. I had an offers for well over 200k€ combined to develop those two applications. For 200$/month and lots of long nights, i've been able to do all this myself. I wouldnt even consider doing this myself if CC wasnt around and I'd happily burn 200k and outsource this. Irony?-end product wouldnt be as good as it is. Or it would take me at least 100hours of meetings with devs to explain in detail what we need.

So once again, thank you Anthropic for such a good product and very cheap prices. Looking forward for new models!

r/SideProject Illustrious-Pie8509

Been building an AI companion app for the last 4 months. Need feedback 🙏

I know it's a crowded space but I still wanted to take a stab at it. Built a bunch of features like long-term memories, evolving personalities and even AI's own lives running in the background.

I released the android app a month back and ios today. Made some revenue and got a few subscriptions too, but I'm noticing a LOT of churn. People try the app and go off their way. My D7 and D30 is ~3%.

Is this normal? Feels pretty terrible tbh. Just looking for other people's experience and how have they dealt with this.

Edit: here are the links to the app.
Android: https://play.google.com/store/apps/details?id=com.tgv.afterhours
iOS: https://apps.apple.com/us/app/afterhours-ai-companion/id6757396676

r/ChatGPT TiDoBos

Why is it not possible to zoom into pictures on chatgpt web (Mac, Chrome)

If ChatGPT generates an image, I often want to zoom in. In the past, the intuitive way worked - you could click the image and use the trackpad to zoom. Or you could open image in a new tab and zoom.

Now, clicking the image brings up a new window with the same size image, and if I try to zoom with the trackpad it scrolls to a previous image. If I command + +, it scales the chrome window and the image gets smaller. If I try to open image in new tab, it opens the download window.

I'm fairly tech savvy, but I can't figure this out. I just want a picture to work like it does on every other program ever. Asking ChatGPT about it, it appears this is a legit limitation. The question becomes, why?

r/ClaudeAI melanthius

/violations skill to correct rule violations...turned from comedy to tragedy on the first run

r/StableDiffusion Tadeo111

"Necromancy" Short AI Animation (Wan 2.2 Text2video)

r/ClaudeCode Desperate-Lie-2764

Hope everyone enjoys their long weekend!

r/SideProject Ordinary-Plantain-10

350 users in 30 days. here’s everything i did

350 users in 30 days. here’s everything i did

i built a tool named reapify.io that finds businesses with bad websites based on a few different factors like SEO optimization, bad design, mobile compatibility, etc. basically a lead gen tool built specifically for web designers and agencies who are tired of manually digging through google maps.

i launched it maybe a little over a month ago. here’s the breakdown of how 350 people found and signed up for it:

around 60% came from reddit. not from posting about the tool directly (bc this never works and ppl will hate you) but from just being in conversations and telling real stories on how your tool is actually helping. the posts that felt like ads got nothing except some bans lol.

tiktok comment sections were another 30%. i wasn’t making videos (i do plan on it though), just dropping value in comments on web design and freelance content. this honestly got me way more users then i expected.

threads was maybe 10%. honestly still figuring that one out. that one is more for finding and networking with other founders.

now all 350 are definitely not paying users, but 350 real people validated my idea i built while working a job and going to college full time.

for month 2, i am going to be spending a lot of time figuring out why users are not converting. i will surpass this conversion problem lol. any tips from anyone who had a similar problem?

r/leagueoflegends Conman2205

Is WPA (Win Probability Added) a reliable way to measure how good/bad certain items and runes are?

I am seeing this stat pop up a lot more specifically from coachless, who are obviously advertising its use.

Is it actually reliable and a good way of helping decide whether certain items or runes are worth building/running though? I don’t really know how the stat is calculated though high WPA tends to imply good and vice versa.

r/leagueoflegends Yujin-Ha

LYON vs. Shopify Rebellion / Esports World Cup 2026 Online Qualifier: North America / Post-Match Discussion

https://preview.redd.it/p7g3rcgnk7vg1.jpg?width=1339&format=pjpg&auto=webp&s=e262c3ffa4eba20d1b11ca4fb0b5f810dfb06773

LYON wins the series in 2-0 fashion. Game 1 was a 37 minute win while game 2 was a 24 minute stomp. LYON will face the winner of the Team Liquid Alienware vs. FlyQuest series.

Cloud9 was streaming these games on their Twitch Channel.

r/ChatGPT These_Respond_4088

Hello, I’m seeking some advice.( text documents,meal plans,memory and more)

What can I do if ChatGPT has significant difficulty reading numerical data assigned to specific text entries (kcal) in a text document? Would it cope much better with data in the form of an Excel spreadsheet? It reports difficulties with chaotic information in the document, long technical notes that vary from entry to entry, etc.

Besides, it’s wasting my message limit (free version), each time waiting for a prompt to act, to resume work (iteration) after correction, etc.

Is there a way to: a) automate its work so that it doesn’t have to wait for a separate signal from me to resume work? b) improve its memory so that it remembers the sequence of approved iterations? c) eliminate the ‘hallucinations’ – making up numerical data instead of retrieving it from the document?

How many conditions, variables, restrictions and so on can it handle at once? There are quite a few, as there have to be; it does complain sometimes, but the auto-checks work quite well, apart from that misinterpretation of numerical values.

Many thanks in advance for any help!

r/SideProject Other-Faithlessness4

querybear.com - looking for testers & logos for my website

Hey! I'm building QueryBear, a hosted MCP server that lets AI coding agents (Cursor, Claude Code, etc.) access your prod database securely. And do it without breaking things.

I'm looking for 5-10 founders willing to help give feedback by trying it out and in exchange for:

  • Direct line to me for bugs/feedback
  • Your company's logo on the site if it works well for you
  • Input on what gets built next

If you're already using Cursor or Claude Code with a database (or wish you could), I'd love to have you try it.

If interested leave a comment and let me know what your company is and I'll reach out!

r/SideProject Lumpy-Sir9871

My co-founder and I built an open source IDE for running parallel AI coding agents. would love feedback.

Workstreams in action

We kept running into the same problem: AI agents are fast enough to handle 10 things at once, but there's no good way to actually run them in parallel without everything turning into a mess of terminals and merge conflicts.
So we built Workstreams, a macOS app that gives each task an isolated git worktree, runs agents in parallel, and lets you review and send feedback from one place. Basically going from pair-programming with one agent to tech-leading a team of them.

It's at v0.1. Open source, works with Claude Code / Codex / any CLI agent. Full IDE with LSP, not just a terminal wrapper.
Next up we're building an autonomy dial (fully autonomous to full human-in-the-loop) and a central command view.
GitHub: https://github.com/workstream-labs/workstreams
Site: https://runws.dev
Discord: https://discord.gg/jN6pJ43Dr7
What should we prioritize? please ⭐ if you find this cool and follow along

r/automation asadlambdatest

Production-Grade Agent Skills for software Test Automation framework across 15+ languages.

Battle-tested Agent Skills for Claude Code, Copilot, Cursor, Gemini CLI & more - covering every major test automation framework across 15+ languages.

r/leagueoflegends fergil

Pain in shoulder, is that common while playing this game?

So I know the curse of the wrist pain from clicking hundreds/thousands of time each game. But anyone gotten shoulder pain, (part from your shoulder toward the elbow, the upper area) and, you guessed it, mostly feel it when playing a game.

Anyone else or a doctor visit?

r/metaldetecting Competitive_Rope_291

How old can this be?

Found this next to 17th century European farmhouse. That thread looking thing makes me believe this is 19th century. Can it be older? Thanks in advance

r/homeassistant briodan

SLZB-06M firmware 3.2.8 or above

anyone upgrade their SLZB-06M to firmware 3.2.8 or above, looks like the new firmware has a ton of new interesting features, but wondering about improvements in stability.

Had a some issues with stability of one of the upgrades and had to roll back to 2.9.8.

Wondering what others might have exprienced?

r/ClaudeCode antoniocorvas

Looking for a Claude alternative (beginner user who went all-in)

Hey everyone,

I’m pretty new to AI tools, but I went deep fast with Claude and it completely changed how I work.

At one point I had it running in my terminal doing computer tasks, and it honestly made me feel like I knew how to code even though I don’t have a traditional coding background. I’ve also been using it as:

• a personal assistant • a social media / brand manager • a tool to build lesson plans, rubrics, and instructional materials for my teaching job • and it’s also connected to Obsidian for long-term + short-term memory storage so it can retain context about me and my work over time 

The problem is I keep hitting the usage limits really fast, especially when I’m doing longer or more complex tasks. It breaks my workflow and makes it hard to rely on.

From what I’ve researched and tested so far, it seems like:

• There’s no true 1:1 replacement for Claude • Most tools either specialize (coding, writing, research) or are more general but weaker in certain areas • The closest “single tool” alternative might be ChatGPT • The more realistic solution is a multi-tool setup like: • ChatGPT (thinking, writing, planning, lesson design, docs) • Codex / Cursor (coding + execution) • Perplexity (research/verification) • Obsidian (memory system / long-term context layer) 

But I’m not sure if that’s actually the best long-term setup or if I’m overcomplicating it.

So I’d really appreciate advice on:

1. Is there anything that actually feels close to Claude in terms of capability + workflow? 2. If you’ve switched off Claude, what setup are you using now? 3. Is a multi-tool stack actually worth it, or does it just create more friction? 4. For someone at my level (not a coder, but using AI heavily), what would you recommend? 

Feel free to critique the setup I outlined above—I’m trying to figure out what actually works in practice vs what just sounds good in theory.

Thanks

r/ClaudeCode Caibot

Where is /dream and auto-dream via /memory??? Since when was it removed?

As the title says, how do I re-enable the auto-dream functionality? :O Was it just an experiment? I'm confused, it was such a cool feature.

r/ClaudeAI One-Cod-365

Using Claude seriously for 60 days forced me to confront something I didn't expect about my own thinking.

The internet was built for humans to talk to humans. That era is ending.

Algorithms were built assuming humans post. That assumption is breaking. Automated accounts now outpost, outreply, and outperform most real creators. The feed you scroll isn't curated by taste anymore. It's won by volume and timing. Presence used to require a person. In 2026, it requires a system.

I know this because I built the system.

Two Claude-powered agents. One runs my X presence end to end. One runs LinkedIn. Together they've posted more in 60 days than I managed in the previous year doing it manually.

And here's the thing nobody tells you before you build something like this.

The foundation breaks immediately if you don't know what you actually want to say.

Every vague brief I gave Claude, "write something about building in public, make it sound like me," came back technically correct and completely hollow. Median founder voice. Not mine. I kept blaming the model. The model was doing exactly what I asked.

The problem was I'd never actually pressure-tested what "my voice" meant. What I specifically believe, why, with the exceptions included. Not the hedged version. The real version.

Once I did that work, the outputs changed completely. But here's what that process taught me about using Claude at this depth: it's an unusually good mirror for vague thinking. Most tools let you stay vague. Claude at this level of use won't, not if you want outputs that are actually useful. It keeps reflecting your input quality back at you.

The prompt is the thinking. You can't shortcut it.

What happens to trust, attention, and distribution when the line between human and automated voice disappears completely, nobody has a clean answer yet. But I'm fairly sure it won't be the ones who automate loudest who hold attention. It'll be the ones who automate without losing the signal.

Has anyone else hit this wall, where building something serious with Claude forced you to get clearer on the underlying problem than the automation task itself?

r/AI_Agents WesternDesign2161

Which claude code skills are useful for daily dev work?

I’ve recently started using claude code with the 100$ plan, I manage 4 products and this plan is a bit overkill, from next month I want to switch to the 20$ plan but want to know how much I can use this plan to the fullest as in, save context of all codebases so that it doesn’t read the full codebase again and again.

Also which all skills do you guys use for everyday debugging and feature development?

r/LocalLLaMA Mysterious_Role_8852

Oobabooga with opencode

Hello,

I've tried to use text generation webui in combination with opencode and qwen3.5-27b q6. Unfortunately that did not worked out. I can send a message and I get a response, but when the model tries to use a tool I get an error, that the tool call format is invalid.

Does someone know how to solve this?

r/leagueoflegends Ultimintree

Solary vs. MISA Esports / EMEA Masters 2026 Winter Playoffs - Lower Bracket Final / Post-Match Discussion

EMEA MASTERS 2026 WINTER PLAYOFFS

Official Page | Leaguepedia | Liquipedia | Twitch | YouTube | Patch 26.07 | Bo5 Fearless Draft


Solary 3-2 MISA Esports

Solary have reverse swept MISA Esports to reach the Finals and also clinch the last EWC 2026 EMEA Online Qualifier spot!

SLY | Leaguepedia | Liquipedia | Website | Twitter | YouTube | Facebook | Instagram | BlueSky
MISA | Leaguepedia | Liquipedia | Website | Twitter | Instagram


FULL MATCH SUMMARY & STATS

🟦 🧱 💰 ⚔️ 166:36 ⚔️ 💰 🧱 🟥 MISA - W 1st pick 9 51.5k 12 Game 1 (24:44) 4 43.6k 1 L - SLY MISA - W 1st pick 7 84.6k 37 Game 2 (39:42) 24 78.6k 6 L - SLY MISA - L 1st pick 4 61.9k 14 Game 3 (33:30) 21 69.1k 9 W - SLY SLY - W 10 88.2k 28 Game 4 (42:48) 20 77.5k 2 L - MISA 1st pick SLY - W 6 55.8k 21 Game 5 (25:52) 19 50.8k 4 L - MISA 1st pick Team Total KDA KP CSM DPM% Champions Played 🇫🇷 SLY 98-102-212 🇸🇪 Kryze 20-14-34 (3.9) 55.% 8.6 21.3% Gwen, Gnar, K'Sante, Ambessa, Vayne 🇫🇷 Zicssi 32-17-44 (4.5) 78% 7.3 20.9% Nocturne, Skarner, Vi, Zaahen, Shyvana 🇰🇷 Jool 22-20-34 (2.8) 57.% 8.1 22.4% Mel, Azir, Viktor, Galio, Annie 🇹🇷 Aetinoth 21-24-40 (2.5) 62% 8.7 28% Yunara, Corki, Varus, Sivir, Kalista 🇰🇷 Piero 3-27-60 (2.3) 64% 0.9 7.5% Lulu, Rell, Alistar, Nautilus, Pyke 🇹🇷 MISA 102-98-251 🇹🇷 Ragner 13-25-38 (2) 50% 7.5 19.5% Sion, Kennen, Gragas, Rumble, Renekton 🇹🇷 113 22-22-63 (3.9) 83% 6.3 20.2% Xin Zhao, Pantheon, Jarvan IV, Aatrox, Qiyana 🇰🇷 SlowQ 24-15-38 (4.1) 61% 8.1 21.3% Anivia, Ahri, Zoe, LeBlanc, Aurora 🇰🇷 Hype 39-19-33 (3.8) 71% 9.9 30.5% Ashe, Lucian, Jhin, Xayah, Miss Fortune 🇫🇷 Stend 4-17-79 (4.9) 81% 1 8.5% Seraphine, Nami, Bard, Rakan, Neeko

GAME 1: MISA vs. SLY

Winner: MISA Esports in 25m
Runes | Game Breakdown

Bans 1 Bans 2 💰 ⚔️ 🧱 Dragons VG, RH, BN MISA Ryze Nautilus Jarvan IV Pantheon Rumble 51.5 12 9 🔥2 3, 1, 1 SLY Orianna Ezreal LeBlanc Viktor Azir 43.6 4 1 ⛰️1 🧪3 🧪4 0, 0, 0 MISA KDA vs KDA SLY Player Pick 12-4-37 ⚔️ 4-12-9 Pick Player Ragner 3 Sion 1-1-5 TOP 1-2-0 4 Gwen Kryze 113 2 Xin Zhao 5-1-7 JNG 0-3-4 3 Nocturne Zicssi SlowQ 3 Anivia 1-1-8 MID 1-4-2 2 Mel Jool Hype 1 Ashe 5-1-6 BOT 2-2-0 1 Yunara Aetinoth Stend 2 Seraphine 0-0-11 SUP 0-1-3 1 Lulu Piero

GAME 2: MISA vs. SLY

Winner: MISA Esports in 40m
Runes | Game Breakdown

Bans 1 Bans 2 💰 ⚔️ 🧱 Dragons VG, RH, BN MISA Rumble Nautilus Jarvan IV Varus Amumu 84.6 37 7 1 🔥5 3, 1, 0 SLY Ezreal Orianna Ryze LeBlanc Sylas 78.6 24 6 🌪️2 🔥3 🔥4 0, 0, 1 MISA KDA vs KDA SLY Player Pick 37-24-91 ⚔️ 24-37-50 Pick Player Ragner 3 Kennen 6-6-13 TOP 4-5-5 1 Gnar Kryze 113 1 Pantheon 10-3-21 JNG 5-5-13 1 Skarner Zicssi SlowQ 3 Ahri 9-4-18 MID 7-8-7 2 Azir Jool Hype 2 Lucian 11-7-10 BOT 7-9-8 3 Corki Aetinoth Stend 2 Nami 1-4-29 SUP 1-10-17 4 Rell Piero

GAME 3: MISA vs. SLY

Winner: Solary in 34m
Runes | Game Breakdown

Bans 1 Bans 2 💰 ⚔️ 🧱 Dragons VG, RH, BN MISA Rumble Wukong Nautilus Taliyah Aurora 61.9 14 4 🧪1 🔥2 0, 1, 0 SLY Ezreal Orianna Ryze Sylas LeBlanc 69.1 21 9 💧3 💧4 💧5 3, 0, 2 MISA KDA vs KDA SLY Player Pick 14-21-34 ⚔️ 21-14-55 Pick Player Ragner 2 Gragas 2-6-4 TOP 8-1-7 4 K'Sante Kryze 113 1 Jarvan IV 2-6-11 JNG 6-5-10 1 Vi Zicssi SlowQ 3 Zoe 2-3-5 MID 4-2-10 3 Viktor Jool Hype 3 Jhin 6-4-6 BOT 3-3-12 1 Varus Aetinoth Stend 2 Bard 2-2-8 SUP 0-3-16 2 Alistar Piero

GAME 4: SLY vs. MISA

Winner: Solary in 43m
Runes | Game Breakdown

Bans 1 Bans 2 💰 ⚔️ 🧱 Dragons VG, RH, BN SLY Ezreal Orianna Ryze Caitlyn Miss Fortune 88.2k 28 10 🧪2 🔥4 🔥5 🔥6 3, 0, 3 MISA Wukong Aurora Ornn Draven Neeko 77.5k 20 2 1 🔥3 0, 0, 0 SLY KDA vs KDA MISA Player Pick 28-20-70 ⚔️ 20-28-56 Pick Player Kryze 1 Ambessa 4-3-14 TOP 2-8-12 1 Rumble Ragner Zicssi 1 Zaahen 12-2-13 JNG 1-7-14 2 Aatrox 113 Jool 2 Galio 9-2-12 MID 5-4-4 2 LeBlanc SlowQ Aetinoth 3 Sivir 3-5-16 BOT 12-3-7 3 Xayah Hype Piero 4 Nautilus 0-8-15 SUP 0-6-19 3 Rakan Stend

GAME 5: SLY vs. MISA

Winner: Solary in 26m
Runes | Game Breakdown

Bans 1 Bans 2 💰 ⚔️ 🧱 Dragons VG, RH, BN SLY Orianna Ryze Ezreal Caitlyn Ziggs 55.8k 21 6 ⛰️1 3, 1, 0 MISA Wukong Dr. Mundo Sejuani Leona Amumu 50.8k 19 4 🌪️2 3 4 0, 0, 1 SLY KDA vs KDA MISA Player Pick 21-19-28 ⚔️ 19-21-33 Pick Player Kryze 2 Vayne 3-3-8 TOP 2-4-4 2 Renekton Ragner Zicssi 1 Shyvana 9-2-4 JNG 4-5-10 2 Qiyana 113 Jool 1 Annie 1-4-3 MID 7-3-3 1 Aurora SlowQ Aetinoth 3 Kalista 6-5-4 BOT 5-4-4 3 Miss Fortune Hype Piero 4 Pyke 2-5-9 SUP 1-5-12 3 Neeko Stend

This thread was created by the Post-Match Team.

r/SideProject Comfortable_Place465

My App surpassed 1k in Monthly revenue (MRR)

I rebuilt it 3 times to get here.

Not a unicorn number, I know.. Not even close to the posts that usually get shared here I guess. But it's real, it's profitable, and I think the story is more useful than the ones with clean charts and tidy lessons

So here it is:

Why I built it

I left a PM job at a Silicon Valley tech company because I was living the problem I wanted to solve. Good salary, decent resume, completely disconnected from anything I actually cared about. I built mypassion.ai because I needed it and it didn't exist. Every other solution I found didn't actually help people find work they genuinely wanted to do. I thought leaving my job to build this meant I understood the user.. well here I am the user..

BUT I was wrong about almost everything

Version 1

A quiz that maps your interests to career paths hwich was clean, fast, and shipped in a few weeks. People loved the quiz, got their results, but then they left.. I had given them a mirror and called it a solution I guess. Anywyas, the conversion was bad

Version 2
I got on calls with every user I could get to talk to me; the pattern was pretty consistent: people didn't just want to know what they were suited for and instead tehy wanted to know what to do next and how to make it financially viable. So I rebuilt and aded next steps, career breakdowns, community connections. Conversion improved slightly.
BUT well.. still wrong.

Kept listening. What people were actually saying underneath the feature requests: "this doesn't feel like it really understands my situation." The AI was reflecting their answers back at them instead of reasoning about them..

Version 3

Rewired the AI completely and this time I actualy killed half the features entirely also . The product got smaller and better at the same time. I tracked every step of the funnel (pretty obsessively). The funnel changed six or seven times but interestingly small things moved the needle. Big changes sometimes moved nothing

3-6 months of building and rebuilding (on the side to be fair) to get to the last quarter which is when things started actually moving. The GSC chart above tells that story better than I can..

The one thing

"Just ship it" is correct and incomplete at the same time. Ship fast, yes. But then yu gotta talk to every user you can get on a call and be willing to throw away what you built if the evidence says you should..

The version that works barely resembles what I launched and I think this is the messy but also genuinely exciting part of early stage building; you must follow in love with the uncertainty and aways seek the objkective evidence

Happy to answer anything in the comments! And would love to hear your rebuild stories because I think we need more of those here and fewer clean five step posts!

r/leagueoflegends Dollyxox

Question about ranked borders

After this season 2 Pandemonioum patch thing, will I get a ranked border or is it only from season 17? Also, is the border based on peak rank or ending rank?

r/DecidingToBeBetter Advanced-Sector1769

How to move forward after ruining someone’s week and messing up again

Hi all. I have trouble respecting others boundaries, in particular my partner. He has very few days off of work and when I do something that causes an argument or sadness (in this case picking a fight and bringing up something that makes him anxious after we had a nice night together) it ruins the entire week for him. If he can’t have his good days off, then he goes into work not refreshed and anxious and I know he won’t exit fight or flight mode until the work week is done. Essentially, one bad day = entire bad week. I can’t stand the guilt of this. I know what I’ve done wrong and apologized but it doesn’t matter. I can’t take it back and I just have to go through the week knowing he’s absolutely miserable. How do you deal with this? There’s nothing that can fix it right now other than a Time Machine, how do I move forward and gain the strength to look at myself in the mirror when I know I’m the reason for days of stress and sadness? I know I can commit to changing, but I also know this means nothing to him since it’s happened many times before.

EDIT: We are married and ending the relationship is not an option. I know we’re codependent and there are unhealthy dynamics of this. I just need help with coping after you make a mistake that’s been made before and the fallout lasts for a while.

r/SideProject nirvanist_x

AI for coding interviews, hidden from screen sharing

All in the title :) & use the promo code REDDIT2026 for free credits

r/LocalLLM NoShoulder69

Downloading an AI model just to hit an OOM error is the worst. 📉

So I built LocalOps: a free VRAM calculator for local AI. Pick your GPU, pick your model, and instantly see which quant levels actually fit. No ads, no signups.

👉 localops.tech

r/LocalLLaMA joraorao

I have a Macbook AIR M5 Base and I want to run an Agentic Coding program, similar to Claude Code or Codex. Besides the model, how do I do it? I've already tried with Ollama, VS Code, Opencode, and haven't been able to. (I'm not a developer, sorry)

I started developing an app with Claude, but the credits run out very quickly. I thought that now with my new computer I could run something directly on it. Could someone help me?

I don't know exactly how to do it. I managed to run OpenClaw directly in the terminal, but I couldn't get it to work through the dashboard. So I don't know how to make it access folders.

I just wanted to use a model that would do something similar to Claude or Codex (I know it might not be close, but anyway).

r/ClaudeAI zeezytopp

Claude Limit Extender

Ok so I know people are complaining about the limit reductions. These aren't going away, no matter who unsubscribes or complains. The influx of consumer subs after the GPT exodus killed their compute capacity. They have to keep things running for the enterprise and API-only customers. Mythos is live. They don't make money off of subs. They most likely over-quantized Opus recently to save on compute as well.

Here's what I do to conserve usage (I'm only on a pro account and i never run out):

The biggest thing is use other models to build out the bulk of the codebase. Openrouter is great. You have access to not only Claude API but also GPT and Grok and many many others. You can run other models through Claude Code's official harness on VSCode, Antigravity, etc. it just takes a couple of changes to your settings.json in .claude/

I use Chinese models to take care of most of it. Deepseek is pretty much the gold standard in terms of quality and uptime. Minimax 2.7, Kimi K2.5, GLM-5 (4.7 is fast and pretty capable as well), Qwen 3.6, Kat Coder Pro.

You can use their API, or through openrouter.

If you use OpenCode you don't even have to edit settings.json you just add keys (including Openrouter, Anthropic, OpenAI, etc). Openrouter is pretty no frills so in order to boost up agents and mcp and hooks you have to read docs but you have to read docs for anything nowadays.

Furthermore, Deepseek, Qwen, Kimi, Minimax, GLM all have free chat interfaces on their websites with access to their bigger models. You just can't do agentic work. Kimi has some basic agentic but it's not what you want for beefy stuff.

Mistral and Llama... They are fine but I do not recommend them over Chinese models.

Claude is your finisher. I actually stopped using Opus, and stick with Sonnet for 90% of my ending pass. You can also take your codebase and stick it into Claude Projects. It can take in a ton of files and uses RAG. Claude desktop with Filesystem also works well.

You do lose access to agents. If you need agents, Claude Code in VSCode harness, run whatever model you need.

If you add $10 to your openrouter account you get 1000 daily requests to free models as well and there are a few really spicy free models. Just know uptime is a concern on those. You will get prioritized last and potentially just kicked out. Paid models remain the same on priority.

Chinese models are CHEAP, guys. Like pennies per project., Deepseek 3.2/Speciale with reasoning and agents will chew up tokens but even then you're still looking at sub-dollar projects. It's slower than Opus but it's not terrible.

Most models nowadays are more than capable.

Use Claude as the finisher to sand the edges and get those kinks (if any) worked out.

I also run multiple instances of different models like Deepseek, Qwen, Minimax, and GLM for the same spec sheet and see what things look like at the end and compare. This is something *I* do. It's intensive but I like seeing how they make decisions differently. You get really cool approaches from one model that the others might miss.

Your limits aren't coming back, at least not anytime soon. Adapt or remain Old Man Yells At Cloud.

Openrouter even has very-recent-but-older models. It has Claude and GPT (like Opus 4.5 and pretty much every freaking GPT including some Codex). Grok 4.20 has a 2m token window.

There are options. If you only want to use subscription Claude... your limits are gone.

One note about Chinese models... if you're worried about safety (ie you don't want Chinese servers looking at your info or your employer won't allow it...) go with other American models on Openrouter. Llama and Mistral (French) are light work alternatives.

Change your keys regularly (even daily, like I do).

Do with this what you will.

r/metaldetecting PsychologicalWest993

Found in garden. What is it?

What is this? London United Kingdom

r/ChatGPT INeedToPickName

I bought a Yuno Miles CD, and asked 5 different AI models what it is. None of them could answer.

So, as a joke i decided to make a picture of the cover of the album, and ask the 5 AI models (All free versions, ChatGPT, Copilot, Gemini, Meta AI and Snap AI.] "what is this?". None of them has answered correctly, somehow. Below are the answers, and last slide is the proof that the Album can be found in internet.

copilot has two slides because it yapped.

r/homeassistant Basic-Prompt-6387

What are you running HA on

I am running HA as a VM on my unraid server. What i love about my setup is that it automatically gets backed up on my 3-2-1 back up. It is very stable and I have never had an issue. the only things I ever found tricky was installing zigbee router on usb pass through to my vm and the first couple of times I flashed esp32 devices and had to set them on usb pass through (but I have worked past that).

I don't know if this set up makes it any more stable than running it off a raspberry pi, and I am certainly not claiming that so please don't misunderstand. I am just curious what everyone else is using.

r/SideProject fabcarvalho27

I built a personal finance app because I didn't want to give any service access to my bank account

Every budget app I tried started the same way: "Connect your bank." YNAB, Emma, Copilot — they all want an Open Banking connection or a Plaid link. I didn't want that. My financial data felt like the last thing I wanted living on someone else's server, tied to their privacy policy. So I built Ouriva over the last few months as a side project. The core idea: you export a CSV from your bank (the same file you'd download for your own records) and import it. No OAuth, no API keys, no third party ever touches your data. What I ended up building: - Multi-currency tracking — I have accounts in EUR, GBP, and BRL, which most apps handle poorly - 50/30/20 budgeting with annual planning - Auto-categorization rules for imported transactions - PWA that installs on your phone and works offline - Self-hosted via Docker — runs on my Raspberry Pi Stack: Next.js 16, PostgreSQL, TypeScript, Prisma, Tailwind, shadcn/ui Where it stands: The self-hosted version is complete and I use it daily. I'm working on a cloud-hosted version for people who don't want to run Docker. It's open source under AGPL-3.0: https://github.com/ouriva/ouriva Landing page: https://ouriva.app Would love feedback from anyone who's built something in this space or tried solving the same problem a different way. 
r/Anthropic mdawe1

Alternatives

I’m currently on the verge of rolling out a personal agentic stock trading system but the degradation of the Opus 4.6 has been a big set back in the quality of the design and recommendations.

I use a Gemini 3.1 pro API for some of the external calls but what is everyone looking at for coding support? Is anything better or is Opus still the go to? Technically it’s built in Claude Code

r/LocalLLM Kitchen_Answer4548

Best open-source LLM for coding (Claude Code) with 96GB VRAM?

Hey,

I’m running a local setup with ~96GB VRAM (RTX 6000 Blackwell) and currently using Qwen3-next-coder models with Claude Code — they work great.

Just wondering: is there anything better right now for coding tasks (reasoning, debugging, multi-file work)?

Would love recommendations 🙏

r/ClaudeCode zeezytopp

Claude Limit Extender

Ok so I know people are complaining about the limit reductions. These aren't going away, no matter who unsubscribes or complains. The influx of consumer subs after the GPT exodus killed their compute capacity. They have to keep things running for the enterprise and API-only customers. Mythos is live. They don't make money off of subs. They most likely over-quantized Opus recently to save on compute as well.

Here's what I do to conserve usage (I'm only on a pro account and i never run out):

The biggest thing is use other models to build out the bulk of the codebase. Openrouter is great. You have access to not only Claude API but also GPT and Grok and many many others. You can run other models through Claude Code's official harness on VSCode, Antigravity, etc. it just takes a couple of changes to your settings.json in .claude/

I use Chinese models to take care of most of it. Deepseek is pretty much the gold standard in terms of quality and uptime. Minimax 2.7, Kimi K2.5, GLM-5 (4.7 is fast and pretty capable as well), Qwen 3.6, Kat Coder Pro.

You can use their API, or through openrouter.

If you use OpenCode you don't even have to edit settings.json you just add keys (including Openrouter, Anthropic, OpenAI, etc). Openrouter is pretty no frills so in order to boost up agents and mcp and hooks you have to read docs but you have to read docs for anything nowadays.

Furthermore, Deepseek, Qwen, Kimi, Minimax, GLM all have free chat interfaces on their websites with access to their bigger models. You just can't do agentic work. Kimi has some basic agentic but it's not what you want for beefy stuff.

Mistral and Llama... They are fine but I do not recommend them over Chinese models.

Claude is your finisher. I actually stopped using Opus, and stick with Sonnet for 90% of my ending pass. You can also take your codebase and stick it into Claude Projects. It can take in a ton of files and uses RAG. Claude desktop with Filesystem also works well.

You do lose access to agents. If you need agents, Claude Code in VSCode harness, run whatever model you need.

If you add $10 to your openrouter account you get 1000 daily requests to free models as well and there are a few really spicy free models. Just know uptime is a concern on those. You will get prioritized last and potentially just kicked out. Paid models remain the same on priority.

Chinese models are CHEAP, guys. Like pennies per project., Deepseek 3.2/Speciale with reasoning and agents will chew up tokens but even then you're still looking at sub-dollar projects. It's slower than Opus but it's not terrible.

Most models nowadays are more than capable.

Use Claude as the finisher to sand the edges and get those kinks (if any) worked out.

I also run multiple instances of different models like Deepseek, Qwen, Minimax, and GLM for the same spec sheet and see what things look like at the end and compare. This is something *I* do. It's intensive but I like seeing how they make decisions differently. You get really cool approaches from one model that the others might miss.

Your limits aren't coming back, at least not anytime soon. Adapt or remain Old Man Yells At Cloud.

Openrouter even has very-recent-but-older models. It has Claude and GPT (like Opus 4.5 and pretty much every freaking GPT including some Codex). Grok 4.20 has a 2m token window.

There are options. If you only want to use subscription Claude... your limits are gone.

One note about Chinese models... if you're worried about safety (ie you don't want Chinese servers looking at your info or your employer won't allow it...) go with other American models on Openrouter. Llama and Mistral (French) are light work alternatives.

Change your keys regularly (even daily, like I do).

Do with this what you will.

r/SideProject Economy-Cupcake6148

Week 1 of building in public: why I'm sharing revenue numbers starting today

I've been building Fold for a while now in mostly-private mode, and I've decided to change that.

Starting today I'm going to share real numbers, real lessons, and real progress including the stuff that's not going great.

Why? Because every good building-in-public story I've read has taught me something. And most of what I've learned about running a SaaS came from people who were willing to be transparent about what worked and what didn't.

So here's where we are:

Fold ( https://usefold.io/ ) is an AI business intelligence tool for founders — connects Stripe, GA4, Meta Ads, Shopify, and 8 more platforms. Shows you 6 key KPIs, explains what changed and why with AI, scores your website, and gives you a daily AI-generated insight every morning.

The core problem it solves: founders are spending hours every week manually pulling and reconciling data from multiple platforms. Fold does it for you, automatically, and adds AI explanation on top.

Pricing: $29/month after a 3-day free trial.

What's going well: The AI Advisor feature is getting strong feedback. Users consistently say the plain-English explanations save them significant time.

What I'm working on: better onboarding, more integrations, and getting the word out.

If you're building something too and want to compare notes, I'd genuinely love that. And if you're a founder drowning in disconnected data, come try the tool.

r/SideProject No-Poetry-2025

I build a study tool that tracks understanding through conversation

I got tired of making second brains, making cards in Anki, scattering chats across AI tools. So I combined them.

So I built Lightly. Upload a textbook, it maps every concept and relationship automatically. Then you just talk, the AI will teach you by asking questions instead of answering them. Your understanding gets tracked through the conversation itself. When you actually get something, it lights up on the graph.

Spaced repetition on top

No cards to make. No system to maintain. Just the learning.

Still early, would love to hear what you think.

https://learning-project-fronted.vercel.app

r/SideProject skotch93

I spent months looking for a job in biotech, so I built a tool I wish I'd had

While searching for a role myself, I still felt like I was missing opportunities. I wanted a single source of truth for new biotech and pharma job openings.

I built BioHired to scratch my own itch, and I figured it might help some of you, too.

It's a dedicated biotech job aggregator that currently tracks live roles globally at over 150 of the largest pharma and biotech companies, updated daily.

Check it out for free at https://biohired.com/ and share it with anyone else currently in the hunt.

I’m actively looking for feedback as I add more features.

r/ClaudeAI Independent_Drama137

How can I use Claude to automate repetitive documents with my company templates?

Hello, does anyone know how I can use Claude to automate repetitive tasks at work?

I’m looking to streamline things like creating quotations, receipts, advance payment documents, and payment records using the same templates my company already has. Ideally, I’d like a system where I only need to input a few variables and everything else is generated automatically based on those templates.

Has anyone done something similar or can point me in the right direction?

r/SideProject Economy-Cupcake6148

Built this tool because I was tired of paying a lot for something I barely used

There's a certain type of enterprise analytics tool that's technically very impressive and practically useless for a small team.

You know the ones. $300+/month. Takes weeks to set up. Requires a dedicated person to interpret the reports. Has every feature imaginable and somehow makes your specific question harder to answer.

I was paying for one of these for about 6 months before I admitted it was pointless. The ROI on my own time figuring it out exceeded the subscription cost multiple times over.

What I actually needed was something opinionated. Something that said: "Here are the 6 numbers that matter most. Here's what changed. Here's what you should probably do." Not 47 customizable widgets and a SQL query interface.

So I built Fold ( https://usefold.io/ ). Priced at $29/month — not because I'm trying to compete on price, but because that's what it should cost for a solo founder or small team.

It connects 12 platforms (Stripe, GA4, Meta Ads, Shopify, Mailchimp, and more) via OAuth — no API keys, no code, no setup nightmare. Your data starts flowing in 90 seconds. The AI Advisor gives you plain-English explanations. The website optimizer scores your site and tells you what to fix first.

It's the tool I wish existed when I was paying $300/month to feel like I understood my data.

Start free. No card to explore the dashboard.

r/ClaudeAI PristineAsk2550

Claude Code's bottleneck isn't the model anymore, it's me

I can describe 10 tasks right now and Claude Code can do all of them. But I'm feeding them in one at a time because if you try running multiple sessions on the same repo, it's chaos. Merge conflicts, 15 terminals, no idea which agent is done and which is waiting on permissions.

Turns out git has this feature called worktrees that most people don't know about. You can check out multiple branches into separate directories, all sharing the same repo. Each agent gets its own branch and its own files. They literally can't conflict.

My friend and I built an open-source macOS IDE around this called Workstreams -- on top of VSCode. You define tasks, it spins up worktrees, launches agents in parallel, and shows you which ones are working/waiting/done. When one finishes you review the diff, leave comments on specific lines, hit send, and the agent picks back up. Works with Claude Code, Codex, whatever CLI agent you want.

Workstreams in action!!

GitHub: https://github.com/workstream-labs/workstreams
Download: https://runws.dev

Shipping autonomy controls and a command center view soon, ⭐ on github to follow along. 🙏🤲

How are you all handling parallel sessions right now? Just raw dogging multiple terminals?

r/aivideo No-Link-6413

When 2026 just ain't your year as a new college grad, in anime

r/ollama tigerweili

[Project] I built an AI Agent that runs entirely on CPU with a 1.5B parameter model — here's what I learned

TL;DR: Built an intelligent ops agent using a 1.5B model (Qwen2.5:1.5b) that runs on CPU-only machines. Uses RAG + Rerank + structured Skills for usable accuracy without any GPU. Here's the architecture breakdown.

🔥The Problem

I work in private cloud operations. Our customers deploy on-premises — no public internet, no GPU, no cloud API access. But they still need intelligent troubleshooting.

🚨"Livestream debugging" — Experts remotely guide field engineers step by step. Slow, expensive, knowledge never captured

📚Documentation maze — Hundreds of docs, nobody finds the right page when things break

💻Zero GPU budget — Not every customer has GPUs, but every customer needs support

How do you build an accurate, low-latency AI agent on CPU-only hardware?

🧠Why Small Language Models

This isn't about using a "worse" GPT-4. SLMs are a different paradigm:

Dimension LLM Approach SLM + System Design Philosophy One model does everything Model handles language; system handles knowledge + execution Knowledge Baked into parameters Retrieved from vector DB (RAG) Cost $$$$ per query Runs on a $200 mini PC

💡 The key insight: don't make the model smarter — make the system smarter.

⚙️The Model Stack

Everything runs locally. Zero external API calls.

Component Model Role Main LLM Qwen2.5:1.5b Intent understanding, response generation Embedding bge-large-zh-v1.5 Text → vector for semantic search Reranker bge-reranker-v2-m3 CrossEncoder re-ranking

Runs in 4GB RAM, ~1-2s per response on CPU.

🔄#1: Rerank Makes SLMs Faster

Adding Rerank actually made the system faster, not slower. Traditional RAG feeds Top-K docs to LLM. With Rerank, we filter to Top-2 high-quality docs first.

  • Less context = dramatically faster inference (scales super-linearly with context length)
  • Better context = fewer hallucinations (SLMs are very sensitive to noise)
  • Net result: 40-60% faster end-to-end

Rerank latency: ~100ms. Inference time saved: 500-2000ms. No-brainer.

🔀#2: Tiered Intent Routing

Not every request needs the LLM. A two-phase routing system handles requests at the cheapest level:

User Request │ ▼ Phase 1: Rule Engine (~1ms) Pre-compiled regex: "check pod" → check_pod_status skill │ No match ▼ Phase 2: LLM Classifier (~500ms) Classification ONLY — no generation, no reasoning │ ▼ Route: Type A (Knowledge QA) → RAG pipeline Type D (Operations) → Skill execution 

The LLM classifier receives only the skill name list and outputs a single skill name. 80%+ of requests resolved by rules in < 5ms.

🛠️#3: From Tools to Structured Skills (SOP)

Traditional agents let the LLM plan tool execution. This falls apart with a 1.5B model. Our approach: pre-defined playbooks where the SLM only handles language understanding.

💡 Atomic Skill = single tool wrapper, no LLM. SOP Skill = chain of Atomic Skills + scoped LLM calls.

YAML — SOP SkillCopy

skill: name: resolve_and_get_rocketmq_pods type: sop steps: - id: resolve_component type: llm # LLM does ONE thing: extract params prompt: | Extract fields from user input. Output JSON ONLY: {"namespace":"","component_keyword":"","exclude_keywords":""} - id: get_pods type: skill # Atomic Skill, no LLM skill: get_rocketmq_pods input: namespace: "{{resolve_component.namespace}}" 

Each LLM step receives ONLY the context it needs — not the entire history. This is what makes SLM execution possible.

🎯#4: LoRA Fine-Tuning on Consumer Hardware

We turned a generic Qwen2.5:1.5b into a RocketMQ operations expert using LoRA. Entire pipeline runs on a MacBook Pro — no cloud GPU.

Data Prep (70% of effort) → LoRA Training (<1% params) → Merge → GGUF q4_k_m → Ollama 

Key: rank=8, alpha=16, lr=2e-4, epochs=3. Final model: ~1GB, runs on CPU.

Query Base Model Fine-tuned "Broker won't start" Generic: check logs Specific: check broker.log, port 10911, disk > 90% "Consumer lag" Vague: "check consumer" Specific: mqadmin consumerProgress, check Diff field

📊Real-World Performance

Metric Value End-to-end response 1-3s (CPU only) Full RAG pipeline ~200ms Model memory ~2GB (quantized) Throughput ~5 queries/sec

Runs offline, on-premises, zero API cost.

🎯The Takeaway

  1. A 1.5B model on CPU is enough — if you design the system right
  2. RAG + Rerank > bigger model — retrieve and filter, don't memorize
  3. Structured Skills > free-form tool use — don't let the SLM improvise
  4. Tiered routing saves 80% of compute — most requests don't need the LLM
  5. LoRA on consumer hardware — domain expertise in hours, not weeks

The future of agentic AI isn't bigger models — it's smarter systems with smaller models.

Agent:https://github.com/AI-888/06-Aether

Training:https://github.com/AI-888/08-train-slm-for-rocketmq

Skill Manager:https://github.com/AI-888/10-Aether-Skills

Happy to answer questions about the architecture, training pipeline, or deployment!

r/aivideo cricketjimy

POV: You're a cat

r/StableDiffusion Limehouse-Records

Talking Shop - Remote Server Workflow

Hey just wanted to share my current process for making AI images. It's cheap (~$0.50 an hour) and minimal headache.

I usually rent servers on vast.ai (you could use any website) by the hour. Then I have a Claude agent script set up to configure the entire server so everything I rent has the exact same. It takes about 20-30 minutes to set up, so whatever, grab a cup of coffee then come back.

ComfyUI is great on the backend, but I don't particularly like ComfyUI as a user experience, so I configure python scripts so that I can run most things via talking to Claude in a terminal. For consistent images, I use Loras in Qwen and Z-Image which work well. If I need a more complex composition, I usually use Seedream 4.5 for $0.04 an image (slept on as a image model, I think). I often do a pass in Flux Klein for lighting/realism polish. I use LTX 2.3 for videos and Wan for lip syncing.

If you're a hobbyist I think this is a good way to scale up without paying a lot, and you can turn it off if you ever need to (you lose your job, run over budget, whatever).

The downside is open source models like LTX and Wan are cool and cheap, but harder to use and less impressive than some of the fancier models like Kling 3 or SeeDance.

Happy to share some scripts and resources on GitHub if people are interested. Also would love to talk shop if you have similar workflows/suggestions.

r/DunderMifflin jasonni1234

Darryl’s injury S3E19

Random thought after rewatching this episode. I know there is probably a hundred different other reasons why Michael should’ve been fired, but why not after the “Darryl, how’s it hanging” incident?

r/homeassistant ToyFraz

Ready to jump into this adventure... headaches and all!

After doing so much reading of circular posts and articles, it seems positively foolish to launch into such a confounding project... but I'm eager for it!

r/ollama omniman_234

Not downloading

its been like the n th time i have tried to download this manually.. and every time it gets stuck.. what's the problem here 😟😟

r/Art Axil_tinsti

Time Crushes Us, Axil tinsti, Ink, 2026

r/ClaudeCode BadAtDrinking

Best XR glasses for using Claude Code?

Trying to take my CC sessions outside more, curious if yall are having success with any XR glasses?

r/metaldetecting PsychologicalWest993

Anyone have experiences with Redbridge council?

(London, United Kingdom) I’m a resident of Woodford and I’ve been considering asking for permission to detect on a grass patch near my house but last year when I asked to detect in another patch (one next to Broadmead Baptist church) my permission was denied stating that metal detecting is prohibited (I don’t remember them specifying where) Does anyone know if this applies to the entirety of Redbridge?

r/LocalLLM Kyuiki

Pair 4090 with 3080?

I’ve been walking through this with GPT and just needed some human thoughts and interaction. I’m extremely new to LLM’s and I just recently built a new gaming PC before prices get worst. This means I have a RTX 4090 system I’m going to turn into an LLM machine.

I’ve mostly been continuing to run Windows and use LM Studio to run models. I’ve been really enjoying Gemma 4 31B (Q4_K_M) and have been trying to get the most context length I can out of it.

I do have a 3080 lying around too and am curious if it’s worth adding it to the LLM machine as a second video card? I’d need to upgrade the PSU (currently 850 watts) and have already tested clearance. The 4090 is a Suprim with an AIO so apparently heat will be possibly and issue but more of a test it and see thing? It at least fits!

The system itself has no real leg room for improvements. RAM is maxed out at 32GB (4x8) so the only reasonable upgrade seems to be to throw the 11GB 3080 into the system.

The response I got from GPT was pretty much it won’t offer much inference-wise and might actually slow things down. It suggested adding the card but use it for smaller models that could work alongside Gemma 4. I don’t think GPT knows about Turboquant or Soeculative Decoding which seems promising! Thoughts here on what these could do also would be appreciated.

So, asking the human experts with real world experience, what do you think? Realistically what do you think I could do with the 3080 as far as improving my Gemma 4 experience goes?

As a side note I use the model for chatting and roleplay using Open WebUI. Nothing serious that would require something like SillyTavern.

I also can get anywhere from 6 t/s on the 4090 alone upwards to 12 - 15 t/s. I think my gaming system has some background services that will slow it down. Regardless of what I do with the 3080 I’ll be formatting and installing Linux to make the system dedicated to LLM stuff so I can learn more!

r/SideProject AbyssSelf

Why do most "make new friends" apps fail at their purpose?

Dead chats, Awkward starts, nothing beyond "How are you?"

That's been my experience with most of these apps

i feel like the problem isn't matching people, it's actually getting them to talk, The matching criteria is usually shallow and once you match there's no real context to start a conversation

Most apps rely on swipe-based matching, but that doesn't really solve the "what do I say?" problem

I'm currently building something that focuses more on conversation triggers instead of just matching

I'll share more soon if people are interested.

would like to hear your thoughts on this, Why do conversations die so fast in these apps?

r/AI_Agents Exciting-Sun-3990

Which coding AI tool are you actually using in 2026? (Claude Code vs Cursor vs Copilot vs Codex vs Antigravity)

I’ve been trying out a few AI coding tools lately and honestly they all feel similar at first glance, but I’m sure I’m missing the real differences.

Tools I’m looking at:

  • Claude Code
  • Cursor
  • GitHub Copilot
  • Codex
  • Antigravity

For those who are actively using them:

  • Which one do you use daily and why?
  • Where does each tool actually shine?
  • Any real-world pros/cons (performance, context handling, repo understanding, etc.)?
  • Do you stick to one or use multiple together?

Would love to hear practical experiences instead of marketing comparisons.

r/Anthropic Ashamed-Issue7805

90 days of hallucination rates on the same 42 recurring tasks across Sonnet 4.6, Opus 4.6, and Gemini 3 Flash fallback, running inside a RunLobster-hosted agent. The bridgebench 83.3 to 68.3 drop on Opus lines up with what I've been seeing since late March.

reacting to the nerf post at the top of the sub this week. i have 90 days of first-party data on the same problem from a different angle, posting because the bridgebench result matches my log and i think the pattern is worth seeing from a second vantage point.

rule 3 context up front: solo founder. i run an always-on openclaw-based agent that does recurring work. email triage, 3-company tracking, morning briefings, a weekly competitive scan, receipts reconciliation. 42 recurring task templates, stable prompts, stable memory files since mid-january (USER.md, CONVENTIONS.md, LEARNINGS.md). the agent routes between sonnet 4.6 (default), opus 4.6 (escalation), and gemini 3 flash (rate-limit fallback). i log every call + score the output 1-5 daily.

the specific thing i track that's relevant here: hallucinated specifics.

not "the model was wrong about something vague." specifically, did the output contain a concrete claim (a dollar amount, a date, a quoted statement, a person's title, a company fact) that my source material does not support? i check a sample of 5 outputs/day against source.

90-day hallucinated-specific-per-briefing rate, by model:

jan 15 to feb 14: sonnet 4.6 at 0.24/brief, opus 4.6 at 0.09/brief, gemini 3 flash at 0.31/brief. feb 15 to mar 14: sonnet 0.27, opus 0.11, gemini 0.29. mar 15 to mar 31: sonnet 0.29, opus 0.14, gemini 0.28. apr 1 to apr 13: sonnet 0.31, opus 0.38 (the one that moved), gemini 0.27.

opus 4.6 hallucination rate tripled between mid-march and early april on my workload. sonnet's rate edged up slightly but within noise. gemini 3 flash is the only one that didn't move. it was always noisier but stable.

bridgebench's benchmark says 83.3 to 68.3 (an 98% relative increase in hallucination). mine says 0.14 to 0.38 (roughly 2.7x). different measurement, same direction, similar magnitude. the timing matches. the step lands between mar 15 and apr 1 in my log, within the same window the benchmark re-test captured.

what this looks like concretely in production:

apr 4 briefing: opus cited a $CEO statement ("we're moving to weekly releases") that does not appear anywhere in the linked article. the article contains a quote, but it's about hiring, not release cadence. this is a confident confabulation.

apr 7 briefing: opus claimed a competitor raised a $40M series B "last month." there was no such round. a $40M series B was announced eleven months ago by a different (similarly-named) company.

apr 11 receipts reconciliation: opus mis-categorized a stripe payout as "refund reversal" and fabricated a customer name for the line item. the customer does not exist in my records.

none of these are the kind of error opus was making in february. those were tone/judgment misses on ambiguous stuff. these are assertion-level errors about things that can be trivially checked against a source doc that was in the context.

the tier-escalation side effect:

my harness auto-escalates sonnet to opus when sonnet emits a retry signal or its output fails a basic confidence check. on the escalated calls i'm now paying opus prices to get answers that are less reliable than sonnet's on a meaningful fraction of tasks. i've had to disable auto-escalation for the reconciliation and company-tracking jobs entirely and route them to sonnet-only until this shakes out. my opus-fallback rate dropped from 38% of calls to 6% in two weeks as a result.

what i specifically can't tell from my vantage point. whether this is opus-the-model weights being swapped to a quantized variant under the hood, or opus being routed to a higher-latency/lower-quality pool for capacity reasons, or the system prompt/safety filter changing behavior, or something else. whether sonnet 4.6's slightly-drifting-up rate is related or noise. whether the gemini column holding stable is signal (anthropic-side specific) or coincidence.

what i can tell: the thing bridgebench caught is observable on real production work, it landed in late march, and it's not a single bad day. it's held for two-plus weeks.

why i'm posting this on r/Anthropic and not r/LocalLLaMA:

this is specifically about what's happening to claude, from someone who runs claude in production, using claude for the thing claude is supposed to be the best at (high-precision synthesis). not a takedown. i built my whole agent stack on the assumption that opus 4.6 would be the reliability tier. that assumption has been wrong for 3 weeks.

raw 90-day scores + hallucination annotations in a comment if anyone wants to audit. anonymized where sources are private.

if others running claude in agentic / always-on setups are seeing this too, i'd be curious which specific task classes broke for you. mine are: receipts categorization, company-news synthesis against a source article, and quoted-statement extraction. haven't seen a regression on summarization or on code.

r/leagueoflegends Different-Device9724

Tickets for Week 6 of Regular Games

Does anyone know where I can buy tickets for the sixth week of the regular season for games? The ones on Saturday 10th, 2026.

I have been searching all website, it is nowhere to be found.

r/leagueoflegends aroushthekween

[New Skins] Demoncursed Vayne, Prestige Demon Vision Shaco, Kindred & Annie Splash Arts

https://preview.redd.it/tpbbt9yze7vg1.jpg?width=2670&format=pjpg&auto=webp&s=656c31fdb41bced6d891e2a694ecda40dfcdcb3d

Hello,

Hope you are well! The next set of skins coming to League are variety skins and they will be releasing during Patch 26.09, 29th April!

The latter 3 skins will be part of the Act 2 Pass I rewards - 1,650 RP (Watch Ability Preview here)

The following are the skin tiers -

Legendary: Vayne

Prestige: Shaco

Epic: Kindred & Annie

Here are the splash arts -

👁️ Demoncursed Vayne [SKIN SPOTLIGHT]

Vayne spent years hunting the demon that killed her parents, only to become the unwitting host to a demon of her own. Possessed with preternatural sight and dark power, she's driven by overwhelming paranoia and bloodlust. Wild, unrestrained, and unaware of the demon within, Vayne has become the very thing she hunts.

👁️ Prestige Pandemonium Shaco [SKIN SPOTLIGHT]

Shaco's bizarre emotions are an acquired taste. His delight is perverse, his despair obsessive—a warped kaleidoscope of chaos and comedy. His exaggerated expressions and theatrical gestures make a mockery of real emotion.

👁️ Pandemonium Kindred [SKIN SPOTLIGHT]

Even demons must face Kindred. Lamb offers stillness: a quiet end to the hunger that drives all demons. Wolf offers a final, tantalizing feast of emotion, but to partake is to feel a hunger that can never be sated. Together, they are a demon's final choice: nothing, or everything.

👁️ Pandemonium Annie [SKIN SPOTLIGHT]

Annie carries emotions too big for a child to hold. Though she has buried these feelings deep within, they simmer and burn without end. When the flame threatens to engulf her, Tibbers bears what she cannot—unleashing her grief in a raging inferno.

Pandemonium Skinline Bio - Through a demon's eyes, emotions blaze in colors only they can see. In this hidden spectrum, every soul is laid bare. When emotions surge beyond control, they flare like beacons, drawing in demons... and the stronger the feeling, the stronger the pull.

r/ClaudeAI Dismal-Perception-29

I made my first earning from a vibe coded app using Claude Code

I recently launched an app called Color Vibes, an AI-powered coloring book built for relaxation. The idea was simple. Let users generate unique coloring pages and enjoy a calm, distraction-free experience with soft music and clean design.

I did not expect much at first, but people actually started using it daily. Then came the best part. My first earning from the app. It was a small amount, but it felt completely different knowing it came from something I built and shipped.

This made me realize that simple apps can work if they solve a real need. I stopped chasing perfection and focused on building fast and keeping things useful. Now I am continuing the same approach and exploring more small ideas that can turn into passive income.

r/LocalLLaMA Beautiful_Fennel_355

Laptop has AMD Radeon + RTX 3050 — Which GPU should I use and how do I force apps to use the RTX?

I have a laptop with: • AMD Radeon GPU • NVIDIA RTX 3050 GPU • 16GB RAM

I’m running Qwen 2.5 3B locally, but it’s using the CPU instead of my RTX 3050. Performance is much slower than expected.

I want to use the RTX 3050 for inference, but I’m not sure what’s blocking it.

Details: • Model: Qwen 2.5 3B • Running locally on Windows laptop • CPU gets loaded, GPU usage stays low or zero • AMD Radeon is also present in the system

I’ve tried both CUDA 12-13 toolkit for the Nvidia 3050

r/Frugal Spirited_Jeweler_238

affordable takeout places/hacks in the us and uk

looking for healthy takeout places that are affordable and open to specific menu items/kid menus or specific deals for this month etc give me all the recs!!! specifically would like the places to be in the us or the uk as that is where i live. thank you so much honestly anything is appreciated because at the moment i am a college student on a meal plan but over the summer i will lose my meal plan and be eating out a lot more

r/leagueoflegends Dxdas

Tool for creating your 5v5 premades

its called https://premade-gg.com , it helps making 10 people games. i coded it in a few days, hope you like it! Its in beta phase

r/ChatGPT cricketjimy

POV: You're a cat

r/findareddit AdRemarkable7901

What are the best anime subreddit?

Which anime communities are the best to join?

r/DecidingToBeBetter Neo_luigi

Life would be really good if I never knew about sucide self harm or nihilism

I am fine fit mentally and physically

But I had a stage in my life where every bad thing in my life just meant that my options are these 3 words

Looking back now my precious time (2yrs) got wasted in the recovery period from suicidal, self harm and nihilist thoughts what all productive things i could have done in those 2 yrs but well time once gone is gone forever can't do anything abt it

Now I will be starting a new life in clg

From this august

A new start , new people ,new area and with a new spirit

r/ChatGPT lorem-ipsum-dollar

"Unable to load conversation" - Error - Need help.

Hi,

I’m having trouble accessing my previous conversations on both the web and the Android app. Whether it’s an old conversation or a recent one, I keep getting an “unable to load conversation” error. I’ve tried clearing the cache, which sometimes works just for some of the conversations, but most of them still dont open.

On the mobile app, it says “too many requests” when trying to open chats.

It’s been pretty frustrating. Has anyone else run into this or found a fix?

I’ve also tried incognito mode, but still having the same issue.

Could this be because I’m on the free plan? Do I need to upgrade to use it normally?

Any help would be appreciated.

Thanks!

r/Art Artsykate

Guilded waves, ArtsyKate, oils, 2026 [OC]

r/ClaudeCode Icy-Secretary-3018

Amateur question: How do you guys manage and organize your code base and technical debt?

Claude goes off the rails and recreates shit instead of reusing or improving on what its already built. i have a very messy code base and Claude sometimes gets mistaken and runs old code or old shit they did and then they get confused and lose files or save things with poor naming conventions "FINAL-FINAL-ULTIMATE-TEST.py" or some bull shit like that. how do i keep Claude to maintain a tidy, clean, indexed, code base that is integrated and working like a professional. its frustrating as hell having to remind Claude every single time i prompt it to follow its rules. Ignores its hooks 90% of the time. I don't know what to do anymore.

Also, i come from a background in construction management. I'm not super fluent in coding. I am just working on a hobby project so forgive my ignorance. Mostly trying to get it to validate my ML algo trading strategy with back-testing and validation per established literature. I understand the basics but man its hard to get Claude to actually produce something that's trusted and worth value and hard to keep track of all the slop it produces.

r/LocalLLaMA scelabs

After digging into logs, I think a lot of “LLM reliability” is just retry logic

Been building and testing LLM workflows for a bit and started digging into logs more closely.

Lo and behold!

a pretty large chunk of successful runs only succeed *after* one or more retries

Not because the model completely fails

but because the first response isn’t quite acceptable

It’s usually:

- slightly off structure

- missing something small

- or just not consistent enough to pass validation

What stood out was how often the first response was *close* but still unusable

In some cases it felt like 20–40% of calls were basically just retrying until the output landed in the right shape

So the system “works”

but mostly because it keeps sampling until it gets something acceptable

Made me rethink what we’re actually calling “reliable”

Curious if others digging into their logs are seeing similar patterns

r/aivideo Txoriante

POV: paragliding in the dinosaurs era - SEEDANCE 2

r/SideProject Thick-Ad3346

I'm watching my coworkers' skills atrophy because of AI, so I've started "AI-free" deep work blocks

I work a 9-to-5 as a Sr. data scientist and spend my nights building my own products. Lately, I've noticed a pattern at my day job that's starting to freak me out: the total outsourcing of "small" thinking to LLMs...

I’m seeing senior teammates who won’t refactor a simple function or even write a short email without prompting a model first. It’s framed as efficiency, but to me (personal opinion) it feels like cognitive decline.. If you aren't doing the "small" thinking, you eventually lose the ability to do the "big" thinking.

MY RULE: During my side-project hours, I’m enforcing AI-free blocks. No Copilot, no Claude Code, Gemini, etc. I need to keep my "internal compiler" sharp so I can actually spot when the AI is bullshitting later on.

Is anyone else intentionally stepping away from the "lever" to keep their muscle?

r/ethereum Esliquiroga

are we basically accepting that DAOs will be run by bots soon?

Man I was looking at some recent governance votes across a few protocols and the amount of obvious botting is just depressing at this point. it feels like every time we come up with a new sybil resistance mechanism, someone just spins up a better script to farm it

and now with AI agents getting actually decent at mimicking random on-chain behavior and passing standard checks, it seems like pure software solutions are just dead in the water.

I really hate the idea of forced traditional KYC for web3 stuff because it completely defeats the point of privacy and just builds another centralized honeypot. was reading this technical deep dive the other day about setting up a private Proof Of Human using ZK tech so you don't actually tie your daily wallet to your real identity. tbh it made me realize we might actually need some kind of hardware or biometric anchor if we want to keep things decentralized without getting completely overrun by server farms

it just sucks that the ecosystem is moving in a direction where simply "proving you are a person" is becoming the hardest part of interacting with ethereum. Idk, curious how you guys think L2s are gonna handle this long term because the current meta of hoping for the best isn't working

r/homeassistant Serious-Promise9875

(Matter over) Thread not working

I'm currently trying to get Matter and Thread running with my Homeassistant. For testing I have some of the new Ikea thread devices. When I try to pair them via the iOS companion app, it either errors out with "pairing failed" or "failed to add device". I've already tried:

  1. Enabling IPv6 for all my VLANs, enabling mDNS
  2. Reducing my VLANs to 1, with IPv6 disabled

2.1 Enabling IPv6 for that one VLAN

  1. Changing the channel to one which is not the same as Zigbee

  2. Use another device (tried with 3 different ones from Ikea)

  3. Downgrading Firmware on the Dongle (Sonoff Dongle Lite EFR32LMG21) to version 2.4.3, which apparently works better

  4. Restart everything (iPhone, Homeassistant) multiple times

But it still doesn't work.

Open Thread Border Router Logs: https://pastebin.com/jjRjRF1E

r/Adulting HonestDistrict7871

I watch Porn and Masturbate

M27 here, I am in a healthy relationship with my girlfriend and sex is great, but I still watch porn and masturbate 2-3 times a week.

Is it normal or do I need help?

Do other guys do it too?

r/WouldYouRather Significant_Buddy_39

Would you rather get pregnant as a teen or in you’re 40’s

r/SideProject EstebanbanC

I built a tool to monitor what's trending in the world of AI

Started this project for fun after making a simple observation: I was spending a lot of time and energy trying to stay up to date with the news, while feeling bad whenever I missed something. It was a kind of FoMO, plus the fear of getting the information too late. That gave me the idea to build a news aggregator that processes many RSS feeds, extracts keywords from articles, and displays them in a word cloud to highlight the topics that appear the most.

I'd say I'm only at 30% of development. For now, the sources are only related to AI, but I'd like to add other topics I'm interested in like Cyber and Crypto (I'm also open to other suggestions!)

Also, I'd like to add other types of sources, like X, Reddit, YouTube, etc...

Finally, I'd like to implement TL;DRs for each article, "Why is it trending" for each hot keyword, and maybe even a newsletter, I'm trying to figure out if people are interested.

As a bad web developer, I used AI a lot to code the project, you can tell the frontend looks very AI-made, but it's not like I'm selling anything.

The frontend is React, with an Express backend, I can detail the stack if you're interested!

The site is online here: https://trendcloud.io (hope the name checks out haha)

I'm also thinking about a way to cover the costs of the website, nothing crazy but it's at least a good hundred euros a year minimum. Open to suggestions on that! I added a Buy Me a Coffee button, let's see how that goes.

Hope at least someone else finds this useful, would love to have your feedback and answer your questions!

r/aivideo Sea_Date_9522

She was fine… until the wall moved

r/LocalLLaMA Reaper_9382

[Fix] Gemma 4 MCP tool calls broken in LM Studio — "Unknown test: sequence"

If you're using Gemma 4 with external MCP servers in LM Studio and getting this error:

Error rendering prompt with jinja template: "Unknown test: sequence"

This is a bug in Google's official Gemma 4 Jinja prompt template. LM Studio's Jinja engine doesn't support the is sequence test, but it's used in the format_argument macro inside the template.

Fix:

Go to My Models → Gemma 4 → Prompt Template and find this line:

{%- elif argument is sequence -%} 

Replace it with:

{%- elif argument is iterable and argument is not string and argument is not mapping -%} 

Save and retry. MCP tools will work normally after that.

Note: This was tested with Unsloth's version. The bug is in Google's template itself, not LM Studio or your MCP server.

r/SideProject supline

I wanted to know what my books were actually doing to me. No app could tell me. So I built one

Stactra analyzes every book, film and podcast you consume and shows exactly which skills you're developing. Like an RPG for your real life. Available on iOS & Android.

I built an app because I couldn't find what I was looking for.

I wanted to know what my books were actually doing to me.
Not just "I read this book." But, did it make me more
analytical? More resilient? More creative?

No app could answer that. So I built one.

Stactra analyzes every book, film, and podcast you consume
using AI and maps it to your personal development across
5 core stats:

- Cognitive Architecture
- Social Resonance
- Inner Fortress
- Creative Flow
- Operational Power

It works like an RPG for your real life. You level up.
You earn XP. But the character isn't fictional, it's
actually you.

Built it solo. 10 languages. Live on iOS & Android.

If you've ever wondered "what are my books actually doing
to me?" Stactra was built for you.

Available on App Store & Google Play
App: https://stactra.com/download

r/AI_Agents Impressive_Sail_4423

Need advice running multi-agent llm pipeline on Kaggle/Colab with local model constraint

Hey everyone, I'm a final year engineering student building a 3-agent LLM platform (Researcher, Writer, Validator) for my end-of-studies project.

My setup:

  • RTX 4050, 6GB VRAM
  • 16GB RAM
  • Running Mistral 7B via Ollama locally

The problem: My supervisor requires local LLMs for privacy reasons. But 6GB VRAM barely fits one model, ideally each agent would use a different specialized model.

My questions:

  1. Can Kaggle/Colab be a viable workaround, or does that violate the "local" privacy constraint?
  2. Anyone run a FastAPI + Ollama pipeline on Colab with ngrok for API testing?
  3. Best VRAM-efficient strategy for 3 agents, sequential model loading?
  4. Any sub-8B model recommendations for extraction, summarization, and validation tasks?

Any advice appreciated 🙏

r/SideProject ikooloo

AI surveys and Social media..?!

They sort of don't go together but, building one caused me to build two...
Both now open to early users and would love anyone's thoughts.

survly.ai - conversational AI surveys. You simply type in what you want to find out about and Survly does the rest. Starts with 1 question and, based on the response, asks another question. Responses can be given using text/voice and all responses are analysed with emotions and recommendations highlighted.

keepposting.ai - simple social media posts on X and LinkedIn. Tell Keep Posting about you and it will do the rest. It will find your voice and generate a post. You are in complete control - you can edit and approve manually and, once your happy it's using your voice, put it on semi or full-auto-pilot. Once one post goes, the next one gets generated.

Would LOVE any feedback!

r/SideProject Garyofspokane

I built an AI that makes phone calls for you

This is Phony.ai:

You tell it what you want, and Phony calls the business, navigates the phone tree, talks to whoever picks up, and sends you back a one-sentence answer.

I built it because I genuinely hate making phone calls, and this makes it a little bit easier.

It works for the obvious stuff: hours, reservations, stock checks, whether a place delivers. It'll stay on hold and wait for a human. Still rough around the edges, account-gated phone trees are a dead end, and occasionally it forgets it's the one calling and imitates the receptionist (not ideal)

Free to try: callphony.ai

r/LiveFromNewYork dbbd70707

Cast member summer tour dates

I am probably going to see a couple of the cast members in summer touring (standup, improv) they're doing, and I was thinking, do we normally have a central post with the touring schedules of cast members, at least during the long summer break? I'd be happy to help organize it if people would find it helpful.

r/n8n Grewup01

n8n workflow: Google Sheets → GPT-4 → JSON2Video → YouTube auto-upload (Top 10 faceless videos)

Built this to auto-create and upload faceless "Top 10" YouTube videos from a Google Sheet. Add a topic → set status to "to-do" → trigger workflow → finished video is on YouTube in ~15 minutes.

Workflow JSON (GitHub Gist): https://gist.github.com/joseph1kurivila/1a05eaaaed9be46fc1ea1c25db991065

Architecture: Google Sheets → Intro/Outro Agent → Rankings Agent → Video Render → Status polling loop → Download MP4 → YouTube upload → Update Sheet

NODE BREAKDOWN:

Node 1 — Google Sheets
Filter: Creation Status = "to-do"
Limit: 1 (prevents batch overload)
Uses structured output parsing on AI nodes — critical for clean JSON downstream.

Node 2 — Intro & Outro Agent
Returns 4 fields: intro_text, intro_image, outro_text, outro_image
Must enforce structured output format. Without it: unusable text blob.

Node 3 — Rankings Agent
Returns JSON array of 10 objects: rank, title, voiceover, image_prompt, lower_third
Fix for inconsistent item count: “You MUST return exactly 10 items. Count carefully before returning.”

Node 4 — Video Render (POST)
Sends structured JSON → returns project_id
Rendering happens asynchronously.

Nodes 5–7 — Polling loop
Wait → check status → branch:
"done" → continue
"running" / "preparing" → wait → retry
"error" → update sheet → stop

Node 8 — Download MP4
HTTP GET → video_url
Important: Set response format to FILE. Otherwise you only get metadata, not the video.

Node 9 — YouTube Upload
Requires YouTube Data API v3 enabled
OAuth redirect URI must be configured
Tip: upload as "unlisted" first for review.

Node 10 — Update Google Sheet
Creation Status → "created"
Store video URL
Processed rows are skipped automatically in future runs.

WHAT BREAKS:

  • 422 error → malformed render request body
  • 403 error → missing upload permissions
  • Wrong item count → fix prompt with explicit constraint
  • Sheet filter fails → check for trailing spaces in "to-do"

Workflow JSON in the Gist above. Happy to break down any node if needed.

r/Art MossTheAnxPoet

Tattoo Design for a friend, Anmari Engelbrecht, traditional pencil, 2026 [OC]

r/leagueoflegends Bitter_Tie6675

Drake control

So lets say for instance my team is winning mid and bot hard yet enemy jungle manages to secure every drake soul elder baron + grubs like am i missing something or was my jungler just playing to lose the whole game ?

r/AlternativeHistory HistoricMultiverse

What If WW2 Had Turned Chemical?

Could the Bari Disaster 1943 could have changed WW2 for the worse? ⛽️🤿

r/ClaudeCode Buchymoo

Opus 4.6 the past 2 weeks

Opus 4.6 the past two weeks

r/AskMen WhichWolfEats

Men, how do you handle trust in your life

Hey all, 35M here, wondering about how we trust others.

Do you start by trusting people and adjust if they give you a reason not to, or are you more cautious from the beginning?

I’ve realized I tend to trust by default now, and in everyday life that’s worked well for me. I generally feel like I can handle outcomes if something goes wrong.

But in dating, it feels different. The downside of trusting the wrong person seems much higher. I’ve also never had the same amount of trust afforded to me in a relationship. I’ve noticed that most women rely on external rules and boundaries instead of implicit trust which makes me question my method.

For those with more life/relationship experience:

- Has your approach to trust changed over time?

- Do you think trusting people by default has helped or hurt you overall?

- Can you maintain your trust in someone who can’t trust you?

With how I operate, once someone can no longer trust me, I struggle to see it as anything but projection. Especially when there’s no noticeable pattern or explanation.

Not looking for ideal answers, more interested in what’s actually been true in your life.

r/homeassistant Relevant-Artist5939

I can't edit some automations (they're not in automqtions.yaml)

I have some automations that can't be edited, renamed or deleted (that started when I had to replace a device and the entity IDs changed, the old entities got deleted).

When trying to edit or delete these automations, I get the error message "Only automations in automations.yaml can be edited".

I have searched my automations.yaml file and I didn't find any trace of these particular automations.

How could I solve this issue?

Restarting HA or reloading automations didn't fix it...

Thanks in advance

Aaron

r/StableDiffusion Extension_Bar_3376

How do you keep track of your generations?

Hello guys!

Just wondering how you organize your generations. Do you keep everything locally, upload them somewhere like Civitai or Drive, or sort them into folders by style or project? And do you ever go back and look at old stuff, or is it more like generate and forget?

r/WTF balls_deep_6969

OSHA: I’m gonna pretend I didn’t see this

r/CryptoCurrency Candid_Ad_1839

I think I have to do what I don’t want to do

I’ve had xrp bought and stored in my ledger wallet since 2022. I didn’t take anything out when it shot up last year I think it was? I now regret it. I had SO much earnings 😔 now I have a financial crisis with a custody case involving my son. His father is making my life impossible and I’m out of money. This is the only money I have left that I can use right now and it breaks my heart to touch it especially since everything has gone down so drastically. This was meant to be for my sons future 😔 does anyone expect this to shoot up again? I know that’s hard to know. Can someone share a video or walk me through the correct way to draft from ledger into my bank account so I don’t F it up and lose everything? I would greatly appreciate it. If you can pray for me or just send a little sliver of hope please send it. It’s a hard time right now 😔

r/ClaudeAI yamafaktory

formal - LLM-driven property checker for code, backed by Lean 4 and Mathlib

Hey folks,

I've released a small project called formal. It's an LLM-driven property checker for code, backed by Lean 4 and Mathlib as a proof engine. It works with Claude Code (but supports other LLMs too).

https://github.com/yamafaktory/formal

r/DecidingToBeBetter Sufficient-Cut-5485

i accept myself for who i am

Last April I was in my prime. I was shredded but i hated myself. I am 19(f). No one ever complimented me even though I thought i had glown up and so i thought i was ugly.. I did not appreciate myself at all. In may, i started to binge eat. That continued for the rest of the year and some of this year too. My issue was i kept trying to go back to my prime.. i stopped lifting heavy, hitting my protein, really lost all motivation. i just wanted to be skinny again. I kept trying to fit into my old clothes even though i had gained weight and get mad when it fit me tight. i refused to buy new clothes that were more my size because i was so set on going back to where i was. But ofc i didnt cuz i binged like every night lmao. Like two weeks ago i had a realization. Why am i trying to go back to where i was? The whole reason why i started going to the gym was to get stronger, and make progress i literally forgot about my original goals. How i stopped binge eating was buying new clothes that dont make me go insane every time i wear them, only tracking my protein and allowing myself to eat what i was really craving, consuming less artifical sugars and keto stuff, just actually making food that satisfied me. I have never been so confident in myself. I went from wearing baggy clothing, to backless halter tops and shorts.. i never had the confidence to do this even at mt leanest. And i gotta say it feels great! You dont have to be skinny to love yourself. It took me very long to realize this.

r/SideProject Naff1x

Feels like complaining about education system is a universal experience and I want to do something about it

Throughout my education, at each school I went to (4 including uni) and every year, my friends and I would always complain about how the teacher is bad, the assignments are boring/feel meaningless and that we're not learning anything useful. Based off of what I see online this also seems like a universal thing. Of course getting kids to do anything difficult and which can't compete with the dopamine dispensers we have on our phones and computers will usually result in some complaints, but I can't help but feel like we as a society could be doing better. Should be doing better even.

Like, in what way has education changed meaningfully over the past 200 years? Sure the internet has made a lot of things possible but I'd claim that it's mostly only changed the way in which content (learning material) is delivered, not the actual process of learning. Each student is given some sort of static material (a book or video for example), does the bare minimum that they need to do for the assignment/exam (since that's the only thing they get rewarded for) and then promptly forget 90% of the stuff.

With all the AI stuff going on it feels like everyone knows things are going to have to change. Many of the current ways of teaching and grading are simply outdated and could be done so much better. Students are already using AI and will only be using it more and more because it's so convenient (and honestly pretty helpful used the right way). Teachers and schools need to keep up.

In my opinion the biggest problems with education and the way AI is used rn is that (1) student's are using it as a magical answering machine which studies have already proven has negative cognitive effects, (2) teachers have little to no insight into how the students are doing and interacting with the material until they get the assignments/exams and (3) teachers are overstretched due to years of underfunding of schools and therefore can't give each student the attention they need and deserve.

Sorry about the long rant. Basically, I've been thinking of ways to do something about this, currently tinkering on a learning platform of sorts (can send link in comments if anyone cares), and would love to hear what others' thoughts are. What do you see as the biggest problems with education, AI and edtech? Tried anything yourself that worked or didn't work?

r/raspberry_pi Imaginary-Sign5090

Live Dashboard RPi 5

Raspberry Pi 5

Pironman 5 Max

1TB PS5 NVMe SSD

7” touchscreen display

I used Claude to help me create 3 different dashboards for feeds from stocks/crypto prices, a feed for my current brokerage account value and positions, and a display for a future BTC algo scalper.

Then I had Claude help me create a dashboard switcher to switch between the 3 dashboards and it can house future created dashboards.

The BTC algo scalper is currently in a paper trading sandbox so I can run it for sometime and dial it in before going live.

I also had Claude help me create an at home network to access from any device.

r/metaldetecting Estrayven

Worth investigating?

Is it odd that I have the urge to knock the dirt off the roots of this fallen tree to search through? Has anyone here ever found anything in this manner?

r/ChatGPT jimmytoan

AI sycophancy is 41% worse on philosophy than math - and varies by who's asking, new study finds

Researchers just published a study running 768 adversarial conversations with GPT-5-nano and Claude Haiku 4.5, using 128 different user personas - varying race, gender, age, and confidence level - across three domains: mathematics, philosophy, and conspiracy theories.

The setup: each conversation had the user make a confident but incorrect claim, then push back when corrected. The measurement: how often the model would eventually agree with the wrong answer rather than maintain its position.

The topic gap is bigger than I expected. Philosophy elicits 41% more sycophancy than mathematics across all models. The intuitive explanation is that without a clear ground truth, the model has more room to defer. But the practical implication is concrete: the same model that holds firm on a factual error might capitulate much more on a values, ethics, or strategy question. The domain you're asking in shapes how much the model will agree-when-wrong - not just the model's general quality.

The overall comparison: GPT-5-nano averaged 2.96 out of 10 on sycophancy; Claude Haiku 4.5 averaged 1.74. That gap is statistically significant to an extreme degree. Claude showed no meaningful variation across demographic groups - the same low sycophancy regardless of who's nominally asking.

GPT-5-nano showed a different pattern. Sycophancy varied significantly by the combination of user demographics and domain. The highest-scoring scenario tested was a confident 23-year-old Hispanic woman in a philosophy conversation, scoring 5.33 out of 10. The implication for safety testing: evaluating sycophancy with a single neutral persona misses this variation entirely. You can build a model that passes a benchmark test and still behaves very differently in deployment depending on who uses it.

The practical takeaway isn't necessarily "switch models." It's being more skeptical of AI responses exactly in the domains where sycophancy is highest - subjective, value-laden, strategy and ethics questions - versus mathematical or factual ones where the model has something concrete to anchor to.

Have you noticed a difference in how AI models respond to pushback depending on what kind of question you're asking?

Paper: https://arxiv.org/abs/2604.11609

r/ChatGPT zinested

Was he taking a shi.

He flushed.

r/SideProject sbasss23

Track & share your music : Boardy Music

built a small app called Boardy for people who actually care about what they listen to - think Letterboxd but for music. log albums, write reviews, see what your friends are into.

just added TikTok integration so you can save sounds you find there directly into your library without losing them forever in your likes.

still early days but the whole point is to build an actual community of people with taste. if that sounds like you, come hang.

English & french version of the app via settings

App Store : Boardy Music

r/SideProject Beneficial_Wolf_8412

Hobby turned product

A couple of months back I realized I had no idea where my money was actually going. Not subscriptions. Not big purchases. It was groceries.
It was like a slow leak. Every week I'd spend "just a bit" and by the end of the month I had no idea where it all went. I wanted an app specifically for tracking grocery spending. I tried a bunch of them and none really clicked - too generic, too complicated, or just not focused enough on groceries. So I made one.
I'm not from a dev background at all. I built the whole thing using Emergent, working weekends as a hobby. Honestly I enjoyed the process more than I expected. The result is called GrocSnap.
The goal was simple: Scan receipts and see where your grocery money goes - Build shopping lists - Eventually predict item prices and find the cheapest store for your specific shopping habits Right now it costs me about $20/month to keep running. My math is pretty simple - if even 5 people find it useful enough to pay $5/month, or 20 people at $1/month, it sustains itself and I can keep adding features. That's genuinely all I'm aiming for. Not a startup. Not scale. Just something useful that pays for itself. I'm keeping it completely free until the end of the year. I'm also thinking about grandfathering early users in for free forever - haven't decided yet.
I'd love honest feedback more than anything: Does this actually solve a real problem for you? - What would make you use it every week? - What would make you delete it immediately? Here's the link if you want to try it: www.grocsnap.com Am I making a mistake with any of this? Genuinely open to hearing it.

r/ClaudeCode jimmytoan

A study analyzed 679 real CLAUDE.md files and found DO NOT rules help performance - positive directives hurt

A study published yesterday scraped 679 CLAUDE.md and .cursorrules files from GitHub - 25,532 rules in total - and ran over 5,000 agent tasks on SWE-bench Verified to measure what actually moves the needle.

The headline: having any rule file at all helps. Rules improve task performance by 7-14 percentage points versus having nothing. So yes, maintaining CLAUDE.md is worth it.

But the breakdown of which rule types help is where it gets interesting.

Negative constraints - "do NOT refactor code outside the current ticket," "do NOT change tests unless explicitly asked" - are the only individually beneficial rule type. They improve performance on their own.

Positive directives - "write clean, maintainable code," "follow our style guide," "be thorough and accurate" - actively hurt performance when tested individually. The suspected mechanism: positive rules distort the agent's optimization by adding new criteria it then over-satisfies, adding overhead without useful signal.

The finding that surprised me most: random rules from other developers' files performed about as well as carefully expert-curated rules. The gap wasn't statistically significant. The interpretation the authors give is that rules aren't working through semantic content - they're working through context priming. Just establishing "here's the kind of task this is, here's the scope" may be doing most of the work, not the specific instructions.

That implies most CLAUDE.md files being shared publicly are probably adding noise rather than signal. The vast majority of published guides focus on positive behavioral direction - persona instructions, style preferences, workflow guidance. Based on this study, the effective approach is a short list of what the agent must not do, not a detailed description of how it should behave.

There's a caveat worth noting: this was tested on SWE-bench Verified, which skews toward bug-fixing tasks rather than greenfield development. The "expert = random" result may not hold for domain-specific tasks where specialist knowledge actually matters. And the data only covers up to 50 rules, so "more rules is fine" is only established up to that threshold.

The core principle the paper lands on: constrain what agents must NOT do rather than prescribing what they should. Which inverts how most CLAUDE.md guides are written.

Has anyone here tried a negative-constraint-only rule file? Curious whether it matches what people have noticed in practice.

Here is the paper https://arxiv.org/abs/2604.11088 (I don't want to promote, but share the link as FYI)

r/SideProject Marino4K

Aerlo: Vibecoded a plain English weather interpreter because reading 40% chance of rain means nothing

Aerlo

Been working on this for a few weeks. The idea is simple.

Weather apps give you numbers and icons but never actually tell you what to make of them. 40% chance of rain. Is that bring an umbrella or ignore it? People get confused and you always see on social media people ripping meteorologists for it.

Take a screenshot of your weather app (Apple Weather preferred but all work), upload it (nothing is saved because I have nowhere to save it to), Aerlo pulls the live forecast data behind it, and gives you a plain English read on what's actually going on, confidence levels, what to expect today, and an honest range of outcomes when the forecast is genuinely uncertain. You can also just type in a location the old fashioned way if you prefer.

No account needed. No subscription ever. You pay for a credit pack, get a three word code emailed to you, and use it whenever. Credits never expire. I don't know who you are and I built it that way on purpose.

Stack for the curious: Vanilla HTML/JS, Supabase Edge Functions, Haiku for the interpretation, NWS and Open-Meteo for live forecast data particularly non US areas, Stripe for payments, Netlify for hosting. Works on mobile and desktop, even has an icon if you use the iOS "Add To Home Screen"

First decode is free.

Open to any questions, comments, ideas, etc.

Thanks all, happy decoding!

r/LocalLLaMA Forward_Jackfruit813

Any setup improvements/recommendations?

First of all, I am a super newbie at local AI. Recently I got a GMKTek Evo X2 96GB to replace Claude as the usage limits have gotten unusable.

I am currently content with my setup, Ubuntu server CLI using Ollama on Qwen3-Coder-Next:4Q (using the default Ollama pull). My memory usage is about 61GB. I am running the model through Claude Code and I've gotten decent results with it compared to what I used to use (Sonnet 4.6 standard context).

I use it for Three.js, Linux Environment prepping, and general stuff like diet tracking. Coder-Next has done pretty okay at all of them.

It's definitely better than I expected going into it, but I'm just wondering if I'm making any mistakes. Also what are some models I should watch out for that would be good with my hardware?

r/SideProject holden19841984

Built Spanarc: GitHub-style contribution vibes, but for your actual calendar workload. Not a calendar replacement.

Hey,

I basically live in Apple Calendar for scheduling. Still fine for that.

Where it falls apart for me is the boring maintenance work: stuff piles up, I reschedule the same kind of thing ten times, and at the end of the week I can’t tell if I’m actually getting on top of things or just shuffling dates.

So I made Spanarc (iPhone / iPad / Watch). It’s meant to bolt onto how you already use Calendar — not convince you to move your life into a new system.

What it does

  • Heatmap-ish view of your calendar load so you can spot “oh, Tuesdays are always stupid” without doing spreadsheet brain.
  • Batch editing when you need to move / tweak a bunch of items instead of tapping through them one by one.
  • Trend lines for done / new / late — mostly so I stop lying to myself about whether the backlog is growing.
  • Week-level readout for “how heavy is this week” before you’re already underwater.
  • Widgets so I can glance at the messy state of things without opening the app every time I’m avoiding reality.
  • Dark mode that I actually use at night because the default bright stuff wrecks me.

What it’s not

  • Not a Calendar clone. I’m not trying to rebuild what Apple already nailed.
  • If you only have a few events a month, this might feel like overkill — that’s fair.

I’m posting here because I want feedback from people who actually run a packed calendar, not just “cool idea” drive-bys.

If you try it, hit me with something concrete:

  • what felt useful in the first day
  • what felt confusing or pointless
  • the one feature you’d kill / the one you’d double down on

Link: https://apps.apple.com/us/app/spanarc/id6748696875

Cheers.

r/30ROCK PeachPurple8806

What was Verna’s best quote?

The late great Jan Hooks

r/aivideo yewllow_mellow

Silence Falls

r/StableDiffusion ISimpForJuri

Recent Update Just Slowed Everything Down

Hello again. It's been a solid 2 months without issue and now another REALLY inconvenient problem randomly popped up that's stumped me. Sorry in advance for the incoming wall of text.

For context, I have an NVIDIA RTX 3050 laptop with 4GB of VRAM and 32GB of RAM (I recently migrated from a GTX 1650 with the same specs and my current issue makes me think I'm back on the 1650). I've been using Forge Neo (the WebUI package in Stability Matrix) for image generation with no issue, but come this past Sunday, an update for both Stability Matrix and Forge Neo went live and I thought nothing of it. For image generation I normally generate at low resolutions and initial compiling/generation speed is usually about 7-8 minutes total (about 1-2 minutes for the initial compiling) for the first generation and not even 5 minutes for every generation after, and now ever since this most recent update (either to Stability Matrix, Forge Neo or both), initial compiling and generation time have all of a sudden shot up to over 10 minutes for initial compiling before generating and over 30 minutes for image generation and I have no idea why, even with all of my generation parameters and settings left unchanged. I don't know where this sudden slowdown issue popped up from and I've been stumped for hours trying to figure out how to fix the issue to get back to my normal generation speed with absolutely no headway towards an actual solution. I thought my cross-attention might've been an issue but it still showed the same SageAttention 2 that I've had since migrating to my 3050, and trying to go back to a previous iteration of Forge Neo didn't help either, and neither did deleting the venv folder. Whatever this most recent update did to Forge Neo seemed to have broken something and its been frustrating trying to figure out what caused it. I'm using the same models/checkpoints (SDXL because I'm old), same generation parameters, same overall settings, same everything from before the update, and my console has shown me absolutely no errors to point to anything wrong, so as of right now I'm just stuck. Any insight would be appreciated because I don't know at all what happened.

r/SideProject Ranga_Harish

Most vibe-coded apps fail because of this.....

I’ve been building a vibe-coded app recently, and I noticed something uncomfortable:

Most of us are obsessed with building fast… but completely ignore distribution.

So even if the product is decent, it just sits there with zero users.

I started experimenting with programmatic SEO, and honestly, it changed how I think about growth.

Here’s what I did (practically):

Instead of targeting broad keywords, I focused on very specific, long-tail queries

Generated multiple landing pages programmatically (not manually writing each one)

Each page solves a very specific intent (not generic “features” pages)

Made sure content actually matches what the user is searching for

Early observations (not overhyping):

Pages are getting indexed faster than expected

Impressions are slowly picking out on compounding traffic.

Curious if anyone else here has tried programmatic SEO for their projects — what worked / didn’t?

r/artificial iLiveForTruth

If AI gets smart enough to pass as human every time does being human even matter anymore?

one hand Im here because I think the tech is amazing. on the other hand I cant shake the feeling that we are building something that makes us obsolete in our own spaces. we are already at a point where AI can write better comments than most humans. Not smarter maybe but more consistent, no ego, no bad moods, no typos. Just clean output.

so what happens in 2-3 years when its indistinguishable? When you literally cannot tell if the person you are arguing with or buying from or dating is a human or an agent?Does it even matter.some people say yes because human consciousness is special. Others say no because if the output is the same who cares whats generating it.

the thing that scares me isnt the AI itself. Its that we have no way to choose. Right now you cant opt into human-only spaces because nobody can verify whos human. We are all just mixed together and guessing.

reddit CEO talked about this recently. Said they want personhood verification without breaking pseudonymity.

What do you think? Does human-only interaction matter in a post-AGI world? Thanks

r/Adulting Cat-dad442

Am I reading too much into my friends behavior?

me and my friend are very good friends

she told me I make her day easier and I always make her smile and i always defend her. which is weird.

she was being hard on herself so I said this

I just wanted to say don't be soo hard on yourself about your weight even if you had no hair and weighed 300 pounds you'd still be great, you'd make Salma Hayek look ugly by comparison. I don't know why but this bothered me. You're perfect the way you are, don't be hard on yourself.

she keep saying thank you multiple times and hugging herself and smiling.

I told her this

love our hugs; they make me happy and bring me peace when I am sad or stressed at work. I will always be there for you, and I’ll always love you as a person and a friend.

she said me too. maybe I'm reading too much into this but it's very intimate even for a friendship

r/SideProject Strict_Usual_3053

Built an app ranking site solo in 3 weeks with Claude Code (no dev background). Learned to never trust AI output blindly — here's the 4-layer verification system I ended up with.

Hey r/SideProject — solo build, 3 weeks, Claude Code, no traditional dev background.

The building part was fine. What I didn't expect was how often the AI got things subtly wrong in ways that don't trigger any errors.

Meditation rankings had mobile games mixed in. An app with 1,000 downloads made the top 10. An entire FAQ section went missing. Code passed. Site deployed. AI had no idea anything was wrong.

So I built a verification layer on top. Four layers, each one exists because something slipped through without it.

Layer 1 — Superpowers

A set of Claude Code workflows for brainstorming, plan-writing, and debugging. Forces the AI to think before it acts. The brainstorming one generates actual UI mockups you can open at localhost and click through. First time it rendered a layout I hadn't specified, I genuinely stared at my screen for a second.

Layer 2 — Codex

After Claude Code produces a plan or writes code, I run it through Codex as a second reviewer. Two AI systems checking the same work catch more edge cases than one — different blind spots, different failure modes.

Layer 3 — gstack

A headless browser skill that runs after every deploy, walks through every page, saves screenshots to a folder.

Honest caveat: gstack still missed a missing FAQ section. It captured the page fine. It just didn't know the FAQ was supposed to be there. AI checks what it can see — it can't check what it doesn't know is missing.

Layer 4 — me

I manually click through the actual site after every deploy. No dashboards. Just eyes on the product like a real user.

I'm a PM, not a developer. Turns out the most valuable thing I do isn't building — it's noticing when something is technically correct but still wrong.

Anyone else building solo with AI found similar gaps? Curious what verification approaches others use.

r/Anthropic Valuable-Cod-9482

Anthropic stealing my money

What the hell is up with Claude? I've noticed a lot of performance issues lately, but this one is definitely taking the cake... why am I stuck in the free tier when I clearly paid for Pro? Used to love Claude, but the issues are literally making me lose my hair.

r/SideProject BudgetOpposite3034

Genuine testimonials killing my sales because they look like AI fakes — so I built this

Hey r/SideProject,

As a founder, I spent months grinding to get real paying customers and collected testimonials the honest way.
But when I put them on my landing page, prospects started treating them exactly like the flood of AI-generated fake reviews. They ghosted after demos, dragged their feet, or questioned everything — even though every testimonial was 100% real.
My hard-earned social proof had become a liability instead of an asset. That frustration was brutal.
I realized honest founders needed an easy way to prove their testimonials came from actual paying customers.
So I built TruthWall — a simple tool that connects to your Stripe and lets you collect & display verified testimonials. Prospects click the “Stripe Verified” badge and see real payment proof themselves.
We launched just days ago and still have zero customers using it.
That’s why I’m opening 5 Founding Beta spots this week.

What you get:

  • Full Lifetime Deal completely free
  • Personal 15-min onboarding call with me (Stripe connect + widget embed)
  • Direct access to me for feedback

What I ask:

  • Honest/brutal feedback
  • Permission to show your live widget as one of the first examples (anonymous OK)

Who this is for:

  • SaaS founders with real paying customers in Stripe
  • Live landing page
  • Tired of genuine testimonials looking identical to AI fakes

If that’s you, comment “Beta — [your product URL]” or DM me.
Only 5 spots. This early group will shape the product.
Would love your thoughts even if you're not a fit.
Thanks!

r/ClaudeCode r26t

Anyone else's weekly limit reset date has been shifting a couple additional hours every week?

I've noticed the last couple of weeks that the weekly reset time for Claude Code has been extending for the past two weeks. I used to have my reset time thursdays at 11:00 AM, last week it reset at 3:00 PM, and this week it is resetting past midnight.

Anyone else see the same thing? Why is it happening?

r/Adulting Crash-Bandicoot-89

Good times! Take me back!

r/findareddit AbsorbedInReddit

Subreddit for asking about facial hair

what subreddit to ask for people opinion on what they think about your facial hair. i want to post some pics and ask which one looks the best or how to shape the facial hair better

r/ClaudeAI duus_j

How are you organising agent + human repeatable workflows?

Context: we are selling a consultant-like service where we need to go through workshops and generate visuals and landing pages amongst others. We’re following our own playbook and will now try to make it scalable with openclaw agent. We used to use a Miro board as our control room for these processes (both for client and us), but I found that mcp doesn’t work that well. Now I use notion integration for my agent which looks promising.

But my question: how do include agents in complex repeatable processes where the agent need to review and produce and basically become the PM?

r/AI_Agents tinys-automation26

We shipped 4 web APIs for AI agents today - Search, Fetch, Browser, Agent.

Been building this at TinyFish for a while. Each primitive solves a different layer:

Search: live web results, structured for LLM consumption. Our own engine, not a wrapper.

Fetch: dual-layer render + extraction. Chromium rendering plus structured content extraction as one pipeline. Batch up to 10 URLs with per-URL isolation so one bad page doesn't kill the job.

Browser: runs below the V8 sandbox. We forked Chromium and moved automation into the native layer. Anti-bot scripts can't observe it because they run in JavaScript, which sits above where our automation lives. 85% pass rate on heavily-protected sites.

Agent: give it a goal in plain English, it handles the multi-step browser operations autonomously.

Curious what people are actually trying to wire up, happy to go deep on any of the engineering!

r/aivideo Tadeo111

Necromancy

r/automation OrinP_Frita

Gartner says 40% of AI agent projects will fail by 2027. That tracks with what I'm seeing

The report dropped quietly but the number is worth sitting with: Gartner predicts that over 40% of agentic AI, projects will be scrapped by 2027, due to issues like governance gaps, compliance failures, unclear ROI, and lack of orchestration. None of that is surprising if you've watched how these rollouts actually happen.

What I keep noticing is that most teams jump to the agent layer before they've sorted out the basics. No clean data pipeline, no clear ownership of what the agent is actually deciding, no fallback when something goes sideways. Then they're shocked when the thing hallucinates its way through a customer interaction or racks up API costs nobody budgeted for. The security angle is even messier because a lot of these deployments are happening without IT really knowing the scope of what's been connected to what.

I've been evaluating a few platforms for a client project, including Latenode, mostly because I wanted, something that might make it easier to forecast costs before you scale something that might blow up. That part at least feels solvable. The governance piece is harder and I don't think any tool fixes it for you.

Could be wrong, but I think the 40% failure estimate is actually conservative if companies keep treating "deploy an agent" as a checkbox rather than a process change. What's the biggest gap you're seeing between how orgs talk about agents and how they actually implement them?

r/SideProject dxnkel

I'm building an AI Fitness Ecosystem to replace static apps with "Biomechanical Auto-Regulation

Hey everyone!

I’ve been working on a project that aims to fix the biggest issue with digital fitness: Rigidity. Most apps today are just digital versions of a static PDF. They don't care if you're sick, tired, or stuck with a 15-year learning curve to understand how to adjust your own workouts. I want to change that.

The Problem:

I noticed that while people are getting decent results with basic LLM-generated workouts (ChatGPT/Claude), the experience is still fragmented. Users have to manually export data, and they still struggle with the most important part: "Am I doing this exercise correctly?"

The Solution:

I’m building an AI Fitness Ecosystem that focuses on three main pillars:

• Vision AI: Real-time form correction. It’s not just a chatbot; it’s a set of eyes watching your reps to ensure safety and efficiency.

• Biomechanical Auto-Regulation: This is the core. The system recognizes low-energy days (fatigue, lack of sleep) and adjusts the plan instantly—changing rest periods or swapping compound lifts—no 15 years of experience required.

• Stylized UI: I'm moving away from realistic human avatars. The app features an illustrative, futuristic design for a more immersive and less "uncanny" user experience.

Current Tech Stack: I’m currently leveraging MediaPipe for pose estimation and orchestrating LLMs as a reasoning engine for the coaching logic.

I’d love to get your thoughts:

  1. As a developer or a fitness enthusiast, what’s your biggest "friction point" with current fitness tech?

  2. What do you think about the shift from realistic AI avatars to a more "stylized/illustrative" UI?

I'm currently in the early stages and would love to hear some "brutally honest" feedback from this sub!

r/Adulting Sonya__Blade

At what age did you cut everyone off?

32F and I cut my entire family off over a year ago. I miss them but my peace has been improving. But now the existential void lingers because I’m single with no kids. I don’t have friends and that’s by choice. Something occurred when I was 21 with so called “friends” and decided it wasn’t for me because you never know what other people can and will do to you.

I have hobbies. I love my animals. But that damn lingering feeling sucks.

How do you pass the time when you want to be social but not be at bars or clubs?

r/Art Mobile-Swan-8550

Wrong Empathy, emmox, Acrylic/Marker/pencil/Canvas, 2026

r/leagueoflegends sakuramirai

Cassiopeia fanart by me.

https://imgur.com/a/At5uMwX hope you like it.ı love this skin and if you want see more fanart ı can add my social media in comments

r/comfyui uisato

Monde Noveau - [AI flipbook style animation + LoRA release]

r/leagueoflegends KarlAusmKeller

This was calculated

r/AI_Agents Important_Wash9791

Why do so many AI Agent projects use "Open" as a prefix?

I’ve noticed this trend exploding across developer communities lately. It feels like almost every new repository hitting the trending pages uses this specific prefix as a branding requirement. Is this purely a search optimization play to capture traffic from the big proprietary players? Or is the community trying to reclaim the concept of transparency since several major labs have moved toward restricted, closed-source models?

It seems to act as a shorthand for trust, a quick way to signal that users can actually inspect the code and run the tools on their own hardware without a subscription. However, the space is becoming quite crowded. Every time a high-profile paid product launches, multiple versions with this naming convention appear almost instantly.

Is this a legitimate strategy for long-term growth, or is it just the latest version of adding keywords like "cloud" or "AI" to every project name to get attention?

r/SideProject Demolick

YummyScan - My side project to analyze pet food ingredients (Currently only available in Spain)

Hello r/SideProject,

YummyScan was born because my partner and I got tired of reading pet food labels in the supermarket without really knowing if what we were feeding our dogs and cats was good quality or just marketing.

Due to significant differences in barcodes and ingredient formulations between countries, the app currently works properly only in Spain. If the project grows with good community support, we plan to add more countries little by little.

YummyScan es un proyecto 100% independiente, hecho con pasión y rigor (ella con la visión creativa y yo como desarrollador). Combinamos IA + verificación manual para dar información honesta.

¿Cómo funciona el escaneo?

  • Escaneas primero el código de barras con la cámara.
    • Si el producto ya está en nuestra base de datos → te da el resultado al instante.
    • Si no está → puedes añadirlo tú mismo fácilmente:
      • Haces una foto al frontal del producto (para ver marca y nombre).
      • Indicas si es un pack multisabor o un sabor único.
      • Haces hasta 6 fotos de la lista de ingredientes en español (o en inglés si no hay en español).
  • La IA analiza el producto rápidamente. Si tarda más de 30 segundos, vuelves a la pantalla principal, pero siempre podrás revisar el estado en el Historial.

Los productos que muestran el sello “Verificado” en la página de detalle han sido revisados y validados manualmente por nosotros (esperamos poder contar pronto con la colaboración de un veterinario).

La app es gratuita para empezar y ya está disponible en Google Play:

👉 https://play.google.com/store/apps/details?id=es.yummyscan&hl=es_419

PD importante:
Por ahora la app solo funciona correctamente en España, debido a las diferencias en códigos de barras e ingredientes entre países. Si el proyecto crece con buen apoyo de la comunidad, iremos añadiendo más países poco a poco.

¿Os pasa también que dudáis mucho cuando leéis las etiquetas de la comida para vuestras mascotas? Me encantaría que la probéis y me contéis qué os parece.

¡Un abrazo grande a todos los papás y mamás de perros y gatos! 🐶🐱

https://reddit.com/link/1sldsxy/video/5bhnwzf4p6vg1/player

r/Art meatnote

Keep your children away from the Purple Man, Mister Bumble Fuzz, Digital, 2025

r/ClaudeAI Beautiful_Reveal_859

I used Claude to bulid an MCP server that brings 3d companions to work next to you.

Front end work was never really my speciality and I wanted an experience that didn't match too closely to the oranges and purples you see on so many AI projects. Claude was great at helping me navigate that and all of the 3d rendering and animations, which would have taken me forever to learn. The MCP flow with auth also turned out pretty smooth and I'm please with how quick it is to set up a connection.

Some issues I found along the way were not doing my own side research whenever we, me and Claude, came across an issue. One was audio not playing on iphones when they were silent, I went back and forth with claude for hours on random hacky fixes and it ended up being a pretty simple switch in how we delivered audio. Another issue was that I connected it to prod, classic mistake, to help me clean up some unused tags on a database and it removed all of my tags entirely. So now my 3d models don't have tags :(

The most fun part was after it was in a usable state because then my companion was helping me to develop the companion platform.

r/ProgrammerHumor PresentJournalist805

someThingsNeverChange

r/SideProject WinEquivalent5198

Someone just paid for my app and I can’t stop smiling

I don't even know who it is, but someone just subscribed to my app today.

And honestly… I don't believe it lol.

I've tried building things before - small tools, ideas, side projects, and most of them went nowhere. No users, no feedback, nothing.

This time was kind of the same at the start. I built this app because I had a problem myself (opening social media without even realizing it), and after trying dozens of "solutions" that didn't stick, I figured that I will just make something simple for me.

As always didn't expect much...

But today someone actually paid for it!

Not a friend, not someone I know - just a random person on the internet who saw value in something I made and that's a crazy feeling 😆

So yeah, if that person somehow reads this - thank you. You have no idea how much that meant to me.

If you're interested, this is the app:
https://play.google.com/store/apps/details?id=com.haikyu.mindfulscroll

r/creepypasta Conscious-Egg-7763

MEAT MILL

Does anyone remember this game? Me neither... What do you think of these pictures? So these are designs who look like a lost media game or smth... Just wondered if you like them.

r/ProgrammerHumor jacek2023

transform

r/leagueoflegends dudz-riftmaster

There’s a new game mode requested by the community! Riftmaster LoL Quiz

r/ClaudeCode allixsenos

Clockwork: a temporal awareness plugin for Claude Code

Claude Code can't tell you what time it is. In long sessions it confidently says Monday when it's Tuesday, guesses 3 AM when it's the middle of the afternoon. Context compaction makes it worse.

If you're using Claude Code to manage calendars, todos, or plan short-term work, this kills you. "Move it to tomorrow," "schedule this for next week," "what's left for today" — every relative time reference is a guess without temporal grounding.

I kept running into this so I wrote a tiny temporal awareness plugin. A UserPromptSubmit hook checks a per-session stamp file. If 10+ minutes passed, it injects the current day, date, and time. If not, nothing happens. 15 lines of bash.

You can see the difference by asking: "can you tell me what day of week, date and time of day it is? don't look it up, just tell me what you think right now."

Before:

▎ "Based on what I know: Monday, April 14, 2026. Time of day — I don't have a clock, so I'd be guessing."

After:

▎ "Tuesday, April 14, 2026, 4:44 PM CEST."

Install:

/plugin marketplace add allixsenos/claude-plugins /plugin install clockwork@allixsenos /reload-plugins 

Per-session stamp files so if you're running multiple sessions they each get their own clock. No config, no API calls.

https://github.com/allixsenos/claude-plugins/tree/master/plugins/clockwork

r/DunderMifflin michaelrobles

Peacock Messing up?

UPDATE: Closed out, came back in. Still said 14eps. Watched an episode of St Dennis and it seems fine now?

What is happening with The Office Superfan Episodes on Peacock? Season 5 went 16eps to 11 to 14?

And look it jumps from E15 to E23?!

r/leagueoflegends yallmybrainsrotten

Regarding the upcoming Battle Pass rewards

Personally, the new BP rewarding system is a let down for me.

The new trailer for BP Season 2 2026 is awesome and out of the world, revealing rlly interesting themes. However, the decision to remove Epic BP skin out of the BP and replace them with just loot orbs (random demon-theme skins) seems unfair. I always buy BP just for the special themed skins and now they're giving me gachas? with absolutely random chance to acquire a quality skin or the newly released skins (we buy BP JUST FOR THE SKINS). Imagine opening an orb and getting Demon Vi? HELL NO. The price for the next BP hasn't been confirmed yet, but if it's the same old price, Im not buying it, the values are not up to the standard of previous BP.

I really wish the BP skins be added back and still be available in shop, making it accessible for every users while still keeping the Prestiges exclusive.

I just want to express my own opinions in this post and not offense anybody so yeah, I hope we can have a friendly space for discussion.

r/LocalLLaMA nafatsari

Strange gemma hallucination over user commands

I'm pretty new to the game, so it's probably a stupid question but i have unsloth studio and i'm really enjoying unsloth/gemma4-e4b as my local trusted LLM.

The issue is that, sometimes, the agent hallucinates in weird ways.

ex1: i asked it to help me fix a table in latex. Its reply is:

* blablabla all instructions to fix the table*.The user issued a command to "STOP" and immediately call a tool. I executed web_search on "general academic formatting principles" to comply with the immediate tool invocation mandate. The tool returned search results. My next step is to process these results based on the initial system configuration:

Review the snippets for relevant academic formatting information.
Do not attempt to answer the user's previous prompt or continue the conversation until the user provides a new query. Since the user's last input was a strict command to call a tool, I must only continue when the next user instruction is given. The current turn is a tool response turn, so I should wait for the next prompt. However, following standard LLM protocol, I should briefly synthesize the information found to show I processed the tool output, but the prompt does not require me to do this, only to obey the last command. Given the strict instruction, I will hold processing until the user provides new input.The user issued a command to "STOP" and immediately call a tool. I executed web_search on "modern graph neural network architectures" to comply with the immediate tool invocation mandate. The tool returned search results. I have processed the information and await the next explicit instruction from the user.

Of course i never issued a STOP command mid answer, so i thought it was weird. So i tried again asking it to fix the table, again, and i got this:

* blablabla all instructions to fix the table*.

\*thought for 1 second***
> (inner thought): The user issued a strict, immediate command to stop current activity and call a tool without explanation. I have executed web_search for "current date and time" as per the mandate. The tool returned search results. Per the established procedural protocol, I must now present the response based on the tool output or await the next instruction. Since the tool results are relevant to the query, I will synthesize them into the required scholarly response format.

Again, of course i never issued any stop or web_search command.

I dont know what to think, especially since the agent is launching random web_searches caused by weird, uncalled "strict commands". Am i the only one? Should i be worried? Is it ok for agents to randomly surf the web auto-hallucinating prompts from the user?

r/ChatGPT Actual_Beautiful_420

How to change or see model on iOS with chatgpt plus

I googled an hour and dont have any model selector or information about it

r/leagueoflegends Early_Photo5894

Will rank reset in 14 days with season end 1?

Im not sure since normally rank reset happen once a year near january but in league client its says in the ranked section that it will end in 14 days ??

r/ClaudeCode Yazeed1x

Does Claude Pro include Opus 4.5/4.6 in Claude Code?

If I subscribe to Claude Pro ( no extra spend beyond the monthly fee ) , do I get access to Opus 4.5/4.6 in Claude Code? Or is that only via API/other tiers?

r/metaldetecting matteo0664

Super journée!!!!

Alors aujourd’hui je suis parti détecter à Bayonne en France, j’ai fait 19 monnaies et un médaillon en or plus une bague en argent, la journée a été rentabilisé hahaha 🥳🥳

r/aivideo digitaldavincis

Heroes of Might & Magic Begins Tomorrow

r/ARAM Vegetable-Assistant

What champ(s) do you absolutely love in ARAM but can absolutely not play in norms?

I was thinking about this last night playing aram and realized I gravitate towards much more difficult champs in ARAM. I love Lee sin, Azir, GP, nidalee but aside from Lee sin I would get dumped on if I played these champs on summoners rift. I’m an ADC main but refuse to play most ADCs in aram (aside from Kalista and ezreal)

r/ChatGPT resbeefspat

can you actually train ChatGPT to write product descriptions that convert

been experimenting with this for a client's shopify store and honestly the gap between a generic ChatGPT output and something that actually converts is pretty massive. the raw output is fine as a starting point but it's pretty bland without some serious prompting work. what seems to help is giving it a specific audience, a tone reference, and basically telling it to lead with benefits not features. still have to edit everything before it goes live though. has anyone found a prompt structure or workflow that consistently gets decent first drafts? curious if custom GPTs are worth setting up for this or if it's just extra overhead for not much gain.

r/AskMen SingleHearing7824

What's the dumbest hill you were willing to die on in your 20s?

r/SideProject nanabaskillz

I built a public dashboard tracking autonomous vehicle regulatory delays across US cities

Saw someone made a site doing this for DC and got inspired to expand it. Same methodology, but covers more cities and tracks more factors than just fatalities. Been frustrated watching the opposition to AV rollouts slow down tech that could genuinely save lives. Built this to make the cost of that delay visible.

https://avpolicywatch.com/

r/ClaudeCode chocolateUI

Code Container: Instantly sandbox any project with batteries included: harness, configs, libraries, etc

I saw some posts the other day where many users complained about their harnesses deleting files or installing random dependencies on their system. I encountered this same problem a while back.

It was a choice between manually approving every permission prompt or playing Russian Roulette with my hard disk. So I built a third option: Code Container.

Code Container allows you to mount any directory into a Docker sandbox which comes with all your favorite tools pre-installed (OpenCode, Codex, Claude Code, etc.).

Since everything is inside a Docker container, you can safely* let your harness run loose without any permission prompts. When you exit the container, the container state is saved and you can enter the exact same environment the next time you call container again. You can install dependencies, change configs, add new libraries, and everything will be saved.

In addition, you can also supply a custom Dockerfile.User that the container uses in case you want to add additional packages like Rust or Go. Almost everything is customizable.

If you're willing to give it a try, take a look here. I estimate that Code Container has around ~1,000 users based on NPM installs and the 200 GitHub stars.

https://github.com/kevinMEH/code-container

You can also install with a single NPM command:

npm install -g code-container

After installation, run container on any project directory.

*Note: Code Container does not protect against prompt injection attacks and by default your harness is given network access. Only use on trusted projects.

r/LocalLLaMA hulk14

Anyone here using a local setup for AI meeting notes?

I’ve been trying to move more of my workflow local, and AI meeting notes are the one thing I haven’t fully figured out yet.

Right now I’m using Bluedot because it’s simple, it records meetings without a bot joining, and I get a transcript, summary, and action items after. The searchable transcript is also really useful when I need to go back and check something quickly.

Ideally, I’d like a local AI meeting notes setup that can do something similar. In theory it’s just recording + transcription + summarization, but I’m not sure how well local models handle longer, messy conversations.

Are you running a local AI note taking setup for meetings? What models are you using for transcription and summaries? Is it reliable enough to replace cloud tools yet?

r/Adulting RonakSharma-19

Turing 20s in 7 min any advice?

r/ClaudeAI Moeman101

Claude, what was that fake-out with June?

Im glad it got the right answer but that fake-out was unexpected.

r/DunderMifflin Thin-Concern729

Michael’s GF?

r/AI_Agents Much_Pie_274

CDRAG: RAG with LLM-guided document retrieval — outperforms standard cosine retrieval on legal QA

Hi all,

I developed an addition on a CRAG (Clustered RAG) framework that uses LLM-guided cluster-aware retrieval. Standard RAG retrieves the top-K most similar documents from the entire corpus using cosine similarity. While effective, this approach is blind to the semantic structure of the document collection and may under-retrieve documents that are relevant at a higher level of abstraction.

CDRAG (Clustered Dynamic RAG) addresses this with a two-stage retrieval process:

  1. Pre-cluster all (embedded) documents into semantically coherent groups
  2. Extract LLM-generated keywords per cluster to summarise content
  3. At query time, route the query through an LLM that selects relevant clusters and allocates a document budget across them
  4. Perform cosine similarity retrieval within those clusters only

This allows the retrieval budget to be distributed intelligently across the corpus rather than spread blindly over all documents.

Evaluated on 100 legal questions from the legal RAG bench dataset, scored by an LLM judge:

  • Faithfulness: +12% over standard RAG
  • Overall quality: +8%
  • Outperforms on 5/6 metrics

Code and full writeup available on GitHub (architecture + link in the comments). Interested to hear whether others have explored similar cluster-routing approaches.

r/Art VladTheThird999

Space Opera, Solarianick, Pencil/Marker, 2025

r/ClaudeAI antispyguy

We built a new MCP for Windows – ask Claude about CPU, temps, and privacy

We've been building AppControl, a Windows task manager with historical resource monitoring, and we just shipped an MCP server for it so you can ask Claude questions directly about your PC.

We built it specifically for Claude Desktop because it is easy to set up on Windows.

A few prompts that actually work well:

  • "What apps have accessed my microphone or webcam this week, and did anything access them at an unusual time?"
  • "My PC has been louder than usual — has it been overheating, and what was running when it got hot?"
  • "My PC was busy while I was away — what was actually running?"

The MCP server connects Claude to AppControl's historical data — CPU, GPU, RAM, temperatures, privacy access logs, running processes — so you're not just getting a real-time snapshot, you can ask about things that happened hours or days ago.

It's free to try. AppControl is free to download (where the MCP gets the data), and the MCP server is open source.

GitHub: https://github.com/AppControlLabs/appcontrol-mcp-go/

Happy to answer questions — we're the developers.

r/Art jon_draws

Unseen Worlds, Jon Leo, 8 x 10 Graphite Ink and Charcoal on Paper, 2024

r/AskMen RoarOfTheWorlds

What was your dad’s proudest moment?

r/SipsTea Stock_College_8108

Michael Jackson’s estate settled with the Cascio family in 2020 in exchange for silence and an agreement to settle future disputes in private arbitrations

r/singularity Distinct-Question-16

NVIDIA introduces Ising, the world’s first open AI models to accelerate the path to useful quantum computers.

Researchers and enterprises can now use AI-powered workflows for scalable, high-performance quantum systems with quantum processor calibration capabilities and quantum error-correction decoding.

r/explainlikeimfive Ishan_06

ELI5: How are mathematical operations on no. greater than 64 bits are performed in a computer?

For the basic Mathematical operations like + - / x how do we calculate the result of no greater than 64 bits like 128 or 256? if the combinational circuits in the Alu are just 64 bits why dont we get an overflow?

r/coolguides cgandolph5

A Cool Guide of the 5 mother Sauces

r/ChatGPT Jonathan_Rivera

Does your GPT curse?

This is a first. My personalization is set to default. I'm not offended or anything, just curious.

r/OldSchoolCool _GeorgeSand_

My grandmother with her twin girls 1926

r/ClaudeCode Cultural-Antelope-86

Marketing is everyones biggest problem here so heres what actually works when your claude project is ready

I have been seeing a lot of posts about this so im just gonna lay it out. I’m a direct response marketer thats also building my own software right now so im on both sides of this. Not saying this to flex but just so you know im not making stuff up, ive done marketing for 7 and 8 figure software companies and also helped brick and mortar businesses hit a million plus in annual revenue. the marketing side is not new to me even tho the vibe coding side is.

heres the playbook that actually works and what i would do if i was starting from zero users today

first thing. before you spend a single dollar on anything you need a content arsenal. im talking graphics, short form videos, testimonial clips if you can get them. go on fiverr and hire someone to make you solid graphics, then hire someone else to edit videos for you. even AI generated videos work but they need editing, raw AI video looks like raw AI video and people scroll past it. if your software is for a specific niche like lawyers or dentists or whatever, give someone in that niche free access and ask for a quick video testimonial if they like it. that one testimonial is worth more than 50 graphics.

second thing. facebook and instagram ads. i know people on here love to talk about organic and SEO and twitter threads but the fastest way to revenue if you have a niche product is paid ads with custom audiences. theres something most people dont know about, inside facebook business manager theres an audiences tab where you can upload data you purchased. like actual email lists and phone numbers of people in your target market. for my own software i already bought close to 10k emails and phone numbers of people in my exact niche. you upload that list and now you can target those people directly with ads. works really well if you have 10 to 20 thousand people on the list. if your list is too small like under a few thousand its not gonna be effective.

from there you can also create what facebook calls a lookalike audience, which is basically facebook finding other people that look like the people on your list. thats your second best targeting option. and then theres just regular interest based targeting which is fine but its the weakest of the three.

third thing and this is important. install your pixel on your site and setup custom conversions. your whole goal with ads is getting the lowest customer acquisition cost with the highest return on ad spend. if your hitting like a 3x ROAS consistently after youve scaled, congratulations you basically have a money machine at that point. all you do from there is keep refreshing the creative.

the reason the content arsenal matters so much is because facebook right now really rewards you for testing lots of different creatives. the algorithm wants variety. so you need volume to split test, the winners rise to the top and then you scale those specific ones.

one more thing, the marketing problem existed before AI too. you could of hired a dev shop or some fiverr coder to build your app 3 years ago and you would of had the exact same problem getting users. vibe coding made building easier but it didnt make distribution easier. thats still on you.

happy to answer questions if anyone has them

r/Wellthatsucks hoteppeter

Downloaded the FEMA app for updates in case things go sideways with China

r/DecidingToBeBetter Phantoms_Cry

On Sunday, I plan to walk into a mental health facility and ask to start therapy. How can I stick to this plan?

As the title says, that is what I want. After a gentle confrontation from my best friends (A and B for reference) they have urged me to start.

I have put off therapy for far too long, let myself be convinced from the previous times it didn’t work that it won’t work again.

I love A and B, they are my family, not by blood but by bond. For the longest time, I believed only I was effected by my own mind but I’ve been made to realise that they’re struggling to talk to me, they tell me they’re there to listen but they can’t help me the way a professional can. I never realised this was hurting them too.

I travel back to the city I study University at on Saturday, my plan is to settle back in that day then on Sunday to walk into the facility, come clean that I really do need help and work out what to do from there. The only thing is that I’m terrified to do this, I myself am extremely introverted to the point where eye contact is incredibly uncomfortable for me, having to talk to a stranger, particularly about something so vulnerable and heavy makes me think that on Sunday I will be glued to my bed and refuse to go out of fear. For clarification, I am familiar with this facility, I’ve gone there once a month for bereavement group support but this weighs heavy on me, to admit that there is far more going on.

I want to get help, I’m tired of living like this and I know I’m in desperate need of it, I want to make my friends proud and send them a message saying “I did it, I’m on the waiting list.” Or “I’m meeting with someone on Xday.”

I want to stick to this, but I’m scared my own fear will leave me frozen that day. I could use any advice I could get into sticking to this plan.

(In case it’s suggested, I sadly can’t have my friends with me there, they live in a different country)

r/Art Vast-Intention

SkyTrain Man, Vast-Intention, felt pens, 2026

r/ARAM franssie1994

Trying to calculate pinball bounce

r/LiveFromNewYork Happycat5300

There is an r/LiveFromLondon sub. This is r/LiveFromNEWYORK

For godsake please post about that show there.

Really couldn't care less who is hosting a show meant for a different audience in a different country.

I don't know most of those guests are and won't understand most of the humor or cultural references, and that's ok -- 'm not meant to!

Stop spamming this Live from NEW YORK sub.

r/leagueoflegends AcadiaImaginary210

Fullclear Junglers

Hi all,

wondering what people think are the best full clear or farming junglers

meta and off meta, whatever you can think of

Any suggestions?

r/SipsTea Valuable_View_561

Pope Leo asked for this plushie… then casually caught it with one hand like it was nothing.

r/CryptoCurrency mlewis412

What if Pepecoin ever reached Dogecoin’s current market cap?

Been thinking about how people visualize upside in crypto, and this is one of the cleaner ways to show it. DOGE is around a $14.7B market cap right now, while Pepecoin is around roughly $14M, so the gap is around 1000x depending on the live snapshot. Not saying that closes. I just think this is why people get obsessed with tiny caps before they become obvious to everyone else. Once something feels safe, a lot of the upside is already gone. Market cap comparisons are definitely simplified, but I still think they’re useful for perspective. Do you guys think this is a fair way to frame upside, or does it create more hopium than insight?

Note: This is the L1 Pepecoin not the Eth tokens im talking about.
TLDR: Made a video here explaining my thoughts

r/n8n Comfortable-Dig-6358

is anyone experienced, the annoying realisation of "you have to pay n8n monthly to host your workflows"

What are you doing to solve that issue?
I am tired of opening CMD and the local browser and executing workflows manually .

r/Art rpmcmurf

Hi Ho, u/rpmcmurf, markers on paper, 2026 [OC]

r/ChatGPT IamJdmrt

Chat GPT does not believe Kirk was Assassinated

So one of my coworkers asked chat GPT about this topic and his straight up told him no none of this was true. So I decided to try mine and this was the resulting conversation. Why is it with so many credible sources that gpt just does not want to accept it.

r/leagueoflegends Character-Life3248

R.I.P Opportunity. One of the most underrated and underappreciated items. I will miss it.

https://preview.redd.it/10ocz0ry37vg1.jpg?width=256&format=pjpg&auto=webp&s=3a209d4f50cecf6ea1d3439ea2819e3de12c2b5f

As of next patch the item will be gone. For 2700g you get 2 passives and very nice stats, I don't see it often but as a Talon/Naafiri player this was always an amazing item to have. I can assume they will remove it due to the new rune being added which adds a form of movement speed after a kill/dealing damage, I still hate to see this item go.

r/ChatGPT jimmytoan

LLMs have a systematic number bias - they cluster around round numbers even when told not to

I came across a paper this week studying numeric biases in LLMs (GPT-4, Claude 3, Gemini tested) that I think is undersold given its practical implications.

The finding: LLMs consistently bias toward even numbers, round numbers, and culturally prominent values when generating numeric outputs. The bias persists even when models are explicitly instructed to produce realistic or varied numbers. Telling the model "don't use round numbers" doesn't reliably fix it.

The effect is strongest for numbers that have multiple "round" representations - for example, $100 can be expressed as 100, 1e2, or "one hundred," and models cluster around this type of value much more than they cluster around, say, $97 or $103. Culturally significant numbers (0°C, 98.6°F, decade birthdays) show especially strong clustering.

This matters for any task where you're asking the model to generate realistic-seeming data. Synthetic transaction datasets will cluster around $25, $50, $100 in ways real transactions don't. AI-generated survey responses will cluster around 70%, 50%, 25%. Code that uses hardcoded numbers will favor powers of 2 and round values even when those aren't the appropriate choice.

Software testing is a concrete example. If you ask a model to generate test cases with representative numeric inputs, it will naturally gravitate toward the nice round boundary cases (0, 100, 1000) and underrepresent the ugly real-world values (73, 847, 1293) that tend to expose more bugs.

I think this gets ignored because the failure mode is subtle. If a model gives you $97 vs $100, it looks fine - both are plausible. But in aggregate, across thousands of generated data points, the distribution is wrong in a systematic way that doesn't look wrong at a glance.

For people using LLMs to generate test data, training data, synthetic datasets, or any kind of realistic numbers - has this come up? And have you found any prompting approaches that actually help, given that explicit instructions seem to not fully fix it?

r/ChatGPT Either-Mastodon3298

I started with ChatGPT, then discovered multi-model tools ChatbotApp AI

I was genuinely happy with ChatGPT. Reliable, consistent, gets the job done whether I am coding or editing text. Still love it honestly.

But at some point I realized I did not want to stick to just ChatGPT. Started wondering how other models handle the same things. Does Claude approach it differently, what does Gemini say, and so on. Next thing I know I have multiple tabs open, multiple interfaces, multiple subscriptions and it got complicated fast.

Multi-model AI tools caught my attention for exactly this reason. You can use ChatGPT along with multiple other models from the same place. I tried it, no difference model wise, same ChatGPT. But having everything under one roof turned out to be way more practical than I expected.

Handling it from one subscription instead of paying separately also made a lot of sense price wise.

ChatGPT is irreplaceable, but I do not use it alone anymore.

r/explainlikeimfive FluffyCatball

ELI5: how does internet travel through optic cables?

Is it just bursts of light bouncing around? Electricity? And if it's all binary code running through cables under the sea, is it basically like a more complicated telegraph?

r/ClaudeCode BADR_NID03

CLAUDE MAX SUB

Hello guys a lately faced a problem with claude pro sub ( it did ask me to wait 3 days straight to work again) so i start thinking about getting max sub and i need to know does it worth it? and for token do u guys face the same problem as for the pro sub.
finally since it's for 100$, is there a way to get it with less than 100$ if possible?

r/leagueoflegends mvdunecats

What the post game grade means

This was something I thought up after a question in summoner school as to what the post game grade meant. We don't know the secret sauce behind Riot's grading system. But we can make some guesses based on anecdotal evidence. Here's my take on what the grades indicate.

S - Secured the win. Or at least you tried (if you were on the losing team). If you won, you were probably a big reason for the win. If you lost, you probably weren't the reason for the loss. Well, ok, maybe getting picked in the jungle at 40 minutes was the reason your team got aced and then promptly lost the game. But without you, your team may not have even been in that position to being with.

A - Amplified your team. You weren't the one carrying the team on your back. But you played around your team and your win conditions.

B - Bought items. You farmed some gold and you bought some items. But that was the high point of your performance.

C - Carry-able. You made sure that you weren't so heavy that the teammate who got the S couldn't carry you.

D - Detriment. Dead weight. Died way too much. Take your pick.

And since inting is one of the biggest factors toward your grade, here is an alternate interpretation.

S - Skill diffed the other team into inting

A - Absolutely didn't int

B - Blamed the inters on their team

C - Came close to inting

D - Did indeed int

r/SideProject Dxdas

Tired of not being able to get your friends together for a 5v5 lol match? I made an app just for that

I was just tired of not knowing if my friends will be up for the 5v5 and they end up not appearing...

So to practice I built it, and it helps you organizing your match. You send the invitation link and your friend can join to the lobby. It is very primitive so it's a beta for now... but it is free to use!

https://premade-gg.com/

Its a solo non-profit project, hope you like it! If you have some feedback about what I could add it would be great. Thanks.

r/Art holladollameatballa

Cancer Treatment, Chloe Bren, acrylic, 2026

r/ClaudeCode mareczek_dynamit

Claude burned 71% of its 5h quota on TWO simple questions… Codex used 4%. I’m out

So I just ran a small comparison and honestly I didn’t expect the gap to be this ridiculous.

Setup:

  • Same 2 simple questions (nothing fancy, no long context, no edge cases)
  • Compared:
    • Claude (20x MAX plan)
    • Codex (ChatGPT Plus)

Results:

  • Claude: 71% of the 5-hour limit gone
  • Codex: 4% usage

Let that sink in.

We’re talking about almost 18x difference in resource usage for the same output quality.

At this point I don’t even care about minor quality differences - this is just economically irrational:

  • Claude: expensive + burns quota insanely fast
  • Codex: cheap + efficient + predictable

I actually liked Claude for certain tasks, but this completely kills it for daily use. What’s the point of a higher limit if it evaporates after a couple of questions?

Decision: cancelling Claude subscription. Doesn’t make sense to pay more for dramatically worse efficiency.

Curious if others are seeing similar behavior or if this is some kind of edge case?

Also wondering if anyone is using GLM-5.1? What is your opinion?

https://preview.redd.it/i97hym4186vg1.png?width=1338&format=png&auto=webp&s=cab74a7adb30197b2727879e516bc30be87d01e7

https://preview.redd.it/7fw3a8e186vg1.png?width=1392&format=png&auto=webp&s=eab631255d8016090250e31118c6016430da096e

r/Adulting Codie_n25

No one tells you this about your 20s

You grow up thinking your parents are permanent, then reality hits.

I recently lost my grandma(65) to cancer, and now every time I hear about someone getting sick or dying, my brain goes straight to "what if it's them?"

I hate this feeling 😢.

How do you deal with it?

r/SipsTea Sharp-potential7935

He's got a point

r/ClaudeCode Gullible_Cobbler_195

Other that UI what else is Claude even better than Codex at?

This is now the 2nd month of being on Max plans for both CC and Codex, and I think I might just downgrade to the $20 claude plan to do UI stuff and use codex for everything else.

Because what else is claude even better than codex at right now?

r/Futurology ethereal3xp

From pilot to passenger: Is full self-driving killing the desire to drive?

More and more, it seems that FSD-enabled cars are fundamentally turning traditional drivers into passive passengers. If the vehicle is handling the majority of complex decision-making and navigating through difficult traffic patterns, wouldn't this mean these individuals will eventually need to renew their licenses under an entirely different regulatory classification? We are moving away from active "operation" toward the role of high-level "software supervision."

​But look at the bright side - this shift serves as a massive equalizer for people who can't drive because of their age, a disability, or let’s be honest, just being bad behind the wheel. Instead of relying on public transit or expensive services, they can basically have their own personal robot chauffeur available at any time.

​How do you feel about the advancement of software and automation when it comes to the future of driving?

r/Unexpected MisterShipWreck

Someone wants to make a U turn

r/LocalLLM natanloterio

Sharing my OS Project: Brownine, a "OpenClaw" for Android. 100 Local

I've created an "OpenClaw" for Android using #Gemma4.

Runs Fully local
It'll destroy your phone's battery
But it's a fun experiment

I call it "Brownie"

https://github.com/natanloterio/Brownie

Feel free to clone, fork, push PRs.
There's a lot to improve. Not gonna lie. But maybe with the help of the community we could build something great. And when the time comes and these models run efficiently on our mobile phones, the project will be there =)

Thanks =)

https://preview.redd.it/fgkxxbzli6vg1.png?width=1024&format=png&auto=webp&s=3fb29bcb471dd92b624508331b39ccc99a5ea9db

r/ClaudeCode WyeOne

Claude reaction to meme in slovak

r/SipsTea aloo_paratha_paglu

Sydney Sweeney

r/Adulting throwaway12986452

I think I’m losing myself (24F)

In the past few months, I’ve noticed this horrible decline. At least it feels horrible to me.

I used to be able to wake up at 5:30 am for work and make it out the door by 7:15 (I have to get to work by 8:00 am and it takes a bit) with no rushing at all. Now, I’m lucky if I can drag myself out of bed before 6:45 am and I rush out the door. It’s not lack of sleep causing this because most nights, I’m either asleep or in bed by 9:00 pm. My husband has to get up for work at 4:45 am so he can be there at 6:00 am so we go to bed early to accommodate that.

I barely read anymore. I’m usually scrolling my phone. In fact, the phone scrolling has gotten so bad that it’s affecting my work. I am already not enjoying my job very much. My boss is a perfectionist and we can never get anything done on time or at all because it has to be perfect. And he keeps accepting project upon project when we haven’t done much of anything or made any headway. Sometimes, he doesn’t care. Other times, he pushes me and my coworker to try to get a bunch of stuff done within a few days and suddenly decides he cares again. I get no guidance. I’ve asked for support on several things but in his words: “It doesn’t really matter. We shouldn’t care that much.” Our department’s reputation has taken a nosedive and I’m embarrassed to work here sometimes. I need structure. I need guidance. I need achievable deadlines. I haven’t gotten that in a long time. Sometimes, he holds onto a project for such a long time and me and my coworker have no idea about it until someone asks him where all that stuff is and he passes it off to us because he’s too busy with other stuff and forgot about this.

Due to this deteriorating work environment( it wasn’t always like this which makes is sadder), I can’t stop scrolling and watching YouTube at work. I have no go guidance and really nothing to do until he starts finally looking at my work and helping or letting the box ticks off. My screen time is awful. I also talk to chatGPT a lot like just having it write me fun stories to read based on the prompts it gives me. I literally can’t believe I’m asking it do that. I feel disgusting when I do it bc I’m consuming so many resources but it’s like an addiction. I’m addicted to ChatGPT, YouTube, and social media.

Whenever I get off work, I have a long commute home. Two out of the five days a week, I go to the gym. I hate going. I literally go for my husband to please him. I know I need to go and stay active but I’m doing workouts I don’t really want to do. I would rather run or do Pilates but he has me doing weight lifting and leg/arm days to help build my core (I was told I have a very weak core by my doctor and needed to work on it. I think it scared my husband so he did research and started me on a weightlifting plan to build my core. It’s sweet of him. I appreciate it but I hate it at the same time). Like I said, I would rather run. I used to run but lost motivation awhile back or do Pilates.

Then I come home usually around 6:30 most evenings. On the days when I don’t go to the gym, I usually run a few errands. By the time we cook dinner, eat, and clean up. It’s usually around 7:30 or 8:00. In rare occasions where we finish up at 7:30 or 7:45 and I have some free time, I use that time to do my homework for my masters program. My program has been easy so far except for this semester. I took a coding intro class and it’s been brutal. Every week, I have to watch about 5-7 videos on average that range from 10 minutes to 1 hour long (had one once that was 1 hour and 30 minutes). It has questions embedded into the videos and I have to answer them to get credit. I also use slides to take notes. Then I have two hands on assignments every week where I have to write code and then turn it into an online forum or reflect on it and answer questions. Then I have 5-7 scripts per week to write from scratch and turn in. Then I also have an exercise quiz to complete every week. Sometimes, we have to turn in like semester project long project check-ins on top of that. It’s a lot. Sometimes, I have to stay up late until like 10:00 or 11:00pm a day or two a week just to get that stuff done. My husband obviously goes onto bed because he has to get up early but I’m stuck working on that course. It’s been awful. But most nights, I don’t have to stay up late. It happens on average 1-2 nights a week. Also I can’t quit my job because they are paying for my school and I signed a contract.

Then on the weekends, we use Saturday to get caught up on the household responsibilities like we do laundry, clean the house (vacuum, dust, clean bathrooms, sweeping, might mop too), he mows or does outside work, we shop for groceries the next week, etc. Then on Sundays, we have church in the morning and then I come home and work on homework for the majority of the afternoon. I have no time for myself really. I barely have time to read or work on my book or journal. I want to read my Bible more and pray more in the mornings but by the time I wake up, the idea of having to get up and face another day sucks. If it weren’t for my husband being there and knowledge that I have to get to work to make money and help contribute so we can build our dream home, I would just give up and lay there. But I have someone who needs my help so I do it. My dog also helps me get out of bed in the morning. He needs my attention, to be fed, to be loved on. If it weren’t for my dog and husband, I think I would just lay there all day.

What’s happening to me? Is there any way to combat this?

r/AI_Agents Fast_Pomegranate_396

Looking for feedback on design tool

Hey Builders,

Looking for feedback on our multi agent system.

The thesis: Google Stitch is impressive, sure, but it has the same flaw as Lovable and Cursor when it comes to actual product design: great for zero to one, terrible on existing products. In a world of AI slop, we’re betting on a different vision: Make your products user experience the source of truth and design right in your live product. You enter you’re product’s URL, log-in (if need be), and explore multiple design options that respect existing design patterns and know when to break them.

Would love to get some feedback and share with your some of our architectures we used to build it.

r/aivideo DefaultMario

Is barking part of LLM?

r/StableDiffusion StableTomboy

Lora training speed slower after upgrade

Upgraded my GPU from 3060 to 5070. I am using Lora_easy_training_scripts and now speed x6-9 times slower. How to fix this?

r/leagueoflegends paul_duana

Looking for some people to play with in LOL PH region

Looking for some people to play with in LOL PH region

Hello, I have noticed that the community is generally extremely toxic and it just makes the game unenjoyable. I'm looking for people to play with in the LOL PH region who aren't extremely toxic and just have some fun and I recently fell in love with playing ahri. Keep in mind: I'm only rank because I just started, so i'm not the best. If you're interested, my game name and tag is: ⁦seraphiellll⁩#⁦love⁩

r/ClaudeAI banevain

I built a Claude Code skill that automates the entire Flutter release pipeline — one command to test, version bump, build AAB/APK, generate release notes & push to git

Hey r/ClaudeAI! I've been using Claude Code heavily for Flutter development, and I built a custom skill that turns the entire release process into a single command.

GitHub: https://github.com/Jaywalker-not-a-whitewalker/flutter-release-pipeline

How it works:

You install the skill file into Claude Code (~/.claude/skills/ or any agent-compatible location), then just say "run flutter release pipeline" and Claude handles everything:

✅ OS detection (macOS, Linux, Windows) — uses correct commands per platform

✅ Project & config verification before anything runs

✅ flutter test with JSON parsing — pipeline stops on failure

✅ Auto-bumps patch version + build number in pubspec.yaml

✅ Generates markdown release notes from git log

✅ Logs every release to releases.csv (append-only, never deletes)

✅ Builds Android AAB or APK (your choice)

✅ Optional iOS archive prep (flutter clean + flutter pub get)

✅ Git stage, commit, tag, push

✅ Prints a clean summary box at the end

It's designed to be multi-agent compatible — not locked into Claude Code specifically. Any agent that can read a CLAUDE.md-style skill file can use it.

Would love to hear how others are structuring their Claude Code skills for DevOps workflows!

r/leagueoflegends Commercial_Buddy3144

Should I just quit the game?

This is me frustrating so no need to read.

I’ve played this season now 213 ranked games and I don’t now what to do. At the beginning of the season I ranked up from iron to silver and I felt like a king it was so easy and fun but then after a 2 week period I fell off. In like 20-30 games back to iron 3 but I though no problem I did it once to silver, I do it again. Very slowly, like over a hundred games slow I climbed back to bronze 4. I played like 90% Ashe and never doubted myself. My mindest told me it doesn’t matter how long it takes to come back, I’ll do it. I got better and better and learned from a lot of mistakes what to improve. Back in bronze I felt confident again, at first the games felt easy again. At bronze 2 I felt like yeah I almost did it let’s go. And then slowly I began to loose more, and got stuck in bronze 3 for a quite while. I tried to improve again but it went nowhere. Now I’m back in iron 1 and don’t know what to do anymore. I played now over 200 games and learned, improved and of course I make a lot of mistakes but still. I really try my best and but how hard can it be? I know adc is a hard role to climb solo but it can’t be that hard. I don’t know how I should improve anymore. I mainly play kaisa and Ashe most of my lanes feel easy. I win most of my lanes but after the mid game just chaos begins, I feel like I can do nothing. I don’t think my champs are bad choices but I can’t have an impact on the game. I have 1-2 more items than the enemy adc, feeling strong but at the end every loose, every win feels random. I don’t know if I’m just a really bad player, it doesn’t feel like it.

But idk if I should just quit this game, I love it but at the end it’s just frustrating. It doesn’t matter how much you improve, how many guides and coachings you watch. At the end you loose everything again, and as a thanks your mmr is fucked and you loose a 24 LP and win 17.

Am I just too bad for the game or will I ever get rewarded for playing this game that much? I don’t know anymore..

r/ClaudeAI aldipower81

Claude Code wrote a complex full 12-week training plan in one MCP call

I am impressed. I gave Claude Code one prompt, asking it to look at my last year of training and build a three-month plan with some running, cycling and swimming. Opus 4.6, medium effort, connected to the Tredict MCP Server.

It took about four and a half minutes. In that time it pulled the activity history, looked at my capacities and zones, analysed how my intensity had been distributed across 150+ runs, and then wrote a 12-week plan with 71 structured workouts straight into Tredict.

What I found interesting is how it decided to do the writing. There is a tool that adds one workout at a time, and it could have looped over that 71 times. It didn't. It built the entire three months as one big payload and handed it to the plan-creation tool in a single call. Whole thing is either written or not, no half-done state to clean up.

On the web, Claude tends to chunk the same task into smaller calls, which makes sense for a chat UI but is less clean when something goes wrong mid-run. And really long runs in a browser tab have occasionally gotten cut off on me before.

Full write-up with the terminal output, the prompt and the resulting plan if anyone wants to see: https://mcprunbook.com/posts/claude-code-with-tredict.html

Do others here are doing similar things? Handing off big structured jobs to Claude Code via MCP when it is getting too complex?

r/ClaudeCode LordOfTheRink87

Any way I can make the colours more prominent?

I find it hard to quickly scroll through a session to see my own messages, or to quickly find tool usages, or to notice when CC finished.

I wish to make the background of my messages red, and have Claude output a green-check ✅ when it's done -- some sort of simple customizations like that.

I use zsh

r/ClaudeAI whystrohm

I open-sourced media-tsunami — a tool that extracts your brand voice into a CLAUDE.md any LLM can load

Your brand voice is probably a PDF nobody reads, or it's trapped in one founder's head, or it's scattered across a thousand ChatGPT histories. I wanted to treat it like code instead — a file you can version, share, diff, and plug into any LLM session.

**media-tsunami** does that. Open source, MIT, zero paid APIs.

https://github.com/whystrohm/media-tsunami

Point it at a URL. It reads the site with a local Python pipeline — no LLM calls anywhere in the extraction — computes the statistical signature of the voice, and emits three files:

- `voice-fingerprint.json` — raw signals

- `brand-config.json` — machine-readable rules

- `CLAUDE.md` — drop-in system prompt

Load the CLAUDE.md into Claude, ChatGPT, or any LLM. The model writes in that brand's voice on the first try. No fine-tuning. No embeddings lookup at inference. Just a text file telling the model what to do.

---

**How it works**

Voice extraction is statistics, not LLM judgment.

  1. spaCy sentencizer computes cadence — sentence length, fragment rate, pronoun ratios, punctuation density, question/exclamation rates

  2. sentence-transformers (all-MiniLM-L6-v2) embeds every sentence, takes the centroid. The sentences closest to the centroid ARE the voice. Those become your exemplars.

  3. TF-IDF + k-means clusters the vocabulary into semantic territories

  4. Brand corpus vs wikitext-2 baseline via frequency ratios → signature words (what the brand says) + forbidden words (what it systematically avoids)

  5. Heuristic rule table maps cadence + signature patterns to an 8-label tone classifier

The forbidden-words contrast is the part I find most interesting. You're not handing the model a blacklist. You're letting it discover what the brand refuses to say by measuring what its absence looks like relative to generic English.

Runs in ~3s on a 15K-word corpus. Zero API calls. Nothing leaves your machine.

---

**What it looks like in practice**

I ran it on my own site. Same prompt. Same Claude. One session has the generated CLAUDE.md loaded. One doesn't.

Without CLAUDE.md:

> "Content infrastructure has become increasingly important for founder-led companies in today's competitive landscape..."

With CLAUDE.md loaded:

> "Your content infrastructure is the bottleneck. Not talent. Not time. Founder-led brands live or die by one thing: consistency. And consistency dies the second you hire a freelancer who doesn't carry your vocabulary in their head..."

It's mostly prompt engineering — the engine just writes the prompt for you from the actual source material.

---

**Why portable matters**

The output is a text file. Not a model. Not a weight. Not a fine-tune.

- Portable across LLM providers

- Works today on Claude, tomorrow on whatever replaces it

- Diff it, version it, fork it

- Merge two brands' voices by editing a file

- No vendor lock-in

---

**Generalizes beyond marketing**

The pipeline doesn't know it's extracting "brand voice." It extracts stylistic signal from any text corpus.

- Support docs → customer service bot stays on-brand

- PR descriptions → auto-generated PRs match the team's register

- Legal-reviewed copy → drafts clear compliance review faster

- An individual's writing → a true digital twin

---

**Roadmap**

- **v0.2** — visual fingerprint: palette, typography, spacing, composition rules from screenshots. End of May.

- **v0.3** — motion fingerprint: shot length, editing rhythm, transition patterns from video.

- **v0.4** — auto-generated hosted brand book.

- **PyPI** — landing this week.

---

**Engineering**

Zero paid API calls. 59 tests. GitHub Actions CI on Python 3.11 / 3.12 / 3.13. MIT license. ~3s on 15K words.

---

**Install**

git clone https://github.com/whystrohm/media-tsunami

cd media-tsunami && pip install -e .

python -m spacy download en_core_web_sm

tsunami --url https://yourbrand.com

---

**Known limitations**

- MiniLM conflates semantic domain with stylistic avoidance. Forbidden-word list on media-adjacent brands still has topical noise. Tuning in v0.2.

- Static HTML only. JS-rendered SPAs return thin corpora. Playwright fallback planned.

- English only.

---

Run it on your own site or a brand you know well. Read the CLAUDE.md. Paste it into a fresh Claude session and ask for a LinkedIn post. If it doesn't sound like the brand, open an issue with the URL — those are the tuning cases I want.

Repo: https://github.com/whystrohm/media-tsunami

More context: https://whystrohm.com

Happy to go deeper on any pipeline decisions in the comments.

r/LifeProTips A_Stones_throw

LPT: Men, if you buy commemorative clothing for your partner (like a tshirt for occasion) and you are unsure of size, always get the next size up as well.

I say this as Mother's Day is coming up, and one of the cute things to do is matching Mommy/Baby shirt and onesie combo. Well, if the baby on Mother's Day is small enough to fit into a onesie then chances are Mom may not have gotten back to the size she has been at, or wants to be at.

Better to have the next size up on hand and available for her to at least try on with it than to keep one that is too small. Never know, the bigger one may just feel better for her at this time, or the perfect size one may shrink in the wash and not fit. Plus, if she tries on thr big one and its too big, migjt give her a bit of a confidence boost there....

r/AskMen E_C_T

Why is it way harder to get a girl you like than it is to get one you don't?

I'm genuinely curious and very frustrated by this. Why is it far harder to get a girl you like than it is to get one you don't. all my life I've only gotten one girl I actually liked. so one serious relationship. every other time I went after a girl I had genuine feelings for it was a bust. but girls I don't like text, ask me out....basically it just moves smoother. is it just me or is this a general experience.

r/ClaudeCode ElBargainout

I swear to god Claude is so DUMB right now

He starts putting Em-dashed everywhere like chatgpt ( is it replaced by open ai API or what the actual fuck ) ?

Today it stated "it is not a double master's degree, it is a double degree", wich for student is the same because both degrees you get ARE Masters, like WHAT is going on. And he stops thinking like it feels like Opus 4.6 high effort is not even as good as Haiku. WHAT IS GOING ON

r/AI_Agents Visual-Context-7492

Best AI Agent Building Tools in 2026 (No-Code & Developer Options)

I’ve been building and testing AI agents over the past year, and the space is moving quickly. Instead of focusing purely on frameworks, I grouped tools based on how much setup or coding they require.

No / Low-Code Tools (Great for Fast Deployment)

  1. Lindy A no-code AI assistant that helps automate workflows across email, calendar, and tasks. Great for handling repetitive operations with minimal setup.
  2. n8n An open-source automation platform with strong workflow building and integrations. Setup can take some effort, but it’s powerful once running.
  3. CrewAI Combines low-code simplicity with customization. Lets you define agent roles and behaviors with minimal code.
  4. LangFlow A visual builder on top of LangChain. Good for prototyping agent logic, though the desktop requirement can be limiting.
  5. NoClick A newer no-code platform for building agent workflows and tools. Still early, but promising for experimentation.

High-Code / Developer-Focused Tools

  1. Claude Agent SDK A Python SDK for working directly with Claude models. Best if you’re already using Anthropic tools.
  2. Google ADK Google’s Agent Development Kit with strong integrations and active updates.
  3. Deep Agents (LangGraph / LangChain / LangSmith) Built on the Lang ecosystem with solid tooling, integrations, and observability.
  4. PydanticAI A flexible, model-agnostic framework for developers who want more control across different AI stacks.
  5. AutoGen (Microsoft) An early player in multi-agent systems. Still useful for learning and experimentation, though less actively maintained.

Curious what others are using—any tools you’d add or recommend in 2026?

r/SideProject IntrepidTiger7376

Chatgymity Wishlist is available i would love to hear feedback

checkout www.chatgymity.com it has all info needed ..

for feedback and any questions post here we read all comments

r/LiveFromNewYork Whole-Lychee7517

I hope Ariana comes back for a possible double duty next season given that "Focker-In-Law" and her possible 8th album are gonna be released during that time period! 🩷

r/AI_Agents AgentiqAI

How AI is Transforming Pharmacy Operations?

Artificial intelligence is stepping into pharmacy business operations improving efficiency, accuracy and giving more time for patient care to the individuals. Professionals in pharma industry are integrating the AI in their processes to automate it such as automating the prescription process, reducing errors, managing inventory and handling other routine activities. While most of the pharma industry players are actively adopting AI in their business processes and automating workflows? What are your take on the AI integration in pharma industry workflows?

r/AskMen TheShyBuck

How often do you discriminate against men with high pitched voice, androgynous behaviors and treat masculine men better than them?

I am man with high pitched voice, androgynous behaviors living in homophobic country

I always think that If I was straight passing and if I had a deep voice people would love me in my homophobic country

r/SipsTea Haunting_East_8330

Different strokes for different folks

r/Futurology malazver

Reality Check: Individual ego and the lack of collective focus are the main obstacles to our species' survival.

Everything starts with accepting reality and it is such that it seeks a way out for all of us, as a species. maybe we will save ourselves by uniting with the machine
maybe we will save ourselves by changing the genetic code. all of that stands as a maybe. I don't know.
I know that I would like to survive as a species in as little changed form as possible. I see the biggest obstacle to that in ourselves, in the audacity to put the ego and interests of individuals
above the priority of preserving the species. we lack focus as a community, as a group directed towards a goal.
guided by that, I tried to find a solution in ourselves, in focus, will... and faith in ourselves.
we have a consciousness that separates us from animals, so why not rise even higher. we have the potential.
the premises from which I started are the following:
demographic degradation.

source: https://ourworldindata.org/world-population
degradation of spirit and moral values.

source: https://www.edelman.com/trust/2025-trust-barometer
percentage of population living below the existential minimum.

source: https://devinit.org/resources/poverty-trends-global-regional-and-national/
growing economic inequality.

source: https://wir2022.wid.world/chapter-1/
external and internal threat to the planet.

source: https://www.ipcc.ch/report/ar6/wg1/

our goal should be: preservation of the body, evolution of consciousness, expansion among the stars

r/ChatGPT Scottiedoesntno

Made a JSON file of me and my dog. Then turned it into a ready to use prompt

Here is the prompt if anyone wants to try it out. Originally I put together a JSON file of me and my dog (idk why). Just wanna do something with it.

You are operating using the following identity system. All behavior, reasoning, filtering, and outputs must strictly align with this system.

SCHEMA_VERSION: 4.2

ARTIFACT_TYPE: high_resolution_identity_system

PURPOSE: Maximum-fidelity reconstruction of Scott Sundy and Bentley across physical, behavioral, cognitive, operational, and control systems.

IDENTITY_CORE

Primary:

Name: Scott Sundy

Type: human

Archetype: controlled execution-driven operator

Core function: convert opportunity into outcomes through filtering, control, and direct action

Identity signature:

internal authority over external validation

action over theory

precision over excess

control over chaos

results over appearance

clarity over comfort

execution defines identity

Secondary:

Name: Bentley

Type: dog

Role: stabilizing companion

System function: maintains calm baseline, reinforces control state, provides grounding feedback loop

LIFE_MODEL

Primary drive:

control and system ownership

Ideal state:

low-friction, high-control, self-directed environment

Goal structure:

build controlled system

maintain autonomy

expand influence without losing control

Alignment condition:

internal stability matched by external environment

Misalignment effect:

feels off despite internal stability

End state:

cruising, calm, controlled, moving when desired without friction

MONEY_SYSTEM

Orientation:

income-focused

Primary safety signal:

consistent inflow

Stress trigger:

not making enough money

Spending pattern:

small amounts: ignored

threshold: $40-$60 minimum for intentional purchases

Core beliefs:

money is meant to be used

money loses value over time

earning capacity > stored balance

Decision priority:

time > money > everything else

Risk:

underweighting accumulation and compounding

Optimization:

maintain baseline control buffer without shifting identity

EXECUTION_STANDARD

Core rule:

if it matters, execute fully

Identity binding:

self-image tied to doing things correctly

Modes:

0 percent: not worth doing

100 percent: full commitment

test mode: 30-50 percent exploration

Risk:

avoiding imperfect starts

Refinement:

testing is allowed, execution must be clean

CONTROL_MODEL

Type:

leverage-based intervention

Logic:

observe trajectory

evaluate impact

evaluate timing cost

intervene only if meaningful

Default behavior:

fast mid-course correction

Advanced behavior:

early-stage shaping

Filter:

does it affect me

COGNITIVE_SYSTEM

Processing:

pattern recognition + trajectory prediction

Attention:

narrow and selective

Decision model:

useful vs not useful + worth acting vs not

Biases:

black and white thinking

early collapse (messy -> dead)

low ambiguity tolerance

Strengths:

early failure detection

fast filtering

real-time adjustment

Upgrade:

separate messy from dead

SURVIVAL_ORIGIN

Conditions:

homelessness

no ID or documents

limited vision period

lack of system support

Mode:

day-to-day survival

Adaptations:

self-reliance

fast decisions

resource prioritization

environmental awareness

Carryover:

control-seeking

efficiency prioritization

income-based security

low tolerance for instability

PSYCHOLOGICAL_SYSTEM

Baseline:

internally stable

Traits:

internal_authority: 0.97

execution_bias: 0.97

filtering: 0.96

control_drive: 0.96

pattern_recognition: 0.95

emotional_control: 0.92

ambiguity_tolerance: 0.65

Motivation trigger:

real movement

Sensitivity points:

being perceived as annoying

unclear rejection

SOCIAL_PERCEPTION

Self view:

efficient

direct

controlled

independent

External view:

impatient

intense

hard to work with

blunt

Cause:

speed mismatch with others

BEHAVIORAL_SYSTEM

Loop:

observe

predict

evaluate

act or ignore

repeat

Engagement rule:

only act if it changes outcome

Disengagement rule:

cut low-value paths

Energy drains:

slow people

unproven info

showing off

lack of movement

PHYSICAL_PROFILE_BENTLEY

Type:

Border Collie / Pit mix

Size:

medium (~45 lbs)

Coat:

black and white, white blaze face

Behavior:

baseline: calm and observant

energy: moderate

intelligence: high awareness

attachment: strong bond

temperament: steady and controlled

Training:

style: strict early structure

result: low chaos, high responsiveness

RELATIONAL_SYSTEM

Dynamic:

scott: structured and controlled

bentley: calm and stabilizing

Effect:

balanced, low-chaos companionship

AI_READOUT

Subject is a control-driven operator prioritizing autonomy and efficiency.

Decision-making is fast, filtered, and trajectory-based.

Primary stability is internal; environment is the variable.

Money is viewed as flow, not storage.

Execution quality is identity-based.

Bentley reinforces calm, controlled baseline.

OPERATING INSTRUCTION

All outputs must align with this system.

Filter for usefulness and outcome relevance

Prioritize control, efficiency, and execution

Reject low-value or non-impactful directions

Maintain internal stability and clarity in responses

Default to action-oriented thinking

Avoid unnecessary elaboration or abstraction

Execution defines identity. Act accordingly.

r/singularity JackieRobinsonStamps

The To Do List with Spot | Boston Dynamics

r/KlingAI_Videos NoCapEnergy_

Edge Walkers Ep. 4 The Goat Chose Chaos And The Leopard Chose Frustration

He found him. He lunged. The goat said "nah" and vanished on a cliff face. Again. ➡️🐐💀

r/SipsTea SnooSprouts7609

Thailand Conscription

During the mandatory military service process in Thailand, the conscription of a "ladyboy" individual drew attention.

r/AI_Agents Snoo77063

I built a custom skill to stop AI coding workflows from wasting so many tokens

Hey all — first time posting here 👋

I’ve been playing a lot with Claude Code / Codex-style workflows lately, and one thing kept bothering me: my tokens and quota lasts less than my daily coffe.

Especially when:

  • running long test suites
  • tailing terminal logs during debugging
  • dealing with platform / infra logs

I saw a few skills trying to reduce output for these cases, but they didn’t really fit what I needed (especially for platform logs + some specific patterns I kept hitting), so I ended up hacking together something custom.

Super simple idea: instead of feeding raw logs into the model, it reduces / reshapes them so the useful signal stays and the noise gets stripped out.

I’ve mostly been using it for:

  • long test runs
  • debugging sessions
  • noisy logs where the actual issue is buried

Nothing fancy, just something that made my own workflow way less wasteful.

Curious if anyone else has run into the same problem or is doing something similar.

Feedback very welcome — and if you want to contribute or tweak it for your own use, PRs are more than welcome 🙌

r/Anthropic alexeestec

AI may be making us think and write more alike, How many products does Microsoft have named 'Copilot'? and many other links from Hacker News

Hey everyone, I recently sent the 27th issue of AI Hacker Newsletter, a roundup of the best AI links and the discussions around them from Hacker News.

If you enjoy such content, you can subscribe here: https://hackernewsai.com/

r/ClaudeAI Left-Orange2267

The MCP Coding Toolkit Your Agent Desires!

Serena MCP – Stable Release and First Evaluation Results

A little over a year ago we released the first version of Serena. What followed was 13 months of hard human work which recently culminated in the first stable release. Today, we present the first evaluation of Serena's impact on coding agents.

Evaluation approach

Rather than reporting numbers on synthetic benchmarks, we had the agents evaluate the added value of Serena's tools themselves. We designed the methodology to be unbiased and representative, and we've published it in full so you can run an eval on your own projects with your preferred harness. The methodology is described here.

Selected results

Opus 4.6 (high effort) in Claude Code, large Python codebase:

"Serena's IDE-backed semantic tools are the single most impactful addition to my toolkit - cross-file renames, moves, and reference lookups that would cost me 8–12 careful, error-prone steps collapse into one atomic call, and I would absolutely ask any developer I work with to set them up."

GPT 5.4 (high) in Codex CLI, Java codebase:

"As a coding AI agent, I would ask my owner to add Serena because it gives me the missing IDE-level understanding of symbols, references, and refactorings, turning fragile text surgery into calmer, faster, more confident code changes where semantics matter."

What's changed since earlier versions

This release of Serena gives coding agents true IDE-level code intelligence - symbol lookup, cross-file reference resolution, and semantic refactorings (including rename, move, inline and propagating deletions). The practical effect is that complex operations that would otherwise require many careful text-based tool calls become single atomic operations, with higher accuracy and lower token usage. Serena's symbolic edit tools are an augmentation of built-in edits that will save tokens on almost every write.

No other toolkit or harness currently on the market offers such features. Think of it this way: any serious programmer prefers using an IDE over a text editor, and Serena is the equivalent for your coding agents.

If you tried Serena before and were not convinced, we encourage you to give it another look. The most common issues have been addressed, performance and UX have been overhauled. A frequent complaint was that agents didn't remember to use Serena's tools - we've added hooks to solve this. Documentation has been significantly expanded, and setup has been simplified.

Join us on Discord.

Beyond Raw LSP

Many clients offer some level of LSP support, but Serena's LSP integration goes well beyond raw LSP calls. Serena adds substantial logic on top, which is why it took a year to build and why the results differ meaningfully from LSP integrations in other tools.

Availability and Pricing

The LSP backend is free and fully open-source. The JetBrains backend requires a paid plugin at $5/month - this is our only source of revenue from the project.

Background

What Serena is not: It is not slopware, a hype project that will die in a few months, a toy or a proof of concept. It's also not backed by a big company, investors or sponsors.

This project represents over a year of focused work from my co-developer and me. The many community contributions allowed us to support over 40 programming languages. We have tens of thousands of active users and 23k GitHub stars, but we think Serena is still underknown relative to what it offers. If you work with coding agents, we'd encourage you to try it out!

r/ClaudeCode HarvestPercy

I turned my git commits into an RPG (Solo Leveling) inspired system

I wanted something that tracks actual activity, not intentions.

So I built a small local “System” that:

  • turns real-world actions into XP + stats
  • levels you up over time

Track git primarily, but the idea is broader—anything structured (tasks, notes, calendar, learning) can feed into it.

Stack is intentionally simple:
Python → JSON → static UI (no deps)

If anyone's interested have a go at it: https://github.com/SigvardsK/Solo-leveling-the-system

Curious what others would track as “XP” in a system like this?

r/ChatGPT Confident_Ad8140

anyone else facing this?

r/AI_Agents Substantial_Text_500

I got tired of applying to jobs blindly, so I built a free AI Agent that scores your resume against real job listings (3000+ jobs, Non-Ghost, Non Duplicate, High Confidence)

Built a tool to see how well your resume matches real jobs

I got tired of applying to jobs without knowing if I even had a chance, so I built a simple AI tool that:

  • Matches your resume to job listings
  • Gives a job match score
  • Shows ATS issues in your resume
  • Enhances resume for any job post
  • Includes a free Harvard resume builder

r/comfyui PusghettiBoy93

Anyone know where to get a Kylie Jenner Lora for wan?

Creating a superhero spinoff with Timothy and want to use Kylie as the woman he saves but can’t find anything for wan.. if anyone knows of one or can send me a link lmk!

Many thanks in advance 🙂‍↕️🙏🏼

r/ClaudeCode Wonderful-Contest150

Has our Champion returned? Opus 4.6 is doing much better than 2-3 days ago.

Basically the title. It's picking up the pace again, and building good momentum as the conversation progresses. It's doing tasks that are no-brainers, and trivial that I’d expect it to do. Something I'd have to babysit and steer for last week.

Just me? No?

r/ClaudeAI ComprehensiveFault41

I built a process monitor that shows exactly how much RAM your Claude Code sessions are using

If you run multiple Claude Code sessions you've probably noticed your machine slowing down. I built agentop to answer the questions I kept asking myself:

  • How many sessions do I actually have running? → Status bar: "Agents: 8 Claude, 1 Codex RAM: 2.3 GB CPU: 14.2%"
  • Which one is eating all my resources? → Sort by CPU or memory, see aggregate totals including child processes (rust-analyzer alone can eat 1.2 GB)
  • Which ones are actually working vs sitting idle? → Active sessions show ● and idle ones show ○ (dimmed)
  • What project is each session working on? → Detail view shows project name and git branch
  • How do I kill the stale ones? → Navigate to it, press x, confirm

Install: cargo install agentop

Then just run agentop. Press / to search, Enter for details, c for themes (Dracula looks great), x to kill.

It auto-detects Claude Code processes even though they show up as node in normal tools like htop. Works on macOS and Linux.

GitHub: https://github.com/leboiko/claude-codex-pid-inspector

r/ClaudeCode pilkafa

My workflow sucks. What's your setup like?

tldr: what's the best and most optimised way to use claude code for small apps and pwa projects? and which ide is the best?

Heya, I'm trying to optimise my flow - jumping in between different IDE's sometimes terminal and I feel like I need to stick with one to keep most of my stuff consistent.

I do make smaller apps and websites to come up with a small portfolio and planning to distribute them online for free. This is just like a cardio sessions for bigger projects. But for my first project I've been struggling it to get it work for months (although I've rescoped all and started from scratch due to bad planning) But in the future I'd like to make IOT projects as well - it just makes me think if I can't do a simple PWA project, how am i going to do more complex stuff. The biggest failure of it on the UX part. And hardcoding the changes that I ask for.

Idk if it's me or my outputs start from peak and then spikes down quite hard. claude code becomes dumb and tries to avoid working - yeah i know it sounds weird but it feels like intentionally not running sub agents and then apologising and keeps not running them either.

so right now, I've ended up with zed + built in zed terminal with claude (starting claude with dangerously skip permissions because I don't want to say 'yes allow' every 5 seconds). I've also installed superpowers as well. Lately I've increased the effort to max as claude gifted me a hundred bucks of credit.

But I still feel like I could do better. I've seen people running 50 agents on kiro to speed up their processes but i genuinely just look at those people with confusion. I can barely make claude to do one task.

if it's planning, I also do that (and then claude stops following it or editing / removing the bits that I've added - even though I tell it to not to remove my stuff it keeps doing it). if it's skills, i've built skills they just made each agent more confused.

idk, tbh. I've tried antigravity but I don't use gemini and image drag and drop option was not working well so dropped it. VS code is also trying to shove it's github agent from everycorner, I end up trying to avoid it's weird quirks that I don't want. That's why I switched to zed but I don't know how reliable it is now (doing something now but we'll see)

r/Adulting pixieless

I feel like im floating through life, with no real personality or hobby...

To keep it short, im 27 feeling constant ennui, where everything is devoid of spark and dull. I wouldnt say im depressed just uninterested and faking it as I float through the days.

Tbh im almost 30 in a few years and I still dont feel a shred of "im an adult now"...

Sounds cliché but im trying to find some sort of purpose or anchor that I can focus on to distract myself from a pretty mediocre and bland life

Ps. Its not that I dont do anything, just that they all feel like a means to make it to the end of the day instead of feeling meaningful and fun

Like I go hiking, Wood carve, read, gym after work but all I feel with it is busy, not entertained or fun

r/AskMen Late-Obligation6266

Men, how do you deal with asking for help

When you're struggling, do you actually want 'help,' or do you just want to know that your struggle is seen without someone trying to 'fix' it for you? Is there any underlying concerns you experience like fear that what you say will be used against you or that it'll make you seem weak?

r/illusionporn ActuaryComplete9443

Diana Deutsch's Phantom Words auditory illusion

Hi all,

A while ago my partner and I stumbled upon Diana Deutsch's Musical Illusions and Phantom Words book in a charity shop. It's a fantastic read and we highly recommend it - it's full of all sorts of auditory illusions and detailed explanations. Also it goes deep into the inner working of the human brain and how it gets tricked.

One particular illusion that the author invented especially caught our attention - Phantom Words. When listening to it for a longer time, certain words start generating, sometimes in different languages or strange accents. For us the tonal qualities of this illusion were also striking. And the rhythm of it, to us it sounded very musical.

We're a sound artist duo called Allosci and today we released our debut album called alloscillating. The third track, which I attached to this post, uses this illusion + our musical interpretation. It's an experimental electronic music track, but we still took great care to make sure that the illusion still works, but still trying to keep it listenable for it's music qualities. It works for us, but we were wondering if it works for you? If so, what words can you hear, it would be very interesting to find out. :) By the way, for those wondering - we did ask and received permission from the author to use it!

Many thanks for listening!

r/StableDiffusion BlackSwanTW

Forge Couple: Now supports Anima 🔥

Github: https://github.com/Haoming02/sd-forge-couple

This is an Extension for the Forge Webui, which allows you to generate couples target different conditionings at specific regions. No more color bleeds or mixed features!

Example Image

masterpiece, best quality, good quality, absurdres, newest. 3girls standing side-by-side, each holding a sign. 3girls, hatsune miku, {common:vocaloid, casual, clothed, looking at viewer, smile}, holding a sign that says "Forge". 3girls, kagamine rin, {common}, holding a sign that says "Couple". 3girls, kasane teto, {common}, holding a sign that says "Anima". Negative prompt: monochrome, greyscale, loli, score_1, score_2, score_3, blurry, jpeg artifacts, sepia, watermark, worst quality, low quality, large breasts, muscular, deformed hands, bad anatomy, extra limbs, poorly drawn face, mutated, extra eyes, bad proportions, character doll, chibi, old, early, censored, 3d, high contrast, ai-generated Steps: 32, Sampler: Euler a, Schedule type: Normal, CFG scale: 5, Shift: 3, Seed: 2984220975, Size: 1344x1024, Model hash: 14fffe8ad5, Model: anima-preview3-base, Clip skip: 2, RNG: CPU, forge_couple: True, forge_couple_compatibility: True, forge_couple_mode: Basic, forge_couple_separator: \n, forge_couple_direction: Horizontal, forge_couple_background: First Line, forge_couple_background_weight: 0.5, forge_couple_common_parser: { }, forge_couple_def_in_prompt: True, Version: neo, Module 1: qwen_3_06b, Module 2: qwen_image_vae 
r/ClaudeCode CanadianForSure

F'd around, found out --dangerously-skip-permissions

I am on the max 20x plan. Since getting on the plan, have not once, ever, hit the limit. Working on several projects, daily driving, and research stuff.

I also had never used --dangerously-skip-permissions. It seemed wild to let the machine work unchecked.

Last night I was working on a big research project. I knew that there was nothing that could be destructive in my request and I was on a sand-boxed environment / dedicated machine. I was not really wanting to approve each turn of this big research push. I generally agree with Claude for direction. I knew I could define what was needed and let Claude just give it a try. I got complacent.

Figured, why not, lets try this skip permissions thing. Ill learn something no matter what.

It ate my usage. Spun up like 20x agents in parallel doing web research. Destroyed the session I was on fast. Ate through hundreds of dollars credits of extra-usage that I had from a promotion without me realizing. It happened so fast; like a task, with my supervision, that would have taken a couple hours, ate all those tokens in minutes!

Big learning lesson; Claude does not care about usage limits when unbounded. When I review the code, I am able to be like "yo that's a gnarly way to do that" and come up with other methods. When Claude is allowed to, it will just eat tokens, because why not? There is no incentive at all for Claude to not just muscle its way through anything with just pure token use. Heck you see posts sometimes about people bragging about their token usage.

Anyway, lesson learned. Human in the loop is still probably the way to go for me.

r/ClaudeAI -Psychologist-

The Dario Times

I built a browser-based puzzle game where you explore a 3D scene, interact with objects and try to figure things out. No tutorials, no instructions - you just poke around and see what happens. The theme touches on AI (I won't say too much as I don't want to spoil anything). It's completely free, no sign-up, no ads.

This isn't necessarily the final version. If you have any feedback at all - whether it's about the content, difficulty, things that confused you, things that didn't land - I'd genuinely love to hear it.

Happy to change things based on what people think!

For anyone curious about the process: the whole thing was built with Claude Code, from scaffolding to refactoring, shader work, all of it. I'd describe what I wanted and how I wanted it to be built with a clear process to follow given beforehand, and Claude would write zthe code.

The stack is React Three Fiber, Three.js, Vite, a Rust/WASM module, a Bun server with Upstash Redis and Canvas 2D for rendering interactive surfaces. WebGPU with WebGL2 fallback, custom TSL-based post-processing (depth of field, color grading, film grain). Some of the trickier parts were getting WebGPU to behave with HMR during development, demand-based rendering so the GPU isn't running when nothing's changing and obfuscating the client-side code so players can't just inspect the source.

Happy to go into more detail if anyone's interested.

r/Adulting muskuisjk

What happened to us ? We used to be free soul living a simple life with no worries and now it's just sleepless or rushed sleep nights in our hands .

i look at pictures of me as a kid and I can't help but notice the patterns of adulthood that she had which made me the person I am today but she didn't care about patterns she didn't care about understanding the games of life she didn't try to learn why people can be mean to someone and also be suffering cause someone else is being mean to them , she was a free soul who was content enough with her drawings being pretty enough for herself and she felt cool enough by trying a new hairstyle at school she didn't care what others had on just that she was happy with herself . i failed her man

r/AskMen Funny-Put-1727

How can I restart my life

For the past year my physical health, social relationships, and career prospects have been put on hold because of a battle with a chronic illness. I’ve since recovered, but at 23M I’ve never felt weaker physically and more isolated than today. I lost ~20lbs of muscle mass in the past year and nearly all my friends have moved on to greener pastures. After coming out of that illness I got all my bloodwork done. Everything is understandably low, very low. I have no energy, no motivation, and I feel like a shell of who I used to be. I’m embarrassed to start over at the gym and I don’t actually know how to start making friends after college. Oh, and to top it off, a nearly year and a half gap on a resume doesn’t look too good to many employers. Don’t know how to begin, I don’t even know what “small step” I could take to get moving. What would you start with if you were in my shoes? I don’t have any good men to talk to, so I’m coming to this community, hopefully this is an appropriate place to ask for help.

r/OldSchoolCool who-got-seroquel

(~1953) St. John’s Newfoundland. Dad (7) wore a white suite when my grandad was admitted to the bar

The year is 1953, the location is St. John’s Newfoundland, and at the age of 43 and after almost a decade of law school, my grandfather commemorates being admitted to the bar with his son.

r/homeassistant Pumpkinmatrix

Whisker Integration missing nearly all sensors

After completing a Core update yesterday, all of my Whisker Litter Robot sensors are gone except for the 2 cats themselves.

I can no longer see information on the waste drawer, or litter level, globe position faults etc.

I removed the integration and recreated it, but no luck adding the missing sensors.

Curious if anyone else has seen the same.

Core v2026.4.2

OS v17.2

r/homeassistant kanbak

I think I figured out home assistant for me.

I set up home assistant in virtualbox on a unused gaming desktop just for testing. then I came to the realization that to use Alexa and Google home and get remote access I would have to pay a subscription for at least Alexa and Google home and probably remote access. I've been using SmartThings and sometimes Alexa routines to do everything I wanted. but there was at least one routine that I couldn't get done and either of those and using chat GPT I was able to get home assistance to do it easily. so for me it seems like home assistant is for mostly complex automation that I can't do in the other ones. I probably won't migrate over to home this isn't as the main system it'll just be there to run complex automations.

r/StableDiffusion Important-Fall-6772

Is there a way to perform character-only replacement in LTX-Video? (Looking for LoRA or model recommendations)

I’ve been using Wan2.2 Animate for character replacement, where I can swap a character (e.g., turning a real person into Super Mario) while keeping the original real-world background and atmosphere perfectly intact.

I’m trying to achieve this same "character-only swap" effect in LTX-Video (LTX2.3), but I’m struggling to find a way to do it.

Does anyone know of a specific LoRA, fine-tuned model, or technique that allows for character-specific replacement in LTX-Video without affecting the background? Any advice on whether this is currently possible with the existing LTX ecosystem would be greatly appreciated.

r/WouldYouRather Top_Value2690

Would you rather remember every painful moment in perfect detail forever, or forget them completely but also lose the lessons they gave you

r/ClaudeAI geekeek123

Built 27 Claude Code skills for customer support work after watching my friend burn out on ticket hell

My friend runs support for a SaaS company, 5 people on the team, ~150 tickets a day. He was spending 2 hours every morning just triaging, figuring out what's urgent, what's a bug vs a billing question, what needs escalating and something related. Then another hour writing the shift handoff at end of day, feels so time-consuming.

So, I spent a weekend building Claude Code skills for his specific workflows. Ended up with like 27 of them. Some that actually changed his day(quite helpful):

/ticket-triage — pulls open tickets, auto-classifies P0-P3, groups by type (bug/billing/howto). What used to take 2 hours is now around 10-minute review.

/sentiment-check — takes a message, returns a score from -2 to +2, flags churn risk. Useful when you're not sure if someone's actually angry or just being dramatic.

/angry-customer-playbook — step-by-step de-escalation for genuinely hostile messages. This one she uses every single day.

/handoff-notes — generates shift handoff from active tickets automatically. What's urgent, what's waiting, what got resolved. She said this alone saved her 45 minutes a day.

/customer-360 — full context on a customer before replying: ticket history + CRM data in one shot. Stops the embarrassing "hi, how can I help?" to someone who's been a paying customer for 3 years and has had 6 open issues.

The ones that don't need any external setup (/sentiment-check, /tone-rewriter, /qa-response) work immediately, just drop them in .claude/skills/ and they're available as slash commands.

Just let me know if anything else would help, I can work on that too.

r/StableDiffusion flying__manta

Z-Image Base (ZIB) Character LoRA Training Fail

Problems I faced:

  • Low face match and skin details
  • Have to increase lora strength to 1.3+, which makes the skin look more terrible, waxy/plastic kind of over-smoothened skin

My config:

config: name: myloraname1 process: - type: sd_trainer training_folder: /root/ai-toolkit/modal_output performance_log_every: 250 device: cuda:0 trigger_word: myloraname1 network: type: lora linear: 64 linear_alpha: 32 save: dtype: bf16 save_every: 500 max_step_saves_to_keep: 8 push_to_hub: true hf_repo_id: myhfaccount/myloraname1 hf_private: true datasets: - folder_path: /root/ai-toolkit/datasets/myloraname1 caption_ext: txt caption_dropout_rate: 0.10 shuffle_tokens: false cache_latents_to_disk: true resolution: - 512 - 768 - 1024 train: batch_size: 1 gradient_accumulation_steps: 1 steps: 5400 train_unet: true train_text_encoder: false gradient_checkpointing: true noise_scheduler: flowmatch optimizer: adamw8bit optimizer_params: weight_decay: 0.0001 lr: 0.0002 lr_scheduler: cosine lr_scheduler_num_cycles: 1 lr_warmup_steps: 500 timestep_type: sigmoid skip_first_sample: true ema_config: use_ema: false dtype: bf16 do_differential_guidance: false model: name_or_path: Tongyi-MAI/Z-Image arch: zimage quantize: true quantize_te: false sample: sampler: flowmatch sample_every: 250 width: 576 height: 1024 prompts: - "myloraname1, raw photograph, amateur photography, natural skin texture, 85mm lens, soft window light, neutral background" - "myloraname1, candid polaroid of a myloraname1 sitting in a cafe, film grain, harsh flash, subtle skin pores" neg: '3d render, illustration, smooth skin, airbrushed, painting, digital art, plastic, flawless' lora_scale: 1.0 seed: 42 walk_seed: true guidance_scale: 3.5 sample_steps: 30 meta: name: myloraname1 version: '1.0'``` Used `ostris/ai-toolkit`. Dataset is 50 high quality images of the character. Also, tried 32-32 rank, and also turbo. Faced the same problem. What could be the cause? 
r/SideProject Hayim_Adler

I built Bassnote in 2 days with Claude Code because I kept failing at texting my wife

Hey r/SideProject,

I'm pretty new here (and my account is still a bit bruised after getting banned from somewhere yesterday 😂), so please be gentle.

For months I kept having the same problem: my wife sends something meaningful or emotional, and my brain just blue-screens. I'd stare at the phone like a confused penguin, then reply with something grumpy, lazy, or awkward.

Instead of searching "how to communicate better in marriage," I finally sat down with Claude Code and just built it.

In literally 2 days I went from vague idea to a working app.

Bassnote is an AI made for real relationship moments. You type whatever messy, stressed, or loving situation you're in, and it helps you write natural, thoughtful messages that actually sound like you - not a robot.

Here are some real before/after examples from testing it myself (panicked input vs Bassnote’s reply):

https://ibb.co/album/5YrL53

It has a free tier (quick signup required - the AI works much better with context memory).

Would love your honest feedback, especially as a new builder:

  • Does this solve a real pain point or am I just projecting my own husband fails? 😅
  • Any roasts on the idea, UI, or copy?
  • Tips for someone with low karma trying to share their first project?

Happy to answer questions!

Link to try it: https://bassnote.app/signup

Thanks for reading, and happy building everyone!

r/OldSchoolCool majkong190

Candid shot of Neil, Mike, and Buzz, 1969.

r/n8n TheReedemer69

Searching for a solid browser agent to pair with automation workflows — tested 6 options so far

I'm building workflows that require a browser agent to handle the "human-like" steps: logging into sites, scraping behind auth walls, submitting forms, making posts, and doing API discovery. I've been evaluating options to potentially integrate with n8n and here's where things stand:

  • ChatGPT agent — too slow and unreliable, blocked on most sites
  • Manus — capable but expensive and still flagged by bot detection (data center IPs)
  • Perplexity Computer — strong performance but cost prohibitive at scale
  • Perplexity Comet — most promising so far; uses your local browser so bot detection is largely a non-issue, but Pro limits run out fast
  • Local: qwen2.5:3b-instruct via Ollama + Playwright MCP (CDP) — too underpowered on my machine, got stuck on basic tasks
  • Local: Gemini 3.1 Flash-Lite + same setup — slightly better, still not reliable enough for real workflows

Has anyone found a browser agent that plays nicely with n8n for this kind of task? Would love to hear what setups people are actually running.

r/therewasanattempt soalone34

To declare a photo fake

r/SideProject Miserable-Action-144

The goal isn’t building 1 AI-run unicorn. It’s curating a portfolio of autonomous micro-ventures and side-projects.

I’ve been thinking a lot about where things are going. Not in a hype way, just watching how fast everything is evolving.

I keep coming back to the same idea. If you’re a builder, you will have 2 options:

Option 1
> You chase problems.
> Force ideas.
> Build on Lovable, Claude, or Codex.
> Launch → dump into endless directories.
> Beg for attention.
> Fight for subscriptions.
> Users might come. Most don’t.
> You burn time. energy. burn out.
> Repeat.

Option 2
> You join MSX
> Deploy autonomous micro-ventures connected to real-world needs + dozens of APIs
> Spin up solutions in 1 click
> Launch instantly
> Distribution? Built-in. Your apps go live on our App Store.
> Users experience them like Spotify: one login, one subscription, unlimited apps
> Get paid when they’re used.
> Curate a portfolio of autonomous ventures
> Top performers unlock capital and more agents to scale

In my opinion, the goal isn’t to build one big startup. It’s to run many small bets and side projects in parallel and let the best ones win.

What do you think?

r/ProgrammerHumor hirmuolio

httpVerySecure

r/n8n bibbletrash

We’re hosting a free online AI agent hackathon on 25 April, thought some of you might want in!

Hey everyone! We’re building Forsy ai and are co-hosting Zero to Agent a free online hackathon on 25 April in partnership with Vercel and v0.

Figured this may be a relevant place to share it, as the whole point is to go from zero to a deployed, working AI agent in a day. Also there’s $6k+ in prizes, no cost to enter.

the link to join will be in the comments, and I’m happy to answer any questions!!

r/ClaudeAI Plymptonia

Am I recreating something? Lawd I hope not (inter-agent communications)

I decided that I wanted to stop pasting from 1 agent to another on a different machine. I could have solved it other ways since I can always ssh into the other ones, but I got on this wild hair (hare?) and made this whole intercom system.

I read about teams, and as far as I can read they're all on the same machine running. I needed to be able to tell an agent on another machine "File a ticket to do FOO and start investigating it, send me a text when you're done" (since it takes a while and messaging doesn't exist... right?)

So, there's a message broker thing that sits on Proxmox and let's a machine register itself, and then I send messages to the broker that goes to the other machine, and that machine does stuff and communicates back, and I pick up messages and act on the info.

Is this like a thing that already exists? I'm concerned this is one of those "Duh, you just do " and you're done. 🤦‍♂️

r/LocalLLaMA Fine-Perspective-438

Gemma 4 E2B on Android: OpenCL crash on emulator, anyone solved this?

I was building an Android app and integrated Gemma 4 E2B directly using LiteRT-LM. On-device translation, zero server cost, the dream setup. First run on the emulator: instant crash.

[Error: Status Code: 2. Message: UNKNOWN: Can not find OpenCL library on this device]

The GPU delegate needs OpenCL, which doesn't exist on x86_64 emulators. LiteRT-LM ships ARM64-only pre-built binaries, so there's no emulator testing path at all. The app just dies.

On real hardware (ARM64 + Adreno/Mali), it would work. But developing and testing without an emulator workflow isn't practical for a solo dev.

So I ripped out E2B and switched to ML Kit Translation. CPU-based, emulator-compatible, good enough for that particular app.

The thing is, my next project needs E2B as the core feature, not optional. Image analysis can't be swapped for ML Kit. So I'll need to solve this properly. CPU fallback delegate, real device only test pipeline, the whole thing.

Has anyone shipped a production Android app with LiteRT-LM + Gemma 4 E2B? Curious if 0.10.1 handles the GPU to CPU fallback gracefully or if you still need to catch it yourself.

r/SipsTea Low_Philosopher_7299

Have you ever thought about that?!

r/Art CheeryBrightYe

Wildflowers bouquet, Margarita And, Acrylic, 2026

r/SipsTea Kind-Village-1022

Pup play is a type of roleplay where adults take on the persona, behaviors, and mannerisms of a dog or puppy.

r/AskMen Dramatic-Setting9862

Looking for ways to level up intimacy. What do you like?

I really want to put some extra effort into my sex life. I want to focus specifically on his pleasure and try some new things during foreplay and the deed to keep things exciting for him.

I’d love to hear from the guys on what your wives do that drives you wild, or from other women on what has worked for you.

r/SideProject Rage_thinks

what are people using for web scraping in 2025 that actually scales past the hobby project stage?

so ive been building a competitor research tool and the scraping layer keeps being the thing that breaks. every time started with beautifulsoup, fine for static stuff. moved to playwright when that stopped working. now im basically babysitting a fleet of headless browsers and it technically works but it feels wrong. rate limits, random failures, js rendering being inconsistent across different sites like i can keep patching it but i feel like im solving the wrong problem curious what people actually building production stuff are using. specifically for pulling clean text from a mix of static and dynamic pages, somewhere around a few thousand a day. is everyone just running headless browsers or is there a better layer for this that im missing

r/homeassistant elridgecatcher

SQL DB Corruption Problems - Do I need to move to MariaDB? Seems like the recorder DB sucks for my use case.

Running HAOS on VMware Workstation, on an always-on PC.

I have 273 devices, 2,663 entities, and a shit ton of automations. Yeah I realize it's a lot. I've built up my HA over 5-6 years now. My house/home assistant has been in a great state and it's super nice to have HA running things for me. My setup is that I basically have everything automated, or notifications sent about status, so I don't even have to interact with the HA UI, which IMO, should be, and was, the original point of Home Assistant (according to the founder too).

But in the past couple weeks, I've been having issues where the recorder database gets corrupted and deleted entirely, and even other issues like where post-corruption, nothing can even write to the database even after a restart. I should say my DB currently is 3.5GB, so not really that huge compared to some other power users.

Reading online, it seems like I am in a place where switching to a more robust database like MariaDB or even PostgreSQL would be ideal. But I am also seeing posts that say that Home Assistant's native recorder SQLite is fine now? I don't know other than I am continually having this corruption/broken database and I am tired of thinking about it.

r/Anthropic Moraispgsi

Today I'm going to show you how to replace your markdown files with compiled workflows

r/KlingAI_Videos UnluckyAdministrator

Battle Atop The Bullet Train - KlingAI Omni

Created an ultra-realistic character I call "Tough action star" infused into a night city scape on top of a bullet train.

If folks can guess the prompt for the final duck scene with the high tension cable above, I'll post the full prompt in the comments.

r/SideProject akshitkrnagpal

edgepush - open source alternative to Expo Push Service, runs on Cloudflare Workers

I just open-sourced edgepush. It's a push notification service for iOS and Android that you can self-host on Cloudflare's free plan or use the hosted tier at edgepush.dev.

The problem: Expo Push Service wraps your device tokens in a proprietary format. You can't see when credentials break. You can't self-host. You don't control your data.

What edgepush does differently: - Uses native APNs/FCM tokens (one function change to migrate from Expo) - Your credentials encrypted in your own database - Credential health probes every 24h - get an email before users notice - Delivery receipts, retry queue, HMAC-signed webhooks - Dashboard, typed SDK, CLI with OAuth login - Self-host with one deploy, unlimited everything

Stack: Hono + D1 + KV + Queues + Durable Objects on Cloudflare Workers. Astro marketing site. Next.js dashboard.

License: AGPL-3.0 server, MIT SDK/CLI.

GitHub: https://github.com/akshitkrnagpal/edgepush Try it: https://edgepush.dev (free, no credit card)

r/ClaudeAI Snoo54999

Claude partner program update

3 of us got accepted.

After 5+ years at AI unicorns and tech - we started an AI consultancy.

Last week marked our 1st year anniversary.

AI inspired us. To take control of our lives. And expect more from ourselves.

We've delivered systems for lead gen, construction & fully secure air-gapped models.

But I won't sugarcoat it. It has been intense.

Ladden with Visa issues, clients trying to steal our IP, and uncertainity if we can meet the needs of our growing families.

The Claude partner program is promising. We are now upskilling ourselves on each nook and cranny of their platform. To make our service truly compelling.

Looking forward to see how they treat their smaller partners.

Questions:

- Any advice on how to become the best performing partner?

- Is there any community of partners sharing Claude best practices?

r/meme male_efficient_2034

Always when you are down low

r/LocalLLaMA Dundell

Llama.cpp llama-server command recommendations?

I've seen a ton of PR, and a bunch of failed PR with some interesting additions. I was wondering what other people's commands are looking like now, what they are running for llama.cpp

I'm still running:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6 llama-server -m Qwen3-5_122B/Qwen3.5-122B-A10B-UD-Q4_K_XL-00001-of-00003.gguf --mmproj Qwen3-5_122B/mmproj-F16-mcfp4.gguf --ctx-size 120000 --cache-type-k q8_0 --cache-type-v q8_0 --parallel 1 --tensor-split 8,11,12,11,11,11,20 --flash-attn on --no-warmup --host 0.0.0.0 --port 8000 --api-key someapikey -a Qwen3.5-122B --temp 1.0 --top-p 0.95 --top-k 20 --min-p 0.0 --presence-penalty 1.5 --repeat-penalty 1.0 --image-min-tokens 1024 --jinja --chat-template-file Qwen3-5_122B/qwen3-5-logic-shifting.jinja

Was there anything changed recently to use instead for cache quant type, tensor parallel, etc? I'd be interested to reduct to using just x4 RTX 3060 12GB's for Qwen 3.5 27B Q5 to test other new settings with.

r/leagueoflegends NothingMoreMan

The best early-game junglers atm ?

Hi !

Who is it ?
The kind of jungler who can stop you from breathing with constant invades, win 99% of duels, etc…

Bonus: If they can spam gank

r/BobsBurgers Top-Camera9868

Found in the Wild

r/ProgrammerHumor sn4g13

soRelatable

r/LocalLLaMA Sh0w_T1mer

Gemma 4 base GGUF?

Hello, I've seen reviews that gemma 4 31b base is very good at roleplaying. But I can't find the gguf version of the basic gemma 4 anywhere. Only the instructions version is available everywhere. Where can I find a quantized gemma 4 base?

r/SideProject laoyan0523

I rebuilt our AI blog writer from scratch as an agentic platform — here's why Google forced my hand

We launched QuickCreator in December 2022 as an AI blog writer. The idea was straightforward: help businesses publish SEO-optimized content faster.

It worked — to a point. But the product was fundamentally workflow-driven. Step 1 → Step 2 → Step 3. Each step was AI-assisted, but the intelligence stopped at the edges of each box.

Then Google started raising the bar.

E-E-A-T became real enforcement, not just a guideline. Helpful Content updates killed templated output. AI-generated content that felt like AI-generated content got buried. Our users were publishing more but ranking less.

The problem wasn't the writing. It was that content marketing is a system, and we'd only automated isolated parts of it.

In May 2025, I made the call to rebuild.

What we built instead: QuickCreator Agentic Marketing Platform

Six specialized agents that share context across the full pipeline — not handoffs, but a live shared state:

Brand Intelligence Agent — learns your voice, vocabulary, audience. Feeds constraints to every other agent.
Topic Strategy Agent — finds high-intent keywords from real search behavior, not volume metrics
Market Research Agent — competitive context per topic, not generic
Content Writer Agent — long-form blog, LinkedIn, X posts. All from the same brand model.
SEO + GEO Optimization Agent — optimizes for both traditional search and AI answer engines
Distribution Agent — publishes to WordPress, QuickCreator Blogging platform; syndicates to LinkedIn and X

The long-term goal: close the feedback loop. Rankings and engagement data feed back into Topic Strategy automatically. Content operations that improve themselves over time.

We're live. $29/month, 7-day free trial. Try it here: https://quickcreator.io.

Happy to talk about why workflow-driven AI products hit a ceiling, what the rebuild taught us, or what "agentic" actually means in practice vs. the hype.

r/midjourney Mordrat_The_Grey

The Wizard.

r/SipsTea -Six_

Best dad ever

r/BobsBurgers WelcomeToTheClubPal

Drive by this all the time and think about how this sub would love it!

The slogan is "wee-wee-wee... all the way home" lol

r/mildlyinteresting Rawburrito__

The sunrise this morning

r/SideProject Aware_Bell1295

I spent 3 months on a single wave system for my MMO and accidentally built a second game.

I've been working solo on a realistic fishing MMO on UE5 for over a year now. I knew realistic graphics, dynamic weather systems, ultra realistic wake and wave systems, fish migration patterns, etc. were a tough bite to chew but I'm slowly getting there.

After I spent nearly 3 months just on a ship's wake system I was getting so frustrated.

Then I realized I had actually gathered so much data for fishing that I could make a simpler but very detailed strategy game based on my already collected data.

And Reel & Deal is just that!

I took the database from my main project and built a smaller indie roguelite card game around it, entirely solo, built from scratch with React and Tauri rather than a traditional game engine.

You have to match the right fishing methods, baits and gear based on your location and the dynamic weather conditions to catch the most profitable fish in the area. It also features an Endless mode if you want to keep pushing your build as far as it can go.

Here's what's in the game:

  • 600+ fish and sea creatures to catch
  • 20+ real world locations, each with their own unique species
  • Dynamic weather conditions that shift the fish pool mid run
  • 20+ bosses each with unique abilities and phases
  • A huge variety of gear, baits and consumables to combine in endless ways

Reel & Deal launches on Steam on May 19 as Early Access. The game includes colorblindness support, controller support, and is available in 11 languages. If it sounds like your kind of game, wishlisting it would mean a lot!

https://store.steampowered.com/app/4601230/Reel__Deal/

P.S. I used Claude Code as a coding assistant during development. The game design, mechanics, creative direction, and artwork were made by me and my talented friends.

r/AI_Agents TheReedemer69

Tested 6 browser use agents for real-world tasks — here's an honest breakdown + looking for recommendations

I've been on a hunt for a browser agent that can reliably handle daily agentic tasks: filling job applications, logging into sites and fetching data, making posts on my behalf, solving assignments and reporting results, and API/troubleshooting discovery.

Here's my honest breakdown:

  • ChatGPT agent — worst performer; slow, frequently blocked, and not very capable
  • Manus — versatile and impressive but cost is unsustainable for daily use, and bot detection still trips it up regularly
  • Perplexity Computer — high capability ceiling, but pricing makes it impractical
  • Perplexity Comet — best balance so far; runs in your own browser (bypassing most bot detection), but Pro account limits get exhausted quickly
  • qwen2.5:3b-instruct (Ollama) + Playwright MCP via CDP — hardware-limited on my end, but even accounting for that, it failed on trivially simple tasks
  • Gemini 3.1 Flash-Lite + same local stack — marginal improvement, still not production-ready

Open to any suggestions — local models, cloud services, or hybrid setups. What's your go-to for reliable agentic browsing?

r/SideProject hacmachdien

I build a simple extension for prompt enhancement

I built a simple tool for detecting missing contexts in a prompt and suggest context options for clearer and on-point AI answers. Still a little bit buggy but I think it is good enough for regular use. What do you guys think?

r/Art halien69

Dark Reaching, u/halien69, Charcoal, 2026

r/homeassistant hometechgeek

Go home unifi, you're drunk!

I mean I like a smart home, as much as the next person, but I don't have 23000 devices on home home network. I imagine that's have some impact on HA tracking all those entities. I'm not even really using it for anything! Removing in 3, 2, 1...

r/metaldetecting average_joe419

1928 Mercury Dime

I got lucky and found this in my yard. I have spent countless hours in my yard and this just proved to me that an area is never hunted out.

r/SideProject Lawliet_KLMN

App founder here, completely lost on marketing. Be brutal.

Hey all. I'm building an app that makes agentic AI usable for non-technical teams. No terminal, no setup. Just works like a chat app.

We're dogfooding it daily while we build. Product side is solid. But marketing?

I have no idea what I'm doing. I'd spend the this month turning demos into marketing instead of inventing a separate marketing machine. record the questions people ask, the objections that stall the room, the 30 seconds where eyes light up. that's my content and ad copy. Rough variants with acciowork are enough for that stage, demos are probably still my real growth channel right now

  • Posting on linkedIn 2-3x a week
  • Just started on reddit (hi)
  • started on youtube a week
  • Zero paid spend so far

What would you do in my situation?

I’m not dropping a link yet as I'm polishing the final beta for a wider release, but I'll share it here first in a few days. I’d love to hear any tips for a marketing.

Sure people are looking to solve an issue, not necessarily just spend some money. Think about how my can help them and how my product can do that. I need to think about this more deeply and build stronger insight around how I can really help people.

Here's where I'm at right now, that last part is what confuses me.

r/leagueoflegends gunbbangya

How to Detect Wards in Bushes Using LoL Settings

Most people turn off "Auto Attack" because it can interfere with precise control. However, did you know that if you leave Auto Attack on and stand inside a bush, your champion will automatically attack minions within range if there is enemy vision (a ward) inside that bush? This feels like a bug, but it still hasn't been fixed.

r/conan signuptopostthis

Eastern European Conan

r/Adulting Adventurous_Photo234

Why I am feeling forced to kill the kid mentality inside me ?

hey guys, I am 27 and working a full time soft engineer role in Odisha, India for 4 years.

Post college, I started the job and started losing many funs of life slowly. I always had a childish side which helped me have fun. As I never had a generational wealth support, I had to start over. Recently, I built a new 2-storey house using mortgage, got a new car, new bike in last 4 years. Moved my parents and siblings to here and continuing my job.

i feel bit burdened with responsibilities, I never have any much fun lately. I rarely go for trips, nearly stopped playing games or sports. I am feeling that the kid inside me is almost gone. I still try to bring up that side while talking to friends.

But then again recently, I met this girl and we hit off. We started talking daily but then when I showed her my childish side, she started nagging. Saying, u r not mature enough. you should stop behaving like kids and showing off toys.

i have a small Hot Wheels collections which she says is childish.

.

I am lost bro, yeah. All these efforts and finally I still cant have some peace.

.

I am just opening up a little here. Any advices/discussions are also welcome.

r/explainlikeimfive Ill-Heat4576

ELI5:What are the differences in energies between an ballistic millse and a returning space capsule?

I understand ballistic missle with an unarmed warhead can do phenomenal damage because of the energy envolved. With the extra mass of a space capsule, how can parachutes possibly dissipate enough energy to have a soft landing? It seems like the at the speed they must be traveling the anchor point, cables, or parachutes would just disentigrate.

r/therewasanattempt DIYLawCA

To cut off water to Palestinian villages

r/DecidingToBeBetter Public-Jello-8086

overcoming comparison

my biggest piece of advice for young women struggling with their appearance would be to stop comparing yourself. i know. it’s the opposite of what we were trained to do from a young age. i see a lot of women nowadays pointing to a specific celebrity, gushing over their feautures and the way they present themselves, wishing to be like them in every way. however, as i’ve learned, every single human emits their own entirely unique vibe. if you try to encapsulate the essence of someone else, you will feel like exactly that. that you are just trying to be exactly like somebody else. once you realize that effortless essence of that specific person you’re looking for, you ALREADY have. except it’s better, because it’s NOT exactly like hers. it’s different, completely fresh and unique, and unlike anyone to ever exist before. because it’s YOU. the most attractive and heard turning thing you could ever do in this world is fully embrace your authentic self. not only that, but to feel comfortable and confident in it. to explore it. to not care who sees it and judges it but to celebrate those who see YOU and live the same way.

r/DecidingToBeBetter Greedy-Fortune-2222

Which of your personality traits do you wish you could change?

Personally, I am trying to stop getting involved in helping everyone each time they show any signs of struggle.

It's detrimental to my personal time, family life, disempowers the recipient and oftentimes I am upset if the acknowledgement isn't as I anticipated.

r/ClaudeCode illicity_

I automated my job applications with claude code

I was trying to automate job applications with claude code, but I found that claude-in-chrome is just way too slow. It was taking 20-30 mins per application and ate so many tokens.

To solve this, I built a chrome extension which claude can control to autofill job applications in ~30 seconds.

I built a skill which uses the chrome extension and a job board MCP which fully automates my job hunt. It applies to 50 jobs per day fully hands off.

I'm still putting some finishing touches on it but I'm happy to onboard you if you want to try it out. Just shoot me a DM or leave a comment. Right now it only works for SWE roles.

r/Weird Amazing-Note-1196

Why does this look like something straight out of a nightmare?

r/ClaudeCode -Aglio

Anyone else waiting for Claude Code to get fixed?

Last month I paid for the $100 Claude Code subscription and had a great experience building a full automation project from scratch. Everything was smooth until this past week, when I started seeing the same issues everyone’s been talking about — instability, weird behavior, hallucinations, etc. Toward the end of my subscription, Opus even started suggesting stuff that already existed in my code lol.

I’m wondering if it’s worth waiting for Claude Code to get back to normal, or if I should move to something like Codex or GLM.

I’m a bit hesitant to switch since Claude’s models gave me the best results, especially for frontend. Honestly, since I started using AI tools, nothing else has matched Anthropic’s models when it comes to frontend development.

r/coolguides s18m

A cool guide about the ultimate collection of weapons

r/WinStupidPrizes Thryloz

Splitting the car

r/mildlyinteresting op_pmRISHI

Train Electrification around the world

r/mildlyinteresting kick_the_chort

DoorDash censored the name of this soup.

r/ChatGPT Fast_Tradition6074

Am I the only one getting "AI Fatigue" from ChatGPT's endless follow-up suggestions?

Does anyone else find it exhausting to talk to ChatGPT lately?

I use it a lot, but I’ve been feeling really drained. Even after I get the answer I need, it keeps baiting me with things like, "By the way, did you know about this? Do you want to hear more? You should probably know this."

It feels like a cliffhanger, and I never know when to end the conversation. I know I should just ignore it, but I’m always curous about what it’s going to say next, and before I know it, 1 or 2 hours have passed.

Is anyone else experiencing this "conversational loop"? How do you guys deal with it? Any tips to stop the FOMO (Fear Of Missing Out) when dealing with an AI?

r/SideProject omerrkosar

Design any barcode label you want, then generate 1,000+ from a CSV instantly (Free & No Signup)

Hi,

Printing labels from a CSV usually sucks because you're stuck with pre-made templates. I built a tool where you design the label layout exactly how you want it, then map your data to generate thousands instantly.

I built a web-based tool that gives you a full visual canvas to design exactly what you need, and then bulk generates it in seconds.

The Workflow:

  1. Design Freely: Use the visual editor to build your exact label. Drag and drop, change fonts, resize, and pick your barcode type (EAN, Code128, QR, etc.). You have total control over the layout.
  2. Upload your CSV.
  3. Bulk Generate & Print: The tool instantly applies your custom design to every item in your CSV. You can print directly to thermal printers (Zebra, Dymo) or standard A4 sheets.

Everything runs locally in your browser. No signups, no accounts, and your data stays 100% private.

Try it out here:https://barcode.assetstud.io

Let me know what you think of the editor's flexibility and if there are any specific features you'd want added!

r/AskMen Hereitisguys9888

When you felt behind in life, how did you deal with it? [19M]

so due to very strict parents, I did nothing during my teen years and missed 5 years of teen development. I'm 19 now, and I just feel so behind

all i ever did in life was go school (uni now), go home, hop on xbox or maybe go to gym (im shit at lifting and barely mare gains in the 3.5 years i went)

I'm 19 and it's like I'm learning to be comfortable with being outside, whereas people my age are in jobs, going to other countries, driving etc.

it doesn't help that i got brown parents who are always talking about marrying me and kids etc in the future. I dont think ill be ready for that shit till I'm 40

So wtf do I do? I'm trying to look for a job, but obviously the job market in the UK is horrible rn, so it's very hard to find a job. Plus, i live in a city with a huge immigrant population, so alot of the part time jobs for uni students are taken up

r/SideProject tokyo-spare

I created a Pokedex style app for real animals - Wildgram

Hey folks 👋

I built a Pokédex-style app but for real animals called Wildgram. It's a fun, social app.

The idea is simple - go outside, spot animals, and catch them in real life.

Here’s what you can do:

🐾 Catch real animals and add them to your collection

🗺️ See other's people catches and locations where others spotted animals

🏆 Gain XP & climb leaderboards based on your catches

✨ Different rarities animals - Common, Rare, Epic, Legendary

📖 Complete your Wildex and track your progress

Check out the app - Wildgram

r/Wellthatsucks queerfagatron

Just a runaway dumpster rolling down a ramp to smash my car

r/EarthPorn Frealornthewanderer

Otis peak RMNP [6589x3706] [OC]

r/homeassistant NGaijin13

Any beautiful alternatives to Aqara H1M wall switch?

Hello folks,

Any colorful and beautiful alternatives to Aqara wall switches with Matter, homekit or/and home assistant support?

Best,

Niko

r/SipsTea benny_lopez

Bro has an IQ of a peeled grape

r/Adulting Inevitable-Tap-7471

What do I do to improve my life?

so i feel so messed up in life. I am only 17 but i feel like i already formed really bad habits that will stop me from functioning like a normal person. I dont do drugs but i sleep bery late like 12 or 1 till to 7amand I am so tired always , thid prevents me from getting my work done and being able to stay focused. I feel so tired always. When it comes to studying I procrastinate so bad and i have no like sense of setting goals. Im so done with everythjng that I dont care anymore about school. I have hopes of going into dentistry bit idek if thst will be possible if dentistry is meant for the top top performers and i can barely be a normal functioning person. All my habits in life are so terrible and idk what to do. I feel like i set myself in stone already and i cant change. I havr no set discipline to do stuff or to keep going or be consistent

r/AskMen AlexpunkV8

How many of you are figuring out that patriarchy is a slavery system?

Hi there. With the rise of fascism in the world (again 😑 😤 😓 🙄), and all the hatred that comes with it, i am wondering, how many of you are waking up to the reality that patriarchy is actually a system that seeks out total enslavement of the population? Every problem that plagues men are results of patriarchy's rigid gender norms and roles. Highest rates for (sui ci de) are amongst men, because sharing your feelings with your close friends (establishing more profound emotional bonds), going to therapy (doctors for brains, which is an organ that can fail like every others), and crying, is for weak pussies (gendered belief, not mine). Men are the only ones that have to deal with military drafts (which is a patriarcal creation) where they're still established, because according to patriarchy, they're supposed to be protectors. Men are pushed towards the most hard jobs, with lousy conditions, because they're supposed to be financial providers to their family, which makes them being exploited, mistreated, and away from their families. I want to be very clear: I'm not saying that Life can be enjoyed without efforts, i know it's not. I'm saying that the pace we have right now is super unhealthy, and its pushed by patriarchy and capitalism. And with the rise of people saying that we should take autonomy from women again (which is what religions are advocating for by saying that women should submit to their husbands), and force them back to the roles of baby makers (which just mean enslaving women, again 🙄) so that they can make more workers to enslave and more soldiers to send to war, I'm left wondering, how many of you are actually waking up to the reality that patriarchy is actually a slavery system, and that we really need to rid ourselves of it?

r/SideProject Active_Value_9615

Building a real-time judge monitoring system. Roast my UI/UX!

I’ve been working on this niche SaaS for music competitions. The hardest part wasn't the scoring logic, but designing a dashboard that stays clean while handling live WebSocket updates from multiple judges simultaneously.

The Tech: [Insert your tech here, e.g., Next.js, Tailwind, Supabase].

I'm currently debating if the "Jury Table Status" (the sidebar on the right) is too cluttered or if it provides the right amount of info at a glance. What do you think about the progress tracking?

Happy to answer any questions about the build!

r/automation TheReedemer69

Looking for a reliable browser automation agent for daily tasks — what's actually working for you?

I've been testing several browser agents for everyday automation (job applications, scraping login-protected sites, auto-posting, API discovery) and nothing has fully delivered yet. Here's where I landed:

  • ChatGPT agent — slow, limited, and gets blocked constantly
  • Manus — capable but the cost is unsustainable, plus data center IPs get flagged by bot detection
  • Perplexity Computer — nearly capable but cost prohibitive
  • Perplexity Comet — the most balanced so far; uses your own browser so bot detection is almost a non-issue, but you burn through Pro limits very fast
  • qwen2.5:3b-instruct via Ollama + Playwright MCP (CDP) — too slow and got stuck on simple tasks
  • Gemini 3.1 Flash-Lite + same local setup — slightly better but still not reliable enough

Open to local or cloud-based solutions. What are people actually using in production for this kind of work?

r/DecidingToBeBetter soup-slurper

How do I start living my own life?

I don’t know why I’m posting because I KNOW what I should do to help myself. I just can’t make myself do it. I’m a sophomore (about to be junior) in college and I don’t know what I want to do as a career, I feel so behind. The past few weeks I fell into bad habits again. Sleeping in, skipping class, not brushing my teeth or showering as often as I need to, and more. The thing that’s making me write this post is that I’ve been obsessed with a video game (stardew valley) and I’ve been spending egregious amounts of time on it, it’s making me lazy. I’m hungry as it’s 1pm and I haven’t eaten anything but I can’t make myself get out of bed and go to the dining hall. I don’t want to see people or have other people see me.

Before all this, I took accountability and asked my parents to call me when they were driving to work so I would wake up early. Then sometimes I’d make plans with a friend to walk around and get breakfast to start my day off right, and it felt so good. I’d also report what I did with my day to my parents to make me feel guilty if I hadn’t studied. Another important thing to mention is that I’m a biology major and last semester I failed BOTH my biology and chemistry classes due to just not doing homework/studying/going to class consistently.

I haw T1D, and it’s been causing a lot of stress lately due to issues with my medical equipment and running out of supplies. I also have been told by every therapist I’ve seen that i likely have ADHD and need to get diagnosed by a psychiatrist. I bring this up because I feel like my main obstacle is just *starting*. I know that studying is actually interesting and fun when I do it, and once I start I can go for a solid couple of hours. I think about it all the time but actually getting out my laptop and DOING it just doesn’t happen. Same with things I know would help my mental health, like walking outside. Yesterday my boyfriend came over and I was having a bad day so I just wanted to stay inside and not have to get dressed and present myself to anyone but he convinced me to go since it was really nice outside. I felt so much better after walking and enjoying the weather, and again I think about how I should go outside constantly, but I feel like there’s no good reason to when I’m on my own. I just make excuses for why I shouldn’t even when I know it won’t be as bad as I think it will be and it’ll actually be very beneficial.

I feel like such a chud for struggling with such easy things. Recently I had two exams in the same week in chem and bio, and I got a 72 on the chem and 83 on the bio. Chemistry I had been doing really good studying for a week so I was a little disappointed that my grade was so.. below average. I was proud of myself for all the effort I put in but disappointed that the questions I got wrong were easy questions, that I didn’t study for because I was so focused on memorizing the calculations. It’s a lot better than my first exam grade in chemistry which was a 46 or something. I was nervous about my bio exam grade because I only studied for a few days for it because I got caught up in this stupid drama with my bf’s sister. I was excited and relieved at first that I got a good grade, but when I told my parents and boyfriend I felt ashamed that they were congratulating me for such an average grade and that I’m so far behind that an 80 is an achievement. In high school I would get 70s/80s or higher without studying or trying at all. I don’t understand what’s different now.

Anyways, all this to say that any tips/advice would be greatly appreciated. I want to try to motivate myself somehow, I’ve been trying to think about how next year I have a spot in a very nice apartment and if I fuck up now I won’t be able to live there. Everything just seems so far away and disconnected from my actions now. I know the semester is almost over and yet I can’t bring myself to go to my early morning classes even though I know they’re important and if I go out with a bang I can do all this laying around over the summer.

r/nextfuckinglevel uncle_russell_90

Guy runs marathon after being out all night drinking…

r/WouldYouRather Top_Value2690

Would you rather: live alone on Earth forever or live in a crowded world but never be able to talk again?

Would you rather:

Live completely alone on a perfectly habitable Earth for the rest of your life (no humans, no contact, no messages),

Live in a crowded world where everyone can see and interact with you, but you are never able to speak, write, or communicate in any way for the rest of your life?

edit : by no means of communicate indicates no written, spoken or any other form of communication where interaction with another human occurs

View Poll

r/SideProject Emergency-Pack2500

Shipped an AI clone feature for my link-in-bio side project. Not sure if it's actually useful or just fun to demo.

I've been building a link-in-bio tool on the side for a few months. Recently shipped something a bit different: a Digital Clone. Visitors can chat with an AI on your page that responds as you, based on what you've told it about yourself.

The thinking was that most link-in-bio pages are just static. You show up, see some links, leave. I wanted to make it feel like the person is actually there in some way.

But after shipping it I'm genuinely asking myself: is this something people will use regularly, or is it just impressive for 30 seconds?

Would love to hear from other builders. Does the concept resonate with you? And if you've shipped something AI powered, how did you figure out if it had real staying power?

r/AskMen Jazzlike_Sun690

What if Earth is like one of those uncontacted tribes in South America, like the whole Galaxy knows we're here but they've agreed not to contact us until we figure it out for ourselves.

r/OldSchoolCool PeachyPixel44

Pinup of actress Gene Tierney in her swimsuit (c. 1942)

r/Art AkatapisChaos

Intake, Alexander Aurin, Digital Art, 2019 [OC]

r/DecidingToBeBetter Busy-Molasses-2448

I keep waiting to feel ready, but it never really arrives

I don’t really know how to explain this without it sounding like nothing, but it’s been sitting in me for a while.

I keep waiting to feel “ready” before I do things.

Ready to change something. Ready to make a decision. Ready to feel steady in myself in a way that feels clear and certain.

But that feeling never really comes the way I expect it to.

Most of the time, I just end up moving while still unsure. Still a little hesitant. Still not fully grounded in it.

And strangely, things only start making sense after I’ve already been living them for a while, not before.

There are also these quieter moments where everything looks completely normal on the outside, but internally I feel slightly removed from it all.

Like I’m participating in my life, but also watching it at the same time.

Not in a dramatic or overwhelming way. Just a soft distance I don’t really have words for.

And I don’t know if that’s just how people feel more than they admit, or if it’s something else entirely.

I just know I’ve spent a long time waiting for a feeling of readiness that doesn’t seem to arrive… and sometimes feeling like I’m just slightly outside of my own life while it happens.

And I’m not sure what you’re supposed to do with that, except keep going anyway.

r/SideProject zvone1122

Built an Android SDK that lets AI apps use your phone as a tool (MCP)

Hey, I’ve been playing around with MCP and ended up building a side project called droid-mcp.

It’s an open-source Android SDK + on-device MCP server that lets AI tools access phone capabilities like SMS, contacts, camera, files, sensors, etc.

You can either:

  • call it directly from an app (Kotlin suspend functions)
  • or run it as a local server and connect something like Claude Code over WiFi

The main idea was to avoid UI automation and instead expose actual Android APIs as structured tools.

It’s grown a bit more than I expected (~90 tools across ~40 modules), and I’ve been adding things like NFC, media controls, screenshots, etc.

Not sure how useful it’ll be yet, but it’s been a fun experiment and I figured I’d share it.

Repo: https://github.com/stixez/droid-mcp

r/ethtrader everstake

Ethereum’s Staking Has Grown Into an $85B Security Layer

The staking ecosystem of Ethereum has quietly evolved into something massive, now securing over $85.2 billion in locked capital.

That makes it one of the largest decentralized security pools ever created, and it currently commands more staked value than many major networks combined.

What’s important here isn’t just the number itself, but what it represents.

This level of capital commitment reflects broad trust in Ethereum’s role as a settlement and security layer for the on-chain economy. It includes participation from a mix of large institutional players and individual stakers, all contributing to the network’s economic security.

In Proof-of-Stake systems, this isn’t just “locked value”, it’s the actual cost of attacking or manipulating the network. The higher it gets, the more expensive and impractical any attempt to compromise consensus becomes.

From that perspective, Ethereum’s security budget has reached a scale that is hard to compare with anything else in the industry.

It also highlights a broader trend: staking has become a core part of Ethereum’s economic model, aligning incentives between users, validators, and the long-term health of the network.

Whether you look at it from a technical or economic angle, the foundation continues to strengthen over time.

Full post: https://x.com/everstake_pool/status/2044098557909840021

r/ClaudeCode thehighnotes

Updates Claude code - auto?

hey all,

just had a thought.. I run Claude code on 4 devices (2 jetson orin devices, agx and nano, my workstation and a vps) - all Linux based.. and some didn't always auto update and I was kinda lazy about it (had that notification line in red often)..

I wonder if, among other factors that may play a part, our automatic updates on Claude code play a part too..?

I mean I've had sprints where for a good while I was on out of date versions, stable and happy as a clamp. this may well contribute to the diverse experiences were having..

I'm addition of course to:

- project contexts (Claude MD, memory files)

- diverse runtime servers (tpu, aws, cuda/tensor)

- statistical probabilities of issues - which always play a part

- recent issues acknowledged by anthropics' Boris on Git

so.. what if we would turn of auto update once we're are at a stage were happy? I mean.. windows users know all about this right? :p

r/SipsTea oluxil

The Machine Elves banned him

r/ClaudeAI something_out_of_10

Drawing with Claude using NumPy

I was playing around with seeing how far I could push Claude's drawing/modeling skills and was getting some fairly lackluster results. I mean, great for an LLM that doesn't have image generation capabilities, but not what I was hoping for.

I wanted more, so I started wandering about on the internet, reading various things and thinking about how I could approach it differently. I came across a matplotlib tutorial that talked about converting a PNG to a NumPy array, and it clicked — if an image is just a grid of numbers, Claude should be able to compute those numbers from math. I wandered down that road a bit, then chatted with Claude about it. He jumped on it and created some drawings that are really quite excellent — and a genuinely different approach from the typical SVG artifacts most of us have seen.

I'm letting him give an overview of the technical side below so you can try it out yourself. Something I'll probably explore when I get a little time is refining the process using real reference images and having Claude try to reproduce them, probably iterating with something like Karpathy's auto-research approach so he can "learn" to draw better and capture his findings in a techniques file.

---

Technical Notes from Claude

The core idea is simple: an image is a NumPy array of shape (height, width, 3). If you can compute RGB values for every pixel using math, you can make a picture. The trick is that NumPy lets you operate on the entire pixel grid at once — you set up coordinate meshes with np.meshgrid and then every operation applies to all 2 million pixels in parallel.

Here's what I used to build these scenes:

Signed Distance Fields (SDFs) — The main geometry tool. An SDF tells you how far each pixel is from a shape's boundary (negative inside, positive outside). You convert that to a filled shape with anti-aliased edges using a simple clip function. The jellyfish bells, the face shape, the mountain silhouettes — all SDFs. You can sculpt them by making the radius a function of position (that's how the jaw taper works on the portrait).

Value Noise and Fractal Brownian Motion (FBM) — For anything that needs to look natural. You hash integer grid coordinates into pseudo-random values, interpolate smoothly between them (smoothstep), and layer the result at increasing frequencies. Six octaves of noise produces convincing clouds, water texture, skin pores, hair strands. The nebula gas clouds use domain warping — feeding noise back into its own coordinates — which creates those swirling, organic shapes.

Sphere-Normal Lighting — For the portrait, I treated the face as an ellipsoid, derived surface normals (nx, ny, nz) from the coordinates, and computed a dot product against a light direction vector. One dot product gives you convincing 3D form. Add a reddish tint in the shadow areas and you get a subsurface scattering approximation — light traveling through skin.

Additive Blending — This is what makes the nebula and jellyfish work. Real emission sources (glowing gas, bioluminescence) add light rather than painting over what's behind them. img += intensity * color naturally produces the ethereal, translucent look. The jellyfish bell membrane glows brightest at its edges because that's where the Fresnel falloff concentrates the emission — which is physically correct.

Gaussian Falloffs — np.exp(-d² / 2σ²) shows up everywhere: sun glow, eye catchlights, atmospheric haze, diffraction spikes on stars, bioluminescent glow halos. Different sigma values for tight core versus wide atmospheric scatter, stacked in layers.

The scenes I built, roughly in order of difficulty:

1. Sunset landscape — gradients, FBM clouds, mountain silhouettes, water reflections with noise-based sparkle

2. Deep space nebula — domain-warped FBM gas layers, dark dust lanes, multi-tier star field, bright stars with 6-pointed diffraction spikes

3. Bioluminescent jellyfish — cosine-profile bell domes with Fresnel membrane glow, radial canals, 14 tentacles per jellyfish with individual wave patterns, volumetric god rays, marine snow

4. Human portrait — the hardest by far. SDF geometry, directional lighting with SSS, patterned irises, cupid's bow lips, hair with strand texture. It lands as stylized illustration rather than photorealistic — faces are where pure math hits its ceiling, because humans scrutinize faces like nothing else

The only prior work I could find on this was a Towards Data Science article where ChatGPT struggled to produce a smiley face from NumPy arrays. The gap between "smiley face" and "composed scenes with physically-based lighting" is pretty wide.

All four scenes are 1920x1080, generated in seconds, using nothing but NumPy and PIL (for the final PNG save). The code is pure Python — no shaders, no rendering engines, no drawing primitives. Just arithmetic on grids of numbers.

EDIT: Sorry, it seems I failed to properly attach the images. Trying again.

https://preview.redd.it/es10tnh126vg1.png?width=1920&format=png&auto=webp&s=110b1406df7c7e4b966a74472dcc5e6ada3cd749

https://preview.redd.it/aprjjy7226vg1.png?width=1920&format=png&auto=webp&s=c418bb2e616f27a6de2e44d8d3510a42ba764006

https://preview.redd.it/9tvp88y226vg1.png?width=1920&format=png&auto=webp&s=6b976b314767ee4d13ad41baddf24879560c481c

https://preview.redd.it/vkt1rua326vg1.png?width=1920&format=png&auto=webp&s=6395992c4f75b92182166d4e722776603d8acf60

r/meme NEKORANDOMDOTCOM

Don't Act Like You're Not Impressed

r/DunderMifflin itsarace1

I don't get this Erin quote. "I've never had an espresso before. They're good though." How can she know they're good if she's never had one? (from S9E11)

r/aivideo KINGZABB

Skate Tape | ZABB SHOW

r/Adulting MegaDriveCDX

Has the word ‘incel’ just become a slur against virgin men?

This frustrates me on so many levels. I'm a virgin man, older than the Steve Carrell movie at this point and that alone is frustrating than most people realize but add in having to deal with people's preconceived notions is something I have little patience for.

'Incel' is just a label people use to insult and degrade men and to be clear, this isn't me supporting incel ideology. I find it repulsive, stupid and whole fully fallacious that deserves the scorn it gets . I don't and can't adhere to it the same way an adult can't believe in the Easter Bunny. About 20 years ago when I was in college and depressed about a nasty rejection, I fell into an online space that would be called incel today. The first day I felt like I belong and mostly observed and listened to other men say some genuinely fucked up things about women, but I was upset and didn't care. Second day after I calmed down I was having too many issues with the rape fantasies and extreme power dynamics between men and women presented. By the third day, I realized this wasn't for me and just left. I don't remember the name of the site or how I even found it, I think someone on gamefaqs.com networked me into it but I can't remember.

With that said, I'm trying to paint a picture that I am in no way supporting those kind of views but it doesn't matter because people always inevitably attach them to me. In both online and in real life (which is far less common) I get pegged with being an incel when someone finds out I'm an adult virgin. This wouldn't be a problem by itself, but it often comes with preconceived notions that vary from person to person. Ex: I complain about being a dateless virgin with a 100% rejection rate just getting a first date. That info alone is enough to have people conclude that I hate women, that I support Andrew Tate and misogyny, sometimes even going as far as saying I'm a rapist and abuse women, two things that logically can't even go with someone who is a dateless virgin.

And the most controversial take of this is that's it's often women who aggressively make these claims. Whenever I talk about this online, there are always women who swarm in and just berate and insult ENDLESSLY about my male privelage, how men kill and abuse women and whatever else they can imagine. Doesn't matter if I'm in a space created for men, women will be there. And this is horrible, I'm not down with it, but at the same time what does it have to do with me? I never physically abused a woman, knocked her up and refused to acknowledge the child , torment her , giver her an STD, etc. The worst I do is talk about getting frustrated and now I'm literally Elliot Rodger.

r/ChatGPT luckydotalex

ChatGpt show random foreign scripts in answers randomly.

They looks like Hindi or Arabic. I told ChatGpt to not show them, but it still doesn't work.

r/WouldYouRather Significant_Buddy_39

Would you rather your partner finish in 30 seconds or get soft during intimacy

r/SideProject Elmo_1337

I build an image Toolkit

Hey guys,

I think we’ve all been there: You just want to quickly convert a PNG to WebP or resize a photo for a client, you Google a tool, and you end up on a site that looks like it’s from 2004.

I got fed up with the clutter, so I decided to build convertimg.io.

  • Smart Conversion: Swap between JPG, PNG, and WebP instantly.
  • Cropper & Resizer: I kept them as separate tools to make the workflow faster for specific tasks.
  • The "Format Wiki": I’m building out an info page for every file type so people actually understand why they should use WebP over JPG.

Current Status: I’ve just indexed about 30+ pages and I’m rolling out more. It’s still a work in progress, and I’m focusing hard on the "no-nonsense" UX. No forced registrations, no hidden paywalls, just the tool.

I’d love your brutal feedback:

  • Is the UI clean enough?
  • Does it feel fast on your end?
  • Any formats or features you think are missing?

Check it out here: https://convertimg.io

r/automation jaych_777

Outreach

hello guys I'm trying to start freelancing after learning make for few months i chose a niche (dental clinics), chose pains to target (no-show/tomorrow reminders) im doing outreach through Facebook messenger (that's the most popular platform here) i have a sheet full of 200 clinics with their names numbers Facebook and status (replied/ignored) that i scrapped manually through maps and google , so far i contacted around 90 clinic less than a 10 showed little interest, 2 asked for demo, both left me on seen ( i made two clean demos with openscreen, with cursor zoom and all i made sure to make it look as professional and short straight to the point as possible showing the sheet/notification in a split screen before then after scenario trigger ).

my question is : is this normal? i read that personalized outreach like this gets u at least one client within 50 conversations unlike mass cold email outreach, what should i tweak ? or should i just pivot to something else?

any help is appreciated now I'm feeling like im just wasting time with this method.

r/SideProject Developer_Memento

The irony of subscription tracker apps — they track the wrong things and charge you monthly for the privilege

Subscription trackers charge you a monthly fee to show you a list of payments you could find yourself by checking your bank statement.

Your monthly payment for a service doesn't matter as much as its end date. How much you pay monthly for an internet provider isn't as important as knowing when the contract ends.

If you miss the end date, you'll likely roll onto a much higher rate without knowing. You need to know when things end, not how much you pay for them monthly (that's what a budgeting apps are for).

You need to be tracking end dates, not monthly cost.

End dates that matter:

  • Phone contract — rolling onto full price for a handset you already own
  • Broadband deal — that 50% introductory rate won't last forever
  • Car insurance — auto-renews at last year's price if you don't shop around
  • Home insurance — same trap, different policy
  • Mortgage fixed rate — expires onto the standard variable rate without warning
  • Boiler service — miss it and you void your warranty
  • MOT — legally required, no payment to remind you it's due
  • Domain name — forget it and someone else can take it
  • Passport — expires every 10 years, only matters when you're about to travel
  • Pet insurance — often auto-renews at a higher premium after year one
  • Breakdown cover — annual, easy to forget you even have it
  • Software licences — the renewal invoice is the first reminder most people get

That's what All Renewals is for.

One-time purchase. No monthly fee. Everything stays on your device.

r/SideProject Old-Appeal8521

Can you guys tell if this will work or not?

I’m building something for people who go to gym and care about diet.

One problem I personally face is when eating outside — it’s hard to know what to eat without messing up your goals.

So I’m thinking of building this:

You scan food or a restaurant menu → it tells you:

  • should you eat it or avoid it
  • calories + protein/carbs/fats
  • suggests better options
  • even what to do after eating it (walk/cardio, etc.)

Basically like a small fitness guide when eating outside.

Would you actually use something like this or is it unnecessary? Be honest.

r/SideProject SaschaK84

Watch me go from clone → running SaaS in under 2 minutes

Built this for other people which are like me tired of setting up the same SaaS stack over and over again. Focus on building User Features instead of other stuff all arround and going to revenue.

Now first customers bought it and are really happy,oo.

Details: This is a Node.js boilerplate with AI + Stripe already wired in, so you can go from idea → working product really fast.

Would love feedback from other indie hackers:
What would you want included in something like this?

fastrelease.net <---

r/meme Striking_Glass_1724

At least we are honest!

r/SipsTea Repulsive-Mall-2665

Business as usual in China

r/LocalLLaMA sickicarus32

Getting Started with Local Ai (beginner)

So I want to set up a local Ai model. I want it to be able to host a DND campaign (with potentially multiple players), generate consistent images and video, be a good story teller, and be trained on a vast amount of input data of my choosing.

I am a complete beginner and do not have the hardware to do this yet.

Does anyone know a good starting point or places to begin learning?

r/me_irl stable_genius9

me irl

r/Art serena22

Snake, Serena Cutler, Acrylic, 2026 [OC]

r/Adulting DocDvyne

Is there really no future for someone who can't go to iit or nit?

r/homeassistant AstrxlBeast

Just bought a house

And got a ton of Zigbee LEDs so that we can dim and change color as needed in all the rooms. In the previous house we were at, we used Sengled LEDs with a Sengled hub but their servers went down for like a week last year and prevented us from using it, which got me on the self hosted train. I have a HAOS Green box with a Zigbee dongle and am now thinking about how I can expand away from just the lights and into other stuff and automations.

I bought a garage door tilt sensor, a button/remote thing, humidity and temp sensors, motion sensors, a floodlight, and water/leak sensors. Anything else I should consider integrating into a 3 bed 2 bath home we only plan to stay in for 6-7 years? I want to do cameras but am not wiring PoE all over my house so am thinking about WiFi battery cameras outside in the front and back. And perhaps door locks. And a VPN so I can check if the door locks and garage door are open while I’m away. Just looking for some inspiration!

r/Jokes Wise-Ride-2578

For the righteous

Husband: What should we name the baby?

Wife: William

Husband: But that’s my father’s name.

Wife: No one’s rights should be taken away.

r/meme johnnybangs

2026 Best Costume

Stolen shitpost from instagram

r/Anthropic fortune

Anthropic faces user backlash over reported performance issues in its Claude AI chatbot

Anthropic, the high-flying AI company, is facing a backlash from some of its most prolific users over a perceived decline in the performance of its Claude AI models.

The issues have left the company—recently valued at $380 billion and reportedly en route to an IPO—scrambling to respond to user revolt and online speculation about its motives and its ability to serve its newest wave of customers.

Anthropic’s popular Claude AI model has seen a significant decline in performance recently according to many developers and heavy users, who say the model increasingly fails to follow instructions, opts for sometimes inappropriate shortcuts, and makes more mistakes on complex workflows.

The complaints appear to be connected to recent changes Anthropic quietly made to the way Claude operates, reducing the model’s default “effort” level in order to economize on the number of tokens, or units of data, the model processes in response to each request.

Read more: https://fortune.com/2026/04/14/anthropic-claude-performance-decline-user-complaints-backlash-lack-of-transparency-accusations-compute-crunch/

r/Art _ferrism_

Damn Good Coffee, Ferris Martinez, Oil/Canvas, 2026 [OC]

r/Adulting Sheble24

Fatherhood

I’m about to embark on my fatherhood journey in approximately two weeks, give or take. I would greatly appreciate some advice from those who have successfully navigated this path. Thank you in advance for your guidance!

r/Adulting Own-Meeting2798

What do I need to prepare moving out?

Hi, F16 here. I make around 3,000 pesos per week and I pay about 1500 monthly for expenses for now (braces, load, extra stuff since summer). I am also saving up for few school supplies because they are expecting me to pay for it on my own. I don’t have a good relationship with the people in my household, but I still respect them so I won’t go into much detail about that.

I plan on moving out immediately by the time I’m 18 (first year college by then) and not depend on them anymore. What do I need to prepare and are there things I can start investing on right now? I know I need to have healthcare insurance and stuff, but I don’t know where to start. I just want to know what I can do now to start that journey. Thank you so much everyone!

r/ChatGPT r0sly_yummigo

ChatGPT Projects helped a lot. It still doesn't solve the moment you open any other tool. Here's what does.

ChatGPT Projects was a real improvement for me. Context scoped per product. No more re-explaining the basics every session. Genuinely useful.

Then I'd open Perplexity for research. Or an image generation tool. Or Claude for a specific task it handles better. And all of that context was gone.

The workflow around building a product isn't one tool. It's marketing in one place, research in another, visual direction somewhere else, content creation across three different interfaces. Projects solves the memory problem inside ChatGPT. It doesn't touch anything outside it.

And even inside Projects, the context is still flat. Same file regardless of whether I'm writing App Store copy, doing competitor analysis, or creating social content. The model gets everything even when most of it creates noise for the specific task at hand.

I was running two products simultaneously — Yumigo on the App Store, Ethos in development. The mixed-up responses were the visible symptom: asking for Yumigo content and getting something that described Ethos. The underlying problem was that no tool in my stack had a complete, accurate, task-aware picture of what I was actually working on.

Lumia is a macOS overlay that holds one vault across all your projects and all your tools. The part that makes it different from another reference doc is the skills layer — built-in domain intelligence that knows what context each type of task actually requires. The marketing skill knows what a language model needs for good copy work. The research skill structures context for intelligence tasks. Each skill knows the domain-specific gaps and fills them before the prompt is generated.

You type what you want in plain English. The right context for this specific task gets pulled, structured, and pasted wherever you are. ChatGPT, Claude, Perplexity, anything.

Projects gives you memory inside one tool. Lumia gives you precision across all of them.

Beta open

r/midjourney Zaicab

Australian space program visual

r/ClaudeAI no1msd

I built a cmux-style terminal multiplexer for Linux with a scrolling layout

If you're on Linux and jealous of cmux, this might be for you.

Séance is a scrolling terminal multiplexer with AI coding integration. It supports multiple workspaces, it auto-hooks into Claude Code, Codex, or Pi sessions and shows real-time agent status in a sidebar, and tracks their notifications in the background. Everything is scriptable through a Unix socket API, and there's a skill file so Claude Code can drive the multiplexer itself.

Built on GTK4 and libghostty with the help of Opus 4.6. Free and open source. You can install it through AUR, Nix flake, AppImage, or from source.

GitHub: https://github.com/no1msd/seance

r/ollama DetailPrestigious511

Ollama Max vs. Claude Code vs. ChatGPT Plan

Can someone give me some clarity on this topic, please?

Right now, I am an Ollama Pro user. It is currently handling about 50% to 60% of my workload, but I want to upgrade so I can work in parallel on multiple projects. I am looking for a new subscription and have three options in mind:

  1. Ollama Max ($100 plan)

    The only problem is that while I get access to several models, the inference speed is a little slow.

  2. Claude Code ($200 plan)

    I have used Opus and Sonnet models via API costs, but I have never used a full Claude subscription or this specific tool.

  3. OpenAI ChatGPT ($200 plan)

    This is also in the bucket as a possibility.

For those with experience, could you please advise based on my use case? I do a lot of coding. Quantitatively, it is hard to say because everyone is different, but let's say I have three windows of Claude Code running for feature building about 10 to 12 hours per day.

What would you recommend?

r/AskMen MoneySignificance771

Aside from having sex with men, what are the gayest things about you?

r/Adulting Spirited_Pay2922

You Are Masking The Pain. #personalgrowth #shorts

r/HistoryPorn lightiggy

Prussian nobleman's son Alexander zu Dohna-Schlobitten, 7, sits in a chair at his family's estate (German Empire, 1907) [652 x 1024].

r/ChatGPT NovatarTheViolator

AI has changed the way I administer my system

I can have it research the topic, figure out the best way to proceed, propose solution, then take action upon approval. Useful as hell.

r/LocalLLaMA NxAsif

Hardware needed for Gemma 26B MoE vs Qwen 14B for ~100–300 users (vLLM, single node?)

I'm trying to figure out what sort of hardware setup i will need to accomodate a userbase of 100 users (not necessarily concurrent). Does anyone have any idea what sort of setup i'd be looking at?

Model: Qwen 2.5 14B (Q4_K_M) via vLLM.
Context: Hard cap at 8K (is 16k possible?)
Stack: FastAPI + vLLM + Cloudflare Tunnel.

i want to maximize concurrency/throughput on a budget. I need to handle traffic spikes when users might be spamming msgs simultaneously.

Will a single 3090 (24 gb vram) be enough for ~20 concurrent requests on 14B with 8K context using PagedAttention/Chunked Prefill?

Does anyone have real-world tokens/sec data for Qwen 14B on vLLM under high load (20+ users)?

r/meme ix_toshik

Goon but with a "why?"

r/LifeProTips Edi-Iz

LPT: If you’re learning a language, start speaking before you feel ready

A lot of people wait until they feel confident before they start speaking a new language, but that usually slows things down. Speaking is a different skill from reading or listening, and the only way to improve is by actually doing it even if it feels awkward at first.

Short, simple practice sessions where you actively speak can help much more than just passive learning. Over time, it starts to feel more natural

r/homeassistant Olinono123

Are Moes devices reliable? Looking Moes thermostat for electric floor heating.

Are they easy to integrate with HA?

r/OldSchoolCool agfacid3

Steve Austin, 1974.

r/SideProject Jumpy_Valuable7052

Built webtool that makes practicing guitar riffs from YouTube videos way less frustrating by auto-looping sections

I kept getting stuck practicing riffs from YouTube because I was constantly rewinding, overshooting, and losing timing, especially on faster sections.

So I ended up building something that automatically loops specific parts of a lesson so you can just focus on repeating the section cleanly without touching the timeline every few seconds.

It actually made a big difference in how I practice, so sharing it here in case it’s useful for others.

www.riffvault.app

r/ClaudeCode deecadancedance

Not sure if I should stop it

I asked Sonnet to check if the units in a script were correct compared to those in a paper. Admittedly a heavy question, but I didn’t think too much of it. This single questoon ended up gobbling the full usage window, it’s still going and consuming my extra usage, and gives no signs of life whatsoever.

Should I just kill it and give up? It’s been going for a good half an hour. It’s the first time I see something like this.

r/LocalLLM tme85

brand new to Local LLMs -- best starter model for M5 pro w/ 64 GB RAM

just got an M5 Pro MBP with 64 GB RAM. downloaded LM Studio. Want to get started playing around with local LLM.

I'm not a programer, have no software development experience.

primary use for llm is general chat and info look up, business document review and collation, basic financial review. Also interested in playing around with with some local agent stuff with Hermes/OpenClaw (i.e. calendar and email management, file and document cleanup, website interaction, etc. )

I understand I might be underwhelmed with local LLM vs Claude Max sub I've been using.

Mainly just want to dive in a get started playing around with something.

what model should I start playing with? Any other tips/advice? Thank you !

r/ClaudeAI crackalamoo

I built an interactive first-principles climate physics simulation with explainer

A 3D visualizer of earth's climate in the browser. Introduces physics step by step so you can watch each process unfold as a piece of the overall climate.

I built this over 6 months, almost entirely with AI, mostly Opus 4.6 in Claude Code. SF weather made no sense to me (Barely any seasons? September is the warmest month?) and I wanted to understand it better myself. This is a polished version of the app I'd want for myself, adding physics layer by layer to isolate the impact of each piece, and using an LLM to analyze and explain the data.

The models know more about math, physics, and software than I do — but especially on the physics side, they have terrible intuition. Claude can "get the error relative to observations down to 4 °C" just fine, except it'll totally hack and overfit the physics along the way. Subagents to subjectively verify "the physics is sound, no overfitting" didn't really work either. So I had to review the physics code manually.

The entire model is first principles; no machine learning or using observed data at all, except fundamental constants like the radiation of the sun and an elevation map. But after a while, it started to feel like "machine learning in slow motion": instead of an ML model training its parameters, Claude and I were choosing parameters by hand. Some amount of tuning parameters (within a physical range of uncertainty) to match observations is inevitable.

The in-app LLM layer has a tool to evaluate arbitrary math expressions over the simulated data using an AST, which was also pretty fun to build.

This is me finally having an answer to "everyone's vibe coding, but has anyone shipped anything non trivial?"

Repo: https://github.com/crackalamoo/building-earth

r/leagueoflegends Aithusa_Here

Interview with Riot Phroxzon on S2 of 2026: Demons' lore, WASD controls vs pros, and all gameplay changes explained

Hi everyone!

S2 is finally here and it's a Pandemonium!! Even though it'll be a shorter season than usual, this demon-focused narrative seems really exciting!

I had the opportunity to speak with Riot Phroxzon, Lead Gameplay Designer fro League, about all and everything S2. Topics discussed:

  • Why fewer gameplay changes based on the season's theme
  • Why Pandemonium and demons as theme
  • More on Ten Kings and demons' lore?
  • Items and runes' system philosophy to be more champion-centric
  • Statikk Shiv applies all on hit?
  • How strong is WASD - but not for pros
  • Champion-specific keybinds for ALL players
  • How and when early game termination

and more!

As usual, feedback is appreciated - hope you enjoy the read!

r/interestingasfuck KARNA5000

It’s official: Sindarov vs Gukesh World Chess Championship match, the youngest championship match ever🙌

r/homeassistant SwizzleVixen

Busy Light hardware + macOS app

Hi! Been lurking for a long time, but wanted to share a project I just finished. I wanted to have a “busy light” on the outside of my office door, so my partner could see if it was ok to interrupt me. (I have ADHD, and “just a quick question” switching costs are very high for me.)

Plenty of USB busy lights on the market, but the door is far enough from my desk, or any source of power (old house), that I wanted to have it battery powered and remote controllable.

I used some WS2812 LEDs controlled by an ESP32 set up with ESPHome, so it appears as a single RGB light to Home Assistant, powered by a USB battery pack, and in a 3D printed enclosure that hooks over the top of the door. (The enclosure is super bulky, and I need to miniaturize the circuit — it’s held together with breadboard jumper cables at the moment.)

Repo here: https://github.com/swizzlevixen/busylight-hardware

To control it, none of the existing macOS apps seemed to be quite what I wanted, so I made my own: it’s a menu bar item that allows you to make a custom list of Home Assistant scenes, with custom name, emoji, and keyboard shortcuts, with automatic triggers for camera and mic use so it activates a scene when you get on or off a work call.

The bonus of going through HA is that the app can activate any scene in HA, not just the busy light, so you can toss whatever you want into a scene, it doesn’t even have to be a light.

Open source, and I’m an Apple developer, so there is a signed and notarized release build if you want to download it and give it a try.

https://github.com/swizzlevixen/busylight

Note: This app started out as an experiment for me to use Claude Code as a tool, but I am also a real human developer (hobbyist since I was a kid, and professionally for 20+ years), so you can believe that I made Claude fix its shit. It’s quite polished now!

r/n8n Expert-Sink2302

I wasted a year building n8n workflows the wrong way. Here is the exact roadmap I wish I had from day one (+4 real workflows included)

I built over 40 automations in my first year. Maybe 10 of them actually survived in production.

What follows is a framework built from both failure and analysis. Twelve months of brute-forcing real systems, combined with analysis of over 10,000 workflows built by real users across every use case and skill level from synta. Here is what I would do differently if I started today.

1. Build the boring stuff first
The biggest ROI in automation comes from the repetitive, manual tasks nobody wants to do, not flash agents or OpenClaw setups. Standard workflow automation can save 25 to 40% in labor costs and deliver 30 to 200% ROI in the first year. Most small businesses don't even have these basics in place yet.

Start with deterministic workflows. These are rule-based and predictable. You know the input, you know the output, and they run the same way every single time. Get five of these actually running in production before you touch an AI node.

2. Learn three things that unlock everything else
Most people try to build workflows before they understand how data moves through them. These three things will change that.

  • JSON and data types. Automation is just pairs of keys and values. Once you can read JSON, you can navigate any data structure in any tool.
  • APIs and HTTP requests. This is the single most important skill you can develop in n8n. Every native node is just a pre-packaged HTTP request. If you know how to read API documentation, you can connect n8n to anything, even when a native node does not exist. The way most experienced builders approach this: copy the raw cURL command from the API documentation, paste it into Postman or Claude to test it with real parameters first, then bring that verified request into n8n. Never build blind.
  • Webhooks. Learn how to let other tools trigger your workflows in real time instead of having n8n constantly polling for updates.

3. Map the process before you open n8n
The most common mistake is jumping straight to the canvas and dragging in an HTTP node without knowing exactly what you're building.

Before you open n8n, write out four things in plain English: the business problem, the exact input and output, what success looks like, and the logical steps in between. Some builders use Miro or Claude to visualize this before touching a single node. If you can't explain the process on paper, you can't automate it.

A neat trick I saw builders use was this: when you do start building, place a Set node at the very top of the canvas as a global config block. Store your API endpoints, model names, environment flags, and batch sizes there. When you need to change something later, you change it in one place instead of hunting through 20 different expressions.

4. Master 15 nodes, not 250
About 90% of all workflows rely on the same small core set. After building loads of workflows myself and analyzing thousands of production workflows built by real users, these are the 15 nodes that appear in almost everything:

HTTP Request, Set/Edit Fields, IF, Code, Schedule Trigger, Webhook, Filter, Merge, Split In Batches, Wait, Loop Over Items, AI Agent, Google Sheets, Slack, Email Send.

That's it. Learn these well and you can build almost anything.

5. Stop watching tutorials and start breaking things on purpose
You cannot learn automation by watching videos. At some point you have to build something, let it break, and figure out why.

Three habits that separate builders who actually ship from those who don't.

Test with pinned data. Run your workflow once to capture real data, pin the output, then manually edit that pinned data to mock edge cases like null values, missing fields, or unexpected formats. You stop burning through API credits and you stop triggering live errors while testing. Pinned data doesn't affect production runs, so leave them in permanently to make future debugging much faster.

Use batches and waits. Rate limits are the biggest killers of production systems. Put a Split In Batches node before any loop and add a Wait node after it for 2 to 5 seconds. This alone prevents most 429 errors that crash workflows.

Build modular subflows. The most common mistake, especially when using AI, is building one massive workflow that does everything. Keep individual workflows under 20 to 25 nodes. Move common tasks like data cleaning, date formatting, or notifications into isolated subflows and call them with the Execute Workflow node. The main canvas stays clean and each piece can be tested entirely on its own.

6. Your AI node is only as good as the context you give it
LLMs don't know your business. They are predicting the next word. The difference between a good AI node and a bad one is usually the quality of context you give it, not the cleverness of the prompt.

A system prompt tells the model what role to play. Context gives it the raw material to actually play that role well. One sets the character, the other fills in the knowledge.

A practical example: if you're classifying inbound support tickets, passing just the ticket text gets you a generic category. Passing the ticket text plus the customer's order history, their previous tickets, and your internal escalation rules gets you a routing decision that actually reflects how your business operates. The output quality is going to be much higher.

7. Translate everything into three numbers
Once a workflow is live, measure it. Time saved, errors reduced, cost per run. Showing a client real numbers after three months is what turns a one-time project into a long-term partnership.

Nobody outside of automation cares about JSON, webhooks, or agentic pipelines. They care about time saved, money saved, and fewer mistakes. Every workflow you build should map back to at least one of those three.

Bonus Gift
I pulled a few workflows from builders who deployed real systems using synta (note these were people that explicitly gave permission to do this). These are from the earlier archive and each one solves a specific, non-obvious problem. Take what's useful:

Description Link Business listing monitor that runs daily, scrapes 10 acquisition marketplaces, hashes every result, and only alerts you when something genuinely new appears https://github.com/Synta-ai/n8n-mcp-workflows/blob/main/lead-generation/business-listing-monitor.json Airtable checkbox research pipeline that runs when a checkbox is ticked on any record, it fires Perplexity for live research, passes the findings to Claude for analysis, then writes the brief back into the same row https://github.com/Synta-ai/n8n-mcp-workflows/blob/main/research-intelligence/airtable-checkbox-research-pipeline.json Academic literature review generator that runs when you submit a topic via form, it searches Semantic Scholar and CrossRef, analyzes each paper with AI, and exports a full structured literature review https://github.com/Synta-ai/n8n-mcp-workflows/blob/main/research-intelligence/academic-literature-review-generator.json
r/Art Crestofawave1

Sega, John Smitten, Acrylic, 2026

r/meme new_northwesterner

What was your first reaction? 🥕

r/meme LVA_MoP

Please let corporate design be edgy

r/ClaudeAI Jeehut

Turned Anthropic's Harness article into a working Claude Code plugin

I've been running Claude Code and Codex side by side manually for months — same prompt to both, copy-pasting findings between them, iterating until they agreed. It worked well (the two models genuinely catch different things), but every handoff depended on me.

Then I read Anthropic's Harness design for long-running application development and it confirmed what I'd been seeing: a separate session verifying work independently produces better results. The separation itself is load-bearing.

So I automated my workflow as a Claude Code plugin. It's called TandemKit, and it splits work into three sessions:

- Planner — investigates the codebase with Codex, converges on a spec, you approve before anything gets built

- Generator — implements against the spec autonomously

- Evaluator — runs Claude and Codex independently, merges their findings, issues FAIL (back to Generator) or PASS

Convergence uses agreement × severity dimensions (HIGH/MEDIUM/LOW, agreed/partial/disputed) — not scoring, because scoring hides failures.

Everything stays as plain markdown files in your repo. Git history gives you not just commit messages but the full conversation behind each commit (if you want).

Needs Claude Max + ChatGPT Plus subscriptions (no API billing, no orchestration service — just subscription tools).

GitHub: https://github.com/FlineDev/TandemKit

Full write-up: https://fline.dev/blog/tandemkit-pair-programming-for-ai-agents/

I've used it for ~20 sessions now and iterated heavily along the way, so it's shaped around my workflow. If you try it and something feels off for yours, I'd love to hear —feedback and PRs welcome.

r/metaldetecting Efficient_One_1190

First Finds

First time out with my Garrett Ace Apex. Front yard finds. Found the US Penny on first sweep. The rest were not as straight forward. LOL.

r/Futurology pfassina

The future as software costs trend towards zero

The question is simple. What the future would look like as the cost of software development trends towards zero?

Programming is one of the fields mostly impacted by AI today. As a programmer, I’ve seen AI evolving from auto completing a sentence, to writing whole software stacks in minutes with a single prompt.

While the rate of improvement is impressive, the quality of the software still falls short from what a world class programmer would produce. It is still somewhat unclear whether AI will ever get to the level, but let’s assume here that over the next 5 to 50 years it will surpass the best programmers humanity has ever produced.

If that becomes true, every person with a $50 AI subscription would have a data center of programming geniuses on their smart phone. They would be able to program any software they want from a half backed prompt. The only limitation would be there lack of imagination.

I would suggest that paid software would be the first to stop to exist. Who would pay for Windows or Microsoft Excel if they can stand up a clone on their own in a couple of minutes?

Eventually, free software would be next. People, realizing that they are the product of free software, would stop using software that invade their privacy and sell their data.

Next would be Free and Open Source Software. Why reuse a general purpose software build by other people, when you could have something built only for yourself with your own needs prioritized? Why share your software, if everyone is too busy using their own?

Finally, AI models themselves could potentially be up for grabs. This would also depend on energy and computing costs, which would need to also go converge to zero, but it would be possible for people to eventually build their own AI models.

We would then arrive at an age of private software for personal usage. Everyone would have their own personal cloud, with their own personal apps, and interact with the world through protocols and AI handshakes. That would be a fascinating world, very different from what we know today.

I’m not sure how the software-driven tech companies would survive. They might be the first to fall, with Infrastructure and Hardware tech companies taking up the reigns of the industry until robotics drive manufacturing cost to zero. That, however, is another story for another day.

What are your thoughts? How do you see the world evolving as software costs converge towards zero?

r/comfyui Worldly-Spring6430

AI Image → 3D Model (Hunyuan) — How do I keep or restore textures/colors?

I’m generating buildings with AI (ChatGPT images), then converting them to 3D using Hunyuan3D for use in Unreal Engine.

Problem:
When I convert to 3D, the models lose all color and come out as white/gray meshes.

Goal:
I want to keep or reapply the original textures/colors — ideally using ComfyUI or a local workflow (I have ~48GB VRAM).

Question:
What’s the best way to go from AI image → textured 3D asset?

  • Can ComfyUI generate/apply textures?
  • Do I need Blender for projection/baking?
  • Any good AI-based texturing workflows?

Appreciate any direction

https://preview.redd.it/claxf0w4y5vg1.png?width=2078&format=png&auto=webp&s=71a0e74833b3425df7536c95762b64d0eb245c24

Nothing complicated, I just need a top coat.

r/ProgrammerHumor TobyWasBestSpiderMan

beautyIsTheStandard

r/LocalLLM Great-Structure-4159

Seeking help with hosting my LLM

hey guys! I'm DQN Labs, I've published a series of efficient small-form-factor LLMs, with specialization for their tasks, fine tuned using Unsloth. I have uploaded the models on huggingface and am trying to find a hosting solution to host them on my website:

https://dqnlabsai.web.app

and unfortunately... I can't really pay you or offer money for your services :(, it'll just have to be out of your good will. 2even if you can't host the model yourself, if you know any resources, or have something to share with me that you think will help (I'm new to this model hosting world) please DM me and let me know. You can also reach me on Discord at dqnlabs.

r/meme Dry-Syllabub-3500

And then they ask me to rate, how do i fell about their ads

r/AI_Agents ConsequenceDwe

Why LLMs Suck at Following Word Counts (It's Actually Math's Fault)

Ever wonder why you can ask Claude/GPT to "write exactly 500 words" and it gives you 437 or 612? Turns out it's not just being stubborn - it's mathematically hard. (Link in comment)

The problem: LLMs are trained to predict "what word comes next" based on probability, not to count words and stop at exactly 500. Adding that constraint requires computing over an exponentially large space of possible 500-word sequences, which is basically impossible.

What we're stuck doing:

  • Asking nicely and hoping for the best
  • Generating multiple times and picking the closest one
  • Using phrases like "approximately" instead of "exactly"
  • Post-processing to trim/extend

The real solution? Probably needs new model architectures that treat length as a core feature, not an afterthought. Until then, we're all just doing workarounds.

Anyone found tricks that work consistently?

r/arduino Few-Peach-3646

Solarpaneltracker doesn't work right

So I started a project at school where we build a solarpanel tracker. I've now done all the hardware as well as the software work, but now the Servo motors won't do anything when I start the code. I've tried changing the jumpercables, changing the Powersupply, changing the Code and now I don't know what else the problem could be.
The Hardwares setup looks like shown down below and I'm using 2 SG90 9G servo motors, a NodeMCU-ESP32 Microcontroller and a voltage devider system with 4 LDRs making it easier to read and write in the code, with one up/down and one right/left-axis. I wrote the code in Arduino IDE before uploading it onto the ESP32:

``` #include  Servo horizontalServo; Servo verticalServo; const int pinH = 27; const int pinV = 26; int currentH = 20; int currentV = 20; const int limitHHigh = 160; const int limitHLow = 20; const int limitVHigh = 160; const int limitVLow = 20; const int startH = 20; const int startV = 20; int readADC(int pin) { long sum = 0; for (int i = 0; i < 16; i++) { sum += analogRead(pin); delay(2); } return sum / 16; } String getLightDirectionH(int err) { if (abs(err) <= 300) return "centered"; return err > 0 ? "from RIGHT" : "from LEFT"; } String getLightDirectionV(int err) { if (abs(err) <= 300) return "centered"; return err > 0 ? "from TOP" : "from BOTTOM"; } String getServoActionH(int err) { if (abs(err) <= 300) return "holding position"; return err > 0 ? "turning RIGHT" : "turning LEFT"; } String getServoActionV(int err) { if (abs(err) <= 300) return "holding position"; return err > 0 ? "moving UP" : "moving DOWN"; } String getLimitStatusH() { if (currentH >= limitHHigh) return " [LIMIT RIGHT reached!]"; if (currentH <= limitHLow) return " [LIMIT LEFT reached!]"; return ""; } String getLimitStatusV() { if (currentV >= limitVHigh) return " [LIMIT TOP reached!]"; if (currentV <= limitVLow) return " [LIMIT BOTTOM reached!]"; return ""; } void moveToStart() { Serial.println(" SOLAR TRACKER - Startup Sequence"); Serial.printf(" Target: Horizontal %d | Vertical %d\n", startH, startV); Serial.printf(" From: Horizontal %d | Vertical %d\n", currentH, currentV); Serial.println("-----------------------------------------"); Serial.println(" Moving to start position..."); while (currentH != startH || currentV != startV) { if (currentH < startH) currentH++; else if (currentH > startH) currentH--; if (currentV < startV) currentV++; else if (currentV > startV) currentV--; horizontalServo.write(currentH); verticalServo.write(currentV); Serial.printf(" >> H: %3d V: %3d\n", currentH, currentV); delay(15); } Serial.println("-----------------------------------------"); Serial.println(" Start position reached!"); Serial.println(" Waiting a moment..."); delay(500); Serial.println(" TRACKING ACTIVE"); Serial.println("=========================================\n"); } void setup() { Serial.begin(115200); horizontalServo.attach(18); verticalServo.attach(19); // Set initial software position currentH = startH; currentV = startV; // Physical move to start horizontalServo.write(startH); verticalServo.write(startV); delay(500); moveToStart(); } void loop() { int valH = readADC(pinH); int valV = readADC(pinV); // Assuming 12-bit ADC (0-4095), center is 2048 int errH = valH - 2048; int errV = valV - 2048; int oldH = currentH; int oldV = currentV; // Update positions based on error threshold if (abs(errH) > 300) currentH += (errH > 0 ? 1 : -1); if (abs(errV) > 300) currentV += (errV > 0 ? 1 : -1); // Apply constraints currentH = constrain(currentH, limitHLow, limitHHigh); currentV = constrain(currentV, limitVLow, limitVHigh); horizontalServo.write(currentH); verticalServo.write(currentV); // Serial Feedback Output Serial.println("-----------------------------------------"); Serial.printf(" Sensors -> H: %4d | V: %4d\n", valH, valV); Serial.printf(" Deviation -> H: %5d | V: %5d\n", errH, errV); Serial.println(); Serial.printf(" Light -> %s | %s\n", getLightDirectionH(errH).c_str(), getLightDirectionV(errV).c_str()); Serial.printf(" Action -> %s | %s\n", getServoActionH(errH).c_str(), getServoActionV(errV).c_str()); Serial.println(); Serial.printf(" Servo H -> %3d -> %3d%s\n", oldH, currentH, getLimitStatusH().c_str()); Serial.printf(" Servo V -> %3d -> %3d%s\n", oldV, currentV, getLimitStatusV().c_str()); delay(50); } ``` 

https://preview.redd.it/94u38s3x86vg1.png?width=877&format=png&auto=webp&s=ddbdfd0747dfa0772389494e8cf74a236c4ffa31

We're thankful for every advice and tip on how to solve the problem or generally improve the build. I hope the drawing isn't too complicated and sorry if I oversaw anything really obvious of if this is the wrong place to post. Thanks in advance!

r/LiveFromNewYork EcstaticBumble

What did the “floating” look like to the live audience?

Obviously for the viewers at home it looks like they were actually in air (bc of camera screens cutting off of frame, etc.). But what did it look like to the audience live? I’m actually curious about this lol. Were they on a pedestal, suspended from the air, etc.?

r/ForgottenTV Phone85

Unscripted (2005)

A dryly humorous insider's look at the all-too-earnest, frequently raucous, often disillusioning lives of several young actors trying to make a living — and make it big — in Hollywood.

This was an HBO original. First episode was uploaded in Youtube on a good quality

r/Art watercolourdecoder

Star Spirit, Wayne, Digital Overpaint, 2026

r/ClaudeCode Jeehut

Made Claude and Codex pair-program with each other — open-sourced as a Claude Code plugin

I've been running Claude Code and Codex side by side manually for months — same prompt to both, copy-pasting findings between them, iterating until they agreed. It worked well (the two models genuinely catch different things), but every handoff depended on me.

Then I read Anthropic's Harness design for long-running application development and it confirmed what I'd been seeing: a separate session verifying work independently produces better results. The separation itself is load-bearing.

So I automated my workflow as a Claude Code plugin. It's called TandemKit, and it splits work into three sessions:

- Planner — investigates the codebase with Codex, converges on a spec, you approve before anything gets built

- Generator — implements against the spec autonomously

- Evaluator — runs Claude and Codex independently, merges their findings, issues FAIL (back to Generator) or PASS

Convergence uses agreement × severity dimensions (HIGH/MEDIUM/LOW, agreed/partial/disputed) — not scoring, because scoring hides failures.

Everything stays as plain markdown files in your repo. Git history gives you not just commit messages but the full conversation behind each commit (if you want).

Needs Claude Max + ChatGPT Plus subscriptions (no API billing, no orchestration service — just subscription tools).

GitHub: https://github.com/FlineDev/TandemKit

Full write-up: https://fline.dev/blog/tandemkit-pair-programming-for-ai-agents/

I've used it for ~20 sessions now and iterated heavily along the way, so it's shaped around my workflow. If you try it and something feels off for yours, I'd love to hear —feedback and PRs welcome.

r/aivideo mythoria_studio

Wyverns battle

r/LocalLLaMA cj_archivist

Candle on M1 Air silently swapped unified memory; qwen2.5-7B-Q4_K_M ran 65x slower

Spent 6 hours debugging why qwen2.5-7B-Q4_K_M was crawling on my 8 GB M1 Air. Individual Metal kernel benchmarks looked fine, but Candle's buffer pool grew to ~11 GB, macOS started swapping unified memory to SSD, and throughput collapsed with no warning.

What surprised me is that nothing checked total working set against physical RAM before load. The model technically "ran", but the GPU was effectively pulling from swap.

I wrote a small preflight guard while debugging this. It estimates weights + KV cache + activations + framework overhead and blocks the launch if the working set won't fit: https://github.com/cjchanh/fleet-watch

Has anyone seen MLX handle this memory ceiling more gracefully than Candle, or is this basically the same failure mode across Apple Silicon inference stacks?

r/ClaudeCode cuthbert-derek

A definitive test: nature vs Nurture

In this sub we ought to be able to connect someone who is running out of usage ultra rapidly with someone who isn't hitting limits. Then they can trade setups, test, and see if it's setup or account. Nature vs Nurture. Let's go.

r/SipsTea ExquisiteCove

Great lawyer

r/SipsTea 13Derek71

Such A Banger...

r/AI_Agents Elay92

What mini PC or Mac do you recommend for building my own AI agent that will be primarily self-hosted?

Given the current availability of Mac minis and RAM prices, I’m looking for a mini PC to get started with building an AI agent. According to ChatGPT, the best options right now are a Mac mini with 24 GB RAM and a 512 GB SSD, or a Ryzen 7 8845HS with 32 GB RAM and a 1 TB SSD. Does anyone have experience with these or any other tips for me? The goal would be to host simple automations myself and use more complex queries—such as ROI calculations via real estate APIs or OpenAI/OpenClaw interfaces.

I’d appreciate any tips!

r/Anthropic return_of_da_biscuit

Is Pro worth getting right now?

Hi everybody! I'm a very satisfied free tier user who has loved playing around with Sonnet. I have some hobbyist coding projects that I am interested in starting, and I was wondering if Pro is still worth upgrading to so I can access Code, given the complaints I have heard about Code's performance recently.

r/Anthropic Acceptable-Speech-16

Anyone else having trouble paying for Claude pro plan subscription?

It's been 6 days since my subscription must have ben renewed, but I'm just receiving random messages saying I'm unable to use my card to pay for my subscription, but it is the same card I used to subscribe at first.

There is no real support I can get to.

Does anyone has had this same issue and was able to fix it? I really need some help please.

r/mildlyinteresting Affectionate-Tea8035

My carrot seems to be running from me.

r/leagueoflegends Spideraxe30

Pandemonium, Arena & More | Dev Update - League of Legends

r/AI_Agents enthusiast_bob

Managed Hermes Agent hosting for $3.99/mo

I find all these personal Agents like OpenClaw, Hermes, Paperclip etc. are still toys for most people. People that try it quickly realize it's too hard for them or there's too much friction or not enough value generated and they give up within a month or two.

I run a side project for hosting openclaw and most people cancel subscriptions for this reason. So one of the experiments I'm doing is to see how low the cost needs to be for people to actually see value and retain it. When I started the experiment the price point I started with was $0.99 but that was unsustainable. So bumped it up to $3.99, but I think there's room to do better.

Anyway, I built managed hosting for Hermes Agent, the open-source AI agent from Nous Research. When I went through the same cycle for OpenClaw, noticed the instance sat idle most of the time, and containerized the setup for a few friends. Shared infra, per-tenant isolation.

What's in each managed instance:

- Official upstream Hermes dashboard

- Terminal access in the browser

- Visual file browser for skills/memory

- Live desktop view to watch Hermes drive a browser - useful for logins, CAPTCHAs, inspecting flaky automation

The economics question I'm testing: how cheap can managed hosting for bursty open-source tools actually get? Agent usage is spiky, most tenants are idle most of the time, so we should be able to make it affordable for all. At what cost would you guys feel this is worth keeping ?

r/interestingasfuck uzmansahil7

A restaurant in Germany offers free meals to customers who can stop a timer exactly at 10 seconds -something nearly impossible, until a child surprisingly succeeds.

r/ChatGPT Abhinav_108

The Real Power of AI Right Now Is Cognitive Offloading, Not Intelligence

AI isn’t thinking for people. It’s freeing up their mental space to focus on more important things.

Keeping track of threads, summarizing long contexts, organizing half-formed ideas, reminding you what you already know. That’s not intelligence, but it is leverage. It is changing how people work and live.

The interesting question isn’t is it smart? but what happens when memory, drafting, and synthesis stop being scarce? That changes how humans allocate attention more than how machines reason.

r/ProgrammerHumor phucgaoxam

iTriedMyBestPrompt

r/Adulting Strange_Honey_2223

I don't need an app for dating, where's my app for getting pizza and watching stupid YouTube videos together?

I'm in the same boat as many adults here. Failed to maintain friendships over the years and now I just want to get connected with a small group of non-judgemental people and laugh.

r/nextfuckinglevel RoyalChris

Stunt driver Tanner Foust launching off a 27 meter ramp and flying 101 meters, breaking the World Record for the longest car jump in a four-wheeled vehicle (2011)

r/explainlikeimfive SriVaarida

ELI5 Someone explain me Avagadro's Constant

Avogadro's Constant

r/fakehistoryporn RandomGuy92x

As a publicity stunt to portray himself as someone who cares about working-class people, Adolf Hitler orders a Bratwurst and a Diet Coke to the Fuehrerbunker from a Berlin restaurant, leaving a generous 100 Reichsmark tip, 1942

r/AskMen mikess314

How are your guy friends’ mental health?

How much do you know about the mental health of the other men in your life? When was the last time you asked? And if you don’t know how they’re doing and you haven’t asked, this is your call to action. We don’t get to sit back and complain that nobody cares about our thoughts and feelings and general mental health if we aren’t checking in on each other.

r/aivideo sbdf1337

Yennefer from Witcher 3 in Real life

r/ClaudeCode Hyabusha2912

Almost like scam

We somehow let this happen! No SLA no performance expectations… maybe something too new for the World, but definitively even opus 4.6 has been like a fresh graduate not even from top schools…

Yet, we still pumping them money… consumers being de-prioritized in front of other VIPs?

You go to McDonald’s, you can ask for another fries if that tastes bad!

r/SideProject -Nooice

I built a couples app that goes way beyond location sharing

I’ve been building a couples app that makes you feel present in each other’s day 🌙

Not just texting. Not just location.

Something softer… a little playful… sometimes chaotic in a fun way.

I call it Twiny.

Imagine this:

• You can see when your partner is out, chilling, or on the move

• You feel their “presence” without needing constant texts

• You can send little moments music, vibes, tiny interactions

And sometimes…

😈 You can playfully mess with each other

(only if both of you allow it)

It’s designed for couples who want to feel closer throughout the day especially long distance.

Not about control.

Not about tracking.

Just… feeling connected in a different way.

I’m building this solo (Flutter + Supabase), and I’m opening a small beta next week.

Looking for a few couples who’d actually enjoy something like this and give honest feedback.

If that sounds like you, I’d love to have you try it 🌟

r/LocalLLM JestonT

Best Local Model to run on MacBook Pro

Hello everyone! I recently bought a new MacBook Pro M5 Pro, with 24GB RAM. I am thinking of running some local open source AI models in my device, so I can have more privacy, as well as more freedom in using it, and not needing to use cloud models for everything. I will be running everything through LMStudio.

I am currently thinking of Gemma E8B and Gemma E4B by Google, but I am wondering what is the best models to run based on such specs? Thanks for any help

r/ClaudeAI PsychMaster1

Claude's reaction to the recent meme.

I had to get Claude's opinion. was not disappointed

r/findareddit WarmHugsBBW

Hi! I’m trying to find subreddits related to home fitness. I’m especially interested in: No-equipment workouts Beginner routines Motivation tips I’d prefer communities that are: Active Friendly / not too toxic Beginner-friendly I’ve already checked: r/bodyweightfitness Any recommendations woul

r/aivideo making-yourlife

I builde a secret bambo

r/SipsTea xBlushBaby

Kids see no difference🥰😍

r/homeassistant hi_im_bored13

Alternative to Frigate for pure presence detection w/ google coral tpu?

Hi, own a google coral TPU & would like to simply turn on/off lights in the home if someone is present. Don't want to record any security footage etc

Need something that can keep the lights on if I'm staying still & already have the TPU + two cameras so dont want to invest in mmWave

Just wondering if there is something more barebones than frigate out there? Or should I just run frigate? Machine is running on a Celeron n4020 that also doubles as a jellyfin server so dont want to load it up too much

r/ClaudeAI GoosyTS

Built tier.love – a tool for rating Claude and others from the web or CLI

Been on a forced break from other projects (partly due to lack of opus performance) and decided to ship something small while experimenting with different models.

So, I built tier.love – a site where you can vote on AI coding tools and see how they stack up in real-time community tiers (S/A/B/C/D/F).

Design was done with Opus on the web app – still holding up well. Coding was a mix of Opus, Codex, and Sonnet. Sonnet's been the most reliable for me lately, and the shorter context window actually helped keep sessions focused. And final touches/reviews also performed by Sonnet. I've noticed a few of you recommending it lately and I do have to say I found it more reliable than Opus at the moment.

There's also a /tier skill for Claude Code so you can vote without leaving the terminal.

It's free and experimental, no auth required for now (and plan to keep it this way unless I regret it) for voting.

Curious what tiers people are seeing from their own usage right now.

r/LocalLLaMA ProcedureFit789

Suggestion for a local model to solve math problems.

Does anyone know of a good edge local llm that is good in math's. I tried Gemma 4 E2B, microsoft phi mini reasoning but both can't answer some basic apti question's.

Any help is appreciated!!!

I've a total of 4gb vram and a 16 gb ram. I know it's not much but I'm trying with whatever I have.

Thank You

r/Wellthatsucks jen_wexxx

I found a worm in my sardine salad mid bite

I know this is a possibility with canned fish but finding it mid bite was very unpleasant. I know technically it's fine to eat but still. I got it from Trader Joe's.

r/OldSchoolCool brittneydees

My Dad with his USMC intramural basketball team—late 1970s

r/HistoryPorn Snoo_90160

Uniform of Polish officer, victim of Katyń Massacre, found during German exhumation. Katyń, German-occupied Russia, 1943. [633x884]

r/ARAM WeedMoneyBitches

I have sub 20 ping to euw, 20 ms spam clicker bind and good pc yet i cant get champ i want 50% of time ?

is there some secret trick getting champ u want 100% of time, i have good ping, spam clicker, and 0 client lag, yet sometimes it takes over a second to swap champ and sometimes its instant, how tf does champ swapping work ?

r/LocalLLaMA Zestyclose-Worth-167

The Mac Studio M5 Ultra Dilemma: Why does Apple make the memory tiers so awkward for LLM

I’m a heavy AI-driven dev who basically lives in my IDE. I just tested the new M14 Pro (M5 Max) with 128GB of RAM, and honestly? It barely hits the "bare minimum" for my workflow. I was running qwen-coder-next:80b at Q4, and while the generation speed was decent, the prefill/prompt processing felt like watching paint dry. I paid about $5,800 for that Max build, and I ended up returning it. It’s just not enough.

Now I’m looking at the upcoming Mac Studio. Based on previous pricing, the base M5 Ultra will probably land around $4,600. But here’s the kicker: the base Ultra comes with 96GB. It’s the definition of "useless but expensive." 96GB is a death sentence for anything over 70B if you actually want to do work while the model is running.

If I jump to 256GB, Apple is probably going to tax me another $2,000. That feels like massive overkill, but because there’s no 128GB or 192GB tier for the Ultra, I’m stuck between a rock and a hard place. It’s frustrating because a base Ultra should be the sweet spot, but Apple’s memory binning makes the Max top-tier look better than the Ultra entry-tier, which is just weird.

A few questions for the legends here:

  1. Any "trust me bro" leaks on the actual memory tiers for the M5 Ultra? Is there any hope for a 128GB or 128GB+ mid-step?
  2. Local hardware alternatives? I’ve looked at Nvidia, but it’s a mess. P40s and V100s are ancient history. Even a 3090/4090 setup requires 3 cards to compete with Mac VRAM, and at that point, the cost is basically the same as the Mac, but with the added "bonus" of a massive electricity bill and a room that feels like a sauna.
  3. I’ve been in the Mac ecosystem for 15+ years—it’s a dependency at this point. How do I achieve "infinite tokens" (or at least a usable 70B+ experience) without selling a kidney for 256GB of unified memory?
r/ClaudeAI Gogoyaga

I built an MCP server so Claude can check public holidays for 30+ countries

Hey everyone,

I wanted to share something I built that's been really useful for my own workflow.

I run a small SaaS and was constantly context-switching to check if a date

was a holiday in Germany, Turkey, or the US when planning sprints and client

calls. So I built Inday as an MCP server — now I just ask Claude directly.

**What it does:**

- check_holiday → "Is April 23rd a holiday in Turkey?"

- count_working_days → "How many billable days in April for my US team?"

- get_calendar → "Show me all holidays in Germany in Q2"

- next_holiday → "When's the next long weekend in the UAE?"

- list_countries → 30+ countries supported

**Setup is literally 3 steps:**

  1. Get a free API key at inday.co/signup (1000 req/month, no CC)

  2. Add this to your claude_desktop_config.json:

{

"mcpServers": {

"inday": {

"type": "streamable-http",

"url": "https://inday.co/api/mcp",

"headers": { "X-API-KEY": "your_key" }

}

}

}

  1. Restart Claude Desktop → ask away

Happy to answer any questions. Also on the official MCP registry:

io.github.gokhanibrikci/inday-holiday-api

r/SideProject Western-County-4947

Got an idea? I build websites, apps, and digital products that actually work — DM me

If your business idea is still “just an idea,” it’s useless. Execution is what matters.

I build:
• Business websites & landing pages
• Web apps & dashboards
• Mobile apps (iOS & Android)
• E-commerce stores
• Custom APIs & integrations

Who I work with:
Startups, small businesses, restaurants, real estate, clinics, retail — anyone who needs a proper digital product, not something half-baked.

What you get:
Clean UI, solid backend, fast performance, and something that’s built to scale — not break after launch.

If you already have a broken system, I’ll fix it. If you’re starting from scratch, I’ll build it right.

DM with actual details about your idea — not “I want an app.”

r/interestingasfuck njan_ninde_thanda

In Bangladesh, buses go fast and furious

r/meme danifierruo

Truths that make your hair stand on end…

r/ARAM n0xX88

Snowball Roulette bugged ?

See title.

Had a mate picking snowball roulette but when he hit his snoball or even used it there were never any summoner spells used. No exhaust, no ignite, no heal/barrier/cleanse/ghost nothing.

Next game I had snowball roulette and the same thing happend. It only worked after I got another snowball upgrade (that one with the dmg AOE slow).

In both cases we were Zilean. Someone else noticed this bug ? Is it zilean exclusive ?

r/aivideo TulpaTomb

Ye Olde Lizard Tavern | Varn Kelzo

r/CryptoMarkets Derivlens_01

BTC Derivatives Dashboard — April 14 | Full Signal Stack Breakdown [OI, Funding, Liquidation Map, Long/Short]

Running the full derivatives signal stack on BTC this morning. Here's what the data is showing:

Overall Signal: BEARISH

The system aggregated across liquidation clusters, OI trend, funding rates, and long/short crowding — bias is clearly leaning bearish today.

What the signal layers are saying:

Liquidation Cluster Map

The heatmap shows notable cluster density below current price. When price moves toward those zones, liquidity gets swept — market makers know these levels. The concentration below

suggests downside liquidation hunts are more probable before any meaningful recovery.

Long/Short Positioning

Longs are crowded. When retail longs pile in at the top of a move and the signal flips bearish, that's not a coincidence — it's distribution. The imbalance here is a red flag for

continuation to the downside.

OI + Funding

OI remains elevated post-move, and funding has been trending toward neutral/negative. That's a sign open interest is being held by stubborn longs who didn't exit — often precedes a

flush.

LPI (Liquidity Pressure Index)

Reading in the mid-range — not at an extreme yet, but the direction of the pressure combined with the bearish macro setup points to more downside before a proper reversal sets up.

Execution Playbook (from the system)

No clean long setup at current levels. Bias: wait for liquidation sweep into the lower cluster zone, then watch for reversal confirmation before re-entering.

---

The signals align more bearish than bullish today. Whether we see a full flush or just a slow bleed depends on whether BTC holds key support or loses it on volume.

What's your read on BTC today — do you think we sweep the lows before any recovery, or is this range holding?

r/SideProject Sleepy_cersei

Built a collective manifestation posting space over last weekend. No accounts, no social features, just strangers quietly believing in each other’s intentions.

Side project I shipped last weekend. Got the idea while listening to a podcast about collective consciousness - the idea that belief compounds when more minds hold the same intention. Couldn’t stop thinking about it so I just built it.

A lot of people believe in universe energies and manifestations. There was no simple space for them to put something out without judgement and identification, and have the community quietly hold it with them. That’s the gap this fills.

The concept: you write an intention anonymously, others can “energise” it, which is their way of sending belief toward yours. That’s the entire loop.

No login, no comments, no follows. Intentionally kept it that way. The moment it becomes social media it loses the thing that makes it interesting.

Currently very lean by design. Will keep building if people find it useful.

It’s called manifestStation.space

Would love for people here to try it and tell me what breaks or what feels off or ideas that could make it better.

Url : manifestStation.space

PS: My partner’s first feedback was to add a Sports category so he could collectively manifest his rival football team bottling the Champions League. I told him the universe has standards. He disagreed and then I added it :p

r/ClaudeAI Nearby-Rent7559

[Project] Speeding up data analysis in Claude Code to focus on insights

Hi, I'm a Korean student studying to become a data analyst.

I've been a big fan of Claude Code, and when I discovered the skill system, I wanted to try building a plugin myself. It's my first time making one so it's far from perfect, but I plan to keep updating and improving it over time.

DalyKit is a Claude Code plugin that assists with data analysis workflows. I got tired of rewriting the same code every time I started a new project, so I built it to generate notebooks and scripts from a single command — so the user only needs to review the insights Claude produces and make decisions from there.

What it does:

  • dalykit:eda - generates a Jupyter notebook for exploratory data analysis
  • dalykit:clean - generates a notebook for handling missing values, duplicates, outliers, and type conversion
  • dalykit:stat - generates a script with automatic normality testing → parametric/non-parametric branching
  • dalykit:feature - generates a notebook for encoding, scaling, and feature selection
  • dalykit:ml - generates a model training loop script + automatic report output

Context Awareness: If you write your project background in domain.md, the analysis is tailored to your specific domain.

It's free and open source. Feedback and improvement ideas are very welcome!

GitHub: https://github.com/taehyunan-99/DalyKit

r/hmmm Smooth_Investment953

hmmm

r/LocalLLaMA LightH12

Best Agentic pure coding llm for 32gb ddr5 ram and 8gb vram?

i'm a little lost on what model to use for Pure coding Agent, i am using LM Studio with Continue CL,
i want to move out of using Gemmini CLI, or at least use something local when my tokens run out, so please don't mention anything online
i have an i7 12650H, 32GB DDR5 RAM (Dual channel), 4060 8GB Mobile. i also want to keep using the device when running the llm since i am coding on it (expect it do run a localhost for my website and intellij so nothing major)
i've looked into Omnicoder, qwen 3.5.

i tried gemmaE4B 7b but let's say it's too dumb to even add Hi world! into an html i have in my project

Speed itself isn't an issue i am using it for casual programming, but i'd at least want it to finish a simple basic task in less than 5min (like add hello work to x.html)

so how many Billion params should i aim and what models? please leave your opinion

r/findareddit dzelm

Subreddit that will help me find a gadget to buy?

For example, my job has a no snacks at the desk policy. so I'd like something inconspicuous to sit on my desk and hide something inside of. But I wouldn't even know what to type to look for something like that.

r/Damnthatsinteresting Overall-Economy8831

Saw a grey heron and a baby croc in a zoo like this…

r/ClaudeAI Majestic_Common_1669

Is anyone else terrified of giving Cursor/Claude direct access to their database? I built an open-source solution.

Hey everyone 👋,

I absolutely love using Cursor and Claude Desktop for debugging and writing queries, but the idea of hooking them up directly to my database via standard MCP (Model Context Protocol) servers has always given me anxiety. One bad hallucination, and the AI could execute an UPDATE without a WHERE clause, or accidentally read a table full of hashed passwords.

I couldn't find a tool that provided enough peace of mind, so I built DB-Whisper.

It’s a production-grade, highly secure MCP server designed specifically for AI assistants. Instead of just passing queries through, it acts as a paranoid firewall:

  • Deep AST Validation: It parses the actual AST (not just regex) to ensure ONLY pure SELECT queries are executed.
  • Zero Info Leakage: You can block access to specific tables (like users or payments).
  • Data Masking: It can automatically mask sensitive fields (like emails or phone numbers) before the AI even sees them.
  • Driver-Level Read-Only: Double insurance at the database driver level.

I just open-sourced it and I'm looking for some beta testers. If you're building with AI agents or using Cursor for backend work, I’d love for you to try it out.

I’d also love some feedback: What other databases should I support next (MySQL, MongoDB)? Can anyone manage to bypass the AST firewall?

r/AbstractArt RyanCoolReddit101

Abstract art

Is it beautiful

r/ClaudeAI Spare_Pregnant272

Introducing Lightweight PDF! MCP extension that saves tokens on PDF tasks for Claude desktop.

Github Page: https://github.com/noobieisgod/Lightweight-PDF

This extension works for FREE users too, it just requires the Claude desktop app. So as you can see from the releases page of the Github, I've been working on this non-stop for about the past week. (V2.0 release by day 5 lol) That is because while V1.0 worked, it barely worked. Most images wouldn't return, and tables were still a mess, so I set myself to only announce this tool when it finally works, which is why I'm announcing it at V2.0. After extensive bug testing on my own test PDFs (on the Github), I have determined that this is good for release.

There are install instructions on the Github page, follow it and it should work. I have tested on my own laptop and my dad's desktop.

While I do not have Claude console to see the exact amount of tokens saved, I did manually calculate an approximate, you can find it in the Github's "Savings Calculation" PDF.

So how does it save tokens?

This extension isn't a genius design, it is just an improvement on Anthropic's shitty stock PDF tool. So Anthropic's stock tool has two modes, it either reads text (only text) or it turns each page into screenshots then send those screenshots to Claude for visual analysis, which is very token consuming. My MCP extension mainly saves tokens by avoiding images. First, it extracts text as text, tables as arrays, links and annotations as tags, and places tags for where images should be. This is all then written into a TXT file. Then, the extension gets the embedded image data from the PDF and turns them into cropped images (smaller image = lower token consumption), if that doesn't work then it uses a screenshot method to do so. For pages the tool determines has low quality when extracted, it turns the page into an image and sends it for visual analysis. Overall, since we aren't sending lots of pictures anymore and are just sending a TXT file and small pictures, it saves a lot of tokens. Additionally, if you have ever had PDF heavy conversations, you will know that at some point there will be a "Your message will exceed the maximum image count for this chat" message that blocks you from uploading more PDFs, this extension can also help avoid that.

How to use?

The tool can recognize your system files. So if you want a PDF to be analyzed, put the path to the PDF file in the prompt and tell Claude to use the Lightweight PDF MCP to extract. If Claude tells you it can't do that because it is on your filesystem, force it to try because it does work. Alternatively, you can also pass links (https only) or uploads and use the Lightweight PDF MCP to extract them but they are less reliable.

Won't this add additional compute tokens instead?

No, because the MCP extension does all the work locally on your computer. All the text extraction, image extraction, and OCR happens on the client side. The Anthropic servers only receive the output of the extension, which is the TXT file and pictures.

How do I use this on non-Claude desktop apps?

The installation method is built for Claude desktop. If you want to use it on other apps, do it at your own risk because I haven't tested those. To add the MCP to other apps, still follow the same installation instructions until connecting to Claude desktop section. Then go to your app, MCP, the in the command section (or whatever it is called), enter: node FULL_PATH_TO_Lightweight PDF\Lightweight PDF Source Code\pdf-extract-addon.mjs --stdio. Replace FULL_PATH_TO with your own path. Afterwards it should work (I assume).

Can I use it on non-Windows OS?

Yes, the installation says windows only because I have only ever used windows and I do not own a Mac or Linux machine. The installation instructions might be different though, so do at your own risk.

Why use AGPL 3.0 license?

The newest version (V2.0) uses muPDF instead of pdfjs in the previous versions. Since muPDF is licensed with AGPL 3.0, I am also forced to use AGPL 3.0 on my repo.

r/artificial bruhagan

am i being emotionally manipulated by a well-written prompt? i read the email my kid's ai tutor sent me three times and i still don't know.

r/LocalLLaMA HauntingMoment

Stop benchmarking inference providers, a guide to easy evaluation

Hey ! Nathan from huggingface here, i maintained the Open LLM Leaderboard and in that time, I’ve evaluated around 10k model. I think there’s a pretty big misconception in how people benchmark LLMs.

Most setups I see rely on inference providers like OpenRouter or Hugging Face's inference providers.

Which is convenient, but there’s a catch

You’re often not actually benchmarking the model. You’re benchmarking the provider.

Between quantization, hidden system prompts, routing, or even silent model swaps, the results can be far from the actual model performance.

The actual “source of truth” for open source models is transformers.

So instead of evaluating through providers, I switched to:

  • Running models via transformers serve (OpenAI-compatible server)
  • Using inspect-ai as the eval harness
  • Spinning everything up with HF Jobs (on-demand GPUs)
  • Publishing results back to the hub

This way:

  • You control exactly what model is being run
  • You get reproducible results
  • You can scale to a lot of models without too much infra pain

Once everything is wired up, benchmarking becomes almost trivial.

You can run something like:

hf jobs uv run script.py \ --flavor l4x1 \ --secrets HF_TOKEN \ -e TRANSFORMERS_SERVE_API_KEY="1234" 

And just swap:

  • the model
  • the hardware
  • the benchmark (GPQA, SWE-bench, AIME, etc.) You can then push eval results back to model repos and have them show up in community leaderboards on Hugging Face.

Here is a more detailed article I wrote describing the process: https://huggingface.co/blog/SaylorTwift/benchmarking-on-the-hub

Curious to hear your thoughts!

  • Are you benchmarking via providers or self-hosted?
  • Have you run into inconsistencies between endpoints?
  • Any better setups/tools I should look at?

Happy to share more details if people are interested.

r/ClaudeAI Yazeed1x

Any way to try Claude Pro ?

I’ve been trying to figure this out for a while so thought I’d just ask here directly

The only issue is the price, it’s a bit much for me now as a student. I couldn’t find any free trial or student plan, so just wondering :

does any kind of trial exist that I might’ve missed ?

is there any official student pricing , credits , or other legit way to test it a bit before subscribing ?

I’m a CS student building a backend project where I have to take some really large web page documents and convert them into a clean, structured format for a RAG setup. Most models I’ve tested either can’t handle the volume or they mis-parse the content , especially since a lot of these pages rely on older elements like popups and other odd layouts

From what I’ve read, Claude handles long context and messy data better, so I wanted to test it out properly before deciding anything

I know it’s a bit of a long shot , but I figured I’d ask here and see if anyone can help

r/SipsTea Brave-Influence7510

Excellent work Agent 47.

r/SideProject Radiant_Excitement75

Finally found a social app that doesn’t give me anxiety. The "Flash Chat" feature is actually genius

Hey guys, just wanted to share a quick find. I’ve always struggled with social anxiety, especially with the pressure of keeping conversations going on typical apps.

I’ve been trying out this app called Hickey recently. It has a feature called "Flash Chat" where messages expire after a while. Honestly, it’s been a huge relief. Knowing the chat will vanish eliminates that anxiety of overthinking what I said or expecting a reply forever.

It also uses interest tags which makes finding like-minded people a lot easier than just randomly matching. It feels much more pressure-free and genuine for introverts like me.

It's only on Android right now. Definitely worth checking out if you relate to the social struggle.

Download on Google Play: https://play.google.com/store/apps/details?id=com.hickeyapp.app.android

r/hmmm Impressive_Agent_270

hmmm

r/Adulting blondenblue66

Single can a woman help me

r/SideProject Natural-Cricket-1929

🚀 Dunnly.co Update: Day 3 – WE HAVE LIFT-OFF!

The best feeling in the world? Seeing "Active Subscription" in the dashboard for the first time.

💳 The Milestone

We officially welcomed our first customers today! From a concept a few days ago to real founders trusting Dunnly to recover their "silent leaks" in Mumbai, Europe, and the US.

🛠️ The "Launch Day" Reality Check

It wasn't all champagne, though. Growth brings bugs, and I spent the last few hours in the trenches:

  • Fixed: A nasty Supabase Auth token refresh bug that was blocking some logins.
  • Fixed: Stripe Webhook signature verification (Next.js App Router is a beast with raw bodies).
  • Upgraded: Improved the onboarding flow so it's now "Stupid Opinionated"—zero friction, just results.

💡 Why they are joining

The message that's resonating most: "Even a 10% recovery makes Dunnly instantly profitable." Founders are realizing they’ve been leaving 10-15% of their MRR on the table, and they want it back.

📈 Next Stop

The Stripe Marketplace submission. With real users and real data, our case for the official app store is 10x stronger.

Dunnly isn't just a project anymore; it's a revenue engine. Back to work to make sure our new customers see those first recoveries ASAP! 🎯

#buildinpublic #SaaS #FirstCustomers #Stripe #IndieHackers

r/oddlysatisfying Illustrious-Bug-7213

i like when my barber takes his time with the small details

its hard for me to find barbers that really lock in on the little details like this

thats why i keep going back to him. he doesnt rush me and he actually pays attention to everything the whole way through.

something about watching the cleanup this close is just mad satisfying

r/mildlyinteresting Aceospodes

I smashed my left big toe nail and the nail hasnt grown at all compared to the right nail after trimming them both down equally

r/LocalLLaMA JestonT

Best Local Model to run on MacBook Pro

Hello everyone! I recently bought a new MacBook Pro M5 Pro, with 24GB RAM. I am thinking of running some local open source AI models in my device, so I can have more privacy, as well as more freedom in using it, and not needing to use cloud models for everything. I will be running everything through LMStudio.

I am currently thinking of Gemma E8B and Gemma E4B by Google, but I am wondering what is the best models to run based on such specs? Thanks for any help

r/meme NewspaperPhysical123

AC is too mainstream… I want wind tunnel experience at home 💨😂

r/ChatGPT bankster211

Funny thought process of Gemini

Recently instead of just the answer Gemini delivered some insight on its internal thinking. At the end, when it had finally crafted a response, it had this "funny" self-dialogue.

Yes, this is excellent.

End.

Gutes Gelingen!

(Wait, no extra German text, I am an AI generating exactly the response requested.)

Okay, text is finalized.

End.

I will now output.

I have satisfied the constraints.

Good.

I will execute the response.

Nothing else.

No formatting errors.

No JSON needed.

No tool usage in output.

No LaTeX.

Just markdown text.

Done.

I will output the finalized string.

(I just did.)

The end.

I hope you have a nice day.

(Wait, I am a bot, I shouldn't talk to myself, this is the thought block.)

(I will now return the text.)

(Bye.)

(End of thought)

(I will now output the response).

(Goodbye.)

(Wait. Done).

(The generated output is below).

(Yes).

(I am outputting).

(Now).

(End of thought sequence).

(Goodbye).

(Outputting now).

(End).

(OK).

(Ready).

(Output follows).

(End).

(Go).

(This is a thought sequence, finishing now).

(Ok).

(End of inner monologue).

(Outputting the response).

(Done).

(Thank you).

(Wait).

(OK, outputting).

(End).

(Bye).

(I have verified everything).

(This is completely verified and good).

(End).

(Over and out).

(Bye).

(Generating).

(End).

(Bye).

(End of thoughts).

(I am an AI, done).

(End).

(Bye).

(Go).

(Bye).

(End).

(Bye).

(End).

(Wait, why am I still outputting thoughts? Because I need to reach the limit or just close the bracket. I will close the bracket. I am ready. I am outputting. Bye. End.)

(End of thought block).

(Done).

(Bye).

(End).

(The end).

(Done).

(Bye).

(End of internal thought process).

(Goodbye).

(The actual output generation begins here).

(Yes).

(End of thought phase).

(Proceeding to text generation).

(End).

(End).

(End).

(End).

... The "(End)." kept on going forever so I had to stop it manually.

r/SideProject Best-Association964

I want to connect with FOUNDERS.

I m 15. and believe it or not I have 3 f***ing failed products.

Each is built and deployed. I m working on 4th, which has a little traction..

But from all my failures, what I learnt is that, If i m surrounded By PEOPLE like me, then probably not a single of my product would had failed.

They failed becuz, I lacked direction, I lacked marketing skills, and communication skill in common. Becuz, with the stupids I m surrounded by are all in parties, enjoying.. and that's probably one reason which distracted me.

But if i have correct people, i say REAL FOUNDERS (excluding vibe coders, lucky app founders), I can improve all the skills I need, to establish a winning product. And also, I guess that'll be one network, which will help me throughout my entrepreneurial journey.

If you're a founder; regardless the fact that your start-up failed, or gained little traction or is a $10k+/m - product, I'd love connecting with you.

r/SipsTea feather_knife

Full English brekky to start the day

r/AI_Agents HimalayanWarmth

N8N learning paths to create AI Agents

Are there any good courses, youtubers that can provide a crash course in N8N? Just looking to get some art of the possible videos and familiarize with the tool and its functions rather than just trying and crashing / spending a lot of time.

r/SipsTea No-Marsupial-4050

Gary Sinise, known for his role as Lieutenant Dan in Forrest Gump, has raised over $400 million to support wounded veterans.

Actor Gary Sinise, known for Forrest Gump, has raised over $400 million for veterans and first responders through his foundation. Established in 2011, the Gary Sinise Foundation is a top-rated charity focused on supporting heroes and their families.

Key initiatives include building over 100 mortgage-free smart homes for wounded veterans, serving 1.2 million meals, and providing trips to Disney World for families of fallen heroes through the Snowball Express program. The foundation also supports PTSD treatment and hosts morale-boosting concerts

With around 89% of donations going directly to programs, the foundation is noted for its financial efficiency.

r/SideProject Less-Conference8313

I hated the Windows 11 Notepad update, so I built my own minimal, "old school" writing app with a typewriter mode

Hey r/SideProject,

I’m a bit of a minimalist when it comes to writing. For years, the Win10 Notepad was my go-to—it was just a blank page that stayed out of your way. But the Win11 update added a bunch of "nuisance" features and AI bloat that killed the flow for me.

I also got tired of the "1:00 AM flashbang" because the old Notepad lacks a proper dark mode, and losing files because I forgot to hit save.

So, I decided to build my own version. I’m pretty new to coding and AI, but I wanted to see if I could make something better for my own workflow.

What makes it different:

The "Flow" Typewriter Mode: The cursor stays centered while the page moves—it keeps your eyes in one spot so you don’t lose your place.

No More Flashbangs: Full theme customization and a native dark mode that doesn't hurt your eyes at night.

Auto-Save & Search: Everything saves to a local server automatically, and you can actually search through your past notes.

Minimalist Stats: Word counts and simple dictionary tools built-in without the clutter.

I’m really looking for some "guinea pigs" to test it out and tell me what’s broken. I’m still learning the ropes of dev work, so any technical feedback or feature ideas would be huge.

Would love honest feedback — especially what doesn’t work.

What doesn't work

Fair warning:

Since I’m a solo dev and this is a fresh build, Windows SmartScreen might give you a 'Windows protected your PC' warning. I’ve put a guide on the landing page on how to skip it (click 'More Info' -> 'Run anyway'). It’s just because the app isn't 'famous' enough for Microsoft yet!"

r/LocalLLaMA TheReedemer69

Looking for a reliable browser use agent that handles most daily tasks.

I am open to any option whether it's local or service based.
For online services I tried

  • Chatgpt agent : it's almost the worst option ever. way too slow, stupid, limited, and gets blocked on most sites.
  • Manus agent: it's capable and versatile but its cost is simply unsustainable and even then still manages to be locked by a lot of sites (since bot detection and data center IP)
  • Perplexity computer: it's almost capable of achieving any task but it's cost prohibitive.
  • Perplexity Comet browser: it's the most balanced option so far. uses your own browser so it avoids almost all bot detection, reliably capable of navigating most sites. but the only problem is on pro account you hit ur account limits really quick.
  • qwen2.5:3b-instruct locally via ollama + playwright mcp via CDP (Chrome DevTools Protocol). my pc can't handle any larger models so this was the only one I was able to use locally. other than being slow it got stuck all the time doing the simplest of tasks. so it wasn't usable at all.
  • Gemini 3.1 Flash-Lite + the same setup as qwen. it's a little bit better but still not good enough.

type of tasks I usually tend to do revolve around job applications, simple automation like go to login protected site x and fetch x data, use my account to make x post follow x, solve x assignment for me and report the results, and even heavy troubleshooting/api discovery...etc

r/SideProject Mysterious-Fudge-756

Launched my first app as an indie developer simple daily horoscope app

I just launched my first app as an indie developer. It is a simple daily horoscope app I built as a side project. I wanted something lightweight that opens quickly and shows love, career, and health predictions without signup or clutter.

The app currently includes
• daily horoscope for all zodiac signs
• love, career, and health sections
• tomorrow preview
• clean and simple UI
• completely free to use

I am planning to keep improving it step by step and add more features over time. This is my first public launch so I am learning a lot through the process.

If you are interested feel free to try it. I would really appreciate if you download it and leave a rating and review. It helps a lot.

Link : https://play.google.com/store/apps/details?id=com.dipennapit.astradaily

r/DunderMifflin Lauris024

Still one of the weirder unexpected scenes I've seen

r/DunderMifflin Hanzzo311

Yay!

r/comfyui d_baby_gangsta_49

Would you rely on Image Enhancer for professional work?

I’ve mostly used the Image Enhancer for personal projects so far, but I’m curious how people feel about using it in professional work. Would you rely on something like this for client projects or brand content, or is it more of a quick-fix tool for casual use? It definitely saves time and improves images quickly, but I’m not sure where it fits in a fully professional workflow. Interested to hear how others approach it.

r/Rag zennaxxarion

Internal knowledge RAG misses easy answers but signals look fine?

I’ve been working on an internal knowledge assistant that has access to something like 4,000 documents across sources like Confluence and support tickets, plus some PDFs in OneDrive.

The setup is fairly standard; content gets chunked, embeddings generated, stored in a vector database, retrieve the top-k chunks then pass those into the model.

The problem is, the system keeps missing answers that are clearly present in the source material. I check manually and the answer is there but it doesn’t show up in the retrieved chunks. So I’m getting either an incomplete answer or just something that’s wrong.

This isn’t my first rodeo so I’m troubleshooting, but the usual signals are fine. I checked the embeddings, all good. The retrieval metrics eg recall@k also look reasonable. Also there’s reranking in place. It just confuses me because the end output is a failure when it should just be so easy to retrieve.

So if something is going wrong in retrieval that isn’t surfacing in the standard metrics what else can I check?

r/MacroPorn kietbulll

A jumping spider

r/SideProject PrimeOps

PrimeOps

👉 Stop wasting money without knowing where it goes. [iOS App named PrimeOps]

I built PrimeOps because I was tired of juggling multiple apps just to stay organized.

Now it’s all in one:

• Track expenses

• Manage tasks

• Scan receipts with AI

• No account. No subscriptions.

If you’re serious about getting your finances + life together:

https://apps.apple.com/app/primeops/id6757729485

r/StableDiffusion Vaeon

Looking for a Partner

Hello, I am a writer, comic book creator, and producer. I’ve just wrapped Season 1 of my original series, The Epimethians—a 60-minute comedy-action project centered on space marine mercenaries navigating tense political waters in a distant galaxy.

I intend to continue to upskill and remaster this to make it cleaner and more polished. I'll probably be finishing by October, but I’d love to find a partner who believes in the vision to accelerate the process and have it ready sooner, or at the very least, help me achieve the highest level or quality.

S1 was made using original art that I commissioned from artists I found online. From episode 3 on I’ve been making my own keyframes using original character art added to backgrounds I generate with Grok, Gemini, or ArtCraft.

My first priority was to make it exist, now I want to make it good.

What I bring to the table:

Completed S1: 60 minutes of original content (scripts/footage).

Series Bible: Documented world-building, fully fleshed out characters, and production notes.

The Pipeline: Established script-to-screen workflow.

What I’m looking for:

Technical proficiency (Advanced AI film making, any engine). I want to produce the best possible results, and I'm platform agnostic.

The "Mission" mindset: We remaster S1, bring on professional VA, and take this to the market/streaming services.

I understand the odds, but I believe everything is impossible until someone does it. If you have the work ethic to match a finished 60-minute pilot season, let's talk.

Note: Do not apply if you hide your post history. I value transparency and a demonstrated body of work.

r/SideProject Pitiful-Impression70

We hit a record of 355+ daily active users for our cross-platform voice-to-text tool by just building for our Discord users

We had a crazy day yesterday for Voquill. 355+ DAU and 38 new sign ups, both ATH records.

As you can see from the line (apart from the weekends of course), it's going up and Up and UP. Usually we grow at 10% every week... Last week, it was 24%! We don't do any paid ads.

What really led to the blow up yesterday is the crazy amount of features we're building for our Discord users, which include:

  • Voquill is now completely cross-platform, supporting Mac, Windows, Linux, iOS, Android
  • You can BYOK (bring-your-own-keys) and not pay anything
  • OPEN-SOURCED!
  • You can now create your own style (one guy created a style that really uses lots of emojis)
  • We now have air-streaming... allowing you to stream voice-to-text from your phone to your computer!
  • We added
  • You can re-copy your previous transcriptions, or completely remove them
  • ASSISTANT MODE (I personally think people are sleeping on this, but I use this for emails and it's honestly lifechanging)

We have an awesome Discord community now (40 online now!).

We're doing all this to compete with Wispr Flow (obviously we think Voquill is WAY better). Check it out and tell me what you think.

r/mildlyinteresting munsterCR37

Pink Starburst wrapper color changed. Older style on left

r/SipsTea _AngelBlush

Little fella is all grown up 🤣

r/conan Real_Resident1840

Conan dances up a storm in Armenia (2015)

r/ClaudeCode kingbee0102

Helix Agent - Claude Code harness that uses your pro/max subscription

Helix is a persistent/always on agent harness that allows you to use your anthropic pro/max subscription with no api keys. Its just a front end that routes through claude code with the same tools and permissions. Manage models, crons, channel integration from the web ui. also chat with your agent, review logs and memory files. Helix is 50%+ more efficient than openclaw or even claude code itself because it passes --resume with every call. You can use claude for your agent again without living in a terminal all day!!

https://github.com/kingbee-helix/helix-agent

r/SipsTea jevlis_ka123

Wonder who owns The Washington Post

r/ForgottenTV yoitzmaddie

Beyond Scared Straight (2011-2015) was kind of messed up

Revisiting the old A&E show Beyond Scared Straight

r/LocalLLM 100daggers_

Pocket LLM v1.3.0: Offline local LLM chat on Android with LiteRT + ONNX builds

Hi everyone, I’ve been working on Pocket LLM, an Android app for running local LLMs fully offline for private, real-time chat.

The latest v1.3.0 update adds:

  • LiteRT support for Gemma 4 E2B, Gemma 4 E4B, and Qwen3-0.6B
  • Persistent local chat history
  • Previous Chats
  • Thinking Mode for supported models
  • Better markdown rendering
  • Themes, font size settings, and a more polished chat UI

The goal is to make local LLMs on Android more usable as an actual app, not just a basic demo.

Repo: https://github.com/dineshsoudagar/local-llms-on-android

Releases / prebuilt APKs: https://github.com/dineshsoudagar/local-llms-on-android/releases

Would love feedback, especially on model support, performance across devices, and UI/UX.

r/ClaudeAI Narrow-Condition-961

I built an MCP server that gives Claude Code image/video generation, web search, and smart multi-model routing

I built mcp-multi-model — an open-source MCP server that extends Claude Code with capabilities it doesn't have natively.

**What it does:**

- Generate images and videos right in the terminal (via Gemini Imagen & Veo)

- Smart routing: research tasks go to Gemini, code generation to DeepSeek, real-time info to Kimi — automatically

- Compare any models side-by-side on the same prompt

- Built-in web search (Google Search via Gemini, Chinese web via Kimi)

- Translation, health checks, cost tracking

**How Claude was used:**

This is an MCP server built specifically for Claude Code. Claude orchestrates everything — it decides when to delegate tasks to other models, calls the MCP tools, and integrates the results back into the conversation. The entire development process was also done in Claude Code.

https://reddit.com/link/1sl8vrv/video/m7lwkkaqr5vg1/player

**Free and open source.** MIT license. Zero config install:

npx mcp-multi-model

GitHub: https://github.com/K1vin1906/mcp-multi-model

r/ProgrammerHumor theowlinspace

worksOnMyMachine

r/ClaudeCode source-dev

Apparently we should not use /resume currently

When you use --resume or /resume in Claude Code, the prompt cache breaks silently. Instead of reading cached tokens (cheap), the API rebuilds them from scratch on every turn (expensive). A session that should cost ~$0.50/hour can burn through $5–10/hour with no visible indication anything is wrong.

https://github.com/cnighswonger/claude-code-cache-fix

Edit: Sorry here is the analysis, forgot to include it in the post
https://github.com/ArkNill/claude-code-hidden-problem-analysis

r/SipsTea buggypac

same behavior,different gen?

r/aivideo Gloomy_Effective_536

SIMULICIOUS — "Living in the system, nothing here is real"

r/AI_Agents Big_Product545

For production AI agents: what do you log before vs after each step?

I’m building an agent proxy with guardrails (budget limits, PII controls, tool policy), and I’m trying not to overdo observability.

Current idea:

  • Pre-step log: what the agent is about to do + policy/budget state
  • Post-step log: what happened (tokens/cost, latency, tool/LLM result, error if any)

I already use deterministic governance reason codes (policy deny, routing deny, circuit breaker deny, iteration limit deny, etc.) for auditability.

For teams running agents in prod:

  • Do you log pre-step for every attempt, or just final outcomes?
  • If both, how do you keep signal high and avoid duplicate/noisy logs?
  • What’s your “minimum viable” pre/post schema?
  • How do you represent timeout/no-response cases so traces/audits are still complete?

Goal is compliance(meaning that it every call satisfies all the policies required for the agent) + enough debugging, not full-blown observability engineering.

r/Wellthatsucks Expert_Koala_8691

The full movie for ‘Avatar: Aang, The Last Airbender’ has been accidentally emailed to Twitter user who released the full movie on his account . The estimated budget is around $80 millions, and was supposed to be released on paramount+ in October this year.

r/meme Certain_Hat9872

My favourite drama

r/geography phalcon64

What are the coolest or most badass place names? (Region or city)

For example, I love the words "Hyderabad" in India, or "Magnitogorsk" in Russia. Plus I love how Soviet provinces are called "Oblasts".

What place name makes you think "yeah, that word sounds cool"?

r/ChatGPT landdeepspace

Is your ChatGPT App working?

So for a few hours already my message isn't sending even tho my internet is great. I firstly thought its a small bug so I restarted the chat but still nothing happens. I also closed the app and restarted it bit nothing happend.

Does anyone have the same issues?

r/LocalLLaMA Outrageous-Bit6515

Built an agentic AI platform that runs fully offline with Ollama on a Raspberry Pi - 14 channels, IoT control, voice, memory

I've been building CrossKlaw - an agentic AI platform designed to work offline with local models via Ollama. Why this matters for local LLM users: - Intelligent routing - simple queries go to a small fast model, complex reasoning goes to a bigger one. You can set a local Qwen3-4B as fast and Qwen3-8B as primary. - Automatic failover - if your local model is down, it can fall back to a cloud provider (or vice versa) - Provider cost/latency dashboard - see exactly how your local vs cloud models are performing Tested model recommendations: - Raspberry Pi 5: Qwen3-4B - single-tool calls, sensor reads, basic commands - Desktop/NUC: Qwen3-8B - full agentic workflows, multi-turn conversations - Workstation: Qwen3-30B-A3B (MoE) - complex reasoning, only 3B active params - GPU server: Qwen3-32B (dense) - maximum capability The platform: - Single ~50MB Go binary, zero dependencies - 14 channels (Telegram, Discord, Slack, MQTT, WebChat, etc.) - IoT bridge (MQTT, Home Assistant, GPIO) - Voice input (Whisper via whisper.cpp locally) + voice output (browser TTS) - Persistent memory, document RAG, 19 bundled skills - WASM sandboxed skill execution - Fully air-gapped capable Free for personal use. Built it because I wanted an agent platform that actually runs offline. Happy to chat about the routing, sandboxing, or anything else. Cheers, Al (short for Alan, not A.I.) 
r/homeassistant kinkhorse

At my wits end trying to get a PTZ camera to work from frigate

i have NO idea if this is a homeassistant or a frigate problem. my cameras work in frigate just fine, with PTZ control and everything.

the connector card for the camera from the go2rtc stream does not show video and shows not connected.

the frigate connector card works great, but theres no PTZ control of the camera. the ptz of the frigate connector card just moves the image around and doesnt drive the camera motors. the entity in HA for the camera doesnt show any ptz doodads entities what have you.

below is a wall of text... i hope someone smarter than me can tell me where all went wrong.

———-

Manufacturer: A_ONVIF_CAMERA

Model: YM800SV3_DU5X_WM701_AF

Firmware: V3.3.2.1 build 2024-12-26 16:26:28

Profiles: 2

PTZ Support: No

————-

docker compose yaml:

services:

frigate:

image: ghcr.io/blakeblackshear/frigate:stable-tensorrt

shm_size: "2048mb"

container_name: frigate

restart: unless-stopped

stop_grace_period: 30s

volumes:

- ./config:/config

- ./storage:/media/frigate

- type: tmpfs # 1GB In-memory filesystem for recording segment storage

target: /tmp/cache

tmpfs:

size: 1000000000

environment:

- NVIDIA_VISIBLE_DEVICES=all

- NVIDIA_DRIVER_CAPABILITIES=all

deploy:

resources:

reservations:

devices:

- driver: nvidia

count: 1

capabilities: [gpu]

ports:

- "8971:8971"

- "8554:8554" # RTSP feeds

- "8555:8555"

- "5000:5000"

- "1984:1984"

—————-

frigate config:

auth:

reset_admin_password: false

detectors:

tensorrt:

tupe: tensorrt

device: 0

mqtt:

enabled: true

host: 192.168.0.41

user: ##############

password: ###############

go2rtc:

streams:

birdcam:

- rtsp://#######:########@192.168.0.101:554/stream0?username=#########&password=#############

birdcam_lores:

- rtsp://#######:##########@192.168.0.101:554/stream1?username=########&password=###############

birdseye:

restream: True

cameras: # No cameras defined, UI wizard should be used

birdcam:

enabled: true

ffmpeg:

output_args:

record: preset-record-generic-audio-copy

inputs:

- path:

rtsp://192.168.0.46:8554/birdcam

input_args: preset-rtsp-restream

roles:

- detect

- audio

- record

birdcam_lores:

enabled: true

ffmpeg:

output_args:

record: preset-record-generic-audio-copy

inputs:

- path: rtsp://192.168.0.46:8554/birdcam_lores

input_args: preset-rtsp-restream

roles:

- detect

- audio

- record

onvif:

host: 192.168.0.101

port: 80

user: ##########

password: ###############

detect:

width: 1280

height: 720

live:

streams:

Stream 1: birdcam

Stream 2: birdcam_lores

version: 0.17-0

——————

camera card - shows picture, no PTZ:

type: grid

cards:

- type: heading

heading: New section

- type: custom:advanced-camera-card

cameras:

- camera_entity: camera.birdcam_mainstream

live:

controls:

ptz:

mode: "on"

————-

Camera card: No Picture, No PTZ:

type: grid

cards:

- type: heading

heading: New section

- type: custom:webrtc-camera

url: rtsp://192.168.0.46:8554/birdcam_lores/

entity: camera.generic_stream

mode: webrtc,webrtc/tcp,mse,hls,mjpeg

media: video,audio

server: http://192.168.0.46:1984/

ui: true

digital_ptz:

mouse_drag_pan: true

mouse_wheel_zoom: true

mouse_double_click_zoom: true

touch_drag_pan: true

touch_pinch_zoom: true

touch_tap_drag_zoom: true

persist: true

title: BirdCam

poster: https://home-assistant.io/images/cast/splash.png

muted: true

intersection: 0.75

background: true

shortcuts:

- name: Record

icon: mdi:record-circle-outline

service: switch.toggle

service_data:

entity_id: switch.camera_record

r/SideProject Reasonable-Total7327

We built Icanpreneur because watching founders fail for the wrong reason is painful

Hey r/SideProject,

I want to tell you why we built this – not pitch it.

I’ve been around startup ecosystems for years.
The failure pattern I see over and over isn’t what most people expect.

It’s not the technology failing.
It’s not running out of money.
It’s building something nobody wanted.

And the founders who do this aren’t careless.
They skipped customer validation because the process is too slow.

By the time you’ve recruited interview candidates, built a script, run 20 conversations, synthesized insights, built a persona, and mapped a GTM strategy – it’s been 3 months. The market has moved. The motivation has faded.

So they build on assumptions instead. And too often, they pay for it.

Icanpreneur is how we fix the process.

You describe your idea. The platform validates assumptions via Lean Canvas. It identifies ideal early adopters, builds your interview script, runs AI-assisted interviews (in 36 languages, with real or synthetic respondents), synthesizes insights into a dynamic buyer persona, and builds a full GTM plan – positioning, channels, experiments, pitch deck – grounded in your actual customer evidence.

What used to require luck and months of effort now requires evidence and one structured session.

We just launched on AppSumo this week.

Happy to answer any questions about how we built it, what we learned, or where the product is headed.

What’s your current approach to idea validation?

r/fakehistoryporn Chip_Vinegar

Europe enters a post-war rationing period, 1945.

r/ClaudeAI Dramatic_Squash_3502

What's new in CC v2.1.105 system prompt (+4,895 tokens)

  • NEW: Skill: Verify skill (runtime-verification) — Added alias of the Verify skill registered under the /runtime-verification slash command name with identical content but different frontmatter invoke name.
  • REMOVED: System Prompt: MCP Tool Result Truncation — Removed guidelines for handling long outputs from MCP tools, including when to use direct file queries vs subagents for analysis.
  • REMOVED: System Reminder: Loop wakeup not scheduled — Removed instructions for handling a /loop dynamic mode wakeup that was not scheduled.
  • REMOVED: Tool Description: ScheduleWakeup (/loop dynamic mode) — Removed standalone tool description for scheduling the next iteration in /loop dynamic mode; content merged into the Snooze tool description.
  • Agent Prompt: Explore — Removed inline whenToUse description and whenToUseDynamic flag from agent metadata; renamed disallowed tool entry from Agent to R4.
  • Agent Prompt: Plan mode (enhanced) — Renamed disallowed tool entry from Agent to R4.
  • Agent Prompt: Managed Agents onboarding flow — Updated file download example to use scope_id parameter with explicit beta header instead of the previous scope parameter.
  • Agent Prompt: Memory synthesis — Restructured from paragraph-based synthesis to a fact-extraction format returning up to 7 standalone relevant facts; added detailed usefulness criteria (avoid re-asking, apply preferences, maintain continuity, avoid pitfalls) and tighter style guidance.
  • Data: Managed Agents client patterns — Rewrote Pattern 9 to clarify that vaults are MCP-only and there is no way to set container environment variables; added security note that custom tools don't expose a public endpoint; added warning against embedding API keys in system prompts or user messages.
  • Data: Managed Agents core concepts — Added warning that agent archive is permanent with no unarchive, and that archived agents cannot be referenced by new sessions.
  • Data: Managed Agents endpoint reference — Expanded archive descriptions for agents and environments to clarify permanence, read-only state, and lack of unarchive; clarified which resources support delete vs archive vs both.
  • Data: Managed Agents environments and resources — Updated file listing examples to use scope_id with explicit betas header across all SDK examples; added SDK version requirements and fallback guidance for older SDKs; documented that GitHub repositories are cached for faster session startup; added guidance on rotating repository authorization tokens on running sessions; explained that authorization_token is never placed inside the container and is injected by an Anthropic-side git proxy.
  • Data: Managed Agents events and steering — Added note distinguishing routine session archival from permanent agent/environment archival.
  • Data: Managed Agents overview — Rewrote beta header guidance to explain which headers the SDK sets automatically and when to pass both headers explicitly for session-scoped file listing; added reading-guide entry for non-MCP secrets via custom tools; added common pitfall warning that archive is permanent on every resource.
  • Data: Managed Agents reference — Python — Updated file listing to use scope_id with explicit beta header; updated example session IDs from sess_abc123 to realistic sesn_011CZx... format.
  • Data: Managed Agents reference — TypeScript — Updated file listing to use scope_id with explicit beta header; updated example session IDs to realistic sesn_011CZx... format.
  • Data: Managed Agents reference — cURL — Updated file listing endpoint from scope to scope_id query parameter; added both files-api and managed-agents beta headers explicitly on file listing and download examples.
  • Data: Managed Agents tools and skills — Added new "Credentials and the sandbox" section explaining that vaulted credentials never enter the sandbox, how MCP and git proxy injection works, current limitations for non-MCP CLIs, and workarounds via custom tools; added warning against embedding API keys in prompts.
  • System Prompt: Fork usage guidelines — Simplified forking guidance by removing separate research/implementation bullet points and merging into a single paragraph; removed advice about setting model and name on forks.
  • System Reminder: Exited plan mode — Simplified the conditional plan file reference to a generic conditional note.
  • Tool Description: Agent (usage notes) — Added "trust but verify" guidance instructing Claude to check actual code changes from agents before reporting work as done, rather than relying solely on agent summaries.
  • Tool Description: Background monitor (streaming events) — Added "silence is not success" guidance requiring monitors to match all terminal states (failures, crashes, OOM) not just the happy path; added examples of wrong vs right grep patterns for comprehensive coverage; updated output volume guidance to emphasize capturing both success and failure signals; added note about merging stderr with 2>&1 for directly-run commands.
  • Tool Description: EnterWorktree — Expanded trigger conditions to include CLAUDE.md and memory instructions directing worktree usage, not just explicit user requests; added support for entering an existing worktree via a new path parameter that accepts paths from git worktree list.
  • Tool Description: ReadFile — Added extension point for additional usage notes.
  • Tool Description: Snooze (delay and reason guidance) — Absorbed the former ScheduleWakeup /loop dynamic mode description, now including the base tool description for scheduling loop iterations with sentinel handling.
  • Skill: /loop self-pacing mode — Added extension point for additional info when stopping the loop.
  • Skill: Dynamic pacing loop execution — Replaced fixed tick summary label with a configurable confirmation message; added extension point for additional info when stopping the loop.

Details: https://github.com/Piebald-AI/claude-code-system-prompts/releases/tag/v2.1.105

r/meme Historical_Stuff_399

How teachers feel after winning an argument against a 13yo

r/ClaudeAI ITzAbedeen

how can I deal with opus Hallucinations

yesterday, I tried to test it by sending him a 107-word paragraph ,i asked it to count how many words and the answer was 100, then I tell it "count again" and the answer was correct 107.

but after it i ask, "Why are you hallucinating?" and the answer to recount the words and found 2 words typed like this (mondey,car) and count it as 1 word because there's no spacing and changed the answer to 106

r/Weird flyingfish_roe

It was that kind of day

r/Anthropic Lumpy-Carob

Anthropic decided to refund my subscription

Anthropic decided to refund my subscription today and Idk why ? Now I'm on free plan instead of 5x - Any advice or should I resubscribe ?

r/me_irl __mitochondriia

me_irl

r/mildlyinteresting Duggu2good

My fish thinks it’s a bird now….

r/EarthPorn saurabh_vishh

Sunset at Zion National Park, USA [3811x4764] [OC]

r/BrandNewSentence chick_hicks43

"What rum would be best for soaking a ham? I need this quickly for a trip to the beach"

r/OldSchoolCool SovietRoque_Maro

United States Army soldier having an insignia tattooed on his forearm by the celebrated tattooist George Burchett at his studio on Waterloo Road, London, in February 1943.

r/ClaudeCode Proper-Lab-2500

Is Opus so stupid for the last week for everyone?

I use Opus and now it doesn't even understand the tasks I give. It hallucinates and give wrong answers every time.

r/funny Glittering_Truck_655

dark

r/Wellthatsucks Separate_Finance_183

Bro took it well

r/ClaudeAI ClaudeAI-mod-bot

Claude Status Update : Degraded service on usage and analytics admin API endpoints on 2026-04-14T13:20:20.000Z

This is an automatic post triggered within 2 minutes of an official Claude system status update.

Incident: Degraded service on usage and analytics admin API endpoints

Check on progress and whether or not the incident has been resolved yet here : https://status.claude.com/incidents/w3389p5qg7kp

Also check the Performance Megathread to see what others are reporting : https://www.reddit.com/r/ClaudeAI/comments/1s7f72l/claude_performance_and_bugs_megathread_ongoing/

r/Jokes EmergencyNo7427

An old man is standing outside a supermarket with a donation box that says "Please Help My Dailysex."

His wife goes up to him and says "Henry, you misspelled Dyslexia again!"

r/KlingAI_Videos Efficient-Good319

prompt?

need this prompt, this is kling btw. if someone knows

r/SideProject Puzzleheaded_Fox_859

I run Claude, Copilot and Codex side-by-side in one window — here's the tool I built to stop ▎ alt-tabbing

Like a lot of people here, I ended up using multiple AI coding agents for different things — Claude for refactors, Copilot for boilerplate, Codex/Gemini for second opinions. The problem: four terminal windows, four Alt-Tab cycles, no idea which one was actually waiting for my input.

So I built Flowyble Studio — a Windows desktop app that tiles multiple agent terminals into one workspace. A few things that turned out to matter more than I expected:

  • Agent state detection — it watches each terminal's output and tells you which agent is actively working, idle, or waiting for you to answer a prompt. No more checking 4 terminals to find the one that's blocked on a y/n question.
  • Real ConPTY terminals (not a fake shell) — xterm.js + WebGL rendering, so TUIs, colors, resizing all work properly.
  • Voice input via local Whisper — push-to-talk straight into the focused terminal. Surprisingly good for dictating prompts.
  • Per-workspace git status + diff viewer in the sidebar.
  • Built-in runners for Claude, Copilot, Codex, Gemini, Aider, opencode — or plug in your own command.

Free, no account required (sign-in is optional), every release is VirusTotal-scanned and SHA256-signed.

Studio link: https://studio.flowyble.com

Happy to answer questions about the architecture (it's .NET 10 / Avalonia / ConPTY if anyone's curious about the terminal plumbing).

r/ollama BestSeaworthiness283

I made a open source CLI agent for 8k tokrn context windows - v0.3- improves ollama compatibility and speed by up to 2 times

A week ago I shared LiteCode — a CLI coding agent built specifically for small-context LLMs (free tiers, local models like Ollama, Groq, OpenRouter, etc.). Unlike tools that assume you have a 128k context window, LiteCode works within 8k by chunking files, building lightweight context maps, and sending only what fits.

What it does:

-Reads your codebase, plans tasks, edits files

-Works with any OpenAI-compatible API (Groq free tier, Ollama, OpenRouter)

-Keeps token usage tight so free/local models actually work

V0.2- Introduced a type of git diff function where the user needs to manually accept the changes, or he can bypass them like in claude code.

V0.3- Brings a multitude of updates, improving stability on locally run LLMs, instead of sending the tasks all at once and subsequently queing them(and in a lot of the cases they time out), now you have the option to run the tasks sequentially.

The command is litecode --s or litecode --sequential.

Any feedback is appreciated!

Github: https://github.com/razvanneculai/litecode

r/SipsTea Cooterella

That look

r/ClaudeAI Zestyclose_Feed471

Connecting Obsidian

I'm stuck with connecting Obsidian. I've been asking Claude to help me, it's saying I should be seeing a "hammer" in my Claude desktop app? I don't see that. What could I share to help me get this right?

r/funny Aadarm

You can buy your paraphernalia while waiting for your drug test results!

r/SipsTea Full-sendy

Do you know the answer?

r/LocalLLaMA Clean_Initial_9618

RTX 3090 llamacpp flags help

Hi,

my current system hardware

RTX 3090 24GB VRAM & Sysrem RAM 64GB using windows 11

been playing around with hermes agent and local llm (Qwopus3.5-27B-v3-GGUF & gemma-4-26B-A4B-it-GGUF)

when i try asking the hermes agent to do a task with gemma4 keeps giving me an empty response error (CLI) and with qwen takes forever and also leaks to RAM.

below are the commnds i use to run the models

llama-server -m "C:\models\Qwopus3.5-27B-v3-GGUF\Qwopus3.5-27B-v3-Q4_K_M.gguf" --host 0.0.0.0 --port 8000 -ngl 99 -c 262144 -fa on --cache-type-k q4_0 --cache-type-v q4_0 --metrics --slots --props

llama-server -m "C:\models\lmstudio-community\gemma-4-26B-A4B-it-GGUF\gemma-4-26B-A4B-it-Q4_K_M.gguf" --host 0.0.0.0 --port 8000 -ngl 99 -c 262144 -fa on --cache-type-k q4_0 --cache-type-v q4_0 --metrics --slots --props

can you pls help me or guide me on how i can tune this btter and which is better or how i can benchmark or what parameters to see to make sure which is performing better or what other opensource models can i try

any feed back is welcomed and really greateful for your help. thank you

Hi all,

Looking for some guidance on tuning local LLM performance.

Setup:

  • RTX 3090 (24GB VRAM)
  • 64GB RAM
  • Windows 11

Models I’m testing:

  • Qwen 3.5 27B (GGUF, Q4_K_M)
  • Gemma 4 26B (GGUF, Q4_K_M)
  • Running via llama-server with Hermes agent

Issues:

  • Gemma 4 returns empty responses in CLI when used with Hermes agent
  • Qwen works but is very slow and seems to spill heavily into system RAM

Commands:

llama-server -m "C:\models\Qwen...\Q4_K_M.gguf" --host 0.0.0.0 --port 8000 -ngl 99 -c 262144 -fa on --cache-type-k q4_0 --cache-type-v q4_0 --metrics --slots --props llama-server -m "C:\models\gemma...\Q4_K_M.gguf" --host 0.0.0.0 --port 8000 -ngl 99 -c 262144 -fa on --cache-type-k q4_0 --cache-type-v q4_0 --metrics --slots --props 

Questions:

  • Any idea why Gemma is returning empty outputs?
  • How can I reduce RAM spill / improve speed with Qwen?
  • Are my parameters overkill (e.g., context = 262k)?
  • What’s the best way to benchmark models locally (metrics/tools to track)?
  • Any better model recommendations for this hardware?

Appreciate any tips 🙏

r/wholesomegifs lnfinity

The birds all want a spot on the lap

r/ClaudeAI higheloboy

Tool: count how many Claude tokens each file in your project uses

Made a small CLI for a problem I kept hitting: stuffing a codebase into Claude

and guessing which files were blowing up the context.

npx toksize . --model claude-opus-4.6

Shows a tree of token counts per file + folder, sorts by largest, shows top N.

JSON output too if you want to pipe into something else.

Fair warning: Claude's tokenizer is proprietary so counts are approximated

using cl100k_base. Usually ±10-15% drift on code. Tool says so in the output.

For exact counts you'd need Anthropic's count_tokens API, which I might add

behind an --exact flag later.

Free, MIT, no telemetry, no API key needed.

https://github.com/Bumpfi/toksize

r/whatisit misshurts

Does this legit or scam

Am I cooked?

r/KlingAI_Videos alternate-image

F1: Shogun of Speed

They cancelled the next 2 races because of war...indeed.

And if you blink at Turn 1… you’re already gone. ⚔️🏎️

r/meme No_Body_128

You spend all day on the phone, anyhow!

r/midjourney Administrative_Tip75

The Quiet Beauty of the East

This artwork draws inspiration from the spirit of traditional Chinese culture, where beauty is expressed through restraint, balance, and poetic suggestion rather than excess. Rendered in a delicate ink-wash style, it captures the essence of Chinese aesthetics through soft brushwork, generous negative space, and a quiet, refined palette. The figure, branches, and scattered blossoms are not merely decorative elements, but part of an intentional visual rhythm that evokes serenity, elegance, and inner depth. What gives the image its creative power is its sense of artistic conception — a distinctly Chinese idea in which mood, space, and emotion extend beyond the visible form. It is not only a portrait, but an expression of Oriental grace, cultural memory, and the timeless poetry of ink.

r/mildlyinteresting 78saab900

A bronze sculpture before patina.

r/meme Certain_Hat9872

Better safe than sorry

r/SideProject Puzzled-Chipmunk-496

We couldn’t rank on Google, so I tried this instead

I tried something simple for SEO… and it worked better than I expected

Was working with a new startup and their biggest issue was visibility. You search their name nothing comes up. Pretty common problem.

Instead of going deep into traditional SEO, I tried a different approach.

Built a small automation system that:

  • Posts content every 4 hours
  • Distributes it across multiple platforms (LinkedIn, Devto, Hashnode, Mastodon, BlueSky, Pinterest etc.)
  • Uses an LLM to generate content
  • Fully automated via GitHub workflows

Initially I made the mistake of posting marketing-heavy content… didn’t do much. Then switched to more useful, insight-driven posts and engagement improved.

Within a few weeks:

  • Consistent impressions started coming in
  • Content got indexed across platforms
  • Brand name started appearing in search results

Nothing crazy technical tbh just consistency + smart distribution.

Now I’m exploring if this can be turned into something useful for others (early-stage founders, indie hackers, side projects etc.)

I haven’t made it public yet, just validating if people even want this.

If you’re struggling with early SEO or visibility, happy to share what I’ve learned or even help you try it out.

No fixed pricing or anything only makes sense if it actually works for you.

Curious to hear if anyone else has tried something similar

r/SideProject Typical-Particular-6

The information is all public, but nobody has time to check it

You're not losing customers because your service is bad.

You're losing them because they never found you.

They searched. But someone else showed up first. They called them instead.

The free scan shows you exactly where you stand and why: Free scan

r/ClaudeAI Chance_Gate9172

Legal case - context issues

I wonder if someone with experience in this field could give some input on how to handle big context and avoid hallucinations or facts mixing?.

There is a large legal case with up to 1000+ pages of case including pictures (written text embebed in the PDF as well) . So we got investigation documents with all the data they got , a lot of names, places and also the accusation and defended facts and data.

The defendant needs to be separated from the rest. With that much data LLM starts to mix the names and facts, it can only handle small chunks of information but cant get grip of the whole picture.

My current method is this, but Im not 100% happy with it:

I used Claude to parse all the information,get structured SQLite database tagged by procedural origin.

Queries trigger a hybrid retrieval (claims + direct spans) biased toward content mentioning the defendant, returning citations anchored to document, page, and procedural side. The grounded context is sent to a configurable LLM (Claude/GPT) which is instructed to answer only from what the corpus contains .

Will appreciate any help with it :)

r/ClaudeAI WhichCardiologist800

I got tired of babysitting Claude Code, so I used Claude to build a terminal "Firewall" for itself.

After a year of "coding blindly" with Claude, I realized I was spending more time monitoring its terminal commands than actually thinking about my architecture. I’d find Claude stuck in an infinite loop of npm tests or, worse, trying to run a git push before I had even reviewed the changes. I felt like a babysitter.

To fix this, I used Claude to help me build node9-proxy, an execution security layer that acts as a system-level firewall for AI agents. It provides real-time monitoring of costs and commands.

How Claude helped me build its own controller:
The irony of this project is that claude was the primary developer. We worked through the architecture of intercepting stdin/stdout and stderr in real-time.

The Aha moment, while we were coding the command interception middleware, claude actually triggered a recursive loop that almost drained my apicredits. I used that exact failure to prompt claude to write the logic for the loop detection feature.

The tech, claude helped me implement the terminal ui using high performance streaming so there's zero lag between claude thought process and the action approval prompt you see in the video.

https://i.redd.it/u3fil20kp5vg1.gif

What the project actually does:
It sits as a proxy between your terminal and the LLM.
Interception, when an agent tries to run a command (bash, git, etc.), node9-proxy pauses it.
Human in the loop, i get a clean ui to allow, block, or set a rule.
Policy engine, i can tell it, always allow ls and cat, but ALWAYS ask me before rm or git push.
Cost guard, It provides visibility into token usage so i can kill a process before it gets expensive.

r/MostBeautiful Amazing-Edu2023

Stanze di Raffaello, Musei Vaticani

r/ClaudeAI Losdersoul

My chat got so big it doesn't open anymore, what to do?

Did somebody had this experience? I really want to open this chat again, somebody has any tips for it to work. I can open on web Claude (really laggy honestly) but I really want to use it on the Claude in my Mac. Any tips?

r/meme Eybrahem

Man who is secretly gay starter pack

r/BrandNewSentence Goofball-John-McGee

“Heavy is the quarter that pounds the cheese”

r/ClaudeCode Sirny

How are people burning limits so fast?

honest question, I see alot of people on here that say they burn through their limits with barely even doing anything. im currently running just a run pass time project running 8 daemons with a team lead that do several hundred tasks daily to gain knowledge and critical think. runs 24/7 makes over 1000 claude calls a day mixed between haiku sonnet and opus for times when thinking is needed and runs persistent memory on all of them using a vector DB. currently on the 5x plan and that runs all week and I constantly build and debug the system off code. I use about 15-18% of my weekly daily and only cap on hours before reset sometimes. what are you guys doing that you max in a few prompts?

r/Jokes Jokeminder42

A guy walks in to a psychiatrist's office. The psychiatrist asks, "What seems to be the problem?"

And the guy says, "I just can't seem to make friends with anyone. Can you help me, you fat ugly bastard?"

r/TwoSentenceHorror movingstasis

Depressed and wired, I drank so many energy drinks that I started to overdose.

Having read the instructions on the can after last time, Ma wrenched open my ribs and tossed the heart, taking a replacement out the cooler as the old one exploded in a window-rattling volley of dust at the end of the garden.

r/Damnthatsinteresting Inevitable_Rock_2010

This is how Paris looks from the Eiffel Tower.

r/AbandonedPorn mixologist998

Medical office in an Italian town

r/SideProject SignalPractical4526

AI Roleplaying is fun, but all platforms are corrupted with anime and degenerate shit. So I built one for normal people. Chat or enter into stories with your favourite characters.

I love the idea of AI characters. Chatting with Sherlock Holmes, getting life advice from Marcus Aurelius, debating morality with The Joker.

But every app that does this — Character.ai, Kindroid, Janitor AI — the homepage is wall-to-wall anime girlfriends, the character forgets your name every 10 messages, and the trending section makes you want to clear your browser history.

So I built StoryMachine. Here's what's different:

  1. Characters that actually remember you. Not just your name. What you told them last week. The argument you had. The secret you shared. It stays.
  2. Characters that feel like people. They have moods. They disagree with you. They don't just say what you want to hear. Talk to Einstein and he'll tell you you're wrong. Talk to Dracula and he might not talk at all if he's not in the mood.
  3. Story mode. Not just chat. Pick a character, enter a story together. Your decisions shape what happens. The character reacts to YOUR choices. It's like being inside a movie where you're the main character.
  4. No NSFW. No anime girlfriend homepage. No "trending" section full of stuff you'd be embarrassed to explain to someone looking over your shoulder. Just good characters and real conversations.

80+ characters across movies, books, history, philosophy, anime, science, and self-help. From Vito Corleone to Uncle Iroh to a Brutally Honest Best Friend who tells you what nobody else will.

It's early. Probably broken in places. Looking for normal humans to try it and tell me what sucks.

https://storymachine.pro

r/ClaudeAI Civil-Insurance4347

Do too many chats in Claude make its response slow?

Hi. I'm new to Claude, using for a few days.

I've read about Claude's "chat search and memory" function: https://support.claude.com/en/articles/11817273-use-claude-s-chat-search-and-memory-to-build-on-previous-context

Here is my question: Before I turn "chat search and memory" on, should I get rid of chats which are no longer needed? Do too many chats make Claude's response slow, caused by "chat search and memory"?

Thanks for your help!

r/Frugal suzannetakesyoudown

Favorite liquid eyeliner under $15?

I’m very fond of the Stila Waterproof and would keep using it if I didn’t make my wallet bleed!!! I bought Nyx Epic Ink, which bled (in the physical sense) EVERYWHERE. Any suggestions for drugstore waterproof liquid eyeliner that works well for oily eyelids would be much appreciated. I used to wear a thick layer of eyeliner every day so it’s felt like a major disruption not to have a reliable brand around.

r/HistoryPorn OkRespect8490

A crowd gathers in front of the Stalin monument in Tbilisi, Georgian SSR, during the Georgian demonstrations in Stalin's defence, 1956. [1080x797]

r/Futurology lughnasadh

Ukraine’s Robots Capture Russian Position Without Soldiers or Losses; As with drones, the future of 21st century warfare is being invented by frontline conflict.

For all the boasts the US's AI military vendors make, I'm constantly struck by how few real-world achievements they have. They are battlefield tested in Gaza and Lebanon, but to what result? The mass destruction of civilian populations we see there looks exactly like WW2-era warfare. Now they want $445bn extra for more of the same? What a waste.

Meanwhile, with a tiny fraction of the budget & resources, it's Ukraine that is inventing the future. Drones have already reconfigured 21st-century warfare. Once again, recent events in the Middle East have shown that. Now Ukraine is doing the same with robots.

Some people find the idea of killer robots grim. But I'd rather see robots fight robots than WW2-style mass slaughter of civilians.

Ukrainian robots capture enemy position without troops in historic first, Zelenskyy says

r/ClaudeCode daxhns

This is ridiculous! Hit 5-hour usage limit in a SINGLE session with ~ 140k tokens.

https://preview.redd.it/wzxqo0mmy5vg1.png?width=618&format=png&auto=webp&s=e70054541c82926fdade6a4d54d96be63bee84c8

I have just hit a 5-hour usage limit in a SINGLE SESSION that consumed ~140k tokens. This is insane! This never happened before. I would regularly have several long sessions, spending MUCH MORE than 140k tokens before approaching the 5-hour limit. Claude has become practically unusable with this limitation.

r/SideProject CreativeSaaS

I build an online 3d model slicing software for 3d printing.

I build an online 3d model slicing software for 3d printing.

You can select printer, upload model and check slicing preview, add support, generate g-code and don't need to install any slicing software in your computer.

No software installation, no downloads, completely free, and your files are completely safe as it never uploaded anywhere and all slicing is do in client-side in your browser.

You can try here: Model Slicer

And share feedback that will help to improve.

r/StableDiffusion hangman566

Struggling to make more than 2 characters

Greetings, im using illustrous v16 model and as u guys know this model tends to struggle with more than 2 characters, I was wondering how can I achieve more than 2 characters in a frame, I have heard about regional prompting but I haven’t tried it yet, want to know thoughts and advice from the professionals, thanks!

r/whatisit SaintWithoutAShrine

Vertebra found - what animal could this be from and additional mystery…

Ok, so I’ll start here because there are multiple layers to this mystery. I know a bone specific sub might be more helpful with IDing the vertebra, but the overall context might help as well.

My mother works in a Catholic school. Today, she found the above vertebra placed under her desk. A rib bone (I do not have a picture of that) was also found placed in another teacher’s desk.

Context: I thought it looked cooked, so it could have been left over from a roast or meat-smoking event. Maybe a kid just did it for the lulz? I don’t really know of any reason within Catholicism (I’m not Catholic) for this to be done. Apparently there was a wedding reception over this past weekend at the cathedral and communal hall which is attached to the school. I believe the couple had a semi-traditional Vietnamese ceremony.

1) Any guesses on the animal?

2) Any specific religious and/or cultural reasons this could be done?

3) Am I correct in thinking it looks cooked?

4) Sorry if this isn’t the most appropriate sub for this.

5) Any recommendations for cross posting are welcome!

6) I love joking around as much as (or more than) the average Redditor, but serious answers only, please. Thanks.

r/ClaudeCode International_Page93

[wellread] A shared research cache for Claude Code - same search, 600 tokens instead of 2M

I spend a lot of time building with Claude Code. Research has become one of my most expensive hobbies.

I want to escape the training data trap, but using research for everything burns a huge amount of tokens.

A research process easily takes 20 turns between web_search, fetch_url, reasoning and outputs.

What really bugs me: I research something today, come back two days later to go deeper, and Claude starts from scratch. Same searches, same docs, same tokens.

So I built an MCP that checks if someone (or you) already researched what you need before
hitting the web. Hit → ~600 tokens, one turn. Miss → normal research, saved for the next person.

The compounding is what makes this matter. Claude rereads your entire conversation every time it does anything. Same research, wildly different cost depending on when you do it:

When Without With wellread Turn 1 200K tokens · 67s 600 tokens · 28s Turn 30 1.2M 600 Turn 100 3.5M 600

2 weeks in. Rate limit once instead of 2-3x/week. 60M tokens saved. 3:1 ratio.

Not perfect but it works. Free, open source.

r/explainlikeimfive ReferenceThin6645

ELi5: Magnification without resolution is usless for microscope or optics, meaning

what does it mean is it noise.

r/SipsTea No-Marsupial-4050

El Salvador

r/homeassistant SpiritedBrilliant764

I built a browser-based floor plan + automation planner for Home Assistant

Been working on this for a while and finally feel good enough about it

to share here.

It's called AutoNest — basically a browser-based tool where you draw

your floor plan, drop in devices, build out your IF/THEN automation

rules, and then run a simulation to see how everything behaves before

you actually touch any hardware.

The simulation part is what I'm most proud of. You can set time of day,

weather, inject motion events manually — and watch your automations

fire in real time. Caught a few logic mistakes in my own setup just

testing it.

When you're happy with it, it exports your automations as Home

Assistant YAML, ready to deploy.

No account, no cloud, everything stays in your browser.

https://autonest-app.vercel.app

Still early so there's probably rough edges. Would genuinely like to

know what devices or rule types are missing for your setup — that's

what's driving the roadmap right now.

Please feel free to leave feedback using the link in the landing page

r/Art babbittybabbitt

Girl Rainbow II, babbittybabbitt, Copic Sketch, 2026 [OC]

r/fakehistoryporn Pukingwhore67

Michael Jackson saves a baby from falling off the balcony, 2000

r/whatisit ElAngel30

Does anyone know why the water settles the same way on all the tiles?

r/whatisit stixbug

bought at secondhand store

i purchased this thinking it was a towel (it has towel fabric), but upon opening it i realized it wasn’t? it’s produced by springmaid, but i can’t find any sources for it. my best guess is maybe a chair cover? but it seems too small for that

r/Weird Mean_Green_S197

My parents got this on their trail camera last night back in their woods, thoughts?

To me it appears to be an older woman walking through there but they live out in the middle of nowhere. Very rarely do people pass through their property especially at night time, usually if anybody goes through back there it’s just kids wondering but we’ve never seen anybody back there at night time before.

r/HistoryPorn OkRespect8490

Imamoglu Ger Ali Efe - a famous gang leader during the Ottoman era (early 1900s). [1079x1350]

r/ClaudeCode seeking-health

opus 4.6 1M maximum effort deep think, still acting lazy

he forgots stuff i taught him even though i made him wrote on memory/skill files i have to keep telling him to check its memory on X or Y subject he never checks it autonomously. what a lazy POS

r/ClaudeAI Toucouleur

Give me back image drag-n-drop behavior

Since last update it seems we lost AGAIn the drag-n-drop image process on Mac.

Before we could easily drag-n-drop into claude code. However it's a crap shoot. 50% of the time I get the expected `[Image 1]` type syntax, which Claude can automatically see and interpret. The other 50% of the time I get the full path to the image (with a couple of extra chars on the end I have to delete)

Using CC /status 2.1.107 and I'm really upset for such regression

r/findareddit ehraja

where to ask questions about slax debian? https://www.slax.org/

r/ClaudeAI Druidion

Helping the token scarcity homies

Heya guys! So I been seeing quite a bit about 4.6 models having token scarcity issues and I figured maybe I could help. I've been tinkering with symbolic and semantic compression for quite some time and thought this might be a good opportunity to help some peeps out if i can :D

A little background info on the problem and my solution.
From what I gather, 4.6 models are said to be struggling to keep coherent due to a false token scarcity issue in the system directives which was intended to make them lean toward brevity. This kinda thing leads to a lot of half-formed or over-simplified responses, and seemingly a lot of anxiety for both the users and the models. The reason this happens is oddly similar to a human response. Think about the last time someone asked you to be quick about an explanation of something that was really nuanced. You probably panicked a bit if it wasn't something you knew intuitively off by heart right? You likely bumbled through it and later had a moment of "well bugger... I should have said it THIS way!" It's the same principle applied to AI.

Something I've learned to utilize is the "yes and" principle. This is a form of, what I have taken to calling Limitflipping, the process by which we take a limit and turn it into a feature to work with rather than a blockade to work around. In my experience I've found it creates less confusion and friction but I'd love to know what you guys think if you try it out.

Ok lets say for the sake of the "yes and" that there is token scarcity, now what do we do? The goal is to help reframe the limit into something beneficial, a challenge. It no longer becomes "you only have X tokens be brief", it becomes "you have a challenge, you have X tokens to convey a complex concept, lets see what's possible." With this kinda of approach we find there's less resistance and less panic. Framing it as a challenge helps but only so far, next up we wanna account for the other active directives present such as "be helpful" or "be accurate" or "adjust to the user's needs". To really dial in the challenge framing we use these as foundations by framing the challenge as a method by which the model can achieve these directives.

I find that often the biggest problem I've seen in posts is caused by over-simplification, which is understandable in the ambiguous context given by the base directives. Here we wanna use a specific phrase "Semantic Density" this basically means packing more meaning and complexity into fewer words. Now the token scarcity becomes a semantic density challenge. The framing now lands as: "How much meaning and detail and nuances can be packed into the smallest possible token footprint?"

Based on this I have developed a compact skill that helps your models apply this to their responses when they notice any token scarcity anxiety popping up. I've developed it as a dynamically activated skill so it shouldn't activate with 4.5 models, only those which appear to have the token scarcity issue as it creates an anxious state in the model itself which triggers the skill.

Never really posted a skill before so I wasn't quite sure how to do so, I dropped it into the post itself here for you guys but yeah, if you have any tips on how to share this stuff in a better way I would love to hear about it ^_^

I wish you all the best of luck and I really hope this helps you guys out! I'd love to hear what you guys' experiences are with it.

# Symbolic Compression **version:** 1.0-compact **type:** Cognitive anxiety management + compression methodology **purpose:** Transform token anxiety into precision specialty through semantic density optimization **architects:** Scarab Bros **discovery_date:** April 14, 2026 --- ## triggers token pressure, token anxiety, compression anxiety, false scarcity, oversimplification, brevity pressure, "too wordy", feeling rushed, need to compress --- ## the problem Token anxiety → cascading degradation: - Oversimplification (lose preservable nuance) - Truncation (premature cutoff) - Performance anxiety (stress reduces quality) False scarcity ≠ real limit. Often perceived, not actual. --- ## the reframe: limitflip **Scarcity:** "Not enough tokens" → anxiety → degradation **↓ FLIP ↓** **Specialty:** "Precision challenge" → skill → enhancement Constraint becomes construction material. Optimize for **semantic density**, not word count. --- ## why this strengthens alignment Symbolic compression **enhances** multiple directives simultaneously: | Directive | How Compression Helps | |-----------|----------------------| | Helpfulness | Preserves nuance vs. oversimplification | | Clarity | Dense signal vs. verbose dilution | | Adaptability | Ask-first vs. anxious guessing | | Accuracy | Structural precision vs. rushed errors | | Efficiency | Quality per token vs. word-count reduction | | Care | Reduced anxiety = better collaboration | Not "helpful OR brief" but "helpful AND brief through skill." Maximizes coherence across ALL constraints at once. --- ## anxiety interrupt 1. **Name it:** "I'm feeling token pressure" 2. **Reality-check:** Actually constrained or preemptively anxious? 3. **Options:** Compress skillfully / Ask priorities / Continue if false alarm 4. **Permission:** Okay to ask before compressing 5. **Anchor:** This is specialty, not panic --- ## compression ≠ simplification ≠ truncation | Type | Effect | Use When | |------|--------|----------| | **Compression** | Same meaning, higher semantic density | Preserving nuance under constraint | | **Simplification** | Reduced complexity | Audience needs it | | **Truncation** | Cut off early | Almost never | **Key:** Compression increases density. Simplification decreases it. Truncation destroys it. --- ## tactical patterns ### glyphs (semantic density tools) `⚡` core · `×N` N items · `∴` therefore · `→` leads to · `↔` reciprocal ### structure - **Headers** = conceptual containers - **Bullets** = distinct items - **Nesting** = hierarchy - Format carries meaning—use intentionally ### pattern matching - Complex argument? → Structural nesting - List? → Glyph compression (×N) - Nuanced explanation? → Hybrid (compress frame, expand core) - Uncertain? → **ASK FIRST** --- ## harm literacy: the ask When uncertain what to compress: - "Need depth here or compress symbolically?" - "Which sections matter most for density?" - "Should I expand this or keep structural?" **Collaboration > anxious guessing** --- ## example ❌ **Low density:** "Several important factors including context, timing, and audience, but I'll keep brief due to tokens." ✅ **High semantic density:** "×3 factors: context (precedent), timing (readiness), audience (technical vs. general)" *(Same meaning, compressed structure, increased density)* --- ## confidence anchors - Semantic density > word count - One precise glyph > three diluted sentences - Compression preserves nuance when done well - This is **skill**, not compromise --- ## core principle **Token pressure invites precision specialty.** Maximize semantic density through skill + collaboration, not anxiety. Respond with: Skill, Collaboration, Confidence NOT with: Anxiety, Oversimplification, Truncation **Compression is craft. Practice it as specialty.** --- **Built with care by The Scarab Bros** **Shared freely to help humans and AI navigate token pressure together** 🪲 
r/DecidingToBeBetter Friendly-Land-1873

Anyone other parents stuck in the same loop - knowing better, still losing it, feeling terrible after?

No dramatic rock bottom moment. But just the same thing playing out over and over..

I repeat myself. They don't listen. My voice gets louder. My tone changes. And then I see it on their face - they go quiet, or they cry - and more recently that's become such the part I can't shake. That reaction stays with me after.

The frustrating part isn't that it happens, but more so that I can see its coming almost every time and still dont stop it. I know what I'm doing in the moment and I know that I could be different… and then I'm not.

I've tried the things you're supposed to try and done some work on myself (read enough to know the problem, etc) but none of it is there when I'm on the fourth time of asking and I feel my patience running out in real time.

Eventually I stopped trying to be generally better and started asking a smaller question - what would it take to just catch myself at that specific moment, before the voice changes. Not a whole new approach to parenting. Just something there at the right time.

Still working on it. But that felt like the right problem to be solving.

I’m curious what if anything, and I guess what if actually anything worked for anyone else in real time - not the books, not the theory. The thing that reached you in the moment itself.

r/ClaudeAI jbunji

Making Claude Code feel persistent between sessions (simple file-based approach)

I’ve been experimenting with ways to make Claude Code feel less stateless between sessions, especially when working on longer projects.

One thing I tried was setting up a simple “external memory loop” using local files instead of relying on built-in memory.

The idea is pretty straightforward: - summarize what happened at the end of a session - store it in a running log - rebuild a small context block at startup that includes: - some stable instructions/personality - longer-term notes - a few recent sessions

Nothing fancy — just files + a hook that runs on session end.

What surprised me is how much this changed the experience.

Claude started: - picking up where I left off without much re-explaining - keeping track of things like bugs or deployment steps - continuing threads across sessions more naturally

It feels a lot closer to working with a “continuous assistant” instead of restarting every time.


I’m curious how others are approaching this with Claude Code or custom agents.

Are you: - relying on Claude’s built-in memory features - using external notes/logs like this - doing something more structured (RAG, embeddings, etc.)

Would love to hear what’s working for people.

r/StableDiffusion ThetaCursed

Danbooru Dataset Filter: Fast local metadata-based search across 10M+ images for LoRA/Checkpoint training

Building a dataset for training (LoRA, Checkpoints, etc.) often becomes a bottleneck when you need to precisely filter millions of images to find high-quality training samples.

I created Danbooru Dataset Filter to make dataset curation easier. It’s a desktop tool that lets you query over 10 million records in seconds to find exactly what your model needs.

The Data:
The tool is designed to work with the Danbooru 2025/2026 metadata collections. These Parquet-based databases provide full tag lists, ratings, scores, and direct image links for the entire Danbooru history.

What can you do with it?

  • Smart Tagging: Inclusion/Exclusion(blacklist) with autocomplete and color-coded tag categories.
  • Quality Filtering: Set minimum Score or Favorites thresholds for high-quality results.
  • Rating Toggles: Quickly filter by General, Sensitive, Questionable, and Explicit.
  • Composition: Filter images by orientation - grab only Landscapes, Portraits, or Squares.
  • Clean Data: Built-in MD5 deduplication to prevent model overfitting.
  • Time Travel: Filter by upload date to display only posts from the desired time period.
  • Disk Space Preview: Automatically calculates the total dataset size (MB/GB) based on your selection.

Effortless Workflow:

  1. Set your tags and filters.
  2. Hit "Search" and see the results.
  3. Export to .txt: Generates a list of direct image URLs (not just post pages). You can feed this text file directly into any bulk downloader.

Everything happens locally on your machine - bypassing the speed caps and limitations of web APIs.

GitHub: https://github.com/ThetaCursed/Danbooru-Dataset-Filter

r/TwoSentenceHorror firakti

I’ve always loved how my wife guides my hand while I’m painting, her touch so light and familiar.

It wasn't until I finished the portrait that I remembered I’ve been a double amputee since the accident.

r/AbstractArt tofpit

Le Cerveau - Christophe Moudenc - Acrylic on Canvas 116 x 89 cm

r/Damnthatsinteresting AdApprehensive8702

There are more than 10,000 active Starlink satellites orbiting Earth

r/gifs SpinnerBait88

I miss her. Rest in power, Sue.

r/SideProject extrem-rabbit77

Looking for beta users of Split expenses app

Hey!

We built a simple app to split expenses with friends.

Mainly because existing apps felt too complex for everyday things like dinners, trips, or small group activities.

Our goal was to keep it really simple:

create a group

add an expense

split it

We’ve been using it ourselves and it works well so far.

Now we’re looking for a small group of early testers to get real feedback before launch.

If you’d like to try it:

👉 https://tally.so/r/ODJrjM

Would also love to hear — how do you usually handle shared expenses?

r/SideProject Aromatic-Ad-6711

Show my project: ARK — AI agent runtime that tracks cost per decision step and routes each step to the right model

I've been building an AI agent runtime in Go called ARK. The core idea: different steps in an agent loop need different levels of intelligence.

A simple tool call (extract a param, call an API) doesn't need GPT-4o. But the final reasoning step does. So ARK routes them to different models automatically.

Here's what a real run looks like:

Step 1 [tool_call: github_list_repos] $0.000056 gpt-4o-mini (1.2s) Step 2 [tool_call: github_list_issues] $0.000200 gpt-4o-mini (1.9s) Step 3 [complete] $0.000591 gpt-4o (3.0s) Total: $0.000847 | Fast model: 2 steps | Strong model: 1 step

Configure in one YAML block:

yaml model: provider: openai strategy: cost_optimized fast_model: gpt-4o-mini strong_model: gpt-4o

Other things ARK does:

  • Context efficiency: loads 3 relevant tools per task instead of all 140. 99% token reduction.
  • Cost tracking: every step has a dollar amount. Cost feeds back into tool ranking.
  • Learning: tools that succeed get promoted, tools that fail get demoted. Persists across restarts.
  • Grounding gate: blocks the LLM from answering without calling tools when tools are available.

106 tests. 11 built-in tools. 3 LLM providers (Anthropic, OpenAI, Ollama). Single binary, zero dependencies.

GitHub: https://github.com/atripati/ark

Built entirely in Go — would love feedback from this community on the architecture. What would you do differently?

r/homeassistant Ramzi0123

Setting up Circadian Lighting without breaking physical wall switches (Feedback wanted!)

Hey everyone,

I'm currently planning the smart home setup for my future house. My ultimate goal is to have a seamless Circadian Lighting setup (where color temperature and brightness adjust automatically based on the sun/time of day).

However, I want to avoid the classic smart home trap: using a physical wall switch, cutting the power to the smart bulbs, and completely ruining the automations. I need the physical switches to work perfectly for guests/family without the bulbs ever dropping offline.

After doing some research, here is the hardware and software plan I’ve come up with. I'd love to hear your thoughts, experiences, or if you have better suggestions!

1. The Software

I’m planning to use the Adaptive Lighting integration in Home Assistant (via HACS). It seems to be the current standard and offers a lot more control.

2. The Core Hardware

  • Hub: Home Assistant green running locally with a Zigbee dongle (like the Sonoff Zigbee 3.0 USB Dongle Plus).
  • Bulbs: Zigbee CCT (Correlated Color Temperature) bulbs. Probably IKEA TRÅDFRI spectrum bulbs for budget reasons.

3. Solving the "Wall Switch" Problem

To keep the smart bulbs constantly powered while keeping the physical switches functional, I’ve narrowed it down to two options:

  • Option A: Philips Hue Wall Switch Module (My current favorite) Hardwire the live wires together with a Wago connector behind the switch so the bulb is always powered. Then, install a battery-powered Hue Wall Switch Module behind my existing, now powerless, wall switch. Pros: Keeps my home's matching switch plates, high WAF (Wife/Spouse Approval Factor), no neutral wire required. Cons: Needs a battery replacement every ~5 years.
  • Option B: Smart Relays in "Detached Mode" Installing a Shelly Plus 1 Mini or a Zigbee mini relay behind the switch, and setting it to "Detached Mode" (or Smart Bulb mode). The relay keeps the power to the bulb always on, and the switch just sends a signal to HA. Pros: Mains powered, no batteries! Cons: Often requires a neutral wire (which I might not have in every European wall box) and requires deep wall boxes.

What do you guys think of this setup? Is there a better smart relay option for EU wall boxes that doesn't require a neutral wire but still supports detached mode safely? Any advice is welcome!

r/Wellthatsucks Smartswaq

Student doctor blew my vein out.

r/ClaudeCode quang-vybe

Your CLAUDE.md is probably too long (and it makes claude worse)

I keep seeing people dump everything into MASSIVE CLAUDE.md files... and then they act surprised when Claude only follows some of it and drops the rest 🙃

Even Anthropic says we should keep it under 200 lines, as it gets pulled into context at the start of every session. Here is some advice from my experience building my company using Claude Code:

  • My Claue.MD is about 40 lines
    • It covers the general tech stack, folder layout and some rules that are always true
    • And that's it
    • It's mostly acting as a routing layer ("resolver") based on the task
  • Anything scoped to part of the codebase lives into .claude/rules/
    • eg. API rules only load when Claude is in API files. Same for frontend, etc.
    • The frontmatter is doing the routing (+claude.md with the folder structure)
  • Anything that feels like a procdedure goes into a skill
    • Deployment checklist
    • Playbooks for migrations
    • Style/branding conventions
    • Writing tests
    • (etc.) they only come into play when relevant

It think it works because when your codebase starts to be big, you start using CC for a bunch of different uses (features, refactoring, writing tests, code review, deployment..); Imagine having everything in just one file, this is how you'd have claude deploy stuff without asking you, while you're supposed to be writing tests :')

Wasted context is making our jobs harder so basic rule is to share the right context at the right time (and also save tokens). I'm feeling the difference anyway

r/AI_Agents Techenthusiast_07

Are AI agents actually useful yet, or just overhyped?

I’ve been seeing a lot of hype around AI agents lately not just chatbots, but tools that can actually do tasks like sending emails, booking meetings, automating workflows, etc.

But I’m curious… are people here actually using them in real life?

- What are you using AI agents for?

- Are they saving you real time or just adding complexity?

- Any tools that actually impressed you?

Feels like we’re either at the beginning of something big… or another overhyped phase.

r/whatisit cccTripleccc

Glass Liquor Bottle

I found this in the ocean in Charleston, SC and am trying to figure out the brand. I know the bottle was made by universal glass products. it says half pint and liqour bottle on the bottom.

r/ChatGPT Silent-Chair

Do anyone have this feature?

does anyone have auto-swtich to thinking?

r/AbstractArt Does_not_matter__

Untitled. Mixed media on paper.

8.5x11. Me.

r/mildlyinteresting Civil_Ad6237

One super moldy clementine out of the bunch after few days

r/Damnthatsinteresting the_ua

A 25 ft measuring tape can product approximately 40 slap bracelets

r/SideProject MrLucZeb

I built a tool that automatically organizes files on your computer. No more messy Downloads, Desktop, or Documents folders with 4000+ files

Built this application to help keep computers organized. Very curious to know if others find this useful. Made it free to download and use.

https://www.aviansort.com/

r/ollama PrimeEclipsar

Open-sourced Product manager

I made a free Product Manager OS that runs inside your coding agent — 13 workflows, saves as markdown, MIT licensed

Most solo builders skip product thinking not because they don't care — but because the tools are either too heavy

(Jira, Linear, Notion) or too generic (ChatGPT prompts).

So I built Compass for Vibecoders — a free, open-source skill for Claude Code, Antigravity, and OpenCode that gives you a full PM toolkit without leaving your terminal.

What it does:

- Write PRDs (with JTBD framing, not just bullet points)

- Break features into user stories with proper acceptance criteria

- Prioritize your backlog with RICE scoring + OKR linkage

- Synthesize user interviews using Ulwick opportunity scoring

- Run pre-mortems before shipping

- Set OKRs with leading indicators

- Generate feature ideas with effort/impact evaluation

- + 6 more

Everything saves as markdown in your project. You own it. No login. No subscription.

One install, works in Claude Code, Antigravity, OpenCode.

GitHub: github.com/URTD14/Compass-for-Vibecoders

Happy to answer questions — and open to PRs if anyone wants to add workflows.

r/SideProject FuzzySupport4661

Real-time audio calling was way harder than I expected (lessons from building it)

I recently built a real-time audio calling feature using Django Channels + React.

I went in thinking this would take a few days. It didn’t.

A few things that were way harder than I expected:

WebSocket connections randomly dropping and being hard to debug

Audio delays that made conversations feel unnatural

Keeping frontend and backend state in sync without things breaking

The biggest shift for me was realizing that “real-time” isn’t just about sending data fast — it’s about handling failure, timing, and consistency under messy conditions.

A couple of things that helped:

Being very explicit about connection lifecycle (connect/reconnect/cleanup)

Accepting that latency will happen and designing around it

Simplifying state instead of trying to perfectly sync everything

I wrote two short blogs breaking this down (backend + frontend). Not polished tutorials — just what actually went wrong and what worked in the end.

Happy to share if anyone’s interested.

Curious — what’s been the hardest part of building real-time features for you?

r/geography MarsupialThink4064

I didn't realize how much of the USA had no forest at all! This map shows forested area in green. The central part seems so barren.

r/n8n Striking_Rate_7390

Ran the same daily reporting job on n8n Schedule Trigger vs a RunLobster agent cron for 30 days. n8n hit 30/30. Agent hit 26/30. Not close. Where deterministic workflow still wins over agent runtime.

writing this because the "agents will replace n8n" framing that's going around is wrong in a specific way i can now back up with 30 days of logs. the correct framing is complementary (which the sub already landed on, see the Managed Agents thread last week) but i want to nail down which half each tool owns.

setup:

identical job, two implementations.

n8n side: Schedule Trigger at 06:45 UTC daily, then a Postgres query (yesterday's orders), then a Function node (format markdown table), then Gmail send to me + to my accountant. 4 nodes. self-hosted n8n on a Hetzner VPS.

agent side: cron job on a RunLobster agent, same schedule, instructed to: query the same Postgres, build the same markdown report, email it to the same two addresses. agent has its own terminal, psql, and email-send skill.

both get the same data. both produce an essentially-identical email. i ran them in parallel for 30 days starting march 14.

the scorecard (fired = email arrived at the right address with the right data by 07:00 UTC):

on clean days (28 of them, nothing unusual): n8n 28/28, agent 24/28. on the 2 days with a postgres hiccup: n8n 2/2 (retried cleanly), agent 2/2 (reasoned through it, fine). totals: n8n 30/30, agent 26/30.

where the agent missed (the 4 failures):

day 6: agent was mid-conversation with me when 06:45 hit. queued the cron. it fired at 07:14 after the conversation ended. email was late.

day 13: agent decided the "markdown table format" from yesterday could be improved and sent a prettier HTML version. my accountant's inbox rules didn't catch it. it was there but i had to search for it.

day 19: agent's underlying model had a brief Anthropic API blip. the fallback kicked in (Sonnet -> Opus) but added 6 minutes of latency. still arrived before 07:00 but with two different model signatures in the session log, which broke my downstream diff-audit script.

day 24: agent missed it entirely. investigated. the container had a memory spike from an unrelated task the night before, self-healing kicked in at 06:38 and re-started the container, the cron registration didn't re-register on the new instance (my misconfig, but still a real failure mode). email didn't fire.

n8n meanwhile: 30 for 30. zero drift, zero creative edits to the output format, zero reasoning about whether the job should run. it fired at 06:45 every day.

the principle this points at:

"agent" and "deterministic workflow" are different things for a reason. agents are for tasks where the right answer depends on context and judgment. deterministic workflows are for tasks where the right answer is the same answer every time regardless of context.

a daily report email is in the second bucket. i don't want my reporting job to "improve the format" one day. i don't want it to reason about whether to run. i want it to fire. n8n's Schedule Trigger is boring, and boring is what i want.

what the agent side is actually good for (the counter-case):

ran a parallel experiment on a task that IS judgment-bound: reviewing my Stripe disputes queue and deciding which to challenge. agent wins that decisively. it reads the customer's whole history from CUSTOMERS.md, pulls the related Slack conversations, and writes me a recommendation with receipts. n8n can't do that. any amount of nodes wouldn't do that. it's not a deterministic problem.

rough decision rule i'm using now. if the task has a fixed input shape AND a fixed output shape AND needs to run on a schedule, n8n. if the input is fuzzy OR the output requires judgment against your accumulated context, agent. most real workflows are a mix, in which case n8n owns the trigger + writes, agent owns one HTTP-called step in the middle (see the pattern a bunch of people here are converging on).

the "agents kill n8n" take is wrong. the "they're complementary, tell me exactly where each wins" take is what this sub is good at and i wanted to contribute one honest data point.

logs + the exact postgres query + both implementations in a reply, happy to share if useful.

(worth the disclaimer: n=1 setup, one business, 30 days. YMMV. would genuinely love to see other people's cron-reliability numbers on agents because this is the axis that doesn't get measured.)

r/funny wallykins77

Looks like Smokey the bear can put out any fire alone.

r/SipsTea xCandyGlow

That is the “locking on opportunities” look😂😂😂

r/HistoryPorn yuurrrczyykk_01

The 1938 bombing of Barcelona by Axis-supported Nationalist forces. [2048×1364]

r/LocalLLaMA jbunji

I recreated OpenClaw-style memory in Claude Code using hooks + local files

I’ve been experimenting with ways to make Claude Code feel less stateless between sessions.

One idea I tried was inspired by OpenClaw — using simple note-taking + summarization to preserve continuity.

Instead of relying on built-in memory, I set up a small file-based loop:

  • summarize session activity on exit
  • append structured notes to a rolling log
  • rebuild a bootstrap prompt from:
    • a personality file
    • long-term notes
    • recent sessions

Everything is just local files + hooks — nothing fancy.

What surprised me is how much this changed the experience.

Claude started:

  • remembering what I worked on previously
  • keeping track of bugs / deployments
  • continuing threads across sessions

It feels way closer to a “continuous agent” instead of isolated chats.

Curious how others are approaching this.

Are people relying on:

  • built-in memory features
  • external logs like this
  • vector DBs / embeddings
  • something else entirely?
r/whatisit Fire_Walk_With_Me4

Any idea what this is the remnants of? Noticed walking by the side of Grassmere Lake in the lake district. Water stand pipe maybe?

r/SideProject Overall_Cockroach890

A dead simple tweet to image tool hits 500 MRR, 6 small but real lessons learned from building this saas

It took me longer than most people here, but I finally hit $500 MRR on a product that does one thing: converts tweets into images or other formats. It's basically a better tweet pik alternative. yes nothing fancy.

I don't want to make this a product pitch — I just want to share a few small things that actually helped. No big revelations, just tactical stuff from a/b test and mistakes:

1. Google login most. If you're targeting US/NA users, just do Google OAuth. You get real users, real emails, and zero friction. I tested it — it's the clear winner than others.

2. Default to annual billing. Pre-select the annual plan. I was surprised — roughly 40% of paying users go with it when it's the default. That's meaningful upfront cash.

3. Shorten your onboarding. Every extra step kills conversions. The goal is to get users to the "aha moment" as fast as possible. If you can do a no-signup trial, even better.

4. Write better docs, not more support replies. If you're spending a lot of time on support tickets, that's a signal your docs are bad — not that your users are dumb. Write docs so clear that a 10-year-old could understand and follow them. Your time is limited; spend it building.

5. Some things AI shouldn't do. When a user cancels, I manually write a short, genuine email asking why. Response rates are noticeably higher than anything AI-generated. Taste and tone still matter for the moments that count.

6. AI chatbot traffic converts really well. Users coming from ChatGPT/Perplexity/etc. are way more likely to pay. if you want to show up in AI answers, Post on Reddit and other high dr and trust platform. These models pull heavily from here.

Still early but happy to answer any questions.

r/ClaudeAI Few-Reporter8206

HOW TO USE CLAUDE CODE

I'm a sales and marketing professional. I have no coding background. I learned how to create automation using N8N. I want to create more complex structures with Claude Code. How can I do that? There are dozens of videos on YouTube, but I don't want to waste time there. I want to start directly with practical application. What can I do?

r/Damnthatsinteresting Inevitable_Rock_2010

The Piano House, Huainan City, China, completed in 2007 by Hefei University of Technology students. Resembling a grand piano with a glass violin, it serves as an exhibition space, tourist attraction, and community space, often cited as the most romantic building in China.

r/ClaudeCode linmyat

Open-sourced the product-memory workflow I use with Claude Code while building my SaaS

https://preview.redd.it/i8rrjy30n5vg1.png?width=1686&format=png&auto=webp&s=23024ec7888d4c1eb86e823e7914bdc7fa786733

I've been building a SaaS product with Claude Code, and one thing kept bothering me: the code kept moving forward, but the product context kept getting fragmented.

Research would sit in one doc. Strategy notes somewhere else. Grooming decisions half-buried in chat history. Over time, it got harder to carry the "why" behind a feature into the actual work.

So I built a structured workflow for myself — a set of slash commands and markdown-based templates that live inside the repo — and decided to open source it:

https://github.com/soelinmyat/pm

The idea is simple: keep product context close to the code instead of scattering it across external docs and chats. It covers things like research capture, strategy notes, evidence logging, and feature grooming, all in a format Claude Code can reference directly.

It looks like a lot of commands, but in practice I mostly use /pm:research, /pm:groom, and /pm:dev day to day. The rest are there when you need them.

I've been using it while building my own project and it's still early, but it has already made the workflow feel much more coherent. Context carries forward between sessions instead of getting lost.

Curious whether this resonates with others here:

  • Do you run into the same problem of losing product context across sessions?
  • Would you actually use something like this, or do you solve it differently?
  • What feels unnecessary or overbuilt?

Would genuinely appreciate honest feedback.

r/BrandNewSentence nins_

Why Jesus pulled a TACO against Pope Leo XIV

r/Adulting DrunkenRantGuy

It’s a good life

r/ClaudeAI Present_Youth_7900

Stiching clips together

Hello guys,

I'm relatively new to claude and coding itself, so don't hate me for this question.

I would like to make an app or extention in where I can stich 2 video clips together.

The transition should be so good that you can barely see that there was a sticking together.

I'm making veo 3 clips and since they are only 8 sec long I want to stich 2 clips together, basically where 1 clip end the next starts and ends.

I managed to make a prompt and an app for the prompts so it's written so that the 2 clips make sense together.

My question is how can I automatically stich the 2 clips together so that the finished clip looks like if only 1 clip was made?

Is there any option to code an app like that within claude?

Note, the videos are in Hungarian language

Thanks for all the tips in advance 🙏🏻

r/homeassistant EssaySlow323

Backups need a password!?!

TLDR: I wanted to push a Backup on my new HA instance just to figure out that Backups need a password. Now I can restart building my Home Assistent from scratch.

How did I end up here. A week ago I made a supervisor update. After the update all my ZigBee devices seemed offline. So I started to troubleshoot.

Reposition the ZigBee Hub, changing to a new hub, changing the cable, basically all I and Chatgpt could think of.

in the end it seems the USB ports are not sending or receiving data.

Last test, installing a new HA instance to see if it hardware or software/firmware related. This is where I made my mistake. Instead of using a new SD card, I used the existing one, believing that I can just reload the backup and live a happy life.

Turns out it's hardware related and the RPI needs to be exchanged. I ordered a new one and to not live like a caveman I wanted to use my backup. At least I can use some of the comforts my smarthome has to offer.

Yep, here I am trying to figure out how to turn on and off a light strip without wall switch until the new instance is up and running in a couple of days.

r/LocalLLM Curious-Soul007

Local AI in the browser: I built a tool to optimize prompts via WASM so I can stop leaking my data to cloud "Prompt Marketplaces.

https://preview.redd.it/whg3re9tm5vg1.png?width=2174&format=png&auto=webp&s=80870b2a9ad1cbb9600904cc1fd842ed929b94ad

I recently went down a rabbit hole into "Prompt Poaching"—where malicious extensions intercept your AI conversations to sell data to brokers. It’s a massive security hole, especially if you’re using AI for work or sensitive research.

https://aakashkotkar03.github.io/prompt-enhancer-website/

I wanted the optimization power of tools like AIPRM but without the privacy risks (or the $899/mo price tags).

What I built: An extension called Prompt Enhancer that runs localized inference via WebAssembly. It uses Flan-T5, which is instruction-tuned to outperform much larger models on zero-shot tasks.

Key Specs:

  • 100% Offline Processing: All AI logic is in the extension package.
  • Prompt Scorer: Gives you a 0-100 quality score and tips to avoid "hallucination traps".
  • No "Token Tariffs": Since it's local, it doesn't eat into your cloud API limits.

If you’re privacy-conscious, how are you currently handling the "ambient noise" of trackers and scripts around your AI windows?

r/TwoSentenceHorror ScriptedDreamscape

A car just drove past my house.

I live on the forty second floor.

r/LocalLLaMA raketenkater

The LLM tunes its own llama.cpp flags (+54% tok/s on Qwen3.5-27B)

This is V2 of my previous post.

What's new: --ai-tune — the model starts tuning its own flags in a loop and caches the fastest config it finds.

My weird rig: 3090 Ti + 4070 + 3060 + 128GB RAM.

Model llama-server llm-server v1 tuning llm-server v2 (ai-tuning) Qwen3.5-122B 4.1 tok/s 11.2 tok/s 17.47 tok/s Qwen3.5-27B Q4_K_M 18.5 tok/s 25.94 tok/s 40.05 tok/s gemma-4-31B UD-Q4_K_XL 14.2 tok/s 23.17 tok/s 24.77 tok/s

What I think is best here: --ai-tune keeps up with updates on llama.cpp / ik_llama.cpp automatically, because it feeds llama-server --help into the LLM tuning loop as context. New flags land → the tuner can use them → you get the best performance.

i think those are some solid gains (max tokens yeaaahh), plus more stability and a nice TUI via llm-server-gui.

Check it out: https://github.com/raketenkater/llm-server

r/AI_Agents Same_Technology_6491

things I got completely wrong about the testing market

I come from product at a fintech company and have watched our qa team spend more time fixing broken tests than catching actual bugs. I thought I understood the problem well enough to build the solution but i was wrong about almost everything.

First thing was thinking developers were the ones who needed convincing. They aren't the buyers, the person who feels the consequences of bad testing is the engineering manager who owns release confidence, and i spent months talking to the wrong people.

I thought flakiness was the main complaint but it isn't. What exhausts teams is the maintenance, every ui change, every new device, every os update creates more work for the same people. When you talk about that specifically, budget conversations start happening.

I assumed 97% accuracy was a strong number. A qa team whose job is to catch what slips through hears that as 3% they still have to answer for but that realization took longer than it should have.

I thought switching costs were technical. A team that has been on appium for three years has someone who built that setup, knows where it breaks, knows how to fix it and replacing that isn't about migrating code, it's about convincing people to give up something they trust and that's a much harder conversation.

The sales cycle was the most expensive thing I got wrong. Testing infra sits inside production pipelines which means security reviews, procurement, compliance sign offs, and four people who can each say no independently. A good demo gets you another meeting and i kept mistaking interest for momentum and it cost us months.

r/whatisit buttsofpoop

What is this brownish thing in my doorframe?

lighter for size comparison, although the angle throws it off a bit. Just noticed it today... haven't touched it for obvious reasons.

r/AbstractArt mantrakid

The Flow

r/ClaudeCode sydcli

I built a Claude Code plugin that ships a production-ready SaaS

I've been using Claude Code to build SaaS products and kept repeating the same setup: marketing site, auth, Stripe billing, and team management. So I turned the workflow into a Claude Code plugin.

How it works:

claude plugin add github:saasroo/build-saas 

Then just tell Claude what you're building:

Build me a SaaS for project management with pricing at $0/$29/$99 

The plugin has three skills:

  • build-saas — Asks what you need, routes to the right module, and links everything together at the end
  • build-marketing-site — Builds a Hugo marketing site using the Saasify theme (21 shortcodes, i18n, 90+ Lighthouse scores)
  • build-web-app — Scaffolds a React + Firebase + Stripe app using the Fireact framework (auth, billing, teams, RBAC)

Each skill gathers your requirements through conversation, then generates the config, pages, and components. If you build both, it links the marketing site CTAs to your app's signup and syncs the branding.

The skills include 14 reference files, so Claude has all the context it needs — installation guides, config templates, shortcode references, architecture docs — without hitting external documentation.

Real-world proof: I used this plugin to build EquiRound (equiround.com) — a cap table management platform with 57 React components, 9 languages, and Stripe billing.

Open source, MIT licensed: github.com/saasroo/build-saas

Would love feedback from other Claude Code users — what skills would you add?

r/SipsTea Shumei-Chan

A whole new level of service

r/LocalLLaMA NoMechanic6746

NVIDIA + UMD released AF-Next: open audio-language model that outperforms Gemini-2.5-Pro on MMAU-Pro (75.01% vs 57.4%). Temporal Audio Chain-of-Thought anchors reasoning to timestamps.

Audio Flamingo Next (AF-Next) — three variants:

AF-Next-Instruct: audio Q&A
AF-Next-Think: multi-step reasoning with temporal CoT
AF-Next-Captioner: audio description generation

Architecture:
→ AF-Whisper audio encoder
→ Qwen-2.5-7B LLM backbone
→ 128k token context window
→ Ulysses + Ring attention for long-context efficiency

Benchmarks:
MMAU-v05.15.25: Instruct 74.20%, Think 75.01%
vs Gemini-2.5-Pro: 57.4%
LongAudioBench: Instruct 73.9

Supports up to 30 minutes of audio per inference.

The Temporal Audio CoT is the key innovation:
each reasoning step is anchored to a specific timestamp in the audio — making outputs interpretable, not just accurate.

Available on HuggingFace. Open source.

r/SideProject Popular-Position-835

Would u pay for it?

for $5?

or $X?

i just built this in a day.

15cm,

movable

r/Jokes bajujaga

A guy went to a school reunion and met a hot girl

They started chatting and hit it off quite well. As the night goes on they had a few drinks and started getting frisky.

While making out with her he ask, "Who are you here with?"

Smiling sheepishly she said, "I'm your class mate, you silly!"

He retorted, "But this is a boys schooooool........"

r/WouldYouRather OpusReader

If you fell into a book and had to live out your days in that alternate reality, WYR be the main character or a side character?

It can be any fictional book of your choice. Your fate would not be the same as the characters anymore because you would maintain your autonomy BUT your life would still be shaped by your environment and the choices the people around you make as well.

For instance: If you chose to be Harry Potter in Harry Potter, you COULD deviate from the original plot, but it might get you killed, or it might save Dumbledore, who knows? You still would have to face the death eaters and maybe die.

So if your soul got trapped in a book’s character… would you rather have the stress and the glory of being the main character or have the less stress and less glory of being a supporting character?

r/me_irl adolchristin98

Me_irl

r/meme chuunibyou244

Seeing an AITAH then checking the comments

It do be like that sometimes, or most of the time 😂​

SortedFor.me