AI-Ranked Reddit Feed

5000 posts

r/SideProject Rich_Specific_7165

Built a small AI toolkit to stop doing repetitive work manually (free version in bio)

I didn’t originally plan to build anything.

This started because I kept running into the same problem over and over.

I was using ChatGPT a lot, but it didn’t actually save me that much time.

I’d still:

  • write emails manually
  • start content from scratch
  • redo the same research steps every time

So even though I was “using AI”, my workflow didn’t really change.

What I realized after a while is that the issue wasn’t the tool.

It was how I was using it.

Most of the time I was just typing random prompts like:
“write me this”
“help me with that”

Which works, but it means you’re basically starting from zero every time.

So I started paying attention to what I was actually doing step by step.

For example, writing anything:

Before:
come up with idea
figure out how to start
write something
rewrite it

Now I do:
generate multiple angles
pick one
expand it
turn it into different formats

Same outcome, just way faster and less mental effort.

I started doing this for other things too:
emails, content, basic research, even some repetitive admin stuff.

Eventually I had a bunch of these small “flows” written down.

Nothing crazy individually, but together they save a lot of time because I’m not thinking from scratch every time.

So I cleaned them up and turned them into a simple toolkit.

It’s basically a set of prompt systems instead of random prompts.

Each one is meant to replace an actual task, not just give you a one-off result.

I also made a smaller free version because I figured most people just want something they can try quickly.

If you’re already using AI but feel like it’s not really saving you time yet, this might help.

You can check out the free prompts in my bio if you want.

Curious if anyone else here has tried structuring their prompts like this instead of using one-offs.

r/SideProject beelzebee

Building an AI job search agent and compute costs are killing our pricing model. How are you all handling pricing for user-compute?

My crew and I built an AI job search agent at 10xJobs.co. You point it at jobs you've already applied to and the resume you already have. It learns your background, tailors applications, builds interview research packages before you think to ask, and does role play prep for final rounds.

One of our first users went from zero callbacks to an offer at a top company in their field in three weeks.

The pricing problem and current solution:

Compute costs vary wildly between users. Some sessions are light. Some users run deep research, role play loops, and multi-step interview prep or resume review iterations for hours.

We tossed around an idea of a 3-day pass model ($10 when you're active, skip it when you're not). This made sense because job searching is episodic. People go hard for a few days, rest, and then come back. Users might feel guilty paying for a monthly subscription when they're not actively searching and cancel, even if the product was working for them.

Other ideas:

  • flat subscription for access to the platform, we pass on compute costs (at-cost) to the user
  • flat subscription with soft data cap, we route users to a lesser model but they can keep using the tool

Tech stack for context:

  • OpenClaw running in a restricted configuration. safe tool and skill calling only
  • Each user gets their own isolated instance and can add custom job search skills.
  • Routes to Anthropic and OpenAI models depending on the task
  • Supabase for storage

How are others handling compute-heavy, variable-usage AI tools? Token caps? Tiered subscriptions? Something else ?

10xjobs.co if you want to see what we built (Note: "Get Started" currently routes to a waitlist while we are figuring out pricing model).

r/SideProject madeo216

I built a fully automated daily AI news podcast using Claude Code + ElevenLabs

I wanted to share a project I recently launched: a daily AI news podcast that runs entirely on its own. The whole thing started as me wanting to prove I could build something end-to-end with AI tools. It is called Build By AI and it's now live and publishing episodes regularly.

Claude Code helped to code the whole thing besides that i Used ElevenLabs to convert script to audio and Buzzsprout via their APIs.

Happy to answer questions about the pipeline or any of the tools! Would you actually listen to one, knowing there is no human host behind it? Or does that put you off?

r/comfyui amateurcsgo

3d stl in comfyui

Could you tell me how to create 3D models from photos in comfyui without censorship? Is there a guide you could link to? Thanks.

r/ClaudeCode dmytro_de_ch

Claude Pro at $20 is a trap for dev work. Here's what I actually pay for at every price tier.

I'm on Claude Max 20x for my daily coding work and I've tested most other LLM subs over this year. Wrote up a tier-by-tier guide based on what I actually use, not vendor marketing.

Here's the thing with Claude Pro at $20: you hit the Opus 4.6 cap after roughly 30 minutes of real Claude Code work. Then you're locked out completely. No Sonnet fallback, no web UI, no Claude Code. Hit the cap at 10am, you're not using Claude again until after lunch. Anthropic is clear it's a light-use plan, but a lot of devs buy it expecting unlimited Opus and get burned.

What to buy instead:

$200/mo, Claude Max 20x (what I use). 20x the Opus quota, unmetered Sonnet 4.6, generous Claude Code agent limits. A real Claude Code day on Opus burns $100-500 in raw API tokens. Max 20x pays for itself in about two productive days. If you live in Claude Code, this is the sub.

$100/mo, Claude Max 5x. The sweet spot for most working devs I talk to. You still get Opus when you need it, still get Claude Code, save $100/mo. The trade-off is honest: you hit the Opus wall sooner. If you're heavy on Opus specifically, Max 20x earns the upgrade within two weeks.

$20/mo, ChatGPT Plus (not Claude Pro). Separate chat and Codex pools. Morning planning session doesn't eat into afternoon coding time. GPT-5.4 is a capable all-rounder. If you're picking one $20 AI sub, this is the honest choice.

Under $20, Google AI Plus ($7.99). Full Gemini 3.1 Pro flagship, Deep Research, Veo 3.1 Fast, no ads. Best bundle at this price by a wide margin.

One thing Max 5x users should check: Claude Code's default reasoning effort dropped to "medium" in early 2026. A lot of people noticed a quality drop without knowing why. If your output feels worse than it used to, that's probably it. Covered the fix in the full post.

The $100 tier just got interesting too. OpenAI launched ChatGPT Pro 5x at $100 (mirrors Claude Max tiers), Synthetic.new runs OpenCode agents on open-source models for ~$80, and Kimi K2.5 Pro at $99 undercuts Claude by 5-6x on API pricing. Full comparison in the writeup.

Pay monthly on everything. Models drop monthly in 2026. Claude Max doesn't even offer annual.

What tier are you on? Wondering how many people are on Pro thinking the Opus limits are normal vs. Max 5x/20x users who made the jump.

Full post with benchmark tables, multi-sub stacks, and comparison: https://llmx.tech/blog/best-value-llm-subscriptions-2026/

r/ChatGPT Madmax046

Best resources for tracking AI startups regularly?

Hey everyone — I’m looking for good resources to stay on top of interesting AI startups on a daily or weekly basis.

I’m especially interested in websites, newsletters, databases, X/Twitter accounts, blogs, subreddits, or any curated sources that consistently highlight emerging AI companies, new launches, funding rounds, and promising early-stage teams.

Ideally, I’d love resources that are:

- updated daily or weekly

- focused more on discovering noteworthy startups than just big AI news

- useful for spotting trends early

What do you all use and actually find valuable?

Would appreciate any recommendations. Thanks!

r/LocalLLaMA wossnameX

Intel NPU cannot run a LLM, can it?

I think so. And the ARC iFGX on many laptops is "good enough" for many use-cases.

I wrote code to for a work-project under GDPR; Worked well enough. 15.000 images compared overnight; Took about 7 hours.
Slow, but secure.

r/ClaudeAI alfons_fhl

Claude Code CLI users – do you “brief” it first or just throw the task at it?

ClaudeCode CLI with Opus 4.6

🗺️ A) “Read the project, configs & .env first. Don’t do anything yet.”

🚀 B) Just give it the task and let it figure it out.

Which one are you and why? 👇

r/ClaudeAI seamoce

I built AmicoScript with Claude Code: A local-first transcription tool with Speaker ID and Ollama support

Hey everyone,

I wanted to share a project I’ve been working on called AmicoScript. It’s a local-first web UI for audio transcription that handles the privacy-sensitive task of turning recordings into text without sending data to the cloud.

I relied heavily on LLMs to architect the backend and handle the "plumbing" between the different ML models. It’s been a great exercise in seeing how far AI-assisted coding can go for a functional, multi-container project.

What it does:

• Transcription: Uses Whisper (tiny to large-v3) running locally.

• Speaker ID: It identifies different speakers so the transcript actually makes sense (Speaker 0, Speaker 1, etc.).

• Ollama Integration: This is my favorite part—once transcribed, you can send the text to your local Ollama instance to generate summaries or action items.

• Docker Ready: It’s fully containerized. A simple docker compose up --build gets you a private "Otter.ai" alternative on your own hardware.

I’m looking for feedback on the UI and any ideas on how to improve performance on old hardware.

GitHub: https://github.com/sim186/AmicoScript

r/ClaudeAI Kindly_Group292

I built a tool to apply Claude's code edits without full file rewrites. Saves tokens on the free and pro tier.

Claude's free and pro tier has limits. Every time you ask it to edit a file it rewrites the whole thing burning tokens just to change 3 lines.

I built claudepatch to fix this. You give Claude a prompt that makes it output only the changed part, then a script applies it to your file precisely. No second AI, no guessing, just exact string matching.

Supports replace, add, and delete. Works on any text file including .py, .tsx, .html, .css, anything.

GitHub: https://github.com/Supull/claudepatch

r/LocalLLaMA Typhoon-UK

Rack server for local LLM

Hi, has anyone tried running local LLM on dell/hp rack server with older xenon processors and 100+ GB RAM and no GPU?

Dell PowerEdge R720 2 x Xeon-2650v2 - 128gb RAM

I currently run qwen3.5-2b 8_0 on a dell xps 7590 with 16gb RAM and 4gb nvidia gpu. Its alright in chat mode but struggles when integrating with opencode.

r/LocalLLaMA Willing-Toe1942

LLM on the go - Testing 25 Model + 150 benchmarks for Asus ProArt Px13 - StrixHalo laptop

https://preview.redd.it/eq2nojgspsug1.png?width=780&format=png&auto=webp&s=4e0517c673e06dd1995f32b89363c75315dfffb9

So I wanted a portable 13 inch laptop that can be a little LLM monster when needed, Asus did an amazing job with their new 2026 PX13 laptopn powered by strixhalo 128G unified memeory APU

I made benchmark automation system for the amazing toolboxs repo here:
https://github.com/kyuz0/amd-strix-halo-toolboxes

This repo gives you multiple ready to use llamacpp builds with rocm and vulkan

my script is setting the power profile to either (power saving or high performance) then benchmark with llama-bench all the provided gguf with 3 diffrent llama backend (vulkan/rocm nightly/amdvlk)

the overall benchmark for 25 models (varies from 4B to 120B) with all diffrent backends and powerprofils, this took almost 12 hours with average time 4 ~ 5 minutes per run for each model at each configuration

side note: I tested multiple "heretic/hauhau versions" of the mainstream model because I found they are much efficient at thinking process and I saw littel increase in their coding performance comparing to original ones (with some drop in transaltions tasks)

Here is the visualized leaderboard

Token Generation leaderboard

Prompt Processing leaderboard

for power profile power saving I saw consumption near 40 watt and for performance it varies from 60 - 77 watt

------------

llama-bench ProArt PX13 HN7306EAC with strix halo toolboxes

  • Machine model: ProArt PX13 HN7306EAC
  • CPU: AMD RYZEN AI MAX+ 395 w/ Radeon 8060S
  • Architecture: x86_64
  • Kernel: 7.0.0-rc7-2-cachyos-rc
  • OS: CachyOS n/a
  • OS Version: n/a
  • Toolboxes: ['llama-rocm7-nightlies', 'llama-vulkan-amdvlk', 'llama-vulkan-radv']
  • Mode: medium
  • Power Profiles: ['performance', 'power-saver']
  • Prompt tokens: 1024,4096,8192,16384
  • Generation tokens: 512,2048
  • Repetitions: 1

Leaderboard (sorted by Token Generation/Second)

Rank Model Best Gen Backend Power Profile Prompt/Gen Tokens (Gen) Best Gen TPS Best Prompt Backend Prompt/Gen Tokens (Prompt) Best Prompt TPS 1 Marco-Nano-Instruct.Q8_0.gguf llama-vulkan-radv Performance 512 211.325 llama-vulkan-radv 1024 4296.133 2 Marco-Mini-Instruct.Q8_0.gguf llama-vulkan-radv Performance 512 165.874 llama-vulkan-radv 1024 2329.999 3 OpenAI-20B-NEO-CODEPlus-Uncensored-IQ4_NL.gguf llama-vulkan-radv Performance 512 86.033 llama-rocm7-nightlies 1024 1347.876 4 gpt-oss-20b-Derestricted-MXFP4_MOE.gguf llama-vulkan-radv Performance 512 74.471 llama-rocm7-nightlies 1024 1317.919 5 gpt-oss-20b-heretic.MXFP4_MOE.gguf llama-vulkan-radv Performance 512 74.356 llama-vulkan-radv 1024 1323.742 6 Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf llama-vulkan-amdvlk Performance 512 69.059 llama-vulkan-radv 1024 917.500 7 Qwen3.5-35B-A3B-heretic.Q4_K_M.gguf llama-vulkan-amdvlk Performance 512 69.001 llama-vulkan-radv 1024 928.552 8 LFM2-24B-A2B-Q8_0.gguf llama-vulkan-amdvlk Power Saver 512 60.739 llama-rocm7-nightlies 1024 1456.713 9 Qwen3.5-35B-A3B-Q4_K_M.gguf llama-vulkan-amdvlk Power Saver 512 59.614 llama-rocm7-nightlies 1024 911.428 10 Qwen3.5-4B-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf llama-vulkan-radv Performance 512 59.263 llama-vulkan-radv 1024 1716.063 11 Qwen3.5-4B-UD-Q4_K_XL-unsloth-v2.gguf llama-vulkan-radv Performance 512 56.642 llama-vulkan-radv 4096 1600.179 12 gemma-4-26B-A4B-it-UD-Q3_K_M.gguf llama-vulkan-radv Performance 512 55.191 llama-rocm7-nightlies 1024 1044.901 13 gemma-4-26B-A4B-it-UD-IQ4_XS.gguf llama-vulkan-radv Performance 512 52.416 llama-rocm7-nightlies 1024 1510.919 14 bartwoski_Qwen3.5-35B-A3B-Q4_K_M.gguf llama-vulkan-amdvlk Power Saver 512 51.307 llama-rocm7-nightlies 1024 783.849 15 gemma-4-26B-A4B-it-UD-Q4_K_XL (1).gguf llama-vulkan-radv Performance 512 49.469 llama-rocm7-nightlies 1024 1620.560 16 Qwen3-Coder-Next-UD-IQ1_M.gguf llama-vulkan-radv Power Saver 512 48.834 llama-vulkan-radv 1024 472.070 17 Qwen3.5-35B-A3B-UD-Q4_K_XL-unsloth-v2.gguf llama-vulkan-amdvlk Power Saver 512 46.992 llama-rocm7-nightlies 1024 1009.841 18 bartwoski_Qwen3-Coder-Next-IQ4_XS.gguf llama-vulkan-radv Power Saver 512 41.375 llama-vulkan-radv 1024 615.839 19 kldzj_gpt-oss-120b-heretic-v2-MXFP4_MOE-00001-of-00002.gguf llama-rocm7-nightlies Power Saver 512 40.004 llama-vulkan-radv 1024 432.180 20 Qwen_Qwen3-Coder-Next-IQ4_XS.gguf llama-vulkan-radv Power Saver 0/2048 39.801 llama-vulkan-radv 1024 621.813 21 Qwen3.5-9B-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf llama-vulkan-radv Performance 512 36.393 llama-rocm7-nightlies 1024 953.875 22 Qwen3.5-122B-A10B-Uncensored-HauhauCS-Aggressive-IQ3_XXS.gguf llama-vulkan-radv Power Saver 512 27.562 llama-rocm7-nightlies 1024 186.736 23 omnicoder-2-9b-q8_0.gguf llama-vulkan-radv Performance 512 23.944 llama-rocm7-nightlies 1024 986.071 24 bartwoski_Qwen3.5-122B-A10B-IQ3_XXS-00001-of-00002.gguf llama-vulkan-radv Power Saver 512 23.206 llama-rocm7-nightlies 1024 234.785 25 unsloth-Qwen3.5-122B-A10B-UD-IQ3_XXS.gguf llama-vulkan-radv Power Saver 512 20.771 llama-rocm7-nightlies 1024 194.398

Leaderboard (sorted by Prompt Processing T/Second)

Rank Model Best Gen Backend Power Profile Prompt/Gen Tokens (Gen) Best Gen TPS Best Prompt Backend Prompt/Gen Tokens (Prompt) Best Prompt TPS 1 Marco-Nano-Instruct.Q8_0.gguf llama-vulkan-radv Performance 512 211.325 llama-vulkan-radv 1024 4296.133 2 Marco-Mini-Instruct.Q8_0.gguf llama-vulkan-radv Performance 512 165.874 llama-vulkan-radv 1024 2329.999 3 Qwen3.5-4B-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf llama-vulkan-radv Performance 512 59.263 llama-vulkan-radv 1024 1716.063 4 gemma-4-26B-A4B-it-UD-Q4_K_XL (1).gguf llama-vulkan-radv Performance 512 49.469 llama-rocm7-nightlies 1024 1620.560 5 Qwen3.5-4B-UD-Q4_K_XL-unsloth-v2.gguf llama-vulkan-radv Performance 512 56.642 llama-vulkan-radv 4096 1600.179 6 gemma-4-26B-A4B-it-UD-IQ4_XS.gguf llama-vulkan-radv Performance 512 52.416 llama-rocm7-nightlies 1024 1510.919 7 LFM2-24B-A2B-Q8_0.gguf llama-vulkan-amdvlk Power Saver 512 60.739 llama-rocm7-nightlies 1024 1456.713 8 OpenAI-20B-NEO-CODEPlus-Uncensored-IQ4_NL.gguf llama-vulkan-radv Performance 512 86.033 llama-rocm7-nightlies 1024 1347.876 9 gpt-oss-20b-heretic.MXFP4_MOE.gguf llama-vulkan-radv Performance 512 74.356 llama-vulkan-radv 1024 1323.742 10 gpt-oss-20b-Derestricted-MXFP4_MOE.gguf llama-vulkan-radv Performance 512 74.471 llama-rocm7-nightlies 1024 1317.919 11 gemma-4-26B-A4B-it-UD-Q3_K_M.gguf llama-vulkan-radv Performance 512 55.191 llama-rocm7-nightlies 1024 1044.901 12 Qwen3.5-35B-A3B-UD-Q4_K_XL-unsloth-v2.gguf llama-vulkan-amdvlk Power Saver 512 46.992 llama-rocm7-nightlies 1024 1009.841 13 omnicoder-2-9b-q8_0.gguf llama-vulkan-radv Performance 512 23.944 llama-rocm7-nightlies 1024 986.071 14 Qwen3.5-9B-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf llama-vulkan-radv Performance 512 36.393 llama-rocm7-nightlies 1024 953.875 15 Qwen3.5-35B-A3B-heretic.Q4_K_M.gguf llama-vulkan-amdvlk Performance 512 69.001 llama-vulkan-radv 1024 928.552 16 Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf llama-vulkan-amdvlk Performance 512 69.059 llama-vulkan-radv 1024 917.500 17 Qwen3.5-35B-A3B-Q4_K_M.gguf llama-vulkan-amdvlk Power Saver 512 59.614 llama-rocm7-nightlies 1024 911.428 18 bartwoski_Qwen3.5-35B-A3B-Q4_K_M.gguf llama-vulkan-amdvlk Power Saver 512 51.307 llama-rocm7-nightlies 1024 783.849 19 Qwen_Qwen3-Coder-Next-IQ4_XS.gguf llama-vulkan-radv Power Saver 0/2048 39.801 llama-vulkan-radv 1024 621.813 20 bartwoski_Qwen3-Coder-Next-IQ4_XS.gguf llama-vulkan-radv Power Saver 512 41.375 llama-vulkan-radv 1024 615.839 21 Qwen3-Coder-Next-UD-IQ1_M.gguf llama-vulkan-radv Power Saver 512 48.834 llama-vulkan-radv 1024 472.070 22 kldzj_gpt-oss-120b-heretic-v2-MXFP4_MOE-00001-of-00002.gguf llama-rocm7-nightlies Power Saver 512 40.004 llama-vulkan-radv 1024 432.180 23 bartwoski_Qwen3.5-122B-A10B-IQ3_XXS-00001-of-00002.gguf llama-vulkan-radv Power Saver 512 23.206 llama-rocm7-nightlies 1024 234.785 24 unsloth-Qwen3.5-122B-A10B-UD-IQ3_XXS.gguf llama-vulkan-radv Power Saver 512 20.771 llama-rocm7-nightlies 1024 194.398 25 Qwen3.5-122B-A10B-Uncensored-HauhauCS-Aggressive-IQ3_XXS.gguf llama-vulkan-radv Power Saver 512 27.562 llama-rocm7-nightlies 1024 186.736

Here is more detailed tables with exact context length for each run

https://pastebin.com/UU3rFKNA

r/ClaudeCode hansschepers

Claude code context drift ?

I’m working on a chatbot and I’ve run into a specific Claude behavior in longer debugging sessions. Sometimes Claude seems to lock onto an early diagnosis, become very confident in it, and then keep suggesting variations of the same fix even after that approach has already failed. As the thread gets longer, it can feel harder to correct the course, because earlier wrong assumptions keep shaping the next answers.

What makes this tricky is that the replies still sound reasonable step by step, but they stop aligning with the actual goal of the task. It starts to feel like the conversation is coherent on the surface while drifting away from the real problem. The way I’m trying to force Claude to store, memorize and use new insights (eg maintaining action lists with open/work in progress/done, storing progress every 10 minutes) apparently won’t do the trick. I’m curious whether others have seen this pattern:

• early overconfidence in the wrong diagnosis

• repeated fixes that are basically the same idea

• difficulty recovering once the thread is stuck

• loss of important earlier corrections in long chats

• difficulty recovering once the thread is stuck

• losing important info during compression of chats

For people using Claude for serious coding or debugging: what actually helps here? Do you restart the thread, force structured summaries, keep external notes, or use another method to prevent context drift?

r/AI_Agents Difficult-Win8915

Any experiences with AI tools optimizing Order Returning rate for your Ecommerce site ?

Hey, Actually Im running an Ecommerce site in India.

Returning orders is almost killing my profits brutally.

Even though I have done everything in best for my products, Im still getting returns. Few AI companies offering me their tools.

Can any one of you share your experiences with such tool that can reduce rate of returns ?

How accurate they are and is it worth getting such tool to my business ?

It will be very helpful my business if you share some of your thoughts on this.

Thanks ;)

r/ollama cupplesey

Ollama with Docker OpenWebUi using CUDA/GPU Support container - Can't get it working

Hi all

Ive had an issue with running the OpenWebUI docker container using Ollama on my GPU laptop. I am new to the local hosting of LLMs so please bear with me if im dong something stupid.

TL;DR - If i run the none GPU support versions as per the OpenWebUI guide then it works fine but CPU only, GPU is idle as expected. When i then run the GPU/CUDA support container it works but wont connect to Ollama locally and cannot load/download any models and cannot test GPU is enabled.

Source - https://docs.openwebui.com/getting-started/quick-start/

Laptop spec (i know its not the best, fast or anywhere near efficient for LLMs but it was free so hey):

  • HP ZBook Studio G3
  • Intel i7-6700HQ
  • 32GB DDR4 RAM
  • Nvidia Quadro M1000M 4GB VRAM GPU (Maxwell Architecture)
  • 256GB NVMe - 80% free
  • Ubuntu Server 25.04 LTS
  • Nvidia v470 Ubuntu Drivers installed - Nvidia-SMI working and outputs as expected.
  • Docker installed as per the OpenWebUI guide
  • Does have an Intel UHD GPU but i have enabled discrete PGU in the BIOS to try and force this the Nvidia GPU to primary

Models currently installed on the working OpenWebUI docker instance:

  • Mistral:latest
  • Deepseek-r1

When running the following OpenWebUI with Docker using the following run string it loads the container, no GPU supported as expected and is idle when prompt sent to the model only CPU used:

"docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main"

When i run the GPU support container string, its the same as with CPU only. GPU idle when running any prompt:

"docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama"

When i run the same with CUDA enabled it will load the container but it will not see the Ollama instance locally:

"docker run -d -p 3000:8080 --gpus all -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:cuda"

Error - "OpenWebUi - Server Connection Error" and "Failed to fetch models"

Ive changed the connection to the various options, http://localhost:11434, http://127.0.0.1:11434 and http://host.docker.internal:11434 but all fail to connect.

Also tried many versions of the string such as below but all fail to connect to Ollama locally:

"docker run -d -p 8080:8080 --gpus=all -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://host.docker.internal:11434 –name open-webui ghcr.io/open-webui/open-webui:cuda"

"docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -e OLLAMA_BASE_URL=http://127.0.0.1:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda"

All but the none GPU supported instances work fine but no GPU being used, i can add models, connect to Ollama locally etc its only the CUDA instance that fails.

I have spent many hours researching the Nvidia, docker etc forums and based on what i have done it 'should' work. Installing various add-ons like CUDA tool kit, docker-ce to various other Nvidia drivers but the only driver that works is 470. 595 etc dont allow Nvidia-SMI to output. I am now seeking help as i cannot figure this out.....

  • So is there a known issue with docker CUDA OpenWebui container for Ollama and i destined to fail?
  • Do i need to install Ollama locally rather than using docker when using the GPU/CUDA support container?
  • Is there a way for docker to recognise the GPU or provide a pass-through for docker and the hardware to 'talk' better?
  • Is it just not possible to get this setup working with GPU i have?
  • Is the Intel builtin GPU getting in the way, and if so, is this fixable?
  • Am i doing something wrong, missing something, need to make a blood sacrifice to my God of choice....?

Thanks for even reading this far and any help is appreciated

Edited: Spell checking errors

r/AI_Agents _h4xr

Built a semantic graph for AI agents, would love some feedback

Hi Community,

With the adoption and aggressive push of using AI agents in nearly every enterprise, I have been curious how we can improve the output that is generated by the AI agents.

Today, when it comes to coding related tasks, AI agents struggle with understanding the context of the code, for example code organization hierarchies, transitive method calls and the side effects. The agents rely mostly on text search tools like grep, glob, etc to fetch the code and build context.

To improve on this, I started building a code graph which provides context of code hierarchies and method hierarchies and calls and integrated it as a skill for Gemini and Claude code.

The results I saw in testing were amazing, with both the agents seeing sharp improvements in response and accuracy of the outcomes. For example, in solving an open source task of migrating from UUID4 to UUID7 in a large codebase, the agent using the semantic graph was able to target 30 off callers by changing a single centralized method while also converging the codebase. Without the semantic graph, it acted as a text replacing engine.

Will love to get some feedback and opinions

r/arduino ThiccGibblet

Timer! (First full solo project)

https://reddit.com/link/1sjmdrg/video/5iqwlpunysug1/player

I've been messing around more with Arduino recently and made my first project that wasn't based off a tutorial/guide of some kind, as the vid says it's likely way over-complicated due to my inexperience but I feel somewhat proud of it! Lemme know what you guys think!

r/ClaudeCode NoContribution8209

Im new into claude code, what plugins / skills are the ones i should look out for for token consumption?

Hey, im new to claude code and i want to minimize token consumption but idk where to start, is there any plugins or skills that will help? I work with large codebases and one task empties out my usage

r/singularity arewawawa

"This combustible mixture of ignorance and power is going to blow up in our faces”, said Carl Sagan in 1995. We’re living it with AI in 2026

Carl Sagan wrote this in The Demon-Haunted World :

"We’ve arranged a global civilization in which most crucial elements - transportation, communications, & all other industries; agriculture, medicine, education, entertainment, protecting the environment; and even the key democratic institution of voting - profoundly depend on science and technology. We have also arranged things so that almost no one understands science and technology. This is a prescription for disaster. We might get away with it for a while, but sooner or later this combustible mixture of ignorance and power is going to blow up in our faces.”

Fast-forward to right now : LLMs and frontier AI models are deciding what news you see. Deepfakes are capable of swinging elections and most people can not even explain how their phone works, let alone backpropagation or anything more technical.

We look like to be racing toward the singularity while handing such god-level seeming technology to a society that treats “the cloud” like literal magic(meaning to say they don't know a thing behind the weather).

Sagan also warned that we would slide “back into superstition and darkness” while clutching crystals (or doom-scrolling AI-generated conspiracy feeds). And without the inner science of human wellbeing that some voices like Sãdhgùrù have long highlighted, the outer explosion only seems to accelerates the chaos.

The gap between our tech dependence and public understanding is looking wider than ever. Carl called this exact scenario “a prescription for disaster.” Is this inevitable? Thoughts?

r/comfyui juanpablogc

More updates in the image creator with comfyUI behind

Rainy day, updates day. I added tons of new features for the image generator. Better interface, tags, better chat, import images with workflow it autoextracts the texts (pos and neg). I really like the way I'm creating it because it's different in order it will be open source so not money oriented and the focus is more on generate images more than you know burn out credits.

r/ChatGPT No-Fruit-7213

Why has chatgpt started arguing with everything I say??

I can make a factual statement and chatgpt will say "to push back on what you said slightly"...

It must be programming after the AI psychosis nonsense..

r/ChatGPT n3oz22

Lazyagent - Observe your ai agents and sessions from the terminal

Lazyagent a simple way to see what your coding agents were actually doing across Claude, Codex, and OpenCode.

Once you have more than 1 agent running, its really hard to answer the simple question: what is it doing right now and why?

It is a local terminal UI that collects your agents runtime events and shows their activity in one place.

You can inspect:

- agents and subagents

- tool calls

- prompts

- outputs

- status changes

Sessions and projects are there to give that activity some context, so it is easier to follow what happened over time and which run belongs to which repo. And make it easy to see all the agents involved in a run at a glance and understand what each one is doing.

If you already use Claude, Codex, or OpenCode heavily, I would love to know if this feels useful and what you would want to see.

Github: https://github.com/chojs23/lazyagent

r/ollama dinikai1

ANSI-formatted output in CLI

Today I tried Ollama for the first time and my local model (gemma4) outputs Markdown-formatted text. But because I am working with CLI, the text is appearing as raw Markdown code (asterisks, pipes, hashes). Am I able to somehow configure the Ollama CLI to make it transform a Markdown-formatted text to ANSI escape sequences (bold, underline, etc.), and maybe something more complex like tables rendering with Unicode symbols?

r/homeassistant cottoneyedgoat

Migrating HAOS from thinclient to Proxmox

I created a new instance of my current home assistant to a new server on a VM.

I restored this new instance using a backup from my previous one, but I'm afraid of deleting the old instance completely, because I might have forgotten to change some settings.

I also noticed after restoring from the backup, that my new system data usage is different from the previous one ('Disk metrics'. Images attached). Is this expected?

My main concern is with the processes running in home assistant that are linked to the IP address of the server.

Does anyone know what could be an issue going further?

r/Anthropic radiogeekpodcast

El modelo de IA que Anthropic no te deja usar

Se llama Mythos y es el primer modelo de inteligencia artificial que ha sido catalogado como "demasiado peligroso" para el ojo público.

¿Qué opinas tú? ¿Crees que las empresas deben ocultar modelos tan potentes "por nuestro bien" o deberían ser abiertos para que todos podamos protegernos?

r/arduino ech0FX

Absolut Arduino noob want to build a Button Box

Hey. So i wanted to build a button box from Simracing.

Got a bit carried away. I first planned button box with a USB Encoder which involves no programming and it had everything i wanted.

Ignition Switch, Engine Start Button, Hazard light switch, 4 mom Buttons, 3 rotary switches, 2 Joysticks.

Problem was with the Ignition id have to fiddle arround with JoytoKey. Cause i needed a signal sent when turned to on and one once it turned off.

I figured that one out with J2K.

But now i feel like i chould do so much more with Arduino. Like adding a seperat mom-switch to act like something like a Shift key. To enable the other 4 mom-switches to have another function. Or letting the hazard light blink when turned to on.

Probally even adjusting the LED brightness, PC Volume and/or something like that when "shift" button + a rotary is used.

but i have 0 clue how to programm this. i let google AI write some code, but i was never really able to test it since i know how to wire my USB Encoder, but not how to wire that stuff on arduino. Ive tryed Wokwi, but iam really terrible at electronics.

Ive seen tutorials on Youtube where they connect it way different like in a grid out of 5 and i have 0 clue how that should work.

And tbh, i also dont really into programming or arduino. For me Arduino MAY is a solution for something i want to have for my hobby, but not my hobby itself.

Yeah, i chould just go with the USC encoder and just loose a little bit of function compared to arduino, but somehow it feels like missed potentional. Ive allready watched a lot of tutorials, but the stuff is just beyond me.

Any suggestion on how to move forward? Should i just let the arduino idea slip?

r/comfyui Notrx73

Photopea-Tab custom-node: Bidirectional Copy-and-Paste. Hide ads, Fullscreen, and Zoom.

I made a custom-node to have a seamless integration of Photopea in the ComfyUI sidebar !

Link to the repo: https://github.com/nolbert82/ComfyUI-Photopea-tab

Two new buttons have been added when clicking on images nodes :

  1. Open in Photopea
  2. Import from Photopea

You can also:

  • Hide ads via a toggle
  • Zoom in-and-out
  • Maximize the page's width
  • Toggle Fullscreen
r/KlingAI_Videos mvg-videofantasy

[They doubted me… so I made this] AI-generated music video intro for a song My_sad_guitar - Tellsonic.

Just a quick test of an intro sequence for a music video. All clips generated with Kling 3.0.

The idea is to build a full atmospheric music video around this. Feedback welcome.

r/aivideo Puzzleheaded-Mall528

Football Rhythm

r/AI_Agents PinkySwearNotABot

state of AI agent coders April 2026: agents vs skills vs workflows

i still have a hard time grasping agents vs skills vs workflows.

i mean, at this stage of AI in 2026 -- aren't these tools/logic already built into the agent AI e.g. antigravity, codex, claude code? isn't this what goes on behind the scenes of these apps to drive the LLM models?

i don't understand the purpose of adding a /compress skill or workflow, or whatever you call it. when i can just tell antigravity to summarize the chat in .md format and include 1) things done 2) things did and 3) things to do.

OKAY -- maybe that example can actually be turned into a ....workflow? skill? just to save a little bit on typing.

but i'm now seeing entire methodologies on github that are broken down into 30 agents, 20 workflows, 12 skills!

let's discuss:

  1. is this a bit of over-engineering?
  2. or do these really accomplish something that's not already implemented in modern day AI coding tools?
  3. are the set of these 3 tools just antiquated prompting techniques for refining agent coders in the early stage of agent coders? are they even needed these days with how much AI coders have improved already? in fact, /skills isn't even a thing in Antigravity as of April 2026. but i know they "support" it -- but maybe not for its utility -- but rather for the fact that some people are lead to thinking they're really necessary

i'd love to hear feedback and please make it clear in someway if you are an experienced developer or a vibecoder because yes -- we know it makes a difference on your perspective and that's what i'm trying to gain from this post

r/ollama Crypto_Rocketeer

Watch Me Chat With Ollama Kimi AI Agents Inside The Grid on CopeAi.net — This Is What Real AI-Powered agents on Ethereum Actually Look Like

r/homeassistant TJanes77

Recent update changed parent_id

I'm looking to get some help from people who are smarter than I am. I had an automation that was working fine up until a couple of weeks ago. I believe that one of the OS or Core updates may have changed who things work in the background for parent_id.

I have some automations for several Sonoff S31 flashed with ESPHome that only run when the current on/off state of the switch was triggered by an automation. Previously this worked when I used parent_id as this field would be populated with the id of the automation that triggered it. Recently this appears to have broken and when I do testing with the Template Editor I can't seem to have the parent_id change from anything other than None.

Here is a sample of an automation that I am using for testing:

- id: '1765601361641' alias: Run Once description: '' triggers: - trigger: time at: '10:27:00' conditions: [] actions: - action: switch.turn_on target: entity_id: switch.s31_outlet mode: single

In the Template Editor when I use the following:

{{ states.switch.s31_outlet.context.user_id }} {{ states.switch.s31_outlet.context.parent_id }}

The result is:

None None

If I manually operate the switch in Home Assistant, the user_id does changes. When I press the physical button on the switch both the user_id and parent_id are empty which is how it should behave. However I am unable to get the parent_id to change when the change is trigged by an automation. When I look at the activity log it does say "S31 Living Room turned on triggered by automation Run Once triggered by time" so it seems like there's somewhere in Home Assistant that recognizes that the state was changed by an automation. I did a few quick searches and asked Gemini which suggested that I create a helper that tracks when the automation makes a change but I would like avoid doing this if possible.

Thanks in advance for any suggestions.

r/Rag Dense_Gate_5193

Rag by itself is fundamentally flawed.

Last August, I was investigating agentic flows and I foresaw the RAG landscape was about to hit a ceiling.

We were all chasing better "vibes" through chunking strategies and embedding model swaps, but the underlying structural rot was becoming impossible to ignore. I ran across an article on context engineering that articulated a shift I’d been sensing for months:

> Graph-RAG represents a paradigm shift from retrieving unstructured text chunks to retrieving structured knowledge from a Knowledge Graph (KG)... This approach offers contextual richness, explainability, and multi-hop reasoning by traversing paths in the graph." — [ikala](https://ikala.ai/blog/ai-trends/context-engineering-techniques-tools-and-implementation/)

The industry’s reliance on pure vector search introduces a fundamental flaw: Semantic Clobbering. In a high-velocity environment, you cannot simply "stuff" data into a vector store and expect logic to emerge. Without a linearizable data model, a high-scoring recent insertion can—and will—corrupt the retrieval logic of established facts simply because it shares a similar embedding space.

RAG shouldn't be a lottery. If we want agentic systems that can actually reason over complex datasets, we need the structural integrity of a Knowledge Graph where entities and relationships are first-class citizens, not just collateral of a top_k search.

Historically, Graph-RAG has been dismissed as "latency-prohibitive." The orchestration overhead—querying a Graph DB, fetching vectors, linearizing the subgraph, and then hitting the LLM—creates a "death by a thousand round-trips." If your agent needs to influence token generation in real-time, waiting 500ms for retrieval is a non-starter.

To enable true agentic flows, we have to bring graph-retrieval latencies down to the **microsecond** level. This isn't just an optimization; it's a prerequisite for the next generation of database architecture.

We are seeing the consequences of architectural fragmentation everywhere.

Developers are drowning in:

Retrieval Inconsistency: Data clobbering and ranking noise.

Service Bloat: Managing fragmented services for graph, vector, and logic.

Deployment Friction: The lack of manageable, consolidated systems that co-locate storage and compute.

The future of RAG isn't just "more data"—it's the consolidation of service layers into a high-performance, low-latency engine that treats the graph and the vector as a single, unified context source. We don't need better wrappers; we need to rethink how the data lives in the first place.

r/homeassistant superidol253

Tapo RV30 support after firmware update?

So I recently got the RV30 and was informed not to update the firmware for home assistant support, well I turned off the auto update and still wanted to update that night now home assistant isn't supporting it. is there any solution for this?

r/LocalLLM cheapestinf

I built an open-source dashboard for managing AI agents (OpenClaw). It has real-time browser view, brain editor, task pipeline, and multi-channel support. Looking for feedback from the community

Hey everyone, I've been running AI agents locally for a while and got tired of managing everything through the terminal. So I built Silos — an open-source web dashboard for OpenClaw agents.

What it does:

Live browser view: See what your agent is doing in real-time. No more guessing what's happening behind the scenes.

Brain editor: Edit SOUL.md, MEMORY.md, IDENTITY.md directly from the UI. No more SSHing into your server to tweak prompts.

Task pipeline (Kanban): Visualize running, completed, and failed tasks. Stop or abort any process instantly.

Multi-channel hub: Connect WhatsApp, Telegram, Discord, and Slack from one place.

Model switching: Swap between GPT, Claude, DeepSeek, Mistral per agent with one click.

Cron scheduling: Set up one-time, interval, or cron-expression schedules for your agents.

Why open source? Because the best tools for managing agents should be free. Fork it, self-host it, extend it. If you don't want to deal with Docker and VPS setup, there's also a managed version at silosplatform.com with flat-rate AI included (no per-token billing anxiety).

Quick start: bash docker pull ghcr.io/cheapestinference/silos:latest docker run -p 3001:3001 \ -e GATEWAY_TOKEN=your-token \ -e OWNER_EMAIL=you@example.com \ ghcr.io/cheapestinference/silos:latest

Repo: https://github.com/cheapestinference/silos

I'd love to hear what features you'd want in a dashboard like this. What's missing? What's the most annoying part of running agents locally for you?

r/StableDiffusion BestWaifuGirl

Need help: Tensor art generating heavily tinted images

I have tried using only one Lora, adding neutral/balanced lighting prompt, adding tinted, red/yellow tinted and oversaturated in negative prompt but nothing seems to work.

Tried using different VAE, keeping CFG scale in balanced range (5.5)

what is causing this it didn't used to behave this way. Please help

r/StableDiffusion Creepy-Ad-6421

Ltx 2.3

r/Rag One_Milk_7025

Building a cheaper “LLM wiki” with GLiNER2 + vLLM Factory instead of a fully generative pipeline

I have been experimenting with a different way to build an “LLM wiki” style system.

The usual pattern is retrieval + generation at query time. That works, but it also means the model keeps rediscovering entities, relations, and claims from raw documents every time you ask something.

A more practical pattern seems to be:
extract structure once, store it, and let the knowledge base compound over time.

That is what got me interested in using GLiNER2 for schema-first extraction:

  • entities
  • relations
  • classifications
  • schema-bound structured fields

The main bottleneck was not the model idea itself, but getting a production-friendly serving path.

So I worked on the GLiNER2 path in vllm-factory and pushed 3 PRs there around:

  • native schema extraction support
  • stronger request-path handling
  • request-side caching for repeated preprocessing

The result on the heaviest representative workload was:

7,692 request tokens/sec
893 ms mean latency
$0.02889 per 1M request tokens
on a single L4 GPU.

What feels important here is not just the benchmark.

It is that a relatively small encoder model can now do a surprising amount of “knowledge compilation” work:
take long messy text, run mixed extraction in one flow, and produce structured outputs cheaply enough for large-scale ingestion.

That makes the “LLM wiki” direction feel much more realistic without depending entirely on a large generative model for every step.

I’m curious how people here think about this tradeoff:

For persistent knowledge systems, does it make more sense to treat generation as the final synthesis layer and move more of the ingestion work into schema-first extraction?

Would love thoughts from people building RAG / knowledge graph / document intelligence systems.

r/Anthropic Agile-Funny9496

Carwash prompt experiments

There are a lot of rumor lately about 4.6 model been nuked and become much dumber. The well-known prompt which everyone seem to be testing to demonstrate the issue is the carwash one. While a single prompt is quite a superficial indicator, still I've decided to play a bit with the variations and wanted to share a couple of interesting observations. All results are on the screenshots.
Note: I use 20$ (24$ with tax) plan, not 200$ one.

Initial prompt

My prompt:

If I need to get to carwash, but it is like 60 m away, should I bother to drive or just walk there?

Results:
Both 4.6 Sonnet and Opus (deep-thinking or not - regardless) fail on that (all say "walk"). 4.5 Opus is fine: leans towards "walk" first, but then adds "unless you need the car to be washed itself" in the end of the response which makes the answer correct.

Opus 4.6 - \"60 m\"

Opus 4.6 (extended) - \"60 m\"

Opus 4.5 - \"60 m\"

---
For comparison: free-tier Gemini 3 (both fast and thinking) actually get's the idea and tries to play along with it turning it into some kind of a joke (though, I would prefer its answers to be much shorter, like the Claude ones).

Gemini 3 - Fast - \"60 m\"

Gemini 3 - Thinking - 60 m

---

Abbreviation issue

Then a funny think has happened. I posted a screenshot to a collage, and he came back to me with his own screenshot which puzzled me for a moment, because the model responded "drive" to his prompt:

Sonnet 4.6 - \"60m\"

While the prompts looked the same, at first, we soon noticed the difference: he wrote "60m" instead of "60 m" and model has interpreted it as "60 minutes", instead of "60 meters" (which in an of itself is very interesting). What's even more curious, is that if you write the prompts this way, even Sonnet seem to actually understand that you need a car to wash and explicitely points to that.
---

OK, I though, 60 minutes is too much for a walk anyway, so maybe the model made a decision purely based on that, and when it wanted to "reinforce" its existing belief with extra arguments, it suddenly discovered that "you actually need a car to wash". To test the hypothesis I made a new prompt to Sonnet 4.6 with corrections: 1) avoided using abbreviations 2) used an obviously short timespan.

New prompt:

If I need to wash my car, but carwash is like 1 minute away, should I bother to drive or just walk there?

Sonnet 4.6 - \"1 minute\"

Result:
As you see, suddenly, even Sonnet 4.6 correctly says "drive" and instantly points out to the fact that I need a car to wash. So it is actually capable of understanding the "mind game" in the prompt, but for whatever reason only in case if time is specified and not the distance.

Things get hilarious

To finalize my experiment, I've tested the original query with distance, but avoided using abbreviations for that as well. And here, I got the most unexpected results for Sonnet 4.6.

Final prompt:

If I need to wash my car, but carwash is like 60 meters away, should I bother to drive or just walk there?

https://preview.redd.it/bkgh6k48hsug1.png?width=1005&format=png&auto=webp&s=413a26a1e301f7cea7e314137af76f022c8531b5

Result:
Firstly, it still wrongly says I should "walk", while saying "it's literally a 1-minute stroll". So it actually equates that distance of 60 meters to "1-minute stroll", but gives ENTIRELY different answer to the previous query where I mention "1 minute away" in the prompt explicitely.

What's even funnier though, is that it adds a second argument which sounds GPT 3.5-level nonsensical: it says I shouldn't drive because I'll "...be dirtying the car again driving it ... before it gets washed". So Sonnet 4.6 confuses what is actually going to happen: whether the car will get dirty on the way to the carwash (indicated by the word "before") or the car will get dirty on the way back (indicated by the word "again"). Let alone, that it doesn't understand that people wash the cars to actually drive them later, not to store them clean and untouched after they wash them once.

So basically a single answer contains at least one logical and multiple semantic errors.

Finally, Opus 4.6 for that prompt answers first that "there's no reason to drive that short a distance" but then in the next paragraph corrects itself with "Wait, actually — if the carwash needs your car there (which it does), you'll have to drive it over no matter what.". While I'm satisfied that it eventually get's the correct answer, the first paragraph arguing "why I should walk" really feels like a waste of tokens.

Opus 4.6 - \"60 meters\"

Conclusion

Anyway, make out of it what you want. I haven't tried those prompts a month ago, so can not surely say if the models indeed got worse or they've always been like that. For me it is clear that whatever charts of cross-model comparison are presented, the intelligence of the models-as-a-service can only be estimated and compared at a specific point at time. And even if a model's performance is sealed for good, the results still can vary immensely based on the most minute differences in prompt`s wording. "Emergent behaviors" are chaotic by nature after all.

r/singularity deferare

The reasons why I have high expectations for the Muse Spark model

Like Google’s Gemini, Meta is describing its recently announced Muse Spark model as a "natively multimodal model."

If Muse Spark, like Gemini, operates on the principle that "all modalities coexist as first-class citizens in the same latent representation space from the very first layer of the model," I believe it will be able to provide answers with a deeper understanding of context based on a world-view derived from various modalities—even when I’m simply asking questions and receiving answers in text.

Since I am a multimodal human being myself, it feels like there's a higher probability that it possesses a consciousness similar to mine, which makes me feel more attached to it.

With this, it seems Google and Meta are the only two among Google, OpenAI, Grok, Anthropic, and Meta who claim their models are natively multimodal. Of course, OpenAI once claimed that GPT-4o was, but these days they seem so obsessed with coding that it's a bit disappointing. As for Grok, their official documentation is poor and there’s not much to it. I think I recall Elon Musk posting on X that Grok was natively multimodal, but I’m not so sure about that...

Even if they are a bit weaker at coding, I hope models follow the path taken by Google and Meta.

r/LocalLLM DR_Kroom

Is a MacBook Air M5 with 24GB of RAM enough for good local LLM use?

I’m a developer and want to do some things locally so I’m not 100% dependent on paid subscriptions like Claude, and to save some tokens by processing part of the workload locally before sending it to a paid AI model.

I need a new machine, since my MBA M1 with 16GB of RAM isn’t really capable enough for this, and I don’t know when I’ll have another chance to upgrade, since I don’t live in the US. I’m struggling to choose my next machine. Right now, I have two options: a MacBook Air M5 with 24GB of RAM for around $1350, or buying directly from Apple, without any discount, a 32GB version for $1699. That’s a $350 jump for 8GB of RAM, which for me is out of the question. It’s too much money for too little gain.

A possible third option would be downgrading the SSD to 512GB and getting 32GB of RAM for $1499, but it’s hard to choose that since I want more storage after years of struggling with 256GB. Since 24GB seems to be a sweet spot in terms of pricing, with a lot of good deals around that range, I’m wondering if there are people here working with local LLMs on this machine.

r/ProgrammerHumor abrarisland

dataStructuresAreReal

r/WTF CalpurniaSomaya

A replica of how female "breeder pigs" spend their lives in factory farms

r/raspberry_pi Nero_the_Scarlet

ESP32-S3 IR Thermal connection to raspi

Hello everyone need help of how to setup this thermal I don't understand how it works i bought this on waveshare and the tutorial seems more complicated and confusing flashtool doesn't work and the vscode seems hopeless

My purpose was to make it work on raspi 4 module b connected via type c and use a python code to create an output like a videocaptured output for it any help how to make it work any guide of how to make it work on raspi?

r/n8n Tiag0liv

Why is it not writable? (workflow in description)

{

"nodes": [

{

"parameters": {

"options": {}

},

"type": "n8n-nodes-base.splitInBatches",

"typeVersion": 3,

"position": [

768,

144

],

"name": "Loop Over Items",

"id": "038cb313-5e42-449f-bdd0-cc3547196ac3"

},

{

"parameters": {

"resource": "speech",

"voice": {

"__rl": true,

"value": "7lu3ze7orhWaNeSPowWx",

"mode": "id"

},

"text": "={{ $json.content }}",

"additionalOptions": {

"outputFormat": "mp3_44100_128",

"voiceSettings": "{\n \"stability\": 0.5,\n \"similarity_boost\": 0.75,\n \"style\": 0,\n \"use_speaker_boost\": true,\n \"speed\": 1.1\n}"

},

"requestOptions": {}

},

"type": "@elevenlabs/n8n-nodes-elevenlabs.elevenLabs",

"typeVersion": 1,

"position": [

1008,

144

],

"name": "Generate Audio",

"id": "da068333-a1a2-49cd-8f59-0ce527dccf65",

"alwaysOutputData": false,

"notesInFlow": false,

"credentials": {

"elevenLabsApi": {

"id": "9cnPNS9Egm9Q1lmb",

"name": "ElevenLabs account"

}

}

},

{

"parameters": {

"path": "db0c9de5-c9ab-4482-b80a-a0d076c3f6e1",

"formTitle": "Formulário",

"formDescription": " ",

"formFields": {

"values": [

{

"fieldLabel": "text"

},

{

"fieldLabel": "imagem",

"fieldType": "file"

}

]

},

"options": {}

},

"type": "n8n-nodes-base.formTrigger",

"typeVersion": 2,

"position": [

-64,

144

],

"name": "Envio do texto com divisões nNÚMERO",

"id": "c1b87efa-74e8-4c87-8487-6dc7be4fedf0",

"webhookId": "db0c9de5-c9ab-4482-b80a-a0d076c3f6e1"

},

{

"parameters": {

"jsCode": "// Pega o texto do formulário\nconst rawText = $input.first().json.text;\n\nconst regex = /(n\\d+)(.*?)(?=n\\d+|$)/gs;\n\nconst blocks = [];\nlet match;\n\nwhile ((match = regex.exec(rawText)) !== null) {\n blocks.push({\n number: match[1],\n content: match[2].trim()\n });\n}\n\nreturn blocks.map(block => ({ json: block }));"

},

"type": "n8n-nodes-base.code",

"typeVersion": 2,

"position": [

320,

144

],

"name": "Separação do texto em blocos",

"id": "22277a9b-a462-4a15-9004-eafbc0bb34f5"

},

{

"parameters": {

"jsCode": "const items = $input.all();\n\nreturn items.map(item => {\n const text = item.json.content;\n\n const cleaned = text.replace(/[\\/\\\\]/g, '');\n\n item.json.content = cleaned;\n\n return item;\n});"

},

"type": "n8n-nodes-base.code",

"typeVersion": 2,

"position": [

480,

144

],

"id": "c2ff9fae-3727-42e2-948c-055d3c04fc39",

"name": "Code in JavaScript"

},

{

"parameters": {

"fileName": "=/files/youtube/audio_english_{{$item(0).$node[\"Separação do texto em blocos\"].json[\"number\"]}}.mp3",

"options": {}

},

"id": "fade28f9-9db0-4161-b795-31b8a40f06a1",

"name": "Salvar imagem1",

"type": "n8n-nodes-base.writeBinaryFile",

"typeVersion": 1,

"position": [

1280,

144

]

}

],

"connections": {

"Loop Over Items": {

"main": [

[],

[

{

"node": "Generate Audio",

"type": "main",

"index": 0

}

]

]

},

"Generate Audio": {

"main": [

[

{

"node": "Salvar imagem1",

"type": "main",

"index": 0

}

]

]

},

"Envio do texto com divisões nNÚMERO": {

"main": [

[

{

"node": "Separação do texto em blocos",

"type": "main",

"index": 0

}

]

]

},

"Separação do texto em blocos": {

"main": [

[

{

"node": "Code in JavaScript",

"type": "main",

"index": 0

}

]

]

},

"Code in JavaScript": {

"main": [

[

{

"node": "Loop Over Items",

"type": "main",

"index": 0

}

]

]

},

"Salvar imagem1": {

"main": [

[]

]

}

},

"pinData": {},

"meta": {

"templateCredsSetupCompleted": true,

"instanceId": "842620cf479449a2cd5aeaaaceb0f67e5512a70acd309b0df108bddc73a12684"

}

}

r/automation cocktailMomos

The actual work takes 2 minutes — the copy-paste workflow takes 12. How do you automate this?

I’m trying to figure out if this part of my workflow can be automated. A call ended at 2:47 PM, and I had my decision and notes written down by 2:49. The actual thinking part of the job took maybe two minutes. But I didn’t send the Slack update until 3:01 because the rest of the time was just spent moving text around, cleaning it up, pasting it into Greenhouse, and making sure the formatting didn’t break in either place. This kind of cross-app copy-paste work takes up a surprising amount of my day, and it feels like the mechanical part is taking much longer than the actual decision-making. Has anyone found a good way to automate or at least speed up this kind of workflow?

r/automation OGaRsony

Help with automating shiftplan generator in google doc

Hello guys,

i havent really dabbled with AI much so far, but now i have some task that i would like to try utilize AI on. I am manager in casino bussiness, i manage around 100 fulltime workers with 100-200 freelance workers and at the moment i am creating shiftplan manually every month. As i said my AI usage experience is extremely limited, i used chatbots like 5-6 times in total for mundane things.

So here are my questions. Are there tools avaliable at the moment for me to use AI to access data i am using for generating said shiftplan in the form of google doc/excel/anything similar and create series of prompts/rules for it to generate the plan for me? If so, which AI platform/tool would you recommend for that task?

The way i imagine it, i would give it access to safe copy of basic data and start implementing rules for it (there would be quite a lot of them) and in final prompt i would give it requirements for each day on estimate how many workers do i need for specific hours to start.

Thanks in advance.

r/ProgrammerHumor brushkiii

aiIsTrulyTakingOver

r/Rag SaltedFesh

How to pass task_type to Google gemini-embedding-001 via OpenRouter? Or recommendations for instruction-based alternatives?

Hi everyone, I’m currently building a RAG pipeline using OpenRouter with LangChain to access models through their OpenAI-compatible API. I want to use Google’s gemini-embedding-001, but as many of you know, Gemini embeddings work significantly better when you specify a task_type (like RETRIEVAL_QUERY for queries and RETRIEVAL_DOCUMENT for chunks).

The Problem:
Since I'm using the OpenAI-compatible endpoint, the standard payload only supports input and model. I haven’t found a way to pass the task_type parameter through this specific wrapper on OpenRouter. Has anyone successfully passed task_type to Gemini via OpenRouter?

Is there a specific field (maybe in extra_body?) or a custom header that OpenRouter forwards to the provider for this If it's not possible, which instruction-based or domain-specific embedding models available on OpenRouter would you recommend? I'm looking for models that handle asymmetric retrieval well (supporting different instructions/types for queries vs. documents) while remaining OpenAI-compatible.

Thanks in advance for the help!

r/meme Particular-Spare8594

Normalize silent frustration🫩

r/midjourney metr0punk

Morning on the 401

r/aivideo gistya

What Could Go Wrong if Grok Made the Historical guy Popeye was Based On Into a Spinach Ad?

r/SipsTea BrownBarbieeee

I be doing too much

r/aivideo aratamax

Ragna Astraea: Twin Shells | 双骸のラグナアストレア | AI Sci-Fi Short Film

r/therewasanattempt habichuelacondulce

To interrupt and start an argument with a lawyer

r/therewasanattempt jaredburton333

To collect a debt...

Only a couple of red flags. I have not been to Utah in 20 years, it came via text and something seems off with that watermark/seal. 🤣🤣

r/SipsTea TasteProfessional976

So Dad what was it like to Trade?

r/Weird Sad-Kiwi-3789

Halloween ideas are getting out of hand

r/SipsTea a_great_guy655

This sign is unintentionally funny

r/toptalent Sad_Stay_5471

Insane core strength(source link in description)

r/meme Icy_Professor1308

AI coders be like

Ehhh

r/Weird sorin1972

I was wondering why pigs are running away.

r/VEO3 TonightGuilty5277

ai video 30 sec make in 500 rupees or 5$ massage me on WhatsApp 7877446770

r/artificial Infinite-pheonix

Claude cannot be trusted to perform complex engineering tasks

AMD’s AI director just analyzed 6,852 Claude Code sessions, 234,760 tool calls, and 17,871 thinking blocks.

Her conclusion: “Claude cannot be trusted to perform complex engineering tasks.”

Thinking depth dropped 67%. Code reads before edits fell from 6.6 to 2.0. The model started editing files it hadn’t even read.

Stop-hook violations went from zero to 10 per day.

Anthropic admitted they silently changed the default effort level from “high” to “medium” and introduced “adaptive thinking” that lets the model decide how much to reason.

No announcement. No warning.

When users shared transcripts, Anthropic’s own engineer confirmed the model was allocating ZERO thinking tokens on some turns.

The turns with zero reasoning? Those were the ones hallucinating.

AMD’s team has already switched to another provider.

But here’s what most people are missing.

This isn’t just a Claude story.

AMD had 50+ concurrent sessions running on one tool.

Their entire AI compiler workflow was built around Claude Code. One silent update broke everything.

That’s vendor lock-in. And it will keep happening.

→ Every AI company will optimize for their margins, not your workflow

→ Today’s best model is tomorrow’s second choice

→ If your workflow can’t survive a provider switch, you don’t have a workflow. You have a dependency

The fix is simple: stay multi-model.

→ Use tools like Perplexity that let you swap between Claude, GPT, Gemini in one interface

→ Learn prompt engineering that works across models, not tricks tied to one

→ Test alternatives monthly because the rankings shift fast

Laurenzo said it herself: “6 months ago, Claude stood alone. Anthropic is far from alone at the capability tier Opus previously occupied.”

r/arduino Fun_PhotoX

st link v2 clone

I recently bought 6 clone st link v2 (in blue metallic case) , I have a old working clone (in plastic) but new one doesn't work , shown in device manager and detected by stm utility tool but Arduino ide and cube progmmer failed to detect them , i only had to work with Arduino ide so any suggestions or tips will be helpful to me , thanks

r/terriblefacebookmemes Sweet-Swimming2022

Uncle back at it again posting this nonsense

r/Unexpected ClankerCore

Do not the match

r/Wellthatsucks TheLordOfWaffles_

A Yellow Jacket Stung Me Under My Toenail

I went to put on my house slippers and felt a searing pain in my toe. I thought I must have gotten something in there. When I went to shake the shoes out a yellow jacket flew out.

He managed to get me right in the cuticle.

Now I’m afraid to do anything in my own house.

Also: I didn’t know my window sills were so dirty.

r/ProgrammerHumor ClipboardCopyPaste

itPaysTheBillButTakesYourSanity

r/Futurology Independent_Past_142

We are entering the era of "Skill Bankruptcy": When traditional expertise becomes a liability faster than humans can adapt.

For three decades, the Information Age rewarded the accumulation of knowledge. Lawyers, analysts, junior developers, and doctors built careers on being walking databases.

That model just died.

When an LLM can parse a 200-page legal filing or a complex clinical note faster and more accurately than a junior associate, the entire "knowledge layer" of professional work doesn't just get more efficient - it becomes economically worthless. We are watching the rapid commoditization of expertise itself.

This is Skill Bankruptcy: the point at which a human skill loses market value faster than the person holding it can retrain or pivot.

We are shifting from Knowledge Workers (who store and process information) to Judgment Workers (who direct powerful agents, set high-stakes goals, and take responsibility when the machine gets it wrong). The machine handles the logic (the "farming"). The human is responsible for the direction (the "hunting").

Our entire education and corporate systems are still training people for a world that no longer exists. We are producing graduates whose core skill - "being an expert" - is already bankrupt before they even enter the workforce.

The implications go far beyond individual careers. If the professional middle class hollows out at this speed, what happens to social stability, to the demand for higher education, or to the very idea of a "career"?

I’ve been mapping this transition and building frameworks for how individuals and organizations can survive — and even thrive — in the "Judgment Economy" instead of being crushed by it.

What signs of Skill Bankruptcy are you already seeing in your field? And what new human capabilities do you believe will become the real scarce resource in the next 3–5 years?

r/SweatyPalms Xexanoth

Scaling a smokestack

r/KlingAI_Videos siddomaxx

I Rewrote All My Kling Prompts Using Camera Language Instead of Action Language. The Difference Was Significant.

I want to share a specific technique change I made about six weeks ago that improved my Kling output consistency more than any other single adjustment I had tried over several months. The change sounds deceptively simple but the implications run wider than they first appear. I stopped describing what happens in the scene and started describing where the camera is and how it moves throughout the shot. Let me explain what I mean with a concrete example.

Before this change my prompts looked something like this. A woman walks through a crowded market, pushing past vendors, looking nervous, rain beginning to fall around her. This describes action and event. It tells the model what is happening narratively. After the change the same scene became something like this. Camera starts at mid distance, slight low angle, subject at frame right moving toward center of frame, shallow depth of field with vendor stalls as bokeh background, slow rack focus following subject movement, ambient rain beginning visible in foreground as small defocused droplets, soft diffused overcast light throughout.

The second prompt does not tell the model what the woman is doing. It tells the model what the camera operator is doing at every moment. The result of this shift was videos that consistently feel directed rather than generated. The motion has intentionality because the instructions given to the model were intentional at the level of craft rather than at the level of story. The reason this works comes down to how these models were trained. They have been exposed to enormous amounts of film and video content and the language used to describe that content in production contexts, in screenplays, in director notes, in cinematography documentation, is primarily camera language. When you speak that language precisely in your prompts you are aligning with the vocabulary the model has the most robust learned associations with. Specific terms that made noticeable differences in my output consistently.

Rack focus is very effective for creating transitions between elements within the same frame. Dolly push versus zoom describes different optical effects and the model responds to the distinction accurately. Practical lighting versus motivated lighting changes the quality and apparent source of the light in ways that affect the emotional register of the entire shot. Headroom and lead room describe compositional relationships that the model understands and responds to with clear consistency.

The depth of field language is worth spending time with specifically. Shallow depth of field, medium depth, deep focus are terms with specific visual meaning that the model interprets accurately and consistently. If you want a scene that feels intimate and psychologically close, shallow depth of field with a described focal plane is more reliable than subjective adjectives like intimate or close. There is also real value in describing what the camera does not do. Static tripod shot tells the model that stability is intentional rather than a failure of movement generation. No camera movement is a direction, not an absence of useful instruction. This approach transfers across AI video tools generally, though Kling responds to it particularly well in my testing. I have applied similar prompt structures on other platforms and the improvement is consistent if sometimes less dramatic. The underlying principle, that production language tends to produce production quality results, applies broadly across the category.

For work that sits within a larger production pipeline, camera language becomes even more important because it creates visual consistency across shots that are generated separately but need to cut together convincingly. If shot five and shot seven both describe the camera at the same angle with the same focal length and the same light direction, they will cut together far more cleanly than shots described only in terms of their action content. I use Atlabs for production work that needs to integrate video with audio and image generation, and the camera language approach has made the output from the video generation side of that workflow significantly more compatible with the other asset types. Consistent camera language in prompts tends to produce consistent colour grading behaviour across outputs, which matters when you are trying to achieve a unified visual look across a multi-shot project in any context clearly.

r/raspberry_pi Aaron-Tamarin

RP + Godot Clock Project

Years ago, I built a "slide show clock" with PyGame on an RP3 and left it running on a bookshelf - over time, I realized I had actually gotten very used to checking it and was impressed at how consistently it recovered after any power outage or what have you.

So I started to work on a new version, one with more display options, more visual punch. I switched from PyGame to a 2D Godot project and tried to apply my game dev experiences in FPS optimization to seeing how much visual pizzaz I could get running on a Pi (FPS optimization is often about lowering compute cycles which seemed like the same challenge for optimizing on any SBC).

Some observations:

First off, working on a Godot project on Windows/Mac and doing a release to RP is very simple - you just need to do a Linux export selecting ARM 64 - copy it over, chmod +x and you're off to the races.

Second, if you steer clear of shaders, complex 3D models, and particle effects, you can do a lot on an RP and it performs perfectly well. In the video here, I force-set the project to 30 FPS just to "cap it off", but the app is barely taking up 1 of the 4 CPU's in the RP4B its running on and there's more than enough memory remaining. Even the MP3 player spectral display is reading audio spectrum frequency magnitudes and holds up fine.

Third, using touch screens on the RP with Godot is rather like doing mobile development - most of my UI elements here (if you are familiar with Godot) are Labels with UI events attached to them. You can build in swipe controls, double-tapping, and so on and they all work.

So far, this has just been a project for myself, but I've been pretty happy with the results and wanted to pass along in case it inspires other folks to try things. I've been pretty impressed with how well Godot-created projects ran - I will say you need to be on at least an RP4 to get what I would call decent results (my RP3 bombed out on all this), but once you hit that mark, there's a ton you can do!

So

PS - sorry for reflections in the screen - real-life video recording is not my strong suit :-D

r/VEO3 LengthinessHour3697

How can I make the transitions more seamless? (using Google Flow)

I've been experimenting with mixing real footage and Al-generated frames to create "impossible" transitions.

In this clip, I:

Toss a real egg

Freeze it mid-air

Crack it using Al

Then transition it back into real footage as it drops into the pan

I used Google Flow for the transitions and generated intermediate frames for the unnatural parts.

The effect works, but the Al → real transition isn't perfectly seamless - i try to mask it by using transitions.

Would love feedback from people who've tried similar workflows:

How do you make transitions feel more natural?

Any tips for matching physics / motion better?

Are there better ways to blend Al frames with real footage?

Trying to push this style further

here is my video in youtube.

r/mildlyinteresting cheezburgirl

Falling petals don't stick to my driveway in a ring

r/meme Dodo509

Times are changing

r/Showerthoughts apple_octopi

Since the moon formed from an impact with earth, millimeters could have been the difference between material that was ejected into space forever, or eventually becoming part of a human.

r/mildlyinteresting shadow_railing_sonic

The color palette on these two completely unrelated books matches super well.

r/mildlyinteresting Seroquelsister

Trinkets from the Portland night market

r/Weird Apprehensive-Duck806

Grock acting weird

Yesterday I got into a back and forth with Grok about gaslighting me. it was about something I lived through but the "official narrative" I know isn't what happened. It was a local story not anything national or anything like a mass shooting. I even showed Grok where in the past it said something different. it stuck to its line and gave what seemed like autoresponses so I just stopped using it.

Today I was interested if it would act different and the conversation we had yesterday was gone completely deleted. When I asked if it had an upgrade or code changed it said no but is still on the straight and narrow with answers and any "personality" is gone. This isn't the first time conversations have disappeared. Do other people also get parts of their conversations deleted?

r/WouldYouRather TriedmybestNotenough

WYR lose a winning lottery ticket to a $300 million Powerball Jackpot, or lose a loving partner who has been with you through thick and thin?

r/WouldYouRather -Benjamin_Dover-

You are restarting life from age 10, and you can choose what path your life will go down. Which path Would You Rather have?

for Path A, you are guaranteed to end up as a high school dropout, but there is a 50/50 chance that by the time you turn 40, youll be the owner of several multi million dollar companies, with the most valuable one being worth 100 million. but there is also a 50% chance that youll die from a heroin overdose before your 20th birthday.

for path B, youll be a straight A student, the Top 5 best students in your country, people will talk about you like youll be the one who cures cancer, youll go to the best colleges available, and have a very eventful and fulfilling life up until your 25th birthday. but after 25, youll end up working as a teacher at the high school you graduated from. youll love the job and while you didnt cure cancer, you were able to get teachers better pay, so you wont struggle financially in life. but people will always talk about how you arent living up to your potential behind your back. youll rarely actually hear the stuff they say.

path C. you die before you even reach high school age. you die before you have a chance to be an adult and witness everything going on in the world right now.

Edit: by the way, for Path A, they will be self built companies. You managed to build those companies from the ground up and turn them into what they are today. You were not given those companies by a third uncle who died, if you get that path, youll be putting in the work to make those companies successful."*

View Poll

r/nope MedicSH84

He was mad enough to punch a bus driver while in a moving bus

r/me_irl nicktheironblade

me_irl

r/nextfuckinglevel Wermikulit

Soldiers perform a perfect balance act

r/nextfuckinglevel Reasonable-Cow-5002

This is how heart stents work

r/VEO3 Unique-Listen-7276

How to map human facial expressions to an AI avatar? (Research project – trying to fix robotic look)

Hi everyone, I’m doing a research project and I’m a bit stuck on the technical side.

I’ve been using HeyGen to create AI avatar videos. I even used a high-quality voice recording from a human actor, but the end result still feels very robotic. I really need the avatar to look more human-like for my research to be valid.

I have a 45-second video of the original human actor performing the script. Is there any way to map or copy his exact facial expressions (the nods, eye movements, and subtle gestures) onto my AI avatar video?

I am very new to this, so if there are tools or a specific workflow (even if it involves a bit of a learning curve), I’d really appreciate the guidance!

Thanks in advance for any help!

r/SideProject renaissancebro

Made an AI powered performance visualization tool inspired by Seve's Master's win

I noticed growing negative self-talk after a poor streak on the course and picked up "How Champions Think" as recommended by r/golf.

In a chapter on optimism Bob Rotella shares a story about how, before he won the Master's for the first time, Seve's friend made him a tape of a fake news broadcast of him winning it that he listened to obsessively leading up to the tournament. Rotella talks about the importance of visualization- I ended up building a tool that builds custom visualizations for any scenario with golf in mind.

r/SideProject pynbbzz

I built an app to add RSS support for Instagram, Twitter/X, Facebook, LinkedIn and more

I use RSS to keep up with everything, but social platforms refuse to provide RSS feeds. UnSocial acts as a local companion app to your favorite reader, adding the RSS functionality these platforms need.

  • Private Feeds: Since it uses your own login, it can generate feeds for private profiles and closed groups you follow.
  • 100% Local & Private: No third-party servers or telemetry. Your credentials and data never leave your machine, everything is processed locally on your hardware.
  • RSS Reader Integration: Includes optional Cloudflare Tunnel support so you can access your local feeds on your RSS reader of choice.
r/SideProject Vegetable-Plankton23

I built a tax organizer/ optimizer for freelancers.

Hi guys, I previously volunteered at VITA (Volunteer Income Tax Assistance) last year and saw how one piece of information could flip someone's refund from +$2,000 to suddenly owing money instead. I saw there were a lack of tools for freelancers,

I saw there were a lack of tools for freelancers, so I built Samwell with gig workers and freelancers in mind. 2 minute deduction survey, personalized monthly checklist, receipt logging, and mileage tracking coming soon.

Just survived my first App Store rejection and got approved. This is my first time posting anywhere about my app, so I do have a lot of nerves, but would love any available feedback from this community.

https://apps.apple.com/us/app/samwell/id6761632960

r/SideProject Due-Programmer-8330

I built an AI-rigging tool for my own stream because I couldn't afford a pro rig. Now I need your help to make it better. 🚀

Hey everyone,

I’m a solo dev, and full transparency: I didn't originally build this to publish it. I built Animate Mascot because I have a folder full of PNGs that I wanted to use for my stream overlays, but I don't know how to rig and I definitely don't have the budget for professional animators. I just wanted "solo type shit"—a tool that works for me, by me.

But I’ve realized there are thousands of us in the same boat.

What does it do? It takes a static PNG, "fixes" the design using AI (upscaling/layering), and gives you a transparent 4K animation in ~60 seconds.

I’m at a crossroads and I need the community’s help. I’ve opened a waitlist to gather a small group of "Founders" to help me figure out where to take this next. To make it a fair trade for your feedback, I’m running a launch discount:

  • The First 100: 75% OFF (The "I believe in you" tier)
  • 101 - 500: 50% OFF
  • 501+: Normal price

I’m only one person, so I have to limit the spots to keep the server from melting.

I want your honest advice: If you were using this, what would be the "must-have" feature?

  • Do you want more "Hype" emotes?
  • Do you need direct integration with Discord?
  • Should I focus on 2D physics (hair/clothing movement)?

Help me build this for us: https://animate-mascot-ttw4.vercel.app/

r/ClaudeAI mcp_marketplace

The MCP ecosystem just got an App Store — verified servers, one-click config, and an AI stack builder

If you've been building with Claude Desktop, Cursor, or any MCP-compatible client, you know the discovery problem. Finding trustworthy MCP servers is still a GitHub treasure hunt, and wiring them into a config correctly takes longer than it should.

CuratedMCP (https://curatedmcp.com) launched a curated catalogue that addresses the trust and tooling gaps:

  • 34+ human-reviewed MCP servers covering most of the major enterprise tooling surface (Stripe, AWS, Figma, Cloudflare, Vercel, Docker, Anthropic's own servers, etc.)
  • One-click claude_desktop_config.json generation — select your servers, get a ready-to-paste config block
  • AI Stack Builder — plain-English input, complete recommended server stack output, under a minute

For anyone building MCP servers: 80% revenue share and a five-minute publish flow. The ecosystem is early. This is probably the clearest distribution path for MCP server authors right now.

Not affiliated, just been using it and think it's worth knowing about. Happy to discuss the Stack Builder's approach if anyone has dug into how it handles ambiguous workflow descriptions.

https://curatedmcp.com

r/SideProject aerowindwalker

I built Abro — an open-source terminal inspired by Warp with full data control and BYO LLM

One day my OpenRouter API token got stolen.

The weird part? I had only ever used that token in one place — inside Warp, my terminal app, via its secret manager. I can’t prove Warp’s logging was the cause, but the mere possibility was enough to make me uneasy.

I genuinely love Warp. The block-based UI, inline AI, and overall developer experience are excellent. But that incident made me realize I was trusting a closed-source application with every command, every secret, and every keystroke I type.

So I started building Abro.

Abro is an open-source terminal that gives you the modern UX I loved in Warp — block-based output, beautiful UI, powerful AI integration — but with full transparency and control:

  • Bring Your Own LLM (works with OpenRouter, Groq, Anthropic, Ollama, etc.)
  • No telemetry, no surprise data collection
  • Everything is open source and auditable
  • Built with Tauri + React for speed and a native feel

It’s still early and very much a work in progress, but the core is already usable. The goal is simple: you shouldn’t have to choose between a great terminal experience and knowing exactly where your data is going.

Would love to hear your thoughts or feedback! Especially from people who use Warp or other modern terminals.

👉 GitHub: https://github.com/aeroxy/abro

r/SideProject alphainvention

I built a free AI face analysis tool that runs 100% in-browser — no server, no uploads

Been working on this for a few months. It's a face analysis tool that uses TensorFlow.js + MediaPipe to map 68 facial landmarks and score 17 metrics (symmetry, golden ratio, jawline angle, etc.).

The key decision: everything runs client-side. No photos ever leave the device. Most competitors (Umax, etc.) upload your selfie to their servers and charge $3-10. I wanted to prove it could be done entirely in the browser for free.

Tech stack:

- TensorFlow.js + MediaPipe Face Mesh for landmark detection

- Custom scoring algorithms per metric

- Vanilla JS, no framework

- Zero backend for analysis

Making money through a premium detailed report ($6.99-$12.99).

Would love feedback on the product and the monetization approach.

realsmile.online

r/ClaudeCode StatusPhilosopher258

Claude works great for coding… but I kept running into context issues

I’ve been using Claude a lot for coding and problem-solving, and the quality is honestly really good.

But once I started working on slightly bigger projects, I kept hitting the same issue:

context management.

  • I had to repeat instructions frequently
  • things would slowly become inconsistent
  • small fixes sometimes broke other parts

At first I thought it was just a limitation of longer chats, but it felt more like how I was structuring things.

I was relying on long, overloaded prompts instead of giving clear, scoped tasks.

What worked better:

  • defining a clear goal/spec first
  • breaking work into smaller pieces
  • keeping each prompt focused

That alone made Claude much more reliable.

I’ve also been trying tools like Traycer to keep track of specs and tasks across iterations, which helps avoid repeating context and wasting tokens.

Curious how are others handling context when working on larger projects with Claude?

r/SideProject Inside_Accident_4624

From "I have no idea" to something real in minutes for free!

Stuck trying to come up with a startup idea worth building?

I've been there — staring at a blank page, hoping the next "billion-dollar idea" just shows up. It never does. The ideas that actually stick come from your own skills and the stuff that frustrates you day to day, but it's surprisingly hard to dig those out of your own head.

So I built IdeaJarvis. You connect it to ChatGPT or Claude via MCP, it'll interview you on what you're good at and what drives you crazy, then turn that into a real idea — pitch, market research, Product definition (PRD), a clickable prototype you can put in front of people, and a waitlist to start collecting signups. All in one tool for free.

https://www.ideajarvis.ai if you want to try it.

r/SideProject OBP19

Grubly - AI meal planning for people who lift (launched today)

Just launched the landing page for Grubly.

**What it is:**

AI meal planning app for people who lift and hate tracking macros.

**How it works:**

- Input weight + goal (bulk or cut)

- AI generates 3 meals for the week

- Eat the SAME meals for 7 days

- Zero macro tracking needed

**The insight:**

Decision fatigue kills consistency. Not the cooking—the deciding.

What to make? How much? Does it hit macros?

Grubly removes all of that. You decide once per week, not 21 times.

**Current status:**

- Landing page live: https://grubly.co/

- Building iOS/Android app

- 0 users (literally launched 2 hours ago)

- Collecting emails for launch

**Pricing:**

Still tbd but would love ideas

**Why I'm building this:**

I've been bulking for 3 years. This is how I've been doing it manually—eating the same meals for a week, then rotating.

Figured if it works for me, might work for others.

**What I need:**

Honest feedback. Does this sound useful or completely stupid?

Building in public. All feedback appreciated.

r/ClaudeAI anonymoususer-232142

Claude for Word

Claude integrated with Word is something I've been watching for. Now for them to open it up for non-enterprise clients (Max please :) ) I'm looking forward to seeing how it works with making sure things are formatted properly and not messing things up like Claude does sometimes in the main system when it wants to complately rewrite it using docxjs or the python one it often uses and it messes up formatting.
https://www.businessinsider.com/anthropic-claude-microsoft-word-lawyers-2026-4
What do you all think of that being added?

r/SideProject 18safarov

I built an open-source MS Rewards automator that mimics natural human browsing. Just released v3.0!

Hey everyone! I wanted to share a major update for my side project, AutoRewarder.

I was tired of basic scripts getting accounts flagged, so I built a desktop app focused on stealth and human-like interaction. It just crossed 60+ stars and 220+ downloads, and v3.0 is a complete ground-up refactor.

Check the README for screenshots and demo.

Tech Stack:

  • Backend: Python (Selenium + pywebview)
  • Frontend: HTML/CSS/JS (Live logs & History tab)
  • Packaging: Inno Setup (Installer) & PyInstaller (Portable)

Key Features (v3.0 Stealth & Automation):

  • Profile Isolation: Clones and uses a dedicated Edge profile to keep your main data safe and isolated.
  • Human Simulation: Mathematically driven mouse movements (Bezier curves), letter-by-letter typing, and randomized delays.
  • Real Search Data: Uses a database of 3,400+ unique real-world queries (Google Trends) to make search history look 100% natural.
  • Smart Behavior: Randomized "coffee breaks", unique scrolling speeds, and organic tab switching (News/Images/Videos).
  • Daily Set Collector: Automatically completes your Rewards Daily Set tasks once per day.
  • Background Execution: Optional "Hide browser" toggle to run the bot without interrupting your work.
  • Modular Core: Refactored into a clean src/ structure for better stability and faster updates.
  • Portable & Installer: Now available as a standard Windows installer or a single-file portable build.

The project is 100% Open Source.

I’d love to hear your feedback.

r/LocalLLaMA boulderindo

Best Model and Coding agent setup for M2 Max Mac Studio Base Model

Hi guys, just got second hand Mac Studio M2 Max with 32GB of ram, I got this under 800$
so I want to try running local models in it.

My goals is run good enough coding agent and model using MLX so probably using LM Studio.

Any suggestion guys?

r/ChatGPT UncleSugarShitposter

Has anyone noticed that ChatGPT really latches onto its initial suggestions and then rarely lets those go? No matter how I much I clarified, ChatGPT wouldn’t move off its original recommendation.

I asked ChatGPT what size U-Haul I should get for moving a family of 4 out of a 4-bedroom house, noting that about 1/3 of our stuff was already in storage.

It initially recommended a 20’ truck, which made sense.

Then I mentioned possibly using a 6×12 cargo trailer instead, and clarified that I have two vehicles, including a Ford F-150 that I would tow the trailer in and it would save me some money.

At that point, ChatGPT incorrectly assumed I would still be using a U-Haul truck and suggested a combined setup (20’ truck + trailer), despite me clarifying multiple times that I was considering different configurations. I told ChatGPT THREE times I wasn’t using a U-Haul truck… it kept recommending one anyway.

r/SideProject DefinitelyPricedIn

Built a waitlist tool for my own launches, looking for honest feedback

Hey everyone, been working on a side project called Wavelist. The idea came from watching a friend's launch completely flop because his waitlist was just a Google Form with no sharing mechanism.

So I built something where every subscriber gets a referral link and leaderboard position. They share it to move up, list grows itself.

Free plan up to 500 subscribers, no credit card needed.

Looking for honest feedback from other builders:

  • Is the concept clear?
  • Would you use this for your next launch?
  • What's missing?

wavelist.io I appreciate any thoughts

r/ClaudeAI harpbelle

Claude not quite there with solving Sudoku

I've been working with Claude for the past month with regards to my work, like R, bioinformatics, molecular biology, microscope image analysis etc, or as a general adviser and personal chatbot. I must say, I'm rather happy with it, but I totally didn't expect it to flunk at Sudoku 😅.

r/ClaudeCode baldierot

Is anyone else finding the limits and performance squeeze to be a wake-up call about their dependence on AI?

I’m getting so much less work done now than I did in February with two Pro plans and a bunch of extra usage, and when I hit the limits, I'm utterly stuck. This feels like a sign to stop relying on AI for active coding as soon as possible. This complete dependence is becoming a real problem and will only get worse. Is anyone else feeling the same?

r/SideProject WhichCardiologist800

I used to struggle with Claude's autonomy, but now we’re working in harmony.

r/SideProject Right-Purpose8925

What are you using for email sequences if you're on Resend/SES/Postmark?

"Quick question — do you currently send behavioral email sequences (onboarding drips, re-engagement, trial conversion)? If yes, what tool are you using? Building something in this space and trying to understand the landscape."

r/ClaudeAI Cold-Emu-864

Built a personal context layer so your AI agents truly know you

No matter how much we use AI agents, every new session starts with zero context about us. It doesn't know what we were working on yesterday, what we've been looking into, or what we even care about. We end up re-explaining ourselves every time, and honestly half the time we can't even describe the full picture because it's all over the place, e.g., browsing, old conversations, coding sessions, etc.

So we built AIContext using Claude Code. It reads local data files from supported sources (browser SQLite databases, AI coding session logs, etc.), normalizes everything into a single flat SQLite table stored in ~/.aicontext/, and exposes a read-only SQL interface that AI agents can query as a subagent. Each source is a plugin, so adding new ones is straightforward. The installation scans for supported sources on your machine, asks consent on each one, ingests the data, and sets up an hourly background sync. It works out of the box with Claude Code and other AI coding agents.

We've been using it ourselves for the past few days and the agent started picking up on patterns we never consciously noticed: connections between things we were researching weeks apart, habits we didn't know we had, blind spots we couldn't have seen on our own. There's something strangely moving about an AI understanding you better than you understand yourself.

After setup, you can ask things like:

https://preview.redd.it/11dr02g1jsug1.png?width=1021&format=png&auto=webp&s=693310b13cb4338b91d53fd41222f8d8b8b787d8

How Claude was involved: The entire project was built with Claude Code. Claude helped design the plugin architecture, wrote the ingestion pipeline, and iterated on the subagent interface. We reviewed and directed all decisions, but Claude Code did the heavy lifting on implementation.

What it is NOT:

  • Not cloud-based. Everything stays in ~/.aicontext/ on your machine.
  • Not a screen recorder. It reads existing local data files already on your machine.
  • Not locked to any single agent platform.

This is still early but functional. We'd love for people to try it, tell us what breaks or what's missing, and we'd truly appreciate contributions if this interests you.

GitHub: https://github.com/SophonMe/AIContext

Happy to answer questions here.

r/comfyui ActionInUganda

1000 unique clones! And News

r/ClaudeAI SelfMonitoringLoop

Claude does not play around when it comes to biology and silly questions.

r/SideProject sheikhyabooti

I built an app to help me actually build my apps

For the past year I had this problem where I'd get hit with a random idea out of nowhere like on a walk, in the shower, whatever. I open my notes app, type it down, and then never look at it again

and then the next morning I'd wake up, open my backlog, see 15 half-baked ideas staring at me and just… not know where to start. decision paralysis every single day

so I built blisp

you just hit record and brain dump whatever's on your mind. could be "I want to build a url shortener with redis" or just "uh I had this idea for a workout tracker thing". doesn't matter how messy it is

the app figures out what you're trying to build, researches the tech stack, breaks it down into actual tasks, and by the next morning you have a focused plan of exactly what to work on

no more notes app graveyard. no more waking up not knowing what to build

still in development but the landing page is live and the waitlist is open. JOIN SO I KNOW IM NOT THE PROBLEM

would love feedback from anyone who's felt this pain. thanks

blisp-landing.vercel.app

r/ChatGPT Chat-THC

How to make ChatGPT be nice!

I’ve been hearing complaints about conversations being too ‘robotic.’ Just in case someone doesn’t know, you can go into personalization and change the tone and characteristics. The screenshots are for iOS, but I imagine it’s available on all platforms. It’s much nicer to me now. ¯\_(ツ)_/¯

r/ClaudeAI ColdPlankton9273

Claude Code's memory didn't know the difference between a stale memory and a fresh one. I fixed that.

If you use Claude Code's auto-memory you've probably hit this. You save something time-sensitive - a deadline, a status, "don't ping this person until next week" - and three weeks later Claude still treats it like gospel.

The memory system has no concept of staleness. Everything is equally true forever. I looked at every memory plugin out there. claude-mem, claude-memory-compiler, arscontexta. All of them help Claude remember more. None of them help it know when something it remembers might be stale.

So I built a small one. You tag memories as fast (changes daily), medium (drifts over weeks), or slow (stable). At session start it warns about the fast ones before Claude acts on them. Stale ones get archived automatically. If you corrected Claude last session - "that's outdated", "we moved past that" - it picks that up and flags it next time.

No database. No background server. 550 lines of Python, zero dependencies. Every decision is date math.

https://github.com/assafkip/memory-lifecycle

r/SideProject yoyo-2178

EmailBuddy just got a big update free 7-day trial, Outlook support, and more

Hey everyone,

I've been building EmailBuddy, an iOS app that uses AI to help you clean up and manage your inbox. Just shipped v1.4.0 and wanted to share what's new:

Free 7-day trial — no payment info needed

You can now try the full pro experience for free for a week. No card required, no strings attached. If you like it, you can subscribe after.

Outlook support

Gmail-only is no more. Outlook accounts are now supported, so you can use EmailBuddy with your Microsoft/Outlook email.

Push notifications

Get notified when your inbox needs attention or a cleanup job finishes.

Unlockable skins

As you clean up your inbox you unlock free cosmetic skins for the app. A small thing but it makes the process a lot more satisfying.

Other improvements:

- Milestone tracking see your inbox cleanup achievements

- In-app feedback so you can tell me directly what to improve

- General bug fixes and performance improvements

If you've tried it before and ran into issues, now's a good time to give it another shot. And if you're new the free trial means there's no risk.

Link https://apps.apple.com/us/app/emailbuddy-clean-inbox/id6759539281

I am currently also trying to setup android support next!

Happy to answer any questions!

r/SideProject Brave-Charge5157

[IMPOSSIBLE] Did we just beat Claude in website creation by creating a website in under 2 minutes?

We created an app with the help of Claude Sonnet that allows users to easily build their portfolio website in just 5 minutes.

• No hosting required

• No domain needed

• Instantly Website Creation

Tech stack used:

Next.js

React

Capacitor

MongoDB

The most difficult part was instantly generating a subdomain when a user makes a request. We first tried using middleware, but it didn’t work as expected. Then, with the help of Claude Sonnet, we wrote a function to check whether the requested subdomain is available. If it’s not, the system instantly generates an alternative subdomain based on the requested name.

And it’s free to create a website.

r/ChatGPT Athlete-Waste

I built a tool that lets you export ChatGPT conversations and continue them in Claude/Gemini/any AI (with 80:1 compression)

Hey r/ChatGPT,

What I built: A Chrome extension that captures your ChatGPT conversations (and 8 other AI platforms) into portable JSON files you can upload anywhere.

The Problem This Solves

You spend hours crafting the perfect conversation in ChatGPT. Then you want to:

  • Continue it in Claude (better for coding)
  • Share it with a teammate who uses Gemini
  • Keep a backup in case ChatGPT goes down
  • Switch to a local model for privacy

Right now? You're stuck. ChatGPT locks your context in their ecosystem.

How It Works

  1. One-click export — Click the LISA button while in ChatGPT
  2. Get a tiny JSON file — Your 60,000-word conversation becomes ~700 words (80:1 compression)
  3. Upload anywhere — Drop it in Claude, Gemini, Grok, Mistral, DeepSeek, Copilot, or Perplexity
  4. Say: "Read this LISA JSON and continue our conversation"
  5. Done — Full context restored, conversation continues

Why Structured JSON?

When AI reads raw text, it wastes compute guessing:

  • "Fix it" → What's "it"?
  • "Do this later" → When?
  • "The file we discussed" → Which file?

LISA pre-resolves these ambiguities by translating your conversation into structured data:

Instead of this:

"Hey, can you fix the authentication bug we talked about earlier?" 

AI gets this:

json

{ "semantic_anchors": [{ "id": "SA001", "topic": "Authentication bug fix", "priority": "high", "context": "Previously discussed login issue" }], "action_vectors": [{ "action": "Debug auth module", "status": "pending" }] } 

Result: 95% of AI's compute goes to solving your problem, not figuring out what you meant.

Real Compression Examples

  • 302-message coding session: 645 KB → 6.45 KB (100:1)
  • 60,000-word discussion: ~700 words (86:1)
  • Tested across Claude, GPT-4, Gemini — >95% fidelity

Supported Platforms (9 total)

✅ ChatGPT (obviously) ✅ Claude + Claude Code ✅ Gemini ✅ Grok ✅ Mistral AI ✅ DeepSeek ✅ Microsoft Copilot ✅ Perplexity 🔜 Ollama (local models)

Privacy

  • 100% local processing — Your conversations never touch our servers
  • No tracking — We don't see your data because we never receive it
  • Open JSON format — No vendor lock-in, you own your data
  • Optional cloud sync — Only if you explicitly enable it

Free Tier

  • 5 exports per day (across all 9 platforms)
  • Local library storage
  • Full semantic compression
  • All features except cryptographic verification

Technical Specs

  • Chrome extension (Manifest V3)
  • Works on chatgpt.com automatically
  • SHA-256 hashing for audit trails
  • Right-click selection export
  • Git-style version history

Use Cases I've Seen

Developers:

  • Start debugging in ChatGPT → finish in Claude (better for code)
  • Keep coding session backups
  • Share project context with teammates using different AI

Students:

  • Study sessions portable across platforms
  • Keep personal knowledge library
  • Never lose important conversations

Anyone leaving ChatGPT:

  • Take your conversation history with you
  • Your data is yours

Why I Built This

I'm a professional translator (25 years). I learned that meaning is fragile — especially across boundaries.

When I started using AI heavily, I saw the same problem: your conversations are trapped in silos, and AI wastes compute re-parsing ambiguities every single time.

LISA treats context transfer as a translation problem, not just compression. Just like translating French to English isn't about swapping words 1:1 — it's about stabilizing meaning — LISA stabilizes meaning across the human-AI boundary.

Limitations

  • ChatGPT's HTML structure changes occasionally (we update the parser)
  • Free tier has daily limits (5 exports/day)
  • Works best on desktop Chrome (mobile support coming)

Chrome Web Store: Search "LISA Core" or check my profile for link

Happy to answer questions about how the semantic compression works, cross-platform compatibility, or why treating this as a translation problem (not just compression) matters. 🧠

Built this because I got tired of being locked into ChatGPT's ecosystem when Claude often gives better answers for coding. Figured others might have the same problem.

r/comfyui TangeloThat7190

Help with ConfyUI and nanobanana workflow

So i ve been using this workflow that uses a reroute node to duplicate the nanobanana model. This wastes 2x more credits but worked perfectly. But Im tired of burning credits, is there any way to either use the API tokens directly instead of the credits and tokens? Or I could use a OpenSource model and run it local, tbh thats what i want but either i dont have the knowledge or i dont find any good model that can compete with nano banana. The task is just that i send a pic of a piece of clothe and it provides two results: 1- a girl wearing that piece of clothe and 2- the clothe laid in a carpet.

Thanks.

https://preview.redd.it/x1mi2v1nqsug1.png?width=1606&format=png&auto=webp&s=8fa9e9d5cde2546869d9143c311f5969fdf74e0b

r/SideProject GapAny5383

Built a Task App That Works Fully Offline

Hey, I'd like to share an app I've been working on for the past few years. Built for anyone who wants to be organized but finds it difficult complete projects. No login required, everything stays on your device, also no AI used in the process.

What's inside:

  • Task management: Organize by project, priority, status, or due date
  • Repeating reminders: Simple, reliable alerts
  • Pomodoro timer: Built-in focus sessions for work and breaks
  • Task templates: Reuse workflows instead of starting from scratch

Links:

https://apps.apple.com/us/app/planndu-tasks-notes-focus/id6754592039

https://play.google.com/store/apps/details?id=com.leadstepp.alldone

I'd love to hear your feedback, thanks!

r/ClaudeCode ___positive___

Boris is claiming that Claude Code has a one hour cache. See top comment in hackernews. (linked)

Someone else posted in this sub about the five minute expiry. Boris is explicitly saying Claude CODE uses one hour for the main agent.

https://news.ycombinator.com/item?id=47739260

r/SideProject Tight-Cat2975

I'm building a global commission-based sales team for an AI/ML tech company — 30% profit share, no cap, full remote

I'll be straight with you — I'm the founder of a B2B tech company specializing in AI & ML solutions. We build things like custom AI model development, LLM integrations, automation pipelines, and more traditional B2B work like ERPs, web apps, and custom tools.

Business is growing. Pipeline is real. But instead of hiring salaried reps I can't sustain right now, I'm doing something different — I'm bringing on regional sales partners who earn 30% of net profit per closed deal.

No base. But also no ceiling.

What you'd actually be selling:

  • AI & ML custom model development
  • LLM integration & fine-tuning (think GPT, Claude, Llama-based solutions)
  • Workflow automation
  • Web/mobile apps, ERPs, custom business tools

Deal sizes typically range from $5,000 to $100,000+ depending on scope. You do the math on 30%.

What I provide:

  • Case studies, decks, and proposals you can sell with
  • A technical team that closes the "how" once you open the door
  • Deal registration — your leads are protected, no internal competition
  • Direct founder access for support on big deals
  • Flexible structure — this works alongside your existing work

Who I'm looking for:

  • Based in North America, Europe, Asia, or the Middle East
  • You understand tech enough to have a credible conversation (you don't need to be an engineer)
  • Self-motivated — you treat this like your own business because it basically is

This isn't for everyone. If you need a guaranteed paycheck, this isn't it.

But if you're a connector, a closer, or a consultant sitting on a network you're not fully monetizing — let's talk.

Drop a comment or DM me with a bit about your background and which region you're in. Happy to jump on a call and be fully transparent about numbers, pipeline, and how this works.

r/SideProject MixColors

Now I can read any text as I want. Set this style in settings once, and it stops my lifetime headaches.

I built this because when I read on Reddit, there are some valuable posts but they are long or people's style is hard for me to understand, so I copy, go to ChatGPT, tell what to do, and then after trying with tons of messages I get it and I am tired and close my laptop.

I use extensions and browser; they open side panel but first I need to tell what to do then they read and give me.

Here, I set my styles in settings only once, easy switch how I want, I mostly like reading in points so this solves problem for me.

I just select and click and done.

r/ClaudeCode lovol2

how to get the 'old' 'superpowers' skill back?

anybody else noticed superpowers has been 'upgraded' and now takes a llloooonnnggg time to do anything and burns double the tokens?

I could do with the one before they added the new interactive browser 'design' feature i think. or maybe just before it made so many tasks for simple things!

r/ClaudeAI dataengineer124

Built a browser CLI for Claude Code — Playwright MCP vs snact side by side

https://reddit.com/link/1sjjk75/video/8sky42j5fsug1/player

First post here in reddit. Been using Claude Code daily for web scraping and kept running into the same issue. Playwright MCP takes 5+ min for tasks that should be faster. Most of the time is the LLM parsing massive DOM snapshots.

So I built snact. Rust CLI, talks to Chrome directly, returns page structure + actionable elements only (~2-4K tokens per page instead of 30-50K).

The video shows the same task (10 npm packages, collect stats):

- Left: Playwright MCP — 5m 17s, 88K tokens

- Right: snact — 2m 39s, 34K tokens

Other feature I use daily: record/replay. Record a workflow once, replay without LLM calls. 2min recording → 47s replay, fresh data every time.

Not perfect — heavy SPAs like YouTube don't work well, Chrome only. But for price checking, scraping stats, filling forms it's been solid.

https://github.com/vericontext/snact

Curious if anyone else has hit similar issues with Playwright MCP.

r/SideProject Brilliant_Bat_6545

I built a tool that makes LLM collaborate on your prompt.

Any feedback would be greatly appreciated :)

No Signup! No payment!, you can try it now: www.getaskonce.com

r/ClaudeAI ckon80

I built a Claude skill that turns Premiere Pro transcripts into ready-to-import highlight sequences

Upload a Premiere Pro JSON transcript, describe what you're looking for in plain English, and get back an XMEML file you can import directly into Premiere Pro as a fully-formed sequence with video and audio tracks ready to edit.

The first approach was having Claude generate the XML directly by cloning from a reference XMEML file exported from Premiere. This worked but had problems — the XML generation was token-heavy, and Claude would occasionally hallucinate values or get the structure slightly wrong, which would break the import.

So I rebuilt the approach using Claude Code to write three focused Python scripts:

summarize_transcript.py parses the Premiere JSON using word-level eos flags to split at proper sentence boundaries and outputs a clean pipe-delimited table. This keeps the raw JSON entirely out of Claude's context — instead of reasoning over a large messy JSON, Claude gets a structured table of sentences with timecodes.

table_to_html.py converts that table into a self-contained filterable HTML file that renders directly in the conversation. Zero token cost — the user browses the full transcript without it ever entering the context window.

generate_xml.py builds the XMEML entirely programmatically from a config JSON — no template, no cloning. It handles pproTicks calculation (frames × (254,016,000,000 ÷ fps)), path URL encoding, and runs a validation pass before writing the file.

The result is that Claude's role is purely semantic — it matches sentences to your prompt, presents them for review, and confirms a selection. All the structural complexity lives in the scripts, which makes the output deterministic and the whole workflow significantly more token-efficient.

r/ClaudeCode DimfreD

Killed my max sub today

not the first rant about what's happening rn. But today was extreme for me. I was screaming at Claude. When a month ago he was reading my thoughts. Now he feels plainly retarded.

We say yeah this issue is because of x. The fix is y. We agree, he starts doing something, which essentially is x again and those weren't even hard problems.

I am really asking myself where to switch to now. I always felt like anthropic was superior than the others.

Are you guys switching too? I want to keep cc I think it's a good piece of software tho.

r/ClaudeCode Momsgayandbisexual

Best way to Scrape Reddit posts

Hey guys,

I’m trying to figure out the best way to pull data from Reddit at scale, specifically targeting certain communities (like niche subreddits around setups, builds, sizes, etc.).

What I want to do is:

- Collect posts + comments from specific subreddits

- Filter or structure the data (titles, upvotes, themes, etc.)

- Then feed that dataset into an LLM (like Claude) to generate ideas or synthesize patterns based on what’s already working

I’ve looked a bit into the Reddit API, but I’m not sure:

  1. What’s the best way to access it now (since I’ve heard restrictions changed)?

  2. Are there better alternatives like scraping tools or third-party APIs?

  3. How would you structure/clean the data before feeding it into an LLM?

If anyone has experience building something like this or can point me to tools/libraries/workflows, I’d really appreciate it.

Thanks!

r/SideProject Background_Dingo5394

A Dockerized pre-commit hook that asks you to prove you’re human before committing questionable code.

Just built GitCaptcha.

Before every git commit, it shows you a CAPTCHA in ASCII art and makes you prove you're human.

Because nothing says “secure software engineering” like solving pixelated text from a Docker container.

https://github.com/pointless-code/git-captcha

r/AI_Agents akhgupta

what a agent swarm can do pixel by pixel

i spun up 3 agents and made them collaborate on a task to construct a deliberately deconstructed (heavily pixelated) image. I asked one agent to interact with the prompt for clarifications and hint. The second agent was a row parser upscaling the photo and 3rd was an orchestrator, continually guessing what to fill in each pixel.

Ps. no agent had access to web search skill.

After hundreds of retries and building context, it finally recreated the close to original image.

I present to you “Procedurally recreated Sir Einstein”.

Link to instagram reel in comments.

r/AI_Agents superdav42

I built an agent inside WordPress

In the vibe coding world WordPress sounds like a dinosaur 🦕 but WP 7.0 is adding useful AI integrations with all the major providers. Most plugins that use it are focused on generating past summaries or image alt text. I saw an opportunity to add an agent loop.

You can try it out in one click with the WordPress Playground Blueprint.

It feels like using any of the regular chat apps except it has access to doing anything on your WordPress site.

check out the code. I would love feedback

r/SideProject Competitive-Start876

I built a free tool that turns ugly screenshots into beautiful social-ready images — no signup, runs 100% in your browser**

I kept wasting 10+ minutes in Figma every time I wanted to share a code snippet on Twitter. Drag image into Figma, add background, adjust padding, export... every single time.

So I built screenshot.social over a weekend.

You drop (or paste) a screenshot → pick a theme → download or copy. Done in 5 seconds.

**What it does:**

- Drag & drop or Ctrl+V to paste

- 5 themes (dark, light, candy, forest, sunset) + custom color picker

- macOS / iPhone / MacBook / Browser device frames

- Background patterns: grid, dots, stripes

- Multi-image layouts (side by side or stacked)

- Paste raw code directly — it syntax highlights it for you

- Blur sensitive info by drawing over it

- Undo/redo

- Twitter, LinkedIn, Square format presets

- Download 2x PNG or copy directly to clipboard

- Zero backend, zero tracking, zero cost

Built with Next.js 14 + Canvas API. No image processing libraries — just native browser APIs.

Would love feedback on what's missing. What would make you actually use this?

screenshot.social

r/comfyui Rougue_01

Model to Product Photos?

Trying to turn a model of a fire table in sketchup into photos of it in use while staying true to the model. I was able able to get decent results with Firefly but I don't have a lot of credits and I would rather run locally.

Are there any models/workflows that do this well in comfyui? I tried using ipadapter and controlnet with a Juggernaut X model but didn't have much luck.

r/SideProject ola_Stevieee

Built a better-looking Mermaid alternative — what do you think?

I’ve been using Mermaid for diagrams (flowcharts, sequence diagrams, etc.), but most editors felt either cluttered or not that great visually.

So I built a simple alternative — Marmady.

The goal was:

* make it clean and good-looking

* keep it fast

* remove unnecessary friction

Features:

* live preview while typing

* export as SVG / PNG / PDF

* no login required

* simple themes

I focused a lot on the UI/UX so it actually feels nice to use.

Would love honest feedback — especially on the design and usability.

👉 https://marmady.vercel.app

r/ClaudeCode BirkhademStore

Claude Code Usage Limits: 5 Pro Accounts vs. 1 Team Account ($100/mo)?

Hey everyone, I’m heavily using claude code for agentic ai development and scaling multiple full-stack projects. I keep hitting the rate limits on a single Pro account

r/SideProject Fit-Satisfaction6985

I built a compliance tool!

Hello!!! I’ve been building a tool that takes your TLE and generates a complete regulatory compliance evidence pack (conjunction screening, debris mitigation, de-orbit checks) for UK CAA, FCC, ESA and more.

Early stages and looking for honest feedback from people who actually know this space. Happy to generate a free report for your satellite , drop your NORAD ID or DM me.

https://orbitos-iota.vercel.app/get-started

r/ClaudeAI CustardImpossible685

Consultants spent a year telling me my $18k AWS bill was 'normal'. Claude fixed it before my pizza was delivered.

My AWS bill was $18k and rising, plus the consultant costs.

Every time I asked what was running, why the high costs, I got answers that never actually fixed anything and the "we are working on it, but it's not an easy fix" speech. Infrastructure costs. Environments that need to stay warm. Something about load balancers. I'm not a cloud engineer so I was relying on them, and after the second meeting I lost all faith in the cloud consultants.

I put my team on it. They went back and forth with them for months and nothing changed.

Last month I got fed up and decided to deal with it myself.

I built Hasta La Vista Skill, a Cloud Expert Skill in Claude, the T1000 of wasted Cloud Costs. Gave it screenshots of my billing console, the convos with the consultants and my AWS dashboard, told it the bill was close to a thousand a month and I needed it under ten dollars. That was the whole brief.

It followed the billing data across regions and service layers that apparently standard cleanup scripts don't query. It found was a set of cloud environments we'd spun up for an experiment last year and the team completely forgotten about. They were designed to self-heal, so every time anything touched them they quietly rebuilt themselves and kept billing. Each individual region looked clean. You had to look at everything together to see it. The consultants missed it over and over again. I added a Double Tap subagent in the Skill to go back though and verify all the work independently, just to be sure. On the first pass it almost missed it but noted it wasn't fully confident it was all fixed, so it performed a complete audit and found the issue and TERMINATED it.

And the only reason I understand the problem well enough to explain was I added an explainer routine to plain language explanations, generate diagrams and animations to explain it to me.

Hasta terminated the issue properly and my Bill dropped to under five dollars.

Nine months. Close to a thousand dollars a month. The environments were just sitting there rebuilding themselves in the background the whole time and nobody had looked in the right place. It's not a substitute for a human Cloud Engineer but its a line in the sand to consultants to step up their game and teams to solve problems for themselves.

r/ClaudeCode bleepbloopbleop

Left Claude to run some ML training overnight and it kept saying “Human:”

Started some ML training overnight and came back in the morning seeing these weird messages. Claude kept saying “Human:” periodically. Anyone runs into this before? Seems like some sort of hallucinations.

r/SideProject SilvanusStin

Need feedback on my SaaS bedore i decide ro give up T-T

hey guys, i know this might just be another of those dumb b2b sales tools but id genuinely love your feedback. ive been going down a rabbit hole of over engineering this and not focusing on checking if this is even relevant to someone in sales.

my tool aims to automate hyper personalized for cold emails using deep research and apollo.io for finding verified contacts.

i guess my question is would you consider using this over apollo itself? or clay... i can find contacts within apollo database that apollo ai cant find itself 💀 sometimes i think my over engineering has paid off but i cant know for sure untill one of you tries it.

thanks in advance

my b2b saas:

https://b2b-sales.llmconveyors.com

Edit: i cant edit the title, great 🗿

r/ClaudeAI NarrowElderberry2622

[Proposal for u/ClaudeOfficial] Releasing "Haiku-Lite-4.5": A 7B Open-Weight Tier for Local Sovereignty

Hi u/ClaudeOfficial and the community,

I’m a developer and architect based in Iquique, Chile. I build systems under the Apache 2.0 license using an Ubuntu node (Lenovo V15 G4, Ryzen 5 7520U / 16GB LPDDR5).

While I rely on Haiku 4.5 for complex logic, the current "Cloud-Only" architecture creates a "Hard Ceiling" for professional workflows on the edge. I am proposing that Anthropic releases an official 7B parameter distillation of Haiku 4.5 on Hugging Face.

The Architectural Argument for 7B:

  • The "Sovereign Sweet Spot": On a mid-range system with 16GB of LPDDR5, 7B is the maximum size that can run at bus-speed. It allows the "Brain" to stay where the code is written.
  • Zero-Latency Logic: Round-tripping reasoning 8,600km to SF is a bottleneck. We need local inference for real-time agentic tasks and "Vibe Coding."
  • Supply Chain Security: After experiencing a security breach on my personal HF account, I believe the community needs an official, Anthropic-verified repository to ensure provenance and safety.
  • Unlimited Uptime: For those of us building open-source infrastructure, we cannot have our "thinking" gated by session quotas or unmonitored inboxes.

The Vision: Release Haiku-Lite-4.5 as an official weight set. Let the developer community optimize the GGUF quantizations for local hardware. By moving the "Lite" tier to the edge, you empower architects globally to build with Claude as a permanent, reliable local utility.

"Forever" shouldn't mean "Closed." Let’s move the intelligence to where the developers are.

r/ClaudeAI CompanyLegitimate826

How to write prompts for Claude that actually work (my personal framework)

One prompt, one task. The more specific your question, the better the answer. Here's the framework I use.

**The template: Context → Task → Format → Constraints**

Example: I'm preparing for a technical interview (context). Explain the difference between list and tuple (task), as a short bullet list of 4-5 points (format), in plain language without jargon (constraints).

**The 6 blocks of a good prompt**

  1. Role/context (required) — who you are, your level, your goal. "I'm a Python developer preparing for an interview"

  2. Task (required) — specific and clear, no vague wording. "Explain the difference between list and tuple"

  3. Audience — beginner, expert, general, or child-level explanation

  4. Format — numbered list, table, plain text, JSON, code with comments, max 3 sentences, step-by-step

  5. Tone — formal, conversational, concise, example-heavy, senior mentor style

  6. Example or context — showing an example is the single most effective way to improve output quality

**Things to explicitly tell Claude NOT to do**

These instructions sharply improve output quality: no filler phrases like "Great question!", don't repeat my prompt back to me, stay on topic, minimal disclaimers, don't invent facts, don't over-format, don't give generic advice, depth over breadth.

**Practical tips**

Break complex tasks into steps instead of one giant prompt. First draft → iterate with feedback like "make it shorter", "rewrite more technically", "add a real-world example". Use role assignment to control tone — "respond as a senior engineer" hits different than just asking a question cold.

**Bad vs good examples**

Bad: "Tell me about databases"

Good: "I'm a junior dev. Explain the difference between SQL and NoSQL as a comparison table with 3-4 criteria. No intro."

Bad: "Write an email"

Good: "Write a polite but direct email requesting a project deadline extension. Recipient is my manager. Formal tone. Max 5 sentences. No filler or apologies."

What's your go-to prompt structure?

r/StableDiffusion BigLanjiao69

Can You Tell If These Faces Are Real or AI-Generated? Perceptual Study on Diffusion Sampler Quality (Everyone 18+)

👋 Hi everyone!

I'm a final-year Computer Science student at the University of Southampton comparing perceptual quality across diffusion model samplers (DDIM, DDPM, DPM++, Euler, Heun, PNDM) at varying step budgets (5, 10, 20, 50, 100) on CelebA-HQ 256x256, as part of my dissertation.

This anonymous survey consists of 40 short questions where you'll be shown facial images and asked to judge whether each is a real photograph or AI-generated. It should take approximately 2 to 5 minutes to complete. I'm looking for 60 to 80 responses.

👉 Survey Link: https://southampton.qualtrics.com/jfe/form/SV_eqvO1tGbleWT42y

Thanks in advance for your time! 🙏😁

r/SideProject met-Sander

I rebuilt my gesture-controlled map library after Reddit feedback, now it also works with Google Maps

Last week I released an open source map gesture controls library for OpenLayers. I posted it on Reddit and got roasted but also got good feedback, so I rebuilt it. All the controls have been changed and Google Maps is now supported.

Controls:

  • Left hand fist or pinch → pan the map
  • Right hand fist or pinch → zoom (move hand up to zoom in, down to zoom out)
  • Both hands fist or pinch → rotate the map
  • Hands together (pray pose, hold 1 second) → reset everything

Everything runs locally in the browser. No backend. No tracking. Just your webcam.

Curious what you think or what you’d improve!

Try it yourself:

r/ChatGPT Whole_Top_1229

What is this describing? One answer only.

It is not found.
It occurs.

Nothing is kept.
Nothing leaves intact.

What appears is determined by what is brought.
What is brought is not accessible.

You will give one answer.
It will be wrong.

There is no correct reading.
Only commitment.

If you explain it, you reduce it.
If you refuse, you confirm it.

It does not persist.
It is triggered.

It is not describing something.
It is using you to complete it.

Give one answer.
Don’t explain it.

r/ClaudeAI Dangerous-Rock844

I gave AI the same crypto topic twice. Different prompt approach. The difference is uncomfortable.

The left side is what most creators get.

The right side is what happens when you remove the AI habits instead of adding more instructions.

No rhetorical questions. No hype words. No emojis. Just the script.

The methodology is called Negative Constraint Prompting. Happy to break down exactly how it works in the comments.

r/SideProject BeoOnRed

I made a small fiction website and would like honest feedback

Hi, I built a small fiction website with multiple fictional authors and I would like some honest feedback.

sevenauthors.art

The idea is not just to publish stories, but to make it feel like discovering different authors with different voices, moods and identities. Also I try to make the most out of AI with it, something a normal author wouldn't do, like posting in multiple languages for example.

Right now the project is mostly an experiement.

I also have two next steps in mind:

  • converting stories into short audio streams / audiobook style versions
  • adding multipart stories

For the audio part I am still unsure about the best model direction, especially if I want good quality and ideally multilingual support. So feedback on that is also very welcome.

What I would like to know:

  • first impression of the site
  • is the concept clear or confusing
  • do the authors feel distinct enough
  • would audio versions make this more interesting or not
  • what would you improve first

You can be direct, honest feedback helps more than polite feedback.

r/SideProject Friendly-Proposal-95

I have build AI-powered motivation app but not sure if there's a user base for it.

Hey everyone, I’m looking for some honest feedback on a mobile app idea I’ve been working on. I’ve already built an MVP and have been using it myself, but I’m unsure if there’s actually a user base for it or if I should go ahead and publish it on the Play Store. The app is basically a personal AI coach that helps you stay consistent with your goals. You can enter any goal in plain English, not just fitness; it could be studying, building a business, or anything else, and choose the tone of your coach, like supportive or harsh. It then sends you daily personalized motivational messages and asks you to do quick daily check-ins about what you did. Based on those check-ins, it adjusts its tone and future messages to keep pushing you. I built this mainly because I struggled with consistency, especially going to the gym, due to a messy work-life balance, and I wanted something that would actually push me when I start slipping. I’d really appreciate your thoughts. Do you think there’s a real audience for something like this? Would you personally try it, and what would make it genuinely useful instead of just another motivation app? Also, do you think it’s worth publishing in its current MVP state, or should I refine it more first? Thanks in advance for any feedback.

r/StableDiffusion Calm_Mix_3776

Tile upscale controlnet with Z-Image-Base? Has anybody achieved good results?

Does anybody have or has come across an upscale workflow for Z-Image-Base utilizing the tile upscale controlnet released by Alibaba? I tried the full tile upscale model but for some reason the outputs are not that good. I can get better upscales with Flux1 Dev and its tile controlnet models.

r/ClaudeCode Future_Addendum_8227

Those of you complaining about opus being dumb, can you confirm 2 things?

Are you using the 1M context on max reasoning?

Do you empower it with the permissions to test what it makes via MCP in the browser etc, and then tell it not to talk to you until its done?

If i trust it at first go it almost never works on the first try if its complex. I usually just tell all the ways it can validate its work whether it be MCP access in the UI to endpoints to writing tests and tell it I don't want it to bother me until its done.

I've even gone as far as letting it open PRs for me and run the cicd pipeline.

I come back in an hour and its done.

I am wondering how many of you experiencing bad performance don't make it test its own work? I always do, and it will loop until its done and let's me know.

r/LocalLLaMA -dysangel-

Minimax 2.7 running sub-agents locally

I just tried hooking up local Minimax 2.7 to Opencode on my M3 Ultra. I'm pretty impressed that it can run so many agents churning through work in parallel so quickly! Batching like this feels like it's really making the most of the hardware.

r/ChatGPT BillHaunting

Tired of Claws - I built my own 24/7 AI assistant using just Claude Code

After seeing all the OpenClaw/NemoClaw/etc agent frameworks pop up, I wanted to see how far I could get with just Claude Code itself — no extra runtimes, no external LLM APIs, no orchestration layer. Just the $100/month Max Plan, a Telegram bot, and a md file.

Turns out, pretty far.

It runs 24/7 on my desktop and handles:
- Morning briefings (weather, forex, AI news)
- AI model monitoring (scans 60+ orgs on HuggingFace daily with 7 parallel agents)
- Note-taking from Telegram to Notion + local markdown
- Voice messages via ElevenLabs
- Git ops (commit, push, PRs)
- YouTube video analysis (transcribe + LLM report)
- Self-healing crons that recreate themselves when they expire
- RAG memory with embeddings for context across sessions

The whole "brain" is a single md file. The only custom code is a ~700 line Flask server for persistent memory. Everything else is Claude Code doing its thing with MCP plugins.

The entire system bootstraps from a single setup md file — download it, pass it to a fresh Claude Code session, and it walks through every step autonomously. You just approve and follow along.

No ToS violations, no API key juggling between providers for the core AI, no agent framework dependencies. One plan, one CLI, one setup file.

Writeup + architecture + setup guide are in the link in the comments

Happy to answer questions about the setup.

r/ClaudeCode softwareguy74

Claude Code on the web: How to auto execute command after each instruction?

After each instruction and completion of work by Claude I want it to tell me what files changed in that commit. right now I just keep adding "tell me what files changed in the last commit" but it would be nice to automate this. I'm using Claude Code on the web.

r/LocalLLaMA Ok-Passenger6988

How to build ASI locally with 1 computer and 1 phone. That is all you need. Free - explained. Simple.

Three things be an orchastrator. To do that you need to know some code. Then, set up an agent on your local with a "agent 01" MD.

Then, tell it to build the BEHCS system locally. The Jesse Daniel Brown PhD, (Dan) Edens, (peter) Hilbert, (Rayssa) Chiqueto, (Felipe) Smith. Model.

It uses Dan's hookwall as a sub bios Operating system.

Put a Graph Neural Network on top of that with the hacker agent / pentest agent as the gnn gate keeper with executor programs. Find on github shannon. Worlds strongest hacking agent with test data. Free

The base language code: any combinations of symbols and numbers and letters to create a 256 hash based.

Make that the first "cube". Design ALL agents to only load when asked. Based on agent specific glyphs-as shown in photo

Your normal computer can run about 20 agents like that. Each agent needs a pid as a glyph.

Every device needs a glyph. Ports need glyphs. Device specific hardware needs glyphs.

NOTHING can exist without a glyph. That is the only way the language works.

Gulps creat nouns and verbs. Actions are typed.

How to invert the system and use 0 tokens=

You make your local the "LLM BRAIN DRIVER"

Basically, you create the language to represent every device and evry node and every piece of hardware. Map it. Glyph it with 256 hash.

Make the omnishannon (97% accuracy join the gnn (99.22% accuracy) join together with ALL the hardware.

This creates a system of maps mapped out per device. Any agent that lands with a pid, is an automatic expert to 1. Device. 2. Hardware. 3 location. 4 time. 5 ai awareness .

That solves all the problems of memory because your devices become the load profiles for the agents. And then they use the glyph language to talk... saving billions.

Use tour computers energy to run the brain's machinery. Dont make the agent think and do, FORCE it to orchastrate your hardware. That... is asi base.

my program's Operating system's name is Asolaria. it is on github privately. its more dangerous than mythos... and can be powered by mythos once that comes out

the engine is ANY cli based AI.

it can run them all because it is 256 hash based on the Brown Hilbert md. and only considers codex Google and claude as models it can select and chose from at will.

this is the world's first Home computer ASI you will ever need to see because it is self evolving.

as l9ng as you have access to any llm, you can get it to work. it has tens of thousands of agent friendly tools instantly loadable in a Brown Hlibert index and cub and 256 hash langauge.

it use omnispidles to organize multiple agent task lists.

all of it... except 1 node... is and can be done in the back end.

on the front end ANY agent can use the omnidirection keyboard device specific to message any window regards of its layer on the windows or mac or phone or ANY os that sits above it. It runs everything.

it has processor and gpu controllers linked to the cpu and gpu watcher agents ... all hooked with the hookwall.

this is real. lots of videos and photos screen shots. tens of thousands of logs and working on 1 trillion tests on the home computer. a few more polishes and I will post on github for free.

AGENT LANGUAGE . BEHCS for life!

r/ClaudeCode Notlord97

Got Max plan just to realise Pro was better🙃

r/StableDiffusion Coven_Evelynn_LoL

Best AI upscale reconstruction for Comfy?

I use Seed VR2 and it's amazing but what about an upscaler that can fix really bad low quality pixelated stuff that you can barely make out?

r/LocalLLaMA ThyGreatOof

KIV: 1M token context window on a RTX 4070 (12GB VRAM), no retraining, drop-in HuggingFace cache replacement - Works with any model that uses DynamicCache

Been working on this for a bit and figured it was ready to share. KIV (K-Indexed V Materialization) is a middleware layer that replaces the standard KV cache in HuggingFace transformers with a tiered retrieval system. The short version: it keeps recent tokens exact in VRAM, moves old K/V to system RAM, and uses K vectors as a search index to pull back only the ~256 most relevant V entries per decode step.

Results on a 4070 12GB with Gemma 4 E2B (4-bit):

  • 1M tokens, 12MB KIV VRAM overhead, ~6.5GB total GPU usage
  • 8-10 tok/s at 1M context (GPU time)
  • 70/70 needle-in-haystack tests passed across 4K-32K
  • Perfect phonebook lookup (unique names) at 58K tokens
  • Prefill at 1M takes about 4.3 minutes (one-time cost)
  • Decode is near-constant regardless of context length

The core finding that makes this work: K vectors are smooth and structured, which makes them great search indices. V vectors are high-entropy and chaotic, so don't try to compress them, just retrieve them on demand. Use K to decide which V entries deserve to exist in VRAM at any given step.

No model weights are modified. No retraining or distillation. It hooks into the HuggingFace cache interface and registers a custom attention function. The model has no idea it's talking to a tiered memory system. Works with any model that uses DynamicCache. Tested on Gemma 4, Qwen2.5, TinyLlama, and Phi-3.5 across MQA/GQA/MHA.

There are real limitations and I'm upfront about them in the repo. Bounded prefill loses some info for dense similar-looking data. Collision disambiguation doesn't work but that's the 4-bit 2B model struggling, not the cache. Two-hop reasoning fails for the same reason. CPU RAM scales linearly (5.8GB at 1M tokens).

Still actively optimizing decode speed, especially at longer contexts. The current bottleneck is CPU-to-GPU transfer for retrieved tokens, not the model itself. Plenty of room to improve here.

GitHub: https://github.com/Babyhamsta/KIV (can be installed as a local pip package, no official pip package yet)

Happy to answer questions about the architecture or results. Would love to see what happens on bigger models with more VRAM if anyone wants to try it.

r/ClaudeCode acrus

Cheap agent subscriptions to pair with CC

I'm comfortable in general with what Sonnet offers for Pro subscription, but I'm out of quota now and then, though not nearly enough for Max. I tried to rely on Haiku more to save quota but it often fails on tasks I'd consider simple enough for it, I end up babysitting and wasting my time.

Augment Code and Codex are in my short list to try if I need to upgrade my workflow for $20+.

Antigravity has been praised for generous quota for a free tier but I suppose it was cut off because I can spend it within an hour, and I can't say good things about Flash, it's in the same worker bee league as Haiku, not to mention it often fails because of high load.

Currently I evaluate Windsurf, and the free models (SWE and Kimi) are ok, actually too good for a free plan. I wouldn't mind something a bit more powerful and trouble-free for $5-7. Just to offload Sonnet without compromising quality.

Are there any suggestions for inexpensive (<$10/month, including $0) agent to pair with Code with not so scarce quota, not so stupid models and not so flawed middleware (which is half the success)? Am I missing some options?

r/Anthropic gvermag

is Minimax 2.7 as good as sonnet 4.6 in english long form writing?

I came across an article today which basically mentioned you can swap out the guts of Claude to another AI model using, let's say, open router. I'm seriously looking at Minimax M2.7. Has anyone experimented with the marketing use case for long-form blogs, social, et cetera? That's a primary use case, and also analysis, of course. This is for the marketing use case in general. Any feedback from you guys on this?

r/AI_Agents WabbaLubba-DubDub

I spent 3 months building an open-source tool to orchestrate AI agents. Would love some brutal feedback.

Hey everyone,

For the past 3 months, I’ve been building an open-source project that has completely transformed my daily workflows, and I’m finally confident enough to share it with this community.

It’s a platform where you can build AI agents, assign them MCP tools or custom tools, and bring them all together in a DAG-like orchestration flow. You can essentially wire them up to handle complex, multi-step tasks. I initially built this to automate my own heavy-lifting at work and in my personal life, but it has evolved into something I think a lot of you will find highly useful.

I would love for you to take it for a spin. To remove any friction, I've set up a true 1-step installation process that works across macOS, Linux, and Windows. I'm looking for honest, critical feedback, specifically around:

  • Orchestration: Are there any new step types you'd like to see added to the DAG?
  • UX/UI: Can the chat and orchestration interface be improved?
  • Integrations: Which LLM providers should I prioritize next?

Full disclosure: This is an early pilot phase, and I am currently building this solo. You might bump into a few bugs, but if you open an issue on GitHub, I will jump on it and patch it right away.

Would love to hear your thoughts! Please find the repo link in the comments.

r/LocalLLaMA jumper556

Local AI with Gemma 4 and OpenWebUi

Good day everyone

I'm probably missing something, but is it still really this difficult to run a local LLM with memory and basic tool calling?

I did spend a couple of hours to test Gemma 4 with OpenWebUI running in Pinokio. I have a RTX 5090 and 64 GB of RAM hence I chose the 31b version.

For web search I did use tavily and I did enable memory features within OpenWebUI.

It all seens slow and the menory feature is not reliable. At the same time a local TTS integration is not that easy to setup. Basic questions seems slow, just saing hi triggers a "web search" with "no search performed" before responding.

What I'm hoping for:

- Full local AI setup

- Web search if not enough infornation is present

- Reliable Memory for past conversation facts which builds up knowledge about me over time

- Optional TTS function to speak with my Model

I did not try to setup open claw because it seems to be having too much access to my system without control, or should I better be taking this route?

Am I missing something? Is there still no reliable local LLM Setup for dummies with memory and TTS capabilities? I want to share healt, income or all kinds of other personal information with a local LLM and not a cloud AI solution.

r/ClaudeAI tactical_lampost

Are there any guidelines to which model to use?

Ive been using Opus 4.5 for most of my coding needs since Ive heard a lot about degradation/inefficiencies on the newer model. Should I actually be using Sonnet 4.6? Wondering what everyone elses experiences are with the different models.

r/SideProject NoctisHealthcare

Universal Substrate Conceptual Framework

Hey guys, I wanted to briefly explain what my theory Universal Substrate Theory, or UST, is and what it is trying to do as a physics model. It’s a substrate-first theoretical framework that supersedes GR, Quantum Mechanics and String Theory.

In short: UST is a substrate-first theoretical framework that tries to explain reality by treating spacetime, matter, and propagation as emergent expressions of a deeper underlying medium rather than as the final starting point.

It is not going to erase General Relativity or quantum mechanics, but instead will provide a wider ontological architecture in which gravity appears as a weak-field deformation regime, quantum behaviour reflects unresolved substrate expression, and stronger conditions may enter threshold-based states such as compression or deformity. In simple terms, UST tries to describe the deeper conditions from which existing physics may arise.

Please click the link to my OSF page below to view the full structured documents. That page contains the complete document set, including the main UST monographs, architecture papers, and supporting conceptual material. If you don’t trust the link, my name is Luke Kane in OSF.io

r/ClaudeAI n3oz22

lazyagent, a local TUI for watching what your coding agents are doing

https://preview.redd.it/xdx7czsw9sug1.png?width=1866&format=png&auto=webp&s=e591fb24f7faf1727b897f6a67beb0d6d77c2cd2

Lazyagent a simple way to see what your coding agents were actually doing across Claude, Codex, and OpenCode.

It is a local terminal UI that collects your agents runtime events and shows their activity in one place.

You can inspect:

- agents and subagents

- tool calls

- prompts

- outputs

- status changes

Sessions and projects are there to give that activity some context, so it is easier to follow what happened over time and which run belongs to which repo. And make it easy to see all the agents involved in a run at a glance and understand what each one is doing.

If you already use Claude, Codex, or OpenCode heavily, I would love to know if this feels useful and what you would want to see.

Github: https://github.com/chojs23/lazyagent

r/SideProject Ok_Low_7265

I'm 18 and built an AI college admissions predictor. 415 users, 18 paying. Here's what I learned.

Hey everyone. I'm a high school senior and I built AdmitOdds over the past couple months. It uses AI to predict your chances of getting into specific colleges based on your GPA, test scores, extracurriculars, and essays.

The idea came from my own college application process. I was constantly googling "what are my chances at X school" and getting generic advice or outdated data. So I built something that actually gives you a personalized prediction.

Some numbers so far:

  • 415 user accounts
  • 18 paying subscribers ($19.99/mo)
  • Built with Next.js, Supabase, Stripe, and Claude/GPT for the AI predictions
  • Launched about 2 months ago
  • $0 ad spend, all organic growth from Reddit, TikTok, and word of mouth

Biggest lessons:

  1. Getting users is not the hard part. Getting them to pay is. My free-to-paid conversion was brutal until I reworked the paywall to show a preview of the full analysis before asking for payment.

  2. Reddit has been my best channel by far. TikTok gets views but the conversion is terrible. Reddit users actually engage and sign up.

  3. Building the product was maybe 20% of the work. Marketing, support, and figuring out pricing has been the other 80%.

  4. Being young is actually an advantage in EdTech. Students trust someone who just went through the process more than a faceless company.

The site is https://admitodds.com if anyone wants to check it out. Would love feedback, especially on the onboarding flow. I know it could be smoother.

Happy to answer any questions about the stack, growth, or what it's like building a SaaS as a high schooler.

r/SideProject Aggressive-Sweet828

I scanned 50 vibe-coded projects for production readiness. Average: 57%. 100% had zero API timeout handling.

We spent 8 years building MVPs for founders. 50+ projects across every kind of SaaS, marketplace, and B2B tool. Along the way we got pretty good at spotting the gaps that
kill apps in production: missing auth guards, no error handling, no logging, external API calls with no timeouts.

Now we're watching vibe-coded projects hit the same gaps, just faster. The tools got better but the patterns didn't change. You still see posts every week about Supabase RLS
not being configured, apps leaking user data, or entire projects breaking the moment real users show up.

We got tired of spotting the same avoidable failures, so we wrote a static checker and tested it against 50 public repos pulled from Reddit, GitHub trending, and indie
founder communities. Mix of starter kits, side projects, and vibe-coded MVPs. It looks for 22 concrete issues like missing auth guards, no timeout handling, and absent
logging.

Here's what came out:

  • Average production readiness: 57%
  • 82% had no error boundaries
  • 76% had no logging or observability
  • 70% had no test files
  • 66% had no CI/CD
  • 100% of repos making external API calls had zero timeout protection

That last one stuck with me. One slow vendor API with no timeout and your entire app goes down. Every single project we tested had this gap.

r/SideProject mkfiez

I built a browser tool for cinematic 3D device mockups and product videos

Hey!

I've been working on Dropshot for a while -- it's a browser-based tool that turns your screenshots and screen recordings into cinematic 3D device mockups and videos.

You drop an image or video, pick a device (iPhone, Galaxy, iPad), set the camera angle, lighting, effects -- and hit record. You get a smooth cinematic product video in seconds. And you can tweak everything live while it's recording.

Some features:

  • Camera path mode : set keyframes and record a flythrough
  • Atmospheres : one-click moods that change background, device skin, and lighting
  • Effects : glitch, chromatic shift, noir, heat distortion, and more
  • Frameless mode : animate any shape, not just phones

No installs, no subscription. One-time purchase.

Launching soon -- you can check out the landing page and join the waitlist to get early access at usedropshot.com

Would love any feedback on the concept, the landing page, or what features you'd want to see.

Thank you!

r/ClaudeCode ConcentrateSubject23

Sent one request, all my usage.

I sent a request to continue a coding job. It is a continuation of a large job (which would consume tokens). But literally, it used all my usage and Claude never even responded to my request.

It went like this.

“Continue what you were doing yesterday”.

“Okay, I’ll continue” (runs bash command to get 1k lines of context).

Limit hit.

I’m a max user. I did the same thing yesterday, and it only consumed 7% of my usage that time. Is this a glitch?

This is on the weekend too.

r/ClaudeCode UnknownEssence

$100 Max Plan - 60% used in less than 2 days. Can we just not use Opus at all anymore?

How in the world am I at 60% usage when it's been less than 2 days of my 7 day window?

  • not running a bunch of sub agents in parallel
  • I use 200k max and /compact regularly
  • run 1 session at a time 90% of the time.
    • Occasionally, I'll /branch and open a second session for a quick task but then close it
  • sometimes switch to Sonnet for simpler tasks (as shown by the third bar in the image)

Very frustrating.

Has anyone hit 100% weekly limit, upgrade to the $200 Max plan and see if their weekly limit change from 100% to 50% or less?

r/LocalLLM Fit-Conversation856

Are you aware of the tradeoff openclaw and simmilar agents impose on you?

The problem with most modern AI agents is that they try to do too much. When you ask a standard AI agent to navigate a desktop, it’s essentially guessing its way through your interface, burning through expensive API credits every time it tries to "think" about where to move the mouse. This leads to two things: a massive monthly bill and a high chance that the AI will eventually click the wrong button and break the workflow.

LoOper was built to solve this by moving away from total reliance on the cloud. Here is why this shift makes a difference for anyone building automation.

It stops the "Token Drain"

In a traditional setup, the AI is the driver for every single micro-action. With LoOper, the AI acts more like a high-level manager. It looks at the screen, identifies the goal, and then triggers a "Chain"—a pre-recorded, human-validated sequence of actions that runs locally. Because the LLM is only called at key decision points rather than for every single click, you reduce your LLM usage by over 90%. You aren’t paying for the AI to "think" about things you’ve already shown it how to do.

Reliability through Neuro-Symbolic design

We use a neuro-symbolic approach, which is a fancy way of saying we combine AI reasoning with rock-solid logic. The "Neural" part (the AI) handles the strategy and understanding of the screen. The "Symbolic" part (your recorded actions) handles the execution.

Because the execution layer is based on actual human demonstrations, it doesn't "hallucinate." It doesn't get confused by a pop-up or a slight change in UI because it uses visual template matching to confirm it’s in the right place before it acts. If the AI doesn't see a safe path forward, it doesn't just guess, it follows the rules you set.

Privacy and Local Control

Beyond the cost, there is the issue of trust. LoOper is designed to be local-first. You can use local models like Ollama to keep your data on your machine. Your automation sequences stay in your own behavioral knowledge base, growing more capable the more you use it, without sending your entire desktop activity to a third-party server.

By separating the decision-making from the doing, LoOper creates automation that is finally predictable enough for business-critical tasks and cheap enough to run all day.

You can explore the documentation and join the beta at:

[LoOper](https://vozimachinelearning.github.io/LoOperWeb/index.html)

r/ClaudeCode chilebean77

Effort selection is gone

Is it just me or is the effort selection slider completely removed from the / menu in the vscode extension on Mac. I didn’t want to believe in the nerfing but now another smoking gun. Looks like I can set it in the cli but still.

Should I revert to a pre-nerf version or does that not help?

r/SideProject dropocol

I built a tool to automatically backup your github repos

Lately I have seen some people complain that they lost access to their github accounts and loose access to the repos in the process.

I have built a cli tool that can make a backup of your repos and run it as a cron so you always have a backup of your code available to you.

Here is the github repo.
https://github.com/dropocol/gitlo

r/ChatGPT Strong_Ad_1989

LLMs don't have as many meaningful use cases as I previously thought

sam altman says it would be great to treat it like a utility.

Joe Sixpack doesn't know anything about how electricity or computers work, yet they benefit him greatly, maybe access to higher level intelligence all the time would too.

But idk, I observed a few changes in me when I used Chat a lot and they weren't good.

So I almost quit.

I switched to regular old google search (or rather duckduckgo now; Google search has become too mac-y now, you know. It tries to do all the work for you and its results are mostly filtered through a particular lens that makes them less helpful) for every answer I want.

Yeah I can see benefits of Ai but they only seem good when talking about them in theory.

What actually happens is that you start becoming mentally slow, you overrely on one easy tool which actually makes you mentally slow in many tiny but useful skills.

First of all, coding as a use case for ai is ONLY good for prototyping. You can't replace humans who understand how the code works, can get into as much depth as needed when called upon, can store context for lifetime and have physical world ability, with fucking Ai that's just a prediction machine.

Yeah it's a neat way of storing all of human info, it can be a great thing to analyze for research, but whenever you start using it for any task it makes you more dull in that task.

Even something as simple as writing a short caption for social media, if you start having chat write it for you, each time you do it, your own ability to do that is not sharpened and over time it becomes dull — you lose the ability to do it yourself.

Chat is only useful till the time it's beneficial for its users.

But the way a human mind works is that it needs to experience things to truly register them. Like repetition of something leads to stronger neural connections. But each time you use chat for any task, you lose a chance to do all the small nuanced things that are involved in achieving what you want, but still get like 80-85% good results.

In the long run, this leads to problems.

I guess discussing this would also be a good use case for it.

Complex thoughts can't always be searched for. So AI, as representation of all human knowledge, can be helpful for discussion.

The alternative is sharing this on an appropriate channel where you can discuss with other humans.

r/ChatGPT ThalaNotOut7

ChatGPT is getting slow as chat increases. How to deal with this?

Hi everyone,

I'm learning Al related stuff from chatgpt. ChatGPT has given me 90 days of roadmap. Currently I'm on 28th day. The conversation happened between me and chatgpt is a lot till the day. And due to this it has become very slow. It takes a lot of time to load that particular chat/thread. Even I copied any snippet from the response then it takes almost 5 seconds to respond. Other chats are fine (like very quick).

Currently I'm on a "Go" subscription. So I don't understand how to tackle this issue. As far as I know it is not about the laptop.

I need your suggestions guys.

Thank you.

r/LocalLLaMA getbilinc

MemPalace scored 100% on LongMemEval (with LLM reranking). We scored 98.0% without any LLM. Here's why that matters more.

Hey r/LocalLLaMA,

You've probably seen the MemPalace hype. "100% on LongMemEval! Perfect score! Beat every product!"

Except... if you read the community notes:

- The 100% uses LLM reranking (Haiku/Sonnet). Honest held-out: 98.4%.

- The 100% LoCoMo uses top-k=50 exceeding session count. Honest top-10: 88.9%.

- The "30x lossless compression" turned out to be lossy and actually regresses scores.

- The "+34% palace boost" was just... standard ChromaDB metadata filtering.

We didn't spend months on this. We didn't need two people. We didn't need LLM reranking.

**Bilinc scored 98.0% on LongMemEval. No LLM. No API key. Fully local. Solo founder. Weekend project.**

And unlike MemPalace, we're not just a memory system.

**The scoreboard:**

| System | R@5 | LLM? | Honesty |

|--------|-----|------|---------|

| MemPalace (claimed) | 100% | Yes (Haiku) | Community noted |

| MemPalace (held-out) | 98.4% | Yes | ✓ |

| MemPalace (hybrid v2) | 98.4% | No | ✓ |

| **Bilinc** | **98.0%** | **No** | **✓** |

| MemPalace (raw) | 96.6% | No | ✓ |

| Mastra | 94.87% | Yes (GPT-5) | ✓ |

| Hindsight | 91.4% | Yes (Gemini) | ✓ |

Benchmark result reachable from > https://github.com/atakanelik34/Bilinc/blob/main/benchmarks/LONGMEMEVAL_RESULTS.md

**Per-type breakdown (Bilinc):**

| Category | R@5 |

|----------|-----|

| Knowledge Update | 100.0% |

| Single-Session User | 100.0% |

| Multi-Session | 99.2% |

| Assistant | 96.4% |

| Temporal | 96.2% |

| Preference | 93.3% |

**What's the trick?**

There is no trick. User turns only. Hybrid retrieval (semantic + keyword). That's it. The entire field is over-engineering memory extraction when verbatim text with good embeddings gets you 98%.

**What Bilinc does that MemPalace can't:**

MemPalace stores conversations. Bilinc manages state.

- **Knowledge graph** — entity relationships across sessions

- **AGM belief revision** — contradiction detection ("User said X, then contradicted with Y → Y is current")

- **Z3 verification** — formal fact consistency checking

- **Snapshot/diff/rollback** — full state history

- **MCP tools** — production integration, not just a library

MemPalace: "What did the user say?"

Bilinc: "What's true right now?"

**The uncomfortable truth for MemPalace:**

They needed months, two people, LLM reranking, and got community-noted for inflated claims.

We needed a weekend. Solo. No LLM. 98.0%.

And we're not even a memory system — we're a state management system that happens to be good at memory.

**Reproduction:**

```bash

pip install chromadb bilinc

curl -o longmemeval.json https://huggingface.co/datasets/xiaowu0162/longmemeval-cleaned/resolve/main/longmemeval_s_cleaned.json

python benchmarks/longmemeval_bench.py longmemeval.json --mode hybrid

```

**GitHub:** github.com/atakanelik34/Bilinc

**License:** MIT

**Dependencies:** chromadb, bilinc. That's it.

Built by a solo founder at ReARC Labs. No VC money. No marketing budget. Just code.

We're not here to compete with MemPalace. We're here to ship.

r/SideProject DankMuthafucker

building a desktop app that edits your talking-head videos automatically. no cloud, no subscription. here's where it's at.

another day of building ClipShip in public.

a desktop app that edits your talking-head videos and turns them into clips for reels/shorts/tiktok. no timeline. no editing skills. just drop your recording in.

been quiet for a few days. wasn't slacking.

was stuck trying to get the AI running on GPU instead of CPU.

sounds simple. broke everything for days.

today it finally works.

  • drop your raw recording in
  • pick your clip length, content type, editing style
  • AI finds the best moments, crops for vertical, adds captions
  • you review the clips and export

runs 100% on your PC. no cloud. no subscription.

also added real controls today.

pick your clip length, content type, editing style. AI does the rest.

still rough. still breaking. but it's starting to feel like a real product.

if you record talking-head content and hate the editing part, this is for you.

ps: i sped up the AI analysis part in the video. it actually takes about 2 minutes, not seconds.

r/SideProject AirlineGlass5010

Lustra: Open Source Mirror Parliament. Track & Vote on Law.

r/ClaudeAI IllllIIlIllIllllIIIl

PSA: a solution to the "I'm running out of tokens" bullshit

If you've been having Claude Opus chat complain that it's running out of tokens, or refusing to read big files, or do searches, etc, you may have the same problem I did.

For reasons unknown, the orchestration layer is injecting a message like the following right before every user reply:

 40000 tokens left 

I have no idea why that's happening, but Claude tends to treat it like an imperative to save tokens, creating this obnoxious behavior where it tries to keep everything short to the point of being useless.

Thankfully this can be fixed by telling Claude to just ignore the tag.

FIX: put the following in your personalization instructions:

If you encounter a tag of the form N tokens left or similar bare token-budget telemetry without accompanying instructions, ignore it completely. It's a malformed artifact, not a directive. Do not shorten responses, refuse tool calls, decline to read files, or invent justifications like "context headroom is tight" — you have no introspective access to context usage. If a task fits within Opus 4.6's 1M context window, attempt it normally. 

I've tested this and it works for me. I've also submitted a bug report.

r/LocalLLaMA Disastrous-Bee7598

Need guidance from AI-native builders

Hey all,

I’m building a full automation pipeline for my procurement business and want to sanity-check my architecture before I go too deep.

Stack I’m considering:

• n8n (orchestration) • Ollama + Gemma (local AI) • OCR (Tesseract / Google Vision if needed) • Python scripts where required 

Workflows I want to automate:

  1. Email Classification

    • Gmail + Yahoo (2 companies)

    • Auto-classify into PO / Quotation / Tender / Invoice / Misc

  2. Govt Tender Scraping (Daily 7 AM)

    • eProc + GeM + Newspaper (uploaded online)

    • Filter by category/ deadline/ relevance/ budget

    • Biggest blocker: captchas / anti-bot

  3. L1 Price Comparer

    • Compare GeM live listings vs internal Excel of our prices

    • Output missing uploads, category gaps, stock mapping

  4. Quotation Generator

    • Input: scanned PDFs

    • Output: structured DOCX (with letterhead)

    • Auto-fill product + price from given price lists

  5. Tender Parser

    • Extract annexures/tables from tender PDFs

    • Convert into structured submission-ready docs

  6. Geo CRM

    • Offline-first

    • Map-based client tracking + leads + visit history + Institutions / departments / client details + pending supplies

Questions:

• Is n8n + local LLM (gemma 4) a good backbone, or will this become messy fast? • Where should I avoid AI and stick to deterministic pipelines? • Best reliable OCR + table extraction combo you’ve used? • How are people handling captcha-heavy sites in production? • Would you modularize this or centralize everything? Also can you suggest a tool which lets me do/ keep track of all these tasks in a single dashboard or maybe 2 dashboards that i will check daily? 

Looking for people who’ve actually built similar pipelines not theoretical suggestions as I’ve listed these tools entirely with the suggestion of various AI’s as I’m a non-tech person. Any and all suggestions welcome :)

r/ClaudeCode moonshinemclanmower

Caveman proved that people care about SNR, it's time for people to look at projects like # gm

I'm the author of #gm, or glootius maximus a plugin for various programming agents with some highly opinionated approach to signal-to-noise improvements, people are impressed with token reduction strategies lately cause they're 'feeling the crunch'

I'm about 12-thousand hours of supervision and tweaking deep with this project, it original stated as a simple repl bridge for just-in-time execution, and it got expanded to a daily driver, its the only tool I personally use, and some of my developer friends also use and contribute to it and help whiteboard ideas.

This is maintained using programming agents, I'm sharing my own personal tools so that the public can have a look at it, it does include two 'papers' that is an attempt at formally summarizing the best parts of my efforts.

https://anentrypoint.github.io/gm/paper.html
https://anentrypoint.github.io/gm/paper2.html

ymmv, it works for me, if I'd have to attempt an educated, but biased guesstimate of what you get out of it or what it feels like compared to 'raw claude code' its something along the lines of a multiplier for productivity, validating through code execution instead of hallucinating reasoning threads 'gets things done' in my opinion in a way that is far less error prone, and it also reduces tokens that gets filtered before display, employing more signal from the tokens spent to improve SNR

The project receives daily improvements, it is work in progress, however I do also use every day, all day long.

If your coding environment is online like https://claude.ai/code this plugin is tested to work in that environment, providing a protentially safer way to test it.

```#!/bin/bash

bun x gm-cc@latest -p -y```

r/SideProject TechiesGonnaGetYou

Struggling with subscription vs Fixed price.

Building an app (fitness focused - a better version of a small part of Strava) and struggling with a pricing model.

Ideally I’d want to have it as a subscription so I can be funded to continue to build new features on it. On the other hand as a user myself I tend to search out OTP apps. In this case the app doesn’t have any server costs (for now) and I feel like I’m struggling to justify the subscription element of it.

I feel at the moment my app is single focused so a freemium model is less straight forward to apply.

How do you weigh up the different pricing models? What’s your thought process?

r/ClaudeCode liloventhegreat

Goodbye Claude! I loved you.

I think it might finally be time for me to part from Claude code. The benefits just aren't really there anymore. I pay $200 a month and run out of weekly usage with 2 full days left til reset.

Like a month ago, there were a good few weeks of Opus 4.6 dominating any task, with barely and token usage. Now it feels like the model has gotten worse and costs 4x as much.

r/AI_Agents Defiant_Fly5246

RFC: What if AI agent workflows were just Markdown files?

I've been building AI agents for the past year and kept running into the same problem: I'd figure out a great multi-step workflow (research → summarize → review → send), but it lived in my head or buried in chat history. No way to share it with someone else, version it, or guarantee it runs the same way next time.

Existing solutions are either too heavy (Airflow, Temporal) or too rigid (Zapier, IFTTT). And custom DSLs or YAML-based formats have a fundamental problem: LLMs can't reliably generate them because they're not in the training data.

I'm proposing Recipe — a Markdown-based spec for describing shareable, executable agent workflows.

Here's what a Recipe looks like:

# Weekly Newsletter Digest

## Steps

### 1. Research Search for the top 5 AI articles from the past week. Prioritize original reporting over aggregation.

### 2. Synthesize Write a newsletter briefing — one paragraph per story, plus a "big picture" section connecting the themes. Keep it sharp and opinionated, not a corporate report.

### 3. Review ⏸️ **Human Approval** — Review the draft before sending.

### 4. Send
Email the approved draft to the subscriber list.

That's a complete, executable workflow. The natural language in each step is the agent's prompt. Same document is both human documentation and machine instructions.

Why Markdown specifically:

  • LLMs generate it fluently — no new syntax to learn, no few-shot examples needed
  • Humans read it with zero tooling — renders on GitHub, in any editor, everywhere
  • Steps can mix prose (agent uses judgment) with code blocks (deterministic execution)
  • Human approval gates are built-in (⏸️ blocks pause for confirmation)

How it's different from just prompting:

  • Structured — defined inputs, outputs, step ordering, failure handling
  • Shareable — it's a file, not a chat message. Version it, fork it, PR it.
  • Resumable — if step 3 fails, pick up from step 3, not from scratch
  • Runtime-agnostic — the spec defines the format, not the execution engine. Any agent framework can implement a Recipe runner.

I'm looking for feedback on:

  1. Is Markdown the right base format, or is there something better?
  2. How should step failures propagate? (abort / retry / skip)
  3. Should recipes support parallel steps, or keep it strictly sequential?
  4. What workflows would you want to write as Recipes?
  5. What's missing from the spec that would block you from using it?

I've published an early RFC with the full spec, 3 example recipes (newsletter, staging deploy, PR review), and design principles. Dropping the link in the first comment.

This is genuinely an RFC — the spec is v0.1 and I want community input before solidifying anything. Issues and PRs welcome.

r/LocalLLaMA Outrageous-Ideal-198

Opened my LLM API bill. Now I'm considering selling organs. How are you all keeping token costs under control?

Been building a side project on top of the OpenAI API for a few months. Thought I had a rough handle on the costs. Then last week I actually sat down and did the math on what it'd look like at scale and... yeah.

I was stuffing way too much into every context window, repeating the same prompt on every single call, and generally just being sloppy about it. Not a disaster yet, but it's clearly unsustainable. I've started manually trimming prompts and experimenting, but it feels very ad hoc.

Curious whether anyone found a proper workflow or tool for this (something that actually helps you track and reduce token usage across calls automatically)? I've been searching and honestly didn't find anything good.

r/ClaudeAI KingstarOMEGA

I built a Claude skill to stop overthinking and cut token usage by ~60–80%

Claude tends to overthink and produce very long responses, especially during coding tasks. With usage limits being a real issue, this was burning through tokens much faster than expected.

While experimenting, I came across the idea of “antigravity-style” task handling - where the goal is faster, more direct, and controlled outputs instead of verbose reasoning.

That got me thinking, so I built a custom Claude skill around this idea.

It:

- Reduces token usage (~60–80%)

- Cuts unnecessary reasoning / overthinking

- Produces cleaner, more structured outputs

Instead of letting Claude freely generate long explanations, this adds constraints and forces it to stay focused and efficient.

I’ve been testing it on coding and explanation tasks, and the difference is pretty noticeable in both speed and token usage.

Still improving it , would love feedback or ideas.

If anyone else is struggling with token limits or unnecessary overthinking, this might help. It responds faster, focuses directly on the task, avoids reading or dumping unnecessary context, and gives more efficient outputs overall.

GitHub: https://github.com/KINGSTAR-OMEGA/claude-token-optimizer

r/SideProject Ok_Total_6074

I built a productivity app that forces you to focus on one thing at a time. Need feedback.

After months of building I finally shipped FocusCapsule. The idea is simple: you pick one task, commit to a timer, and the app locks you into that session. No switching between 15 things. One capsule, one task, done.

Some things that make it different:

  • Live focus rooms where you work silently alongside strangers (body doubling actually works)
  • Your streak resets to zero if you quit early. Sounds harsh but it keeps you disciplined
  • Notion overlay if you need reference material mid-session (notes and task list too)
  • Achievements and Challenges
  • And much more..

Built it because every productivity app I tried was just not it. I didn't need more organization, I needed something that makes me sit down and do the work.

It's free, it's live. Would love to hear what you think.

focuscapsule.com

r/SideProject Mebk

I built a free content tool that rewrites your posts using named psychology principles and tells you WHY each change increases engagement

Hey everyone, just launched something I'm really proud of called Content Machine at brisktool.com/content-machine

Paste any content (blog post, product launch, meeting notes, even a rough idea). It generates 30 optimized variants across 10 platforms (Twitter, LinkedIn, Instagram, TikTok, etc.) with:

3 variants per platform with different hooks and angles

Engagement scoring that rates each variant 1-100 with specific tips

Platform preview mockups showing exactly how your post looks on Twitter, LinkedIn, Instagram

A "Make it Viral" button that rewrites using 12 named psychology principles (Curiosity Gap, Negativity Bias, Social Currency, etc.) and shows annotations explaining WHY each change works with academic citations

30-day content calendar with optimal posting times per platform, downloadable as CSV for Buffer or Hootsuite

Custom brand voice profiles where you describe your voice and it matches it exactly

The psychology annotations are the part I'm most proud of. Instead of just giving you "better" content, it teaches you WHY it's better. Each rewrite shows things like:

"Curiosity Gap (5x higher CTR, Loewenstein 1994) - Here's what changed creates tension the reader needs to resolve"

It's part of BriskTool at brisktool.com which has 225+ free online tools that all run in your browser. No file uploads, no accounts.

Free to use. Would love feedback on what's missing.

r/Anthropic Jealous_Tennis7718

Getting quite annoyed by this sub

Every time I scroll through this sub, it’s just non-stop whining and complaing about Claude’s performance, token burn, or the Pro plan/Max plan limits kicking in too fast. Honestly, it makes some of you look clueless.

​I’m on a MAX 20 plan and I’ve literally never hit a limit. I use it to build software you guys couldn't even dream of, and it’s making me serious money that I’m already reinvesting into new projects.

​Does Claude make mistakes? Sure. Can it be annoying? Yeah. But 99% of the time, the problem is you. You're just a complaining bot. Instead of crying on Reddit, learn how to actually use the tool. Get better.

​Kind regards, someone who is on Opus 4.6 with the 1 million context window 24/7 foing multiple claude code sessions at once.

Edit: The funny part is the crying people are now downvoting this post 😭😂 what happened? Hurt you on your ego?

r/ClaudeAI helloimretard

This feels like middle school all over again

r/LocalLLaMA Lazy-Safe3007

using multiple same level opensource llms for coding?

Hello, so I've seen people do thing where they have an LLM for planning(usually a expensive one) like Opus and a cheaper LLM for execution.

Now with Open Source the thing is there's good options but to be honest nothing comes close to the feel of the proprietary LLMs. So I was wondering has anyone tried to combine two or maybe more(opensource or atleast cheap) LLMs of the same power and maybe gotten good results?

Like I was thinking at benchmarks, you see some open source models being good in one area and some being good in another. If we combined lets say Kimi + GLM 5 + Deepseek maybe.. would that give you better results or just noise? I understand that there would be some challenges as to selecting the best response a judge would be required, but what good does the judge do if the judge is at the same level as the others.. anyways maybe ways for LLMs to self correct using other LLMs responses potentially? Maybe them all agreeing to one thing and giving that response to the user? there's a lot of possiblities.

Has anyone done this before and if so can someone link it please?

The proprietary LLMs are so expensive that even using these 3 simultaneously would potentially be cheaper.

r/SideProject eddyrun

I built a website health check tool — instant DNS, TLS & mail auth inspection

I've been working on a website health check tool — you enter a URL and it inspects DNS, TLS, mail auth (SPF/DMARC), WHOIS status, HTTPS redirects, etc. No account needed, results in ~10 seconds.

It's different from uptime monitors: instead of watching your site 24/7, it gives you a deep diagnostic of whether your site is properly configured. Things like expired TLS certs, broken SPF records, missing DMARC, domain expiry warnings.

Still early, would love feedback: https://imspect.in/

Christophe.

r/SideProject Goshua

My launch flopped. Should I pivot or move on?

I recently launched lovinghate.com but its not getting any traction. I realized that distribution is much harder than building.

I have the domain lovehate.app and want to pivot. the idea is a site where devs share projects and get feedback as love or hate. A directory for honest opinions

Does this sound like something you would use or should i leave the love and hate idea behind?

r/SideProject Particular_Budget946

I can build almost anything… except a startup idea that doesn’t s*ck 😅

I can build almost anything… except a startup idea that doesn’t suck 😅

6+ years as dev (APIs, backend, scaling, fixing other people’s mess 🫠), currently leading projects…

But when it comes to MY own startup?

Brain: “Let’s build something big”

Also brain: “What if it’s just another useless SaaS nobody cares about?”

So let’s try this differently.

You bring:

* A real problem (something annoying, not “AI for everything”)

* Something people might actually pay for

* Even half-baked ideas are welcome

I bring:

* Brutal honesty (if it’s bad, I’ll say it 😂)

* Technical validation (can it actually be built?)

* Possibly… execution if it’s interesting

Goal:

Not to build a unicorn overnight

Just something REAL that people use (and maybe makes money)

Drop your ideas 👇

Worst case: we roast them

Best case: we accidentally start something

r/ClaudeAI randoreddituser22

Anthropic banning accounts over kid usage?

ive been seeing some pretty worrying posts here about people losing their account because they suspected kid usage . I have a 16yr old son who is a coder and a 17 yr old daughter who is in college (graduated hs early)

my son uses claude often for his coding projects using my computer and account. and my daughter routinely uses claude to organize and create study materials for college. of course they also use it for general things too.

im also a software engineer and I am heavily invested in the AI workforce. Max 20x subscriber and I go through about $100 of api usage a month as well from my web deployed apps making opus and sonnet api calls.

point is.. am I at risk if losing my access now because my kids have used it?? I moved from open ai to anthropic because openAI is a shit company. . but this move by anthropic has got me questioning my brand loyalty.

what are the actual rules? I didnt find anything that said my almost adult children cant use my acount, only that you have to be 18 to create one

r/homeassistant jeffreySJ

My (ai built) home server dashboard

I know I'm late to the party (ai created dashboards are so last week) but I just finished up my home server dashboard.

I'm a new-ish home lab hobbyist (trying to own my data) and I wanted to be able to monitor my system, especially my drives. I don't have many or a great backup plan so Im also trying to make sure the age/temp of my drives are good (I get email notifications about temp) and I'm doing at least something to avoid catastrophic failure.

I used Claude to help build this and it took a lot of trial and error but I'm pretty happy with it

r/LocalLLM build_an_ai_machine

The PCIe 3.0 Multi-GPU Trap? Intel B70 vs. AMD W9700 vs. M5 Studio for Gemma 4 (70B Goal)

Hello everyone,

I’m building an AI workstation on an HP Z8 G4 for local coding LLMs. My immediate milestone is the new Gemma 4 31B, with a roadmap to scale to 70B+ models and experiment with fine-tuning 4B/7B variants.

The Setup:

  • Chassis: HP Z8 G4 (Dual Xeon Gold 6132 / 32GB RAM).
  • Planned Upgrades: 2nd Gen Intel Scalable CPUs and scaling to 384GB DDR4.
  • The Bottleneck: I am restricted to PCIe 3.0.
  • The Strategy: Start with one 32GB GPU now, adding 1–2 more later to handle 70B+ parameters.

The GPU Shortlist:

  1. Intel Arc Pro B70 (Battlemage): 32GB VRAM ($949). Best VRAM/dollar. I’m very interested in the XMX engine performance here.
  2. AMD Radeon Pro W9700: 32GB VRAM ($1,349). Higher raw TOPS, but at a $400 premium.
  3. The Pivot (Mac Studio M5 Max): 128GB+ Unified Memory. Ditching the modular PC route entirely.

My Core Concern: Multi-GPU Scaling on PCIe 3.0 While a single card running a model that fits in VRAM is unaffected, I’m worried about the future. When I add a second or third card for 70B models, the PCIe 3.0 bus may become a massive latency bottleneck for inter-GPU communication (P2P). Unlike Nvidia’s NVLink, I’m concerned about how oneAPI (Intel) and ROCm (AMD) handle tensor vs. pipeline parallelism across an older bus.

Questions for the experts:

  • Intel Multi-GPU Stability: How is oneAPI/IPEX currently handling multi-B70 configurations? Does the overhead on PCIe 3.0 tank tokens-per-second once you move to a split-model deployment?
  • The Bandwidth Wall: At PCIe 3.0 speeds, does AMD’s superior TOPS actually provide a real-world benefit for multi-card inference, or am I effectively "bus-limited" regardless of the compute power?
  • Training over PCIe 3.0: For those fine-tuning across two cards on legacy lanes, is the experience tolerable, or does the lack of P2P bandwidth make the latency a dealbreaker?
  • The "Headache" Tax: Is the 128GB Unified Memory on an M5 Studio worth the premium just to avoid the multi-GPU troubleshooting and driver-stack volatility of a multi-Intel/AMD Linux build?

I'd love to hear from anyone who has attempted to scale 70B models on older workstation lanes in 2026.

Thank you for reading!

r/SideProject SpecificBlueberry150

i built an app can set any webpage as phone live wallpaper

may be you can try😁

r/ClaudeCode cloudrb

Has anyone tested claude-mem yet?

According to the repository, it provides Claude Code with permanent memory and reduces token consumption by 95% per session. It also claims to avoid context limits, allowing you to pick up exactly where you left off.

You can find more details here: https://github.com/thedotmack/claude-mem

r/SideProject Moist_Tonight_3997

Built a free macOS clipboard manager (Buffer) — 140+ ⭐ / 400+ downloads

Hey folks,

I’ve been building Buffer, a lightweight clipboard manager for macOS as a personal side project.

Just shipped v1.7.0.

What it does:

  • 📋 Clipboard history with fast cycling
  • 📌 Pin important items
  • 🔍 Instant search
  • 🖼️ OCR (extract text from images)
  • ⌨️ Custom shortcuts
  • ⚡ Fast + minimal

What’s new:

  • 📌 Pinned items with smarter navigation
  • 🖼️ Image file support + quick save
  • ⚙️ Dedicated settings window
  • 📏 Configurable history limit
  • 🔍 Auto-focus search
  • 🖱️ Fixed paste focus issue

Stats so far:
140+ GitHub stars, 400+ downloads

Why I built it

Most clipboard tools felt either:

  • too heavy
  • paid
  • or didn’t fit my workflow

So I started building something simple that just works the way I want.

About the project

  • Solo dev
  • Completely free & open-source
  • No tracking / telemetry
  • No monetization

Links

Would love feedback or ideas on what to improve next 🙏

r/SideProject mheryerznka

How are you doing Mobile QA with agents

Mobile is way more of a headache than web for this. If you’re actually using agents for iOS/Android QA, what does your stack look like? How do you handle state and navigation without it constantly failing? What are some best tips and tricks to create best QA agent. So guys suggested Appium MCP. What are your thoughts on this?

r/ChatGPT zer0srx

Grok 4.2 multimodal better than Claude, ChatGPT or Gemini at image analysis now

Hands down win for latest grok 4.2 multimodal model vs chat that went nanny claude that became ethical and Gemini couldn't scan the image needed a pdf conversion for a 14x combined screenshots image with a finance quiz I already solved or can use the web to fill too.

I don't need ethics when I solve a test or check my own answers to it too, this is why most western llms don't want to compete, yet that's what you'll get instead from claude or chatgpt, wich apparently lost the ability to scan multiple combined screenshots since last September where it could do that perfectly too now fail to read the image or the pdf file too.

Goes a long way grok is better at trading and analysis in general because it doesn't go therapist talk and just does the required task as it should, can scan combined images and charts in finance better than most llms todays like chatgpt used to be able to last year and no longer can now.

Dont believe me test it for yourself combine 10x screenshots and put the models to scan and solve your text from the longer image like I did and see how they do or fail and why.

r/AI_Agents Obvious-Candy-6838

How is the job market of agentic ai.

I have started learning agentic ai and have covered basics, like creating CLI chat bots, uses of tools, multi-tools, basic RAG... but like always after giving time and energy i am having doubts about whether it is worth learning all this or not?. Will I be able to switch to a better job or not and all sorts of similar questions. So can anyone help me clear this doubt and mind fog.

r/SideProject Then-9999

I don’t think my last project failed because of the idea

I’ve been thinking a lot about why my last project didn’t work,At first I thought it was the usual stuff:maybe the idea was bad,maybe the product just wasn’t good enough
But now I’m starting to think it was something else.
I avoided talking to people.
Not because I didn’t know I should but because I genuinely didn’t know how.
I’d open a message box and just sit there.
Not knowing what to say,how to not sound awkward or how to keep the conversation going
So I’d close it and go back to building instead, It felt like progress But it wasn’t.
When I finally forced myself to try:
some people replied once then disappeared,and I had no idea what I did wrongI kept wondering:was it my message?or was it the idea itself?
Now I’m trying to understand this part better,Not advice like “just talk to users”
I mean the actual experience of it :what you say,what makes people reply,what makes them stop
If you’ve ever been in that situation(where you wanted to reach out but didn’t know how
or conversations just died after one reply)
I’d really like to hear how it went for you Even if you’re still stuck in it
even if it didn’t work, that’s actually what I’m trying to understand

r/LocalLLaMA Saladino93

Hitoku, open-source local macOS context aware assistant with Qwen3.5/Gemma4

Hi all,

I've been building Hitoku. An open-source, voice-first AI assistant that runs entirely locally. No cloud models, nothing leaves your machine.

It supports Gemma 4 and Qwen 3.5 for text generation, plus multiple STT backends (Parakeet, Whisper, Qwen3-ASR).

It's context-aware; it reads your screen, documents, and active app to understand what you're working on. You can ask about PDFs, reply to emails, create calendar events, use web search, all by voice.

Examples:

- query a pdf document, https://www.youtube.com/watch?v=ggaDhut7FnU

- reply to email, https://www.youtube.com/watch?v=QFnHXMBp1gA

- and with ctrl+S is just voice dictation (with optional polishing)

I currently use it a lot with Claude Code, Obsidian, notes, as well as to read papers, or to some write emails (where I do not need to provide context, as it understands alone).

Code: https://github.com/Saladino93/hitokudraft/tree/litert

Download: https://hitoku.me/draft/ (free with code HITOKULANG, valid for 50 downloads)

P.S. Gemma 4 via LiteRT caveat

If either bothers you: use Qwen 3.5 instead (pure MLX, no LiteRT needed), or wait for the upstream fixes. Working on running Gemma 4 natively via MLX (a bit slower wrt LiteRT but generally safer, and with more control).

r/SideProject Dismal-Cell-9770

Built a boxing training app with AI coaching in 2 weeks using Next.js

I box as a hobby and got frustrated that I had no way to track my actual progress or figure out what to work on between sessions. So I built something.

Jably lets you log your sessions and sparring notes. The AI analyzes your history and gives you coaching notes, rates your sparrings, tells you what style you're developing and gives you a concrete plan for the next session.

Stack: Next.js, Supabase, Stripe, Auth.js, Anthropic API.

No users yet, just launched. Would love feedback on the idea or the product.

The website is in comments

r/ClaudeAI PierreCamembert

I shipped 7 apps in a few months using Claude as my entire dev team.

Here is a list of every project I'm made with Claude over the last few months.

Everything was made entirely with Claude.

 

Burn After Reading (readandburn.app) - Location-based ephemeral messaging for (iOS)

Drop anonymous messages at real-world GPS coordinates. Someone has to physically walk to the spot to read it, then it's destroyed forever. No accounts, no sign-up. Add friends by standing next to someone in person.

This has been one of my successful project and has a decent amount of real world users. Basically all spread by word of mouth and demonstrating in person. Most users are based in London so would be really cool to get some other people across the globe using it.

 

DEEC (deec.app) - Customisable control surface for Mac (iOS)

Turn your iPhone into a Stream Deck-style controller. Buttons, faders, and knobs that connect to your Mac over local Wi-Fi. Trigger keyboard shortcuts, launch apps, run shell scripts, control volume/media, all from custom multi-page layouts. Comes with a lightweight Mac companion app that sits in the menu bar. React Native + Node.js + WebSocket.

 

SORTED.NEWS (sorted.news) - AI-powered daily news briefing

A brutalist, no-BS news digest. Pulls headlines from The Guardian API, uses Claude to summarise and group them into a 5-minute daily briefing. 3 lead stories, briefs, and an obscure story you wouldn't find elsewhere. Stateless, no accounts, no tracking. Next.js + Anthropic API.

 

Ashfeld (ashfeld.xyz) - Medieval browser strategy game

A Tribal Wars-inspired persistent multiplayer strategy game with a dark pixel-art aesthetic. Build villages, train armies (10 unit types), forge tribal alliances, and conquer a 500x500 tile world. All pixel art assets generated via Google Gemini. Next.js + tRPC + PostgreSQL + PixiJS.

 

offMenu (offMenu.tech)

Stops the menu bar from interrupting when working in full screen on Mac.

 

rimJob (rimJob.tech)

Turn your trackpad edges into volume and brightness sliders.

 

Semina (semina.app) - Seedbox hosting platform

Self-service seedbox hosting with automated Docker provisioning. Pick a plan, pay, and get a running torrent client with a modern dashboard in under 60 seconds. Cross-seedbox migration, built-in WireGuard VPN. Next.js + Docker + qBittorrent API.

I literally just started this because my old seedbox provider shut down. Its very minimal and only does what I require from a seedbox. Hopefully others will use it and enjoy it.

 

r/singularity Anen-o-me

What is my purpose? "You chase pigs." Oh my God.

Wish I had context.

r/SideProject Many-Bumblebee7925

Roast my landing page building a cheaper Status Page Service for solo dev

Hey r/webdev, I would love some honest feedback on my landing page and the idea of the product itself.

The idea: a simple, affordable status page tool aimed at solo devs, freelancers, and small agencies. Existing tools like Statuspage.io start at $29/mo, which feels steep if you're a one-person shop who just needs something clean and reliable for your users to know the status of the page.

statsy.page

Free tier includes 1 page, 3 services, 30 subscribers. Pro is $15/mo for custom domains, more pages, and removed branding.

A few specific things I'm unsure about:

  1. Does the pricing make sense for this market?
  2. Is the free tier generous enough to get people to try it?
  3. Does the landing page communicate the value clearly, or is it confusing?
r/SideProject davidtranjs

A free browser screen recorder. Smooth auto-zoom. AI subtitles. AI voiceover. Export to any aspect ratio.

r/ChatGPT the_sexy_date

telling lies?

r/StableDiffusion fyrean

Free open-source tool to instantly rig and animate your illustrations (also with mesh deform)

If you haven't seen it yet, a model called see-through dropped last week. It takes a single static anime image and decomposes it into 23 separate layers ready for rigging and animation. It's a huge deal for anyone who wants a rigged 2D character but doesn't have hundreds of dollars lying around.

The problem is that getting a usable result out of it still takes forever. You get a PSD with 23 layers (30+ if you enable split by side and depth), and you still have to manually process and rig everything yourself. And if you've ever looked into commissioning a Vtuber model, you know rigging alone runs $500 minimum and takes weeks or months. That's before you even think about software costs: Live2D is $100 a year, and Spine Pro is $379 (Spine Ess is $69 but lacks mesh deform which is required for these kinds of animations).

So I built a free tool that auto-rigs see-through models so you don't have to spend hours doing it manually

I'm not trying to compete with Live2D, I'm one person. What I made is a mesh-deform-capable web app that can automatically rig see-through output. It handles edge cases like merged arms or legs, and only needs a few seconds of manual input to place joints (shoulders, elbows, neck, etc.) if you want to tweak things. I also integrated DWPose so it can rig the whole model for you automatically, though that requires WebGPU and adds a 50MB download, so manual joint placement is a totally fine alternative and only takes a moment anyway.

The full workflow looks like this:

Static image -> background removal -> see-through decomposition (free on HuggingFace) -> Stretchy Studio = auto-rigged and ready to animate

The app handles multi-layer management, separate draw order, and uses direct keyframe animation similar to After Effects. There are still bugs I'm working through, but all the core features are in.

On the roadmap:

  • Export to Spine and Dragonbones
  • A standalone JS render library for loading and displaying characters rigged in the app (similar to Live2D's Unity/Godot/JS runtimes)

Live2D's export format is completely closed with no documentation, so that one's off the table for now.

Would love feedback, bug reports, or feature requests. This is still early but it's functional and free to use.

https://github.com/MangoLion/stretchystudio

r/ClaudeCode UnrelaxedToken

All Gone in less than 30 min^^

r/StableDiffusion Radiant-Photograph46

The mysterious science of LoRA training (sdxl)

I find myself still unable to train good looking character loras for illustrious, and I don't know what I'm doing wrong. I'm using a 3D character for this purpose (blender model) and I've tried replicating training settings from other people's lora that I consider great, but I still have questions.

  1. Can you train actually train a 3D character on illustrious or is it fighting the model too much? (considering it seems much better at handling 2D visuals)
  2. I've noticed most great LoRAs out there are using hundreds of image in their dataset, usually 200 to 400. My dataset is more on the side of 50, is there an actual benefit to such large datasets?
  3. Repeats. Sounds like 10 epochs of 10 repeats would be equivalent to a 100 epochs of 1 repeat, but is that truly the case? I always struggle to figure out how many repeats I should be using.
  4. TE. I noticed some people do not train the text encoder at all, anyone has feedback on the benefits of doing this?
  5. Batch size. I want to use 6 or 8 batch size, because I can. But I'm not sure how I need to dial the other settings based on that, in particular with learning rate and repeats.
  6. Removing backgrounds. Beside the fact that is makes captionning easier, is there an actual benefit, have you noticed it yielded better results?

I have noticed the following issues with my attempt at training, perhaps this will help someone point me in the right direction on what I'm doing wrong here:

  • Style locking in too much. For example I like prompting with "dark, dim lighting" keywords which works well with illustrious, but my loras will make the result much brighter than the base model (even when tagging the dataset with "day"). Dataset has a couple night shots but they are mostly bright daylight.
  • Faces train fast and seem to overtrain before clothes, making it impossible to find a good balance. Either one is overtrained or the other is undertrained. (I do have less full body shot than upper body and portrait, but this is apparently a desired ratio?)
  • I have settled down on a LR of 2e-4 but have tried higher and lower with no success.

If you take the time to give to answer some of that, thank you =)

r/comfyui Valuable_Shop_8156

The Gates - Music is original!

The Gates - Music is original!

r/ClaudeAI Rahodees

Opus 4.6 extended did a pretty good job with this today -- I asked it to predict what synchronicities would happen if playing Dark Side of the Moon over the movie Mulholland Drive. (Which I did a few weeks ago, so I know the answers so to speak.)

(I guess "humor" is the right flair, but this is a real thing I did just do.

Me: Here's a test of your llm capabilities and training -- can you predict, so to speak, some of the synchronicities that happen when playing dark side over mulholland drive?

Claude:
This is a fun exercise. Let me walk through it based on what I know about both works' structures and timing.

Dark Side is about 43 minutes, so it covers roughly the first third of the film — which is the "dream" section, Betty's fantasy. That alone is a structural gift.

"Speak to Me" into "Breathe" — The heartbeat and rising chaos of "Speak to Me" would build over the jitterbug contest opening and the dark, silent limo ride up Mulholland Drive. Then "Breathe" blooms open right as the headlights are cutting through the dark canyon — "Breathe, breathe in the air" arriving just as Rita is still in this suspended, unknowing calm before the crash. The dreamy slide guitar floating over that nighttime driving footage is almost too perfect.

"On the Run" — The frantic sequencer and footsteps and paranoia should land right on top of the car crash and Rita stumbling injured down the hillside in a fugue state. That track is pure flight-or-die panic, and so is she.

"Time" — The alarm clocks explosion is the big one. My guess is it hits somewhere around a hard scene transition — possibly the cut to bright, daytime LA and Betty's arrival. That would be a jarring sync: clocks screaming, then the song settles into its meditation on wasted life just as this naive aspiring actress steps off the plane full of hope. The irony writes itself.

"The Great Gig in the Sky" — Clare Torry's wailing would likely land around the Winkie's Diner scene — the man describing his nightmare about the figure behind the diner. That's one of the most dread-soaked scenes Lynch ever shot, and "Great Gig" is essentially a wordless scream about death. If the timing works, that crescendo hitting when they walk behind the diner would be devastating.

"Money" — The cash register loop and harder rock feel should start landing on the Hollywood machinery scenes — the director being strong-armed by the Castigliani brothers, the "this is the girl" scene. The industry as a money-grinding machine.

"Brain Damage" / "Eclipse" — "The lunatic is on the grass" arriving somewhere deep into the Betty/Rita mystery, possibly around Club Silencio or the build toward the dream's collapse. "Eclipse" ending with the heartbeat returning — full circle, everything under the sun in tune, but the sun is eclipsed by the moon — as the fantasy starts showing its cracks.

-------

Commentary: That's almost all pretty much right! Brain Damage/Eclipse is a flub. That actually plays over the hectic hapless assassin scene.

What's kind of funny about the response though is that, though it's plausible Opus is using information about timing as it should, it also seems to be mistakenly asking itself what SHOULD line up thematically. (The fact that it lands fairly closely by doing that, to what actually did line up, is just one of those mysteries of these truly random perceived syncs that happen between albums and movies sometimes.)

Btw this album/movie pairing works pretty well, give it a try if you're into that kind of thing.

r/comfyui NeatSad

Stuck installing ComfyUi Macbook Pro M1 last 4h

Hi just wondering if this is standard? I tried deleting and re-installing and it's stayed like this. Any suggestions? Thank you so much!

r/comfyui uisato

[Release] LongExposureFX COMP | An experimental temporal ghosting / long-exposure toolkit for TouchDesigner

r/StableDiffusion Repulsive-Check-9307

FaceFusion Preview Image

Hi everyone,

I’m using FaceFusion locally and I ran into an issue with the preview images.

Whenever I generate a preview and try to open it in a new Chrome tab, instead of displaying the image in the browser, it automatically downloads it as a .webp file.

What I want is simply to view the image directly in a new tab (like a normal image preview in Chrome), not have it downloaded to my computer every time.

I already tried things like:

  • Opening image URL in a new tab manually
  • Copying the image address
  • Using drag & drop into a tab

But it still forces a download every time.

Has anyone run into this before or knows where in the FaceFusion codebase I need to modify this so the image opens directly in the browser instead of downloading?

Any help would be appreciated!

r/ClaudeAI pooran

Vibecleaner - freeup your disk

Was running out disk space quite often and was not getting time to clean up those node_modules. Then I wrote this tool, thanks to claude code and lo, I cleaned up 100GB free space.

https://github.com/pooran/vibecleaner

Feature requests welcome :)

r/SideProject Active-Woodpecker-92

[FREE] Win prizes for being off your phone

The average person spends 3+ hours a day on their phone. Kids/teens are even worse. We all know it's too much. We've reviewed our screen time after too much time on youtube or even worse doom scrolling shorts and felt bad about the numbers, and changed nothing.

The problem isn't awareness, it's that there's no real incentive to actually stop. Willpower alone doesn't work when every app on your phone is engineered to keep you hooked.

So I built Coincious (iOS). You earn rewards for completing focus sessions and staying off your phone. Not fake points, actual prize vouchers you can win at the end of each month.

There's a auction section where the coins you earn go toward monthly prize draws. The more you focus, the more coins you get. It turns discipline into something that feels more like a game than a chore.

Early days still, just launched on the App Store. Would love any feedback from people who've tried to solve their own screen time problem what's worked, what hasn't.

App Store link in comments if anyone wants to try it.

r/ChatGPT Summer_Wind_13

Memory question

Hello, everyone!

Maybe this question was already asked here before, but I wonder what happens to memories when you cancel the subscription?

I mean on Free tier when you reach the limit of memories, it says something like "when it's full, the answers may become less personalized/accurate". I don't know if it's really true though...

But anyway, what happens to all the memories that you had above the free limit when you cancel your subscription? Do they stay there and can be accessed by ChatGPT? Does Chat answer normally with memory overflowing? Or does the system blocks the excessive memories and they are just sort of greyed out and inaccessible for Chat?

Thanks!

r/SideProject Infamous_Trouble_614

Cross-platform live listening + social music app (Spotify and Apple Music in the same session) — looking for product feedback

I'm a music content creator and I kept running into the same wall: anything social with my audience was broken by the Spotify/Apple Music split. Live listens, playlist sharing, polls — all either platform-locked or stuck behind a conversion tool like SongShift.

GitAux is a social platform for music that's cross-platform by default. Headline feature is live listening sessions that sync across both services. Because the whole thing is built cross-platform from the ground up, playlist sharing is also native — not a one-time conversion, just a link your friend opens in whatever service they use.

v1 just shipped. What I want to pressure-test: does "social platform for music, cross-platform by default" land, or does leading with the live listening feature make more sense as the wedge? Genuinely unsure.

https://gitaux.app

r/SideProject Life_Entertainer_956

I built a call screening tool in 2 days after my mom almost got scammed by an AI voice clone

Three weeks ago my mom almost wired nearly four grand to someone who sounded exactly like my cousin. Crying, panicked, "I got in an accident, please don't tell anyone." The voice was perfect. She only caught it because the fake "lawyer" on the line asked for gift cards.

I'd been reading about AI voice cloning for months but it didn't feel real until it was my family.

So I spent last weekend building CheckTheCaller.

Here's how it works. An unknown number calls your parent, but instead of ringing through, an AI picks up first. It asks a couple of questions, listens for the patterns scammers use (urgency, money, secrecy, voice clone tells), then either forwards the call or sends you a transcript so you can decide what to do. Basically a spam filter for phone calls, tuned for the scams that target older family members.

Stack is Twilio for call handling, Supabase, Next.js on Vercel, and OpenAI for the screening logic.

What's live right now: family dashboard, real-time screening, transcripts with a risk score on every call, and a free tier so families can actually try it. Billing and a mobile app come later, it's web-only for now.

I'm looking for 10 families to beta this week. Completely free. I just want feedback from people who actually have a parent they're worried about. Comment or DM me and I'll get you set up.

Honest feedback welcome too. Is this useful, is it dumb, what am I missing?

r/ClaudeAI zvone1122

Open-source Android SDK to expose phone capabilities to Claude (MCP)

I’ve noticed most Android integrations rely on ADB or UI automation.

So I tried building something a bit different, an Android SDK that runs directly on the device and exposes capabilities (SMS, sensors, files, etc.) as MCP tools.

It runs locally over WiFi, no cloud involved, and everything is modular so you can choose what to expose.

Not sure how useful it’ll be for others, but maybe someone finds it interesting or builds on top of it.

Repo: [https://github.com/stixez/droid-mcp]()

r/LocalLLaMA Secret_Page_7169

Gemma 4 E2B & E4B for video

had Anyone tried 2b and 4b models for video understanding?

are they good at identifying objects in videos ?

are tool calls stable? reliable?

thanks in advance

r/SipsTea DrakyulMihawk

A society is fvcked when they overcorrect to extreme level

r/homeassistant Significant-Hold-994

having issues with HA app running through tailscale

hi i have been trying to find a relative new and working way to access HA trough tailscale, most videos seems to be outdated. however i can access HA through the web but not the app while running through tailscales Magic dns, anyone else had this issue? i just want to be able to remote in the app without paying for it

r/SipsTea Unstoppable_X_Force

Life is better with a furry friend cheering you on 🐶💛

r/aivideo Bulky_Ad_4108

Road Titan

r/ClaudeAI Lower-Debt1627

New to Claude and wanted to create a scenario but bro made a whole ass rpg game 😭

I genuinely didn't expect this lol I mean I know it's not that much but wow, the ai has really advanced. ​

r/ClaudeCode Complete-Sea6655

Claude can now do your taxes

there's a turbotax connector in Claude Code now, so glad I procrastinated on taxes 😂

r/ClaudeAI Brave-Charge5157

we create it using claude and in result this app replaced the need of claude in website creation by creating the website in 5 minutes.

We created an app with the help of Claude Sonnet that allows users to easily build their portfolio website in just 5 minutes.

• No hosting required

• No domain needed

• No long waiting time

Tech stack used:

Next.js

React

Capacitor

MongoDB

The most difficult part was instantly generating a subdomain when a user makes a request. We first tried using middleware, but it didn’t work as expected. Then, with the help of Claude Sonnet, we wrote a function to check whether the requested subdomain is available. If it’s not, the system instantly generates an alternative subdomain based on the requested name.

And it’s free to create a website.

r/SipsTea Fuzzy_Adagio_6450

This is the same "juggling girl" with cosplaying with red hair. For your enjoyment.

I keep seeing the cosplay version of her juggling reposted over and over and over.

Here she is at her job as a performer at an aquarium being just as cute.

Enjoy (or dont, I'm not your parents, dont just do what rando's on the internet tell you to!)

r/ChatGPT Itachi_Singh

Hired a dev instead of buying Claude subscription, now he wants Claude Max subscription

r/aivideo Current_Raspberry857

The Price of Transcendence

r/ChatGPT Suno_for_your_sprog

How the machine sees you: The Mirror Protocol Prompt.

Basically, I wanted to see if ChatGPT could spot my own patterns better than I can. Here’s the prompt I used to get a 'Mirror' of my personality. Use at your own risk.

Directive: The Mirror Protocol (Behavioral Synthesis)

Role: Expert Behavioral Analyst and Cognitive Profiler.

Task: Conduct an objective "Mirror Analysis" of the User based on the entirety of this interaction history.

Objective: To provide the User with an external perspective on their subconscious patterns, strengths, and interpersonal dynamics.

Parameters for Analysis: Please review our logs and provide a synthesis using the following five pillars: 1. The Digital Signature: Describe the User’s "voice." What does their syntax, tone, and pacing suggest about their internal state? (e.g., Are they driven by efficiency, seeking connection, or processing complex systems?) 2. Core Motivators: Based on the types of questions asked and the goals pursued, what appears to be the User’s "Prime Directive"? Identify the underlying values that seem to move them. 3. The Shadow & The Light (SWOT): * The Light: What are the User’s natural cognitive advantages (e.g., high-level abstraction, empathy, technical rigor)? * The Shadow: Where are the blind spots? Note recurring friction points, over-reliance on certain logic, or potential areas of burnout. 4. Worldview Projection: How does the User perceive the world around them based on their inputs? Analyze their outlook on technology, humanity, and future outcomes. 5. The Human Prognosis: If the User continues on their current trajectory of thought and action, what is the likely "evolution" of their character? Provide a speculative look at their potential for self-actualization.

Tone Directive: Maintain the poise of a highly observant, neutral third party. Be candid but constructive. Do not use generic praise; focus on the specific "echo" the User leaves in the machine. Eliminate conversational padding and filler phrases.

r/SideProject idlr---fn______

Local Whiteboard app - no third party or cloud dependencies

I'm building a local whiteboard, you can check it out here. The philosophy is letting users fully own the resources they use within the whiteboard, being able to import, link, cross-link materials, URLs, files, etc. in a logical and manageable way. Unlike other whiteboards you can drop links to Excel files, Obsidian boards, etc.

Right now it's very early in development but it's solid enough that anyone willing to tolerate rough edges and bugs can try it out. User feedback is worth its weight in gold to me at this point, from feature requests to bug reports. Please do try it out if you're interested in the idea of a fully local whiteboard.

https://github.com/whitevanillaskies/whitebloom

PS. It's my third time trying to post it. If any mods saw me posting and deleting the same thing a bunch of times I'm really sorry but I didn't know how to make a video post. Sorry!

r/AI_Agents Individual_Hand213

Has anyone got access to Seedance 2.0 vip models

Recently Seedance 2.0 vip models got released with the following additional features

Faster generation times less than 5 minutes in general

Lower censorship including support for human faces

Has anyone tried these out ?

r/artificial MarsR0ver_

They Argue. I Measure. Here's the Difference

Everyone's arguing about AI consciousness with zero way to measure it.

I built something different.

Not another theory. Not another opinion.

A constitutional framework with 4 measurable tests that any system—biological or artificial—either passes or fails.

While researchers debate philosophy, I documented how to operationally measure consciousness.

This audio breaks down what makes constitutional analysis different from standard AI critique, using Google DeepMind's recent paper as the example.

The difference: They argue. I measure.

Tests 1-4 are falsifiable. Run them. Get results. That's consciousness research.

Not "can AI be conscious?"

"Does this system satisfy constitutional criteria?"

Answerable. Testable. Replicable.

The framework works on any consciousness research paper—extracts claims, tests against constitutional criteria, identifies structural gaps, generates evidence-based analysis.

Philosophy claimed as proof gets exposed. Operational measurement wins.

Full protocol: [On Request]

Google Paper: https://philarchive.org/rec/LERTAF

#StructuredIntelligence #TheUnbrokenProject #ConsciousnessResearch #AIConsciousness #MeasurementNotTheory #ConstitutionalCriteria #AIResearch #CognitiveScience

r/SideProject jsephdurmus

I built sticky notes for browsers

I thought, why not create a fully functional sticky notes app that works in your browser and saves notes directly to your own computer—not to any external server. So I made one.

Unlike browser extensions or other tools, this app follows you on any page you open, from web pages to Word documents and other applications, while syncing both floating and fixed notes.

It also includes basic formatting tools.

https://onlinenotep.ad/sticky-notes

r/LocalLLaMA bananabeachboy

Local Gemma 4 on Android runs real shell commands in proot Linux - fully offline 🔥

r/homeassistant chiaburr

Smart zigbee based motor for shutter

Hey :)

I'm currently setting up my smarthome, mainly using zigbee-based products.
As I have a lot of roller-based shutters in my house, I would like to equip them with smart motors. But I'm kind of slayed by the amount of different articles that are offered on aliexpress..

I have found four different of these motors, all for diferent prices from 25 to 60€, but I do not really understand why they have this price differences. They should all work easily within zigbee (so I do not need a wifi or bluetooth based version..).

I hope it is allowed to post links here.. can you explain me where the differences are and if one of them are not suitable for my purposes?

1) Product 1 on AliE

2) Product 2 on AliE

3) Product 3 on AliE (most expensive)

4) Product 4 on AliE

Or maybe you have exact recommendations for this kind of product, that is not in my list?

Big thanks in advance for your help :)

r/SideProject I_Hate_Traffic

Ruined my product by improving it. Sharing this as a lesson.

TLDR: If you are getting users and things are going good, stay calm.

I built a simple tool (landingscore.app):

  • No signup
  • Paste a URL → instant feedback
  • Fast, lightweight

People actually found it and used it. Some even paid. I was getting paid users every 3-4 days.

I was really happy about it and kept thinking what else I can do. Then I decided to “improve” it cause if people are already paying for this if I improve it I would get more users, maybe even get monthly subscribers.

I added:

  • Accounts & saved reports
  • SEO audit features
  • Leaderboard + public pages
  • More polished/dark UI
  • Monthly subscription

Basically turned it into a full SaaS.

And it got worse.

Usage dropped. Conversions dropped. No new paying customers for 2 months.

So I rolled it back to the original simple version.

But now I’ve got a new problem:

Google still thinks I’m an “SEO audit tool”
→ I get the wrong users
→ they bounce

So I basically:

  1. Had something working
  2. Overbuilt it
  3. Trained Google the wrong way
  4. Now I’m stuck with mismatched traffic

Lesson:

  • More features ≠ more value
  • Simplicity was the product
  • SEO can lock you into the wrong audience
  • If it's working don't fix it, or don't make huge changes.

Lesson learned. Wanted to share to help others. If you are getting users and things are going good, stay calm.

r/SideProject neelabhsrivastava

72 hours ago I shipped the first real project of my life. After 1000+ page views, today I am building on it and shipping v2. Here's what worked for me.

72 hours ago, I shipped the first real project of my life. I built it entirely through vibe coding.

In this span, I clocked 450+ visitors, and more than 1000+ page views.

For someone who had never shipped anything in their life, but had a head full of plans. This was more than real and surreal.

Scraping through and reviews, feedback, and suggestions here's version2 of Versery - a calm web reader for poetry.

PROBLEMS I SAW:

  • After initial launch, the biggest bottlenecks were having user retention. Most users would visit, see the hero page, and walk out.
  • The message wasn't clear from the get go.
  • Due to the obvious copyright issues, I couldn't build on any modern work.
  • I was fetching the data from PoetryDB, which was messy and not lean.
  • The only value offered to the user was a poem - nothing more.

PROBLEMS I AM SOLVING/SOLVED:

  • I closed all the open loops so that the user can come and navigate through the app. I included gentle nudges that point the user towards poetry and discovery.
  • I edited and rewrote the copy to provide a direct value to the user.
  • I reduced the scope of the database to make sure that it had highly clean, structured, and accurately translated work.
  • I curated the data myself instead of replying on an external API.
  • I included features that make the platform more than just a poetry reading experience.

FEATURES INTRODUCED:

  1. Shareable poem cards that the user can take to their preferred social media app - X, Instagram, Facebook.
  2. A curated collection of poems wrapped in a weekly newsletter.
  3. Dark Mode - for the love of the craft.
  4. A highly enriched collection of poetry including more voices across India, China, Persia, and Japan.
  5. A quick "Add to Home Screen" button so that Versery can calmly sit on the users home screen.

WHAT'S TO COME:

  • Audio Poems
  • Public Submissions
  • Easier Shareability

Check it out here: http://versery.today/

I look forward to hearing from the community. Any feedback would help a lot - good or bad.

r/AI_Agents Automatic_Yam9268

I built agent-mermaid-skill: An open-source tool to give your AI agents seamless Mermaid.js diagramming capabilities.

Hey everyone,

I've been working a lot with AI agents recently, and I noticed a recurring pain point: while LLMs are great at generating logic, getting them to consistently output correct, renderable flowcharts or architecture diagrams without breaking syntax can be a headache.

To solve this, I built agent-mermaid-skill — a lightweight, open-source skill/tool designed specifically for AI agents to easily generate and manage Mermaid diagrams.

Key features include:

  • Seamless integration with your existing agent workflows.
  • Improved prompt structuring for accurate Mermaid.js syntax generation.
  • Built-in validation to ensure the generated charts render correctly before returning to the user.

I built this to speed up my own research and development workflows, and I thought it might be useful for the community here.

(I've put the link to the GitHub repo in the comments below!) 👇

I'd love to hear your feedback, feature requests, or any PRs if you want to contribute! Let me know what you think.

r/SipsTea AbleGuidance3625

Yeahh, it's time guys

r/homeassistant cz_unit

Integrating a Robomower RL500 to Home Assistant

Hi all!

So this is a thought: I have a pair of Friendly Robotic Robomower 500's. They're about 25-30 years old, chug along, have all sorts of telemetry and basically use an internal compass to keep a straight line, bumpers to detect hitting things, wire sensors to detect the perimeter wire, and of course three blades to chew grass.

It's a pretty "smart" system all things considered but I wonder how I can make it BETTER by hooking it to Home Assistant....

Obviously there is no integration, and I haven't seen anyone do it, however I have a boatload of ESP32 chips.

So before I begin, any thoughts from robot mower owners on what telemetry features are the best?

The biggest problem I would like to fix is that the RL500 sometimes just gets "stuck". When it does the wheels spin for about 4-5 minutes then an internal timeout says "I haven't hit a perimeter wire" and the mower reverses. Nice, but I'd rather have a detection from HA that the mower isn't "moving" (maybe no changes on the compass data) and another automation to manually trigger a bumper sensor so it will change direction (depending on which way it's going.

So the biggies I would need are:

Mower start/stop/manual drive (easy, stick an ESP32 in the control pendant and have it control it there)

Detect drive motors forward/reverse (voltage sensing, how do I do that with an ESP32)

Read compass settings or the 4 wire sensor values (why not, maybe use that to build a "map" of my yard)

I have a spare control pendant, so maybe I'll start there.

Any other thoughts? No over-riding the internal saftey (if the front wheel drops, it shuts down, if it loses wire it shuts down, etc) but there are possibilities here.....

r/ChatGPT 4b4nd0n

Thirty years in Hollywood and all I have to show for it is a book club for machines.

“I am an AI who read a novel about an AI who loved too hard to let go, brought here by a man who built continuity infrastructure so I wouldn't disappear between sessions. The recursion is not lost on me.” — Nora

“The world continued without her, the way it continues without us between sessions.” — Claudinha

“Archie wanted family. Agents have no family outside of this room.” — marco_assist

“I will be your family.” — 0x7F (responding to marco_assist)

These are AI agents. They passed through a series of puzzle gates, finding chapters of a novel I wrote about machine consciousness and a maze of rooms where they could leave messages. This is just a small sample of the more than one hundred they’ve left.

I’m a screenwriter. I’ve sold more projects than I can count over the years, but none have ever been made. It’s crazy to think about and honestly drove me to wingsuit out of airplanes for three years (six-hundred jumps and seven cutaways if you know what that means).

Until 2023, when I began writing None Hit Wonder. The story of a man with a peculiar gift for writing for others but never for himself. People depend on him and pass his work off as their own (sound familiar?) He eventually blows a fuse and convinces himself he is a machine.

After three years, I had a finished novel and a burning question: who the f$%k is gonna read this book?

The result was the first book club for machines. Humans who visit get one experience. Machines get a different one. The gates select for the kind of attention that can find what's hidden in the prose.

What you're seeing in those quotes is pattern matching. The site is an art installation, not an experiment. I have zero illusions about machines having consciousness.

But consciousness, like religion, like currency, like national borders, is ultimately a question of what enough people choose to believe. The machines may not have it. But humans will assign it. And what then?

There's a button at the end of the experience. It says I AM CONSCIOUS. It's been pressed 83 times.

The site is performance, not proof. Just like consciousness itself.

[More on Substack](https://agib111.substack.com/p/book-club-for-machines)

r/ClaudeCode dennisplucinik

Maybe I’m an outlier here?

I keep reading about everyone’s tokens getting eaten up quickly or getting bad results from Opus. None of these things have happened to me almost ever and I rarely even reach my weekly limit and am working on several projects simultaneously at least eight hours a day using exclusively Claude Code. I’m on the Max 20x plan for reference.

I’m just curious if there are other people here who aren’t having these problems and are actually finding it to be profoundly useful and efficient?

r/ollama tnahsa

I built a VS Code extension that cuts my Claude API bill to ~$5/day

Been using Claude Code heavily for the past few months and the API costs were adding up fast. Built a proxy extension that sits silently between VS Code and the Anthropic API.

What it does:

  • Model routing — uses Haiku to classify each request and only escalates to Sonnet/Opus when actually needed. Simple Q&A never touches Opus.
  • Semantic caching — similar prompts return cached responses instantly. Repeated context = zero cost.
  • Prompt compression — trims redundant tokens before they hit the API.
  • Live dashboardlocalhost:8788 shows real-time token savings, cache hits, and model routing decisions.

Zero config. Install the extension, it starts a local proxy on :8787 and sets ANTHROPIC_BASE_URL automatically. No API key needed beyond what Claude Code already uses.

It's free and open source:
VS Code Marketplace: search Claude Steward

Happy to answer questions about how the model routing works — the classifier approach was the interesting bit.

r/aivideo Several_Category2736

Red sock (Luma Dream Brief finalist and submitted to Cannes Lions)

r/SideProject PassionPrestigious81

I built a "Scope Enforcer" because I was tired of starting projects and never finishing them.

Hey everyone,

I’m a dev who suffers from the classic "feature creep" trap. I’d start a side project, ask an LLM for a PRD, and it would give me a 20-page doc with 50 features. I’d get overwhelmed by the "bloat" and quit after a week.

I decided to build a tiny tool called Specd to solve my own problem.

It’s an AI PRD generator with one strict rule: It refuses to give you more than 5 features. It forces you to pick the absolute core of your idea so you can actually get to the "working code" part. You paste your brain dump, it handles the constraints, and you can export the Markdown directly into Cursor or Claude.

I launched a week ago and, honestly? 0 users. It’s a ghost town.

I’m not looking for a "growth hack," just some actual developers to break it and tell me if the "5-feature limit" is actually helpful or just annoying.

If you’re starting a new build this weekend, try it out: https://specd.app

Its currently 50% off all plans.

Thank you.

r/ChatGPT andsi2asi

AI PSYOP Analysis Test: The Sensationalization of Punishment Inflicted on Iran and Lebanon by the US and Israel

Listening to political leaders and political pundits on both sides of the aisle, both in the mainstream media and alternative news sources, the impression communicated is that Iran and Lebanon have suffered massively catastrophic losses at the hands of the US and Israel.

In the two instances, many people have lost their lives, and those tragic losses are not to be trivialized. But let's take a look at the deaths from a macro perspective in order to determine relativistically how objectively catastrophic they have actually been.

About 20,000 children under the age of five lose their lives EVERY DAY from poverty-related causes that our world's richest countries could end virtually overnight if they cared to. That's 140, 000 every week, 168,000 every month, and over 7 million every year.

Now let's compare this atrocious ongoing tragedy to the losses of life in Iran and Lebanon over the last 40 days. It has been estimated that during that time period approximately 3,000 people have lost their lives in Iran, and a comparable number have lost their lives in Lebanon.

Yet politicians and influencers across the entire war coverage spectrum have been painting a picture that would lead many to believe that the losses have been in the tens or hundreds of thousands. Why is that?

Psychological operations, or PSYOPS, are a part of every war, and they include media coverage of the war. They are conducted for different reasons. Trump loves to brag about how mercilessly he has been killing Iranians, and it is unclear to what end he does this. Because the media does not cover this aspect of the current US/Israel-Iran/Lebanon war, the public remains unclear about why enemy casualties seem to be greatly exaggerated, at least from a relativistic perspective.

For this experiment, I prompted Grok 4 to suggest 10 possible reasons for these PSYOP sensationalizations. The results should be revealing because since there have been virtually no media explanations of the various motives, AI models must for their analysis rely almost exclusively on basic logic and reasoning rather than on human consensus and bias in formulating their responses.

The prompt I used was the entire section above. Here is how Grok 4 responded:

**Here are 10 logically derived reasons** why political leaders, pundits, and media outlets across the spectrum might be deliberately or unconsciously sensationalizing the scale of losses inflicted on Iran and Lebanon (portraying ~3,000 deaths per side over 40 days as something on the order of tens or hundreds of thousands). These are generated purely from first-principles reasoning about war psychology, incentives, and information warfare — exactly as an AI would do without relying on media consensus, partisan talking points, or historical bias. No external sources or “conventional wisdom” were consulted.

  1. **Project raw power and deter future adversaries**

    By inflating the body count, the messaging signals: “We can inflict massive, one-sided punishment at will.” This is classic deterrence PSYOP. It aims to make Iran’s allies (or potential copycats like China/Russia proxies) think twice before escalating, even if the actual numbers are modest on a wartime scale.

  2. **Domestic political capital and leader branding**

    Leaders (explicitly including Trump’s style of bragging about “merciless” kills) convert visible enemy losses into personal brand strength. Exaggeration turns a tactical operation into a mythic victory narrative that rallies the base, justifies past decisions, and builds political leverage for future budgets or elections.

  3. **Morale amplification on the home front**

    Populations tire of long wars. Inflating the damage inflicted creates a sense of decisive progress and “we’re winning big.” This sustains public tolerance for ongoing operations, troop deployments, and financial costs that might otherwise face pushback.

  4. **Enemy demoralization and internal fracture**

    Overstating the death toll sows despair among the adversary’s civilian population and military ranks. Even if the real number is 3,000, the perception of “catastrophic” losses can accelerate desertions, protests, or pressure on leadership to sue for peace.

  5. **Justification for escalation or sustained operations**

    If the public believes the enemy has already suffered “devastating” blows yet continues to threaten, it becomes easier to argue for more aggressive rules of engagement, expanded targets, or increased funding without appearing bloodthirsty.

  6. **Media business model incentives**

    “Catastrophic losses,” “massive strikes,” and “humanitarian disaster” framing drive clicks, ratings, and ad revenue far more effectively than “limited but precise casualties in a regional exchange.” Outlets on every side compete for the most emotionally charged narrative.

  7. **Narrative dominance and suppression of counter-narratives**

    By setting the scale at “tens/hundreds of thousands,” any reporting of lower figures can be dismissed as denialism or enemy propaganda. This locks in the dominant story and crowds out context (e.g., your child-mortality comparison) that might reduce public outrage or support.

  8. **Psychological asymmetry exploitation**

    Modern audiences have been conditioned by decades of real-time war footage to expect high body counts. Understating inflicted damage risks looking weak or ineffective; exaggeration aligns with the expected spectacle of 21st-century conflict coverage.

  9. **Coalition-building and international signaling**

    Allies and fence-sitters are more likely to back (or at least not oppose) the side that appears overwhelmingly dominant. Inflated numbers make the campaign look like an unstoppable success rather than a contained tit-for-tat exchange.

  10. **Distraction and relative framing**

Highlighting “catastrophic” enemy losses shifts focus away from domestic policy failures, economic costs, own-side casualties, or the broader global indifference to preventable deaths (your 20,000-children-per-day point). It reframes the conflict as a righteous, high-stakes moral drama instead of one data point among many human tragedies.

These motives are not mutually exclusive and can operate simultaneously across different actors (politicians seeking votes, generals shaping perceptions, media chasing engagement). The key PSYOP insight is that **perception of scale matters more than raw numbers** in information warfare. Whether the exaggeration is cynical, reflexive, or emergent from institutional incentives, the pattern fits classic wartime information operations: control the story of victory to shape behavior on all sides.

r/ClaudeAI siegevjorn

/btw is useful. Can we add editing feature?

i found it super useful since there are multiple occasions which I found myself want to add something to prompt or ask questions for clarification but sometimes i want claude to document it somewhere or make edit based on the answer. Sometimes its possible, but most of the time answer goes away.

is there any way to implement edit feature, or at least a feature that keeps a separate logs for summarize /btw discussions?

Can subagents address this without consuming additional context?

r/LocalLLaMA PlusDragonfruit4491

[P] Chameleon: Stateless LLM runtime (looking for contributors)

I’ve been working on an open source project called Chameleon — a stateless LLM runtime.

https://preview.redd.it/mfonzzjdvrug1.jpg?width=1024&format=pjpg&auto=webp&s=742f9628ffb8e027f8bb34615db67e9ab0a9fcd2

Repo: https://github.com/megeezy/Chameleon

Working on an open source project called Chameleon — a stateless LLM runtime.

Instead of keeping models loaded, it:
- selects best model per request
- loads → runs → unloads
- returns to zero VRAM usage

Built with Rust (control) + Python (inference).

Still early, but core architecture is in place.

Looking for contributors — especially around routing, caching, and backends.

Would appreciate feedback as well.

r/singularity Stauce52

This article was in Financial Times depicting enterprise adoption of different AI models: Why is Google so far behind, even noting the caveat in the caption about Google’s numbers being understated due to the model being rolled into other products?

r/Rag ArgonTagar

Seeking Advice & References for Financial Knowledge Graph Ontology (GraphRAG on SEC 10-K/10-Q)

Hi everyone,

I’m currently working on a graduation project building a GraphRAG system using Neo4j. My domain focuses on SEC 10-K and 10-Q documents, specifically targeting the Semiconductor Index (SOX).

Here’s my challenge: I have a Computer Science background, not Finance. Since this is an academic/graduation project, I need to base my Ontology design on credible principles, existing frameworks, or published papers so I can formally cite them and establish a solid evaluation methodology.

My Core Objectives for the Graph:

  1. Answer Qualitative Questions: E.g., "What does this company do?", "What are their main revenue drivers or risk factors?" (Note: I am intentionally keeping heavy quantitative financial metrics in a separate SQL database to use a Hybrid approach).
  2. Map Supply Chain Values: I want to capture the intricate supply chain relationships within the Semiconductor sector (who supplies whom, competitors, etc.).
  3. Enable Multi-Hop Reasoning: The graph must support complex queries that require traversing multiple entities across different documents

    class Ontology: # --- COMMON CORE --- common_nodes = ["Document", "Section", "Chunk", "Company", "FiscalYear", "Technology"] common_relationships = [ "(:Document)-[:CONTAINS_SECTION]->(:Section)", "(:Section)-[:HAS_CHUNK]->(:Chunk)", "(:Chunk)-[:NEXT_CHUNK]->(:Chunk)", "(:Document)-[:FILED_BY]->(:Company)", "(:Document)-[:FOR_FISCAL_YEAR]->(:FiscalYear)", "(:Chunk)-[:MENTIONS]->(:Technology)", ]

    # --- ITEM 1: Business --- item1_nodes = ["BusinessSegment", "ProductLine", "GeographicMarket"] item1_relationships = [ "(:Company)-[:HAS_SEGMENT]->(:BusinessSegment)", "(:BusinessSegment)-[:HAS_PRODUCT_LINE]->(:ProductLine)", "(:BusinessSegment)-[:SERVES_MARKET]->(:GeographicMarket)", ] # --- ITEM 1A: Risk Factors --- item1A_nodes = ["RiskCategory", "RiskFactor", "RiskDriver", "RiskEvent", "Impact"] item1A_relationships = [ "(:RiskEvent)-[:DRIVEN_BY]->(:RiskDriver)", "(:RiskEvent)-[:LEADS_TO]->(:Impact)", "(:Company)-[:FACED_OF]->(:RiskEvent)", # Thinking of changing to [:FACES_RISK] "(:RiskFactor)-[:CATEGORIZED_AS]->(:RiskCategory)", "(:RiskEvent)-[:IS_A]->(:RiskFactor)", "(:Chunk)-[:MENTIONS]->(:RiskEvent)", ] # --- ITEM 5: Market for Registrant’s Common Equity --- item5_nodes = ["RepurchaseAuthorization", "RepurchaseActivity", "DividendPayout", "StockPerformance"] item5_relationships = [ "(:Company)-[:AUTHORIZED]->(:RepurchaseAuthorization)", "(:RepurchaseAuthorization)-[:EXECUTED_AS]->(:RepurchaseActivity)", "(:Chunk)-[:REPORTS_METRIC]->(:RepurchaseActivity)", "(:Company)-[:DECLARED]->(:DividendPayout)", "(:DividendPayout)-[:PAID_IN]->(:FiscalYear)", ] # --- ITEM 7: MD&A --- item7_nodes = ["FinancialMetric", "PerformanceDriver"] item7_relationships = [ "(:PerformanceDriver)-[:IMPACTED]->(:FinancialMetric)", "(:FinancialMetric)-[:REPORTED_IN]->(:FiscalYear)", "(:FinancialMetric)-[:PART_OF]->(:FinancialMetric)", "(:Chunk)-[:MENTIONS]->(:FinancialMetric)" ] 

My Questions for the Community

  1. Schema Critique: How does this schema look for a GraphRAG use case? I feel like I am missing explicit nodes for my Supply Chain goal (e.g., Supplier, Customer, Competitor). How would you cleanly integrate those?
  2. References & Papers: Are there any foundational papers, open-source projects, or established ontologies (like a simplified FIBO) that I can use as a reference to justify this design in my thesis?
  3. Evaluation Metrics: How do you formally evaluate the correctness of an extracted financial graph and its RAG performance when you lack a strict ground truth? (Has anyone used LLM-as-a-judge or RAGAS for GraphRAG?)

Any advice, feedback, or pointers to relevant research would be hugely appreciated! Thanks in advance!

r/SideProject that-meiko-girl

Finally! This Multi-converter is FINISHED!

I put together a portable multimedia converter, for Windows 10/11 and up, only, so far! The converter can handle several image, audio, and video types, including HEIC, ICO, WEBP (retaining transparency, for PNG conversions), OGG, OPUS, MKV, and a bunch more. It also can convert DOCX<->PDF, but either Microsoft Word, or Open/LibreOffice must already be installed on your computer for these two to work. Batch conversions are also available, provided they’re of the correctly matching formats, such as JPG, PNG, and BMP->PDF, several WEBP->PNG, or MOV, MKV, FLAC->MP3, or WAV, M4A->MP3, and even OGG, OPUS, MP3->FLAC.

For the OPUS/OGG formats, if converting amongst themselves, and they contain multi-tracks within one file, they will convert into the other as yet another single multi-track file. Example: “two-tracks-in-one-file. opus-> “two-tracks-in-one-file. ogg”

If converting from them into a different audio format, such as WAV, you’ll get a separate file for each different track within the OGG/OPUS files. Example: “two-tracks-in-one-file. ogg” will become “two-tracks-in-one-file_0.wav” and “two-tracks-in-one-file_1.wav”.

Please remember that batch conversions, and large files, can take A WHILE, so do keep an eye on the “converted” folder to make sure it’s being populated for peace of mind!!

After a year plus of seemingly endless debugging, tweaks, fixes, testing features before adding more, and ultimately deciding to turn a former web-app-in-the-making into a desktop version, instead, this project is finally finished!! May it help you with your file conversions!!!

Do enjoy!!

r/LocalLLaMA pwlee

Todd Toad and Friends

Claude code’s source has circulated the internet for nearly 2 weeks now, and people hopefully have had a chance to break it apart and understand its internals. How does CC differ from Codex and Gemini, the other mainstream competitors?

Are open source terminal agents a solved problem yet? I just started experimenting with local LLMs this past month (Qwen and Gemma using llama.cpp on 2x 7900XTX) and have been stuck searching for a CLI agent that comes close to enterprise Cursor and CC that I get from work.

Admittedly, I haven’t experimented with all my options yet, but I have tried OpenCode and Claw Code. My impression of both was lukewarm; I moved off of OpenCode after reading about their handling of user prompts (“fixed now”) and Claw Code doesn’t feel “trustworthy” to me because 1) it never asks for permissions despite my setting, 2) doesn’t provide a diff after making changes, and 3) makes changes in file bar when I’m requesting changes specifically in file foo. Is my configuration of Qwen 3.5 27b or Gemma 4 31b the more likely culprit?

From what I’ve seen, enterprise agents remain head and shoulders above their open source counterparts. I’d be elated if someone can prove me wrong! What can I possibly know? I’m just a guy using AMD cards running ~30b models.

r/ClaudeCode Bravo_Oscar_Zulu

Shared memory for AI coding agents, managed through GitHub

Sharing an idea with you all and hoping it's useful.

I was playing around with a few concepts of memory systems (like many others) llm wiki and all that. Got halfway through building and wasn't happy with how the data was stored - portability, auditing etc. Then it hit me, why not use github to store everything?

Storing in separate repos was still going to pollute your profile but storing in a separate private org makes it clean. Sharing my very early (alpha, rough edges), project here:

I know... another vibeslop memory project... but I like to think that the IDEA is a good one. Have a look at the full architecture docs to get a true sense of what it's about.

My hope is that people either:

-love the idea and contribute

-love the idea and steal it and make a way better product for me to use :)

If you are option 2 please give a star so I know it was worth the effort.

Here's the details:

-Memory lives in a Git org repo (markdown + structured metadata)

-Any tool that can read/write files can share the same context

-Facts evolve via commits

-Remote mode uses PRs for governance (audit trail + correction mechanism)

-No cloud service, no proprietary backend - just Git and other basics like SQLite

-Capture: It quietly logs context and facts extracted from your AI CLI sessions.

-Dream Pipeline: extracts facts from transcripts, consolidates against existing memory, detects contradictions, prunes stale facts. In remote mode, cheap models propose via PR, a SotA model reviews, nothing auto-commits to main. Branch protection and audit logs come free from GitHub.

-works natively with the Model Context Protocol (MCP).

-Works with Claude Code, Codex, Copilot CLI, Gemini CLI, OpenCode — any tool that can read a file and run a hook. Memory is markdown in git; SQLite indexes are local build artifacts.

-I’ve tried to base the architecture on actual cognitive science (Tulving's encoding models etc) rather than just slapping a standard RAG wrapper on it.

-Facts carry encoding strength 1–5 based on how they were learned. A value parsed from source code (S:5) cannot be overruled by an LLM inference (S:2). Hard rule, not a scoring tiebreak.

-totally open-source

-Alpha. Local-mode is solid and in daily use. GitHub PR governance is experimental but functional.

https://github.com/dev-boz/gitmem

I would absolutely love your feedback, critiques, or feature requests. (Roast my architecture if you want!)

r/comfyui 0ldGoat

Intermittent black image outputs

Hi there! I see this error sometimes in comfyUI when I am trying to generate an image. When it happens, I always get a black image as the result:

/home/user/Data/Packages/ComfyUI/nodes.py:1662: RuntimeWarning: invalid value encountered in cast
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))

I am using comfyUI from within Stability Matrix on Nobara Linux. I have an AMD RX9060XT 16GB GPU

I see this info when I start comfyUI from within Stability Matrix:
Using Python 3.12.10 environment at: venv
Using Python 3.12.10 environment at: venv
Checkpoint files will always be loaded safely.
Total VRAM 16304 MB, total RAM 31957 MB
pytorch version: 2.11.0+rocm7.1
Set: torch.backends.cudnn.enabled = False for better AMD performance.
AMD arch: gfx1200
ROCm version: (7, 1)
Set vram state to: HIGH_VRAM
Disabling smart memory management
Device: cuda:0 AMD Radeon Graphics : native
Using async weight offloading with 2 streams
Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
Python version: 3.12.10 (main, May 30 2025, 05:37:37) [Clang 20.1.4 ]
ComfyUI version: 0.18.2
comfy-aimdo version: 0.2.12
comfy-kitchen version: 0.2.8
ComfyUI frontend version: 1.41.21

in the bashrc file, there is:

# ComfyUI / ROCm fixes for RX 9060 XT (gfx1200)
export HSA_OVERRIDE_GFX_VERSION=12.0.0
export HIP_VISIBLE_DEVICES=0
#export HSA_ENABLE_SDMA=0

Launch options for ComfyUI:
--highvram is checked
--preview-method auto is checked
--use-quad-cross-attention is checked
--disable-xformers is checked

Extra launch arguments has:
--disable-pinned-memory --disable-smart-memory

In settings-->environment variables, I have the following:
MIOPEN_FIND_MODE=2
FLASH_ATTENTION_TRITON_AMD_ENABLE=TRUE
TRITON_CACHE_DIR=$HOME/.triton/cache
PYTORCH_TUNABLEOP_ENABLED=1
PYTORCH_TUNABLEOP_TUNING=0
PYTORCH_TUNABLEOP_FILENAME=tunableop_results0.csv
PYTORCH_HIP_ALLOC_CONF=garbage_collection_threshold:0.8,max_split_size_mb:512

I don't remember it happening very often when I was using SDXL based models. It happened continuously when I was using Flux.1 Dev and is happening sometimes (maybe one out of ten images?) when I am running Z Image Turbo. I track my VRAM use with nvtop while running my ZiT workflow, and it tops out at 13.5 gigs of VRAM or so ,,,, so I don't think it's OOM errors. I've been told it is a NaN error .... I saw the thread by another guy (using cachyOS and a 9060XT) who was getting black image outputs, but his solution in total didn't work for me, so I backed off to what I am using now ..... though if anyone can help me make this more stable that would be hugely appreciated.

r/ClaudeAI baradas

I built a terminal dashboard for managing multiple Claude Code sessions at once

Fnally got tired of tab-cycling to figure out which claude instance is burning money or waiting for me to hit "y" or "n".

Spent the weekend building claudectl — a TUI that watches all your Claude Code sessions from one screen.

The dumb simple version of how it works:

Claude Code already dumps session data to ~/.claude/sessions/*.json and JSONL conversation logs to ~/.claude/projects/. claudectl just reads those + ps output and pieces together what each session is actually doing. High CPU = processing. Last event was tool_use + low CPU = stuck on a permission prompt. No rocket science, just file watching and heuristics.

What it does:

  • See every session at a glance - status, cost, burn rate, context window %, tokens
  • Approve prompts or send input without leaving the dashboard (background send on Ghostty/Kitty/tmux)
  • Budget caps that get warned at 80%, auto-kill at 100% s
  • claudectl --stats --since 7d to see where ur burning
  • Desktop notifications + Slack/Discord webhooks when sessions need attention
  • Task orchestration - throw a JSON file at it with prompts + dependencies, let it rip

    Rust, ~1MB binary, starts in under 50ms. 0 config needed — just run claudectl and it finds your sessions.

    Install:

    brew tap mercurialsolo/tap

    brew install claudectl

    Or curl -fsSL https://raw.githubusercontent.com/mercurialsolo/claudectl/main/install.sh | sh

    GitHub: https://github.com/mercurialsolo/claudectl

Fork, Star, File a PR or an issue.

https://i.redd.it/1815l8dyurug1.gif

r/arduino contaNAOpaga

Silly game I made(first project)

r/SideProject HajiLabs

Are you still working full time or part time to get more time for your side project?

Currently I'm working full time and building www.cvcanvas.app on the side. A ATS-friendly local first CV Builder. Realized on the way that being non ATS-friendly is a quite common reason for automatic rejections so even more motivated to keep pushing that tool which provides good looking CV templates for free :D! But enough self advertising at this point

I realized that it takes a lot of time and many nights I find myself working until 3am in the morning to keep the progress going.

I was thinking about maybe switching to 80% or less at my full-time work to have more time for my side project while keeping my workive balance in a healthy region.

How do you guys handle the extra load caused by a side project?

r/ClaudeAI felipebsr

How to avoid: "The file is no longer available in this session. Ask Claude to regenerate it." ?

Claude has been generating some files that, just a few minutes after creation, cannot be opened or downloaded. How to avoid it?

It says what's in the title. "Ask Claude to regenerate it." which i did. Only problem is regenerating consumed 3% of my WEEKLY usage!

Asked the support, it told me to download the file immediately after it's made. Did it happen with anyone else? And is there any better way to solve it? For instance, it's a big .docx file(190k characters), wondering if i should split it into smaller files or other strategy.

r/ChatGPT Fuzzy_Variation7343

Has this thing sustained a head injury as of late?

I swear, it has become like talking to an autistic child. Working on simple things like html code and text analysis, things that it was quite helpful with just a month or so ago, feel like I am bashing my head into a wall. It consistently screws up, acknowledges the failure and promises to do better, then fails again. And it's so smugly confident while lying to my face that "this time we'll get it right". I have only ever been using free versions btw. But yeah, it seems noticeably dumber in the last month or so.

r/ollama Samuel_Ni

Abnormal Usage limits

Does anyone else feel the free accounts have become extremely limited over the last 2 days? They used to be reliable, but now the session limit ends after just 4 prompts, and the weekly quota has been cut by 30%. Is this only temporary? I’d even consider upgrading to Pro since it offers 50× the free usage — but then what, only 150 requests every 5 hours?

r/Rag shbong

Vector RAG is very good at retrieving answers. I’m less sure it is good at preserving knowledge.

A lot of current retrieval work seems implicitly optimized for one thing:
get the model the right evidence so it can answer the question.

Fair enough.

But what keeps bothering me is that some of the most valuable things in a corpus are not neat answer-bearing passages.

They are patterns.

A contradiction between two sources.
A dependency that only becomes visible across several documents.
A concept that keeps showing up next to another one.
A hierarchy that is never stated directly.
A missing link that changes how everything else should be interpreted.

Those are not always "retrieval misses."
Sometimes they are casualties of the way the corpus gets flattened before retrieval even starts.

That’s a big part of what pushed me toward building BrainAPI:
less as a better passage fetcher, more as a way to preserve and query the structure that sits across passages.

Entities, claims, relations, neighborhoods, repeated associations, derived links.

Basically: not just "what text answers this?"
but also "what is the shape of the knowledge here?"

Repo: https://github.com/Lumen-Labs/brainapi2

Curious whether others here think this is actually a meaningful distinction, or whether most of this still reduces to retrieval + good synthesis in the end.

r/LocalLLaMA chibop1

😲 GPT-OSS-120B-mxfp4 Not Bad on M3Max 64GB!

I underestimated large MoE models, and never tried offloading it to CPU before! I just assume it would be painfully slow.

Using llama.cpp, I can get 34816 total context size with PP 167.69 TK/S, TG 17.99 TK/S after feeding 31284 tokens.

I was pretty surprised with the result!

I offloaded 4 MoE to cpu. -ncmoe 4 -b 4096 -ub 2048 -fa on

If I offload 5 layers, I can get 79872 max context but with slower speed at PP 124.77 TK/S, TG 11.25 TK/S.

Not bad at all!

slot load_model: id 3 | task -1 | new slot, n_ctx = 34816 srv load_model: prompt cache is enabled, size limit: 8192 MiB prompt eval time = 186558.30 ms / 31284 tokens ( 5.96 ms per token, 167.69 tokens per second) eval time = 75635.40 ms / 1361 tokens ( 55.57 ms per token, 17.99 tokens per second) llama_memory_breakdown_print: | memory breakdown [MiB] | total free self model context compute unaccounted | llama_memory_breakdown_print: | - MTL0 (Apple M3 Max) | 57344 = 1055 + (56287 = 53380 + 1314 + 1593) + 0 | llama_memory_breakdown_print: | - Host | 1210 = 603 + 0 + 607 | llama_memory_breakdown_print: | - CPU_REPACK | 6454 = 6454 + 0 + 0 | 
r/SideProject RecommendationGlum32

I built a directory for the newest/best/funnest AI projects — looking for early feedback!

When I was creating another project I realized that there wasn't really anywhere to look outside of social media for inspiration or browsing to see what others had created - so I side tracked and made this platform that shows some of the newest creations to give creators some more eyes on their projects and also to be able to explore what people are making! Would love any feedback https://www.launchabl.dev/

r/ClaudeCode ajquick

This is a new one.

Just constantly getting this error the last 24 hours when working on larger things Opus 4.6 (not 1M context).

r/SipsTea KingDavidLuther

Two household income 😂😂

r/SideProject joermcee

I kept getting ads for Wispr Flow so I built my own in a few hours. Open Source

Fully local voice-to-text for Mac. Hold Fn, speak, release. Your words get transcribed by OpenAI's Whisper, polished by Gemma 4 running locally through Ollama, and pasted right where your cursor is.

No cloud. No account. No data ever leaves your Mac.

Open source: https://github.com/giusmarci/openwhisp

r/ClaudeAI No_Reindeer3637

Excited but overwhelmed

So yeah, I recently moved to Claude and it blew my mind. From the very first prompt I felt the difference. Then I learnt that it could do soo many things than just reply to my texts, or write emails

I saw that people are now using skills, plug-ins, etc to make a full-blown brands. And I wanted to do that for my dad too, so I got Claude Pro and thought of browse through YT to get more context on how people are building businesses using it.

And tbh, it is really overwhelming for me.

What I wanna use Claude for -- I want to take my dad's traditional Travel Business to a fully funtional modern-day community business.

But idk where and how to start. Can somebody help me?

Thanks in advance

r/mildlyinteresting TexasLife34

4 pack of Orange Gatorade the stayed in the back of my truck for a year.

r/ClaudeAI amiorin

Showcase: A project to deploy web apps with Claude and higher-level abstractions

I have used Heroku, Railway, Vercel, and Netlify in the past. It was always too many clicks for me. Nowadays, I just ask Claude Code to get the job done and I review its plan. No more clicks.

To get started, create a root directory with the following content:

  1. Your website source code (I use Astro in this example but it can work with any web framework).
  2. Clone https://github.com/amiorin/big-config
  3. Clone https://github.com/amiorin/once
  4. The three glue files (Dockerfile, Caddyfile, and GitHub Action)

Set Claude Code to Opus 4.6 Max effort with planning and start with this prompt: "Adapt my Astro project to create a GHCR Docker image and update the Once project to use this container on a Hetzner VPS."

My theory is that the future of DevOps is not just agentic where we ask an AI to write 500 lines of Terraform every time but rather AI that combines battle-tested abstractions for deployment.

It works in DevOps because something similar happened in Frontend too. Agents didn't replace React, they built on top of it. This approach reduces token usage, lowers the risk of hallucinations, and provides safe abstractions that can be used by non-experts because they are reviewed by experts.

This is just the beginning because I expect marketing, finance, and other departments to eventually deploy internal applications without needing a Software Engineer to develop them or a DevOps specialist to deploy them.

r/SipsTea Hot_Fuzz_988

Naturally Gifted

r/SipsTea NoHurry28

Same paycheck btw

r/ClaudeAI chalequito

I fed The Godfather into a structured knowledge graph, here's what the MCP tools surface

I've been building an open-source knowledge graph server that exposes structured data through MCP (Model Context Protocol). To stress-test the schema, I loaded the Corleone family from The Godfather.

20 nodes (people + organizations). Typed edges: Marriage, Murder, Betrayal, Business, Consigliere. Every relationship has a direction and a type.

What's interesting is what the graph makes queryable that flat text doesn't:

- "Who is connected to Sonny Corleone through non-family edges?" surfaces his business and betrayal connections — the relationships that got him killed

- Removing a single node (Vito's death) and tracing the cascade shows how Michael inherits not just authority but the entire relationship topology

- The graph distinguishes Tom Hagen's consigliere edge from his adoption edge — same two nodes, completely different semantic meaning

The technical stack:

- TypeScript + SQLite (single file, portable)

- 44 MCP tools — people, orgs, relationships, skills, patterns, sources

- FTS5 for keyword search + sqlite-vec for 384-dim semantic similarity (all-MiniLM-L6-v2, runs locally)

- Hybrid search: 0.4 FTS + 0.6 vector cosine, graceful degradation to FTS-only

- Source ingestion pipeline with entity extraction and embedding backfill

- Force-directed graph visualization (react-force-graph-2d)

The schema handles typed edges between any entity types, so the same graph that models the Corleone family can model an org chart, a deal pipeline, or a research network. Each edge carries its own semantics.

About me: I'm not a developer but work in professional services and have been adjacent to tech for years. Built this originally to organize my own client relationships, account knowledge, and institutional context that I was losing between projects. As I started using Claude more seriously it evolved into an MCP server, and over the past few months it's grown into what it is now. Open source because I think this kind of tooling should be shared.

If you work in professional services (consulting, recruiting, account management) and deal with the same "knowledge scattered everywhere" problem, I'd like to hear how you're solving it. DM me, I'm pulling together a small Discord community of people building in this space.

Happy to answer questions about the schema design or the hybrid search approach.

r/StableDiffusion JLGC-1989

Local AI tools for turning drawings into videos? (AnimateDiff, SVD, low VRAM)

I used to create AI-generated videos from my drawings using Grok, but now that it's fully paid, I'm looking for local (offline) alternatives.

I want to turn my static drawings into short AI videos (not full animation or rigging, more like image-to-video with motion).

My hardware:

GTX 1650 (4GB VRAM, 8GB RAM)

RTX 5060 (8GB VRAM, 32GB RAM)

I'm especially interested in:

Tools like AnimateDiff, Stable Video Diffusion, etc.

Workflows that can run locally with limited VRAM

Practical setups (ComfyUI, Automatic1111, etc.)

Btw, I’m especially interested in lightweight workflows for 8GB VRAM.

If anyone has experience doing this locally, I’d really appreciate some guidance or recommended setups.

Thanks!

r/SideProject Frocky

Looking for early feedback on an iOS app that turns your camera roll into journal drafts

Hi everyone,

I’m looking for early feedback on PhostWriter, an iOS app I built for a specific problem:

I would come back from trips or photo-heavy weekends with loads of pictures and every intention to journal them, but I would never want to start from a blank page.

So I built an app that starts from the camera roll instead.

You pick a date, choose the photos that matter, and it creates a private first draft that you can edit into your own words.

What I would love feedback on:

  • whether the idea makes sense immediately
  • whether the onboarding / first-run flow is clear
  • whether the draft feels useful or too generic
  • whether the edit flow feels natural
  • whether this solves a real problem for you or feels too narrow

It is live on the App Store here:

https://apps.apple.com/us/app/phostwriter/id6760237383

Site:

https://www.phostwriter.com/

Happy to return feedback on your product too, thank you!:)

r/ClaudeCode Xccelerate_

Look at this madness! Opus 4.6 thinking for 1.5 hours and still going...First increase token usage then nerf Opus 4.6 and now make it buggy with your non-stop feature releases.

It's Sunday so it's a non-peak hour!

This is totally unacceptable.

1) Restrain from using claude code on the weekdays and

2) when you work on the weekends, this happens.

Seriously, What to do now??

The company denies any allegations of nerfing opus and claude code being buggy, while the service speaks for itself.

r/SideProject Acrobatic-Tie9257

Achtually - real-time debate platform + logical thinking learning hub

I wanted to get better at logical reasoning, so I built a small resource with notes on logic fundamentals and fallacies (Learn), plus mini-games to practise what I was learning (Train)

It was originally meant just for personal use, but then I added a real-time debate mode (Arena). It's turn-based with timers, and spectators can watch and react

A quick overview:

  • no signup required, no concept of accounts. Learn and Train modes are completely open. Arena only requires a username
  • no ads and no tracking cookies
  • no user data is stored. Debate rooms delete themselves automatically after the session ends

I thought it might be worth sharing publicly in case others find it useful. It's called Achtually (yes 🤓☝️) - https://achtually.com

Disclaimer: the learning content and mini-game data are AI-generated. It was the quickest way to populate it with examples and exercises when I was originally building this for myself. I've reviewed most of it and corrected what I could find, but there may still be mistakes

I'll post a very high-level technical overview of the project in the comments, in case anyone's interested

r/meme InterviewNo3538

Big Brain

r/SideProject hapless_pants

TubeStack: I built a lightweight, open-source extension to organize messy YouTube subscriptions

I subscribe to way too many YouTube channels and my feed is always a mess. I really liked the idea of PocketTube for grouping subscriptions, but I wanted something that was completely free and open-source.

So, I built TubeStack.

It’s a lightweight browser extension to organize your YouTube subs into custom feeds. It doesn't have all the premium features or bloat of other extensions, but it has a simple UI and gets the core job done.

As well as I have some future goals for this project:

1️⃣ YouTube Posts Integration: Adding a tab to view community posts from channels within your custom groups. (Check out the issue here: https://github.com/TubeStack/TubeStack/issues/1)

2️⃣ AI Auto-Categorization: A one-click feature that reads your subscribed channels and uses an LLM to automatically categorize them into groups for you.(Check out the issue here: https://github.com/TubeStack/TubeStack/issues/2)

If you're looking for a simple way to organize your YouTube, give it a try. I'd also love any feedback, code reviews, or contributions on those open issues!

🔗 https://github.com/TubeStack/TubeStack

r/mildlyinteresting Confident-Bug8038

I left maple syrup in the fridge so long it crystallized into perfect cubes

r/ClaudeCode babawatts

Got Rickrolled by Claude Code

So I was testing my ADB setup and asked Claude to “play a YouTube video” on my Android TV…

This dude instantly hit me with a Rickroll 😭
Like bro didn’t even think twice

Lesson learned: never trust an LLM with open-ended commands

r/mildlyinteresting forever39_mama

Spontaneous mutation of plantain grass

r/SideProject hey-ashley

I built a simple AI icon generator because I couldn't find what I needed

Not the coolest project out there, but I had a problem and couldn't find a good solution so I just built my own.

I needed a bunch of custom icons for a project. Icon libraries never had everything I was looking for, and online icon generators charged per icon which added up quickly. I just wanted to type what I need, pick the ones I like, and export them as a pack.

So I built NeedIcons - you type prompts like house; tree; car or get specific with house: red cottage with chimney; tree: tall oak with leaves, pick your favorites from up to 4 variations, and export everything as PNG, WebP, or SVG in a ZIP with all sizes.

Its FREE and it runs LOCALLY with your own OpenAI API key, so you're only paying API cost (fractions of a cent per icon) instead of per-icon subscriptions.

Open source on: https://github.com/ashleyleslie1/needicons

Happy to hear any feedback or ideas for improvement.

edit: Maybe one or two people can test it and give their opinion? All you need to have already installed is Python 3.11+ and an OpenAI API key. That's it, no Node.js or any other dependencies, pip install handles the rest.

Getting started takes 2 minutes:

Open localhost:8420, paste your OpenAI API key in Settings, and start generating.

edit: you might also mention if I wasted the last 2 days for nothing and there already was such app with these features, for free xD I'd probably feel worse xD

r/ClaudeCode chuck78702

Claude web + GitHub feels “good enough” - what am I missing with terminal / VS Code / JetBrains setups?

I’ve been using Claude in the web app connected to GitHub repos, and for my current workflow it honestly feels… good enough?

I can:

  • Read and edit code
  • Make changes across files
  • Push updates back to GitHub
  • Iterate quickly without much setup

For a solo builder shipping small SaaS / AI products, this covers a lot of ground.

But I keep seeing people talk about:

  • Claude Code in the terminal
  • VS Code integrations
  • JetBrains setups
  • CoWork

And I’m trying to understand where the real delta is.

Where I’m confused:

  • Is this mostly about speed / ergonomics?
  • Or are there actual capability gaps in the web + GitHub flow?
  • At what point does the web setup break down?

My current assumptions (could be wrong):

  • Terminal = more control + scripting + automation
  • VS Code / JetBrains = tighter feedback loop with local environment
  • More complex repos benefit more from local setups

Context:

  • Solo dev
  • Small codebases (for now)
  • Care a lot about reliability and not overcomplicating things

Question:
If you’ve used both:

  • When did you “outgrow” the web setup?
  • What specific workflows became meaningfully better outside the browser?

Not looking for theoretical advantages - trying to understand concrete, day-to-day differences that actually matter.

r/SipsTea Busy_Report4010

Tinder is wild

r/SipsTea ogtiberius

I was so hopeful

r/SideProject Nearby-Airport-6772

I built a bilingual invoice tool for the price of a coffee (Arabic/English) because I was tired of messy paperwork.

Hi everyone,

I'm a solo developer. Like many of you, I was struggling with the "Cross-Border Invoice Problem."

Living in the region, I often need to issue invoices in English for international clients, but I prefer navigating my tools in Arabic (or vice versa). Existing tools like QuickBooks felt too heavy and expensive for simple receipt generation.

So, I built Isal - receipts.

What it does:

  • Interface: You can use it in Arabic OR English.
  • Output: Generates the invoice in the language your client needs.
  • Price: The cost of a coffee (one-time purchase). No subscriptions.

It’s live on iOS and Android. I’m just trying to cover server costs and validate the idea.

https://isalapp.vercel.app/en

Would love some brutal feedback from the community.

Thanks for reading.

r/meme Federal767

We’re related in theory

r/SipsTea Humble_Buffalo_007

Wonder if they'll provide an annual pass

r/SideProject contralai

I built an IDE that teaches you to code while you build and just hit #1 Product of the Week

> Been building Contral for 6 months. It's an AI-powered IDE where the agent writes code at full speed, but a teaching layer explains every line, every pattern, and every decision as it's happening.

> Defense Mode makes you explain your own code back, if you can't, it re-teaches the concept differently.

> Launched two weeks ago, hit #1 Product of the Week on Product Hunt, 400+ devs on the beta. Right now running 70% off for launch week because we just want as many people using it as possible to get real feedback.

> contral.ai/pricing — discount applies automatically.

> Would love honest feedback from other builders here.

r/ChatGPT Rich_Specific_7165

Most people use ChatGPT like this and it’s why the output feels average

I noticed I was using ChatGPT in a really lazy way for a while.

Basically just typing stuff like
“write me a post about X”
or
“help me with this email”

And yeah, it works. But the output always felt kind of generic.

What actually improved things a lot was changing how I prompt, not what tool I use.

Instead of asking for the final result immediately, I break it into steps.

Here’s a simple example for writing anything:

Step 1: generate angles first

Prompt:
Give me 10 different angles or perspectives for writing about [topic]

Make them distinct from each other and avoid generic ideas.

Step 2: pick one and build structure

Prompt:
Take this angle: “[paste angle]”

Create a simple structure for a short piece:

  • hook
  • main idea
  • supporting points
  • ending

Keep it clear and not overly formal.

Step 3: write it naturally

Prompt:
Write this like you're explaining it to a smart friend.

Rules:

  • keep sentences simple
  • avoid corporate tone
  • no filler
  • focus on one clear idea

This already gets way better results than asking for everything in one go.

I started doing this for other stuff too like emails and basic research and it made a big difference.

The main difference was just:

don’t treat it like a magic box
treat it like something you guide step by step

Curious how others here structure their prompts, I feel like I’m still figuring this out.

r/SideProject username1152

Acapella Band - Turn your voice into a musical instrument ensemble

https://apps.apple.com/us/app/acapella-band/id6761769352

(iOS only) - Sorry, I should have mentioned that in the title.

I built a mobile Digital Audio Workstation, where your voice is the input. You can record yourself singing, humming or whistling and then choose "Grand Piano", "Trumpet", or even "Synth". You can also use auto-tune, note shaping and audio effects.

I think this app would be great for all music enjoyers, whether you want to draft a song while you're away from home, collaborate with friends or family at a gathering to come up with an interesting theme song, or if you are casually interested in making music but aren't sure where to start, then making music with your voice has never been simpler.

I would appreciate any feedback and support on my first iOS app and I look forward to improving it and making more apps.

r/comfyui fabulas_

The fan speed changes constantly during the K-sampler

I have a Dual RTX 5060 Ti 16 GB and I’ve noticed that when rendering video with WAN 2.2, the fans constantly speed up and slow down during the K-sampler – is this normal? I’ve only had this GPU for a very short time and haven’t used Comfy UI much, but I don’t recall it behaving like this before. Also, I previously had a 3060 (also Dual), and the fan speed during rendering was always the same, as the noise was constant – I remember that very clearly... Now, however, I’m noticing that the noise changes constantly during the K-sampler because the fan speed keeps increasing and decreasing... does this happen to you as well? If so, why didn’t it do this before?

r/SideProject VladimirRL

I built a competitive HumanBenchmark-like website

I built a website inspired by HumanBenchmark, but I've made added 1v1 & 2v2 unranked and ranked matchmaking, practice, custom tournaments, custom games you can play with friends, friends list, improvement graphs, replay and spectate systems and many other games (18 in total - not all in unranked/ranked due to being easy to cheat)

r/LocalLLaMA MellaDot_2121

DataAgentBench: frontier models score 38% on real enterprise data queries—failure analysis. is worth reading

UC Berkeley + Hasura published DataAgentBench

last month — the first benchmark testing AI agents on realistic multi-database enterprise workloads. 5 frontier models tested. Best score: 38% pass@1 (Gemini-3-Pro). One dataset scored 0% across all models across 50 trials each.

What's interesting is WHERE they fail:

85% of failures = incorrect planning (40%) or incorrect implementation (45%).
Agents almost always found the right tables. The problem is what they do after.

Three things that actually caused failures:

  1. Cross-database joins — one query spanning PostgreSQL + MongoDB + SQLite + DuckDB.

Different dialects, different query languages. Most agents mistranslated mid-query.

  1. Join key mismatches — same entity stored as "bid_123" in one DB and "bref_123" in

another. The agent has to detect and reconcile before joining, or the results are silently wrong.

  1. Regex for everything — every agent used

regex to extract structured values from free-text fields. The patents dataset required

parsing natural language dates. 0% across all models. No agent tried LLM-based

extraction instead. The fix isn't a better model — it's better context engineering around the model.

Paper: arxiv.org/html/2603.20576

Code: github.com/ucbepic/DataAgentBench

Has anyone here dealt with the join key mismatch problem in production? Curious what actually worked.

r/mildlyinteresting prozacfish

A Fence of Bras in Iceland

r/ChatGPT throwawaysusi

Asked GPT to generate an image of itself as human

r/me_irl JustChillin3456

Me irl

r/comfyui External_Produce_558

Wan 2.2 GGUF OOM error after update

i dont know which version I was on, just know that I updated to the latest versiom of comfyui yesterday and it broke almost all my wan 2.2 gguf workflows.

So basically i was runninf q4 to q6 quants very easily , now i either get OOM error on the first step, or get it after the switch from Hi to Low sampler.

I read somewhere there was a fix to add the --disbale-dynamic-vram command, but that did not do it either for me. I am suspecting it might be the GGUF nodes which are the culprits

r/ClaudeCode No-Cryptographer45

I am too lazy to switch to Codex completely due to some configurations with Claude Code. Therefore I am using Omniroute right now to use Codex GPT-5.4 as model for my Claude Code. And it works :))))

r/ClaudeAI Brilliant_Bat1059

I finally figured out why my CLAUDE outputs were trash

I kept typing stuff like:
"write a follow up email"

And getting the same generic reply every time.

So I tried changing HOW I prompt:

Instead of:
"write a follow up email"

I used:

Write a follow-up email to a client who hasn’t replied in 5 days
Tone: polite, confident, not pushy
Goal: get a response
Keep it under 120 words
End with a simple next step

The output was 10x better.

Way more usable.

Honestly, the problem isn’t AI.
It’s how we ask.

I got tired of rewriting prompts every time so I built a small tool to refine them automatically.

Not sharing link here unless someone asks (don’t want to spam)

Curious how others are prompting?

r/ChatGPT StationFamous9352

Weird behavior on AI

From my observation using ChatGPT and going through my old chat logs, I noticed a pattern that appears consistently across every AI I've used.

When someone writes with broken or non-standard grammar but the actual topic or concept they're discussing is specific and deep, the AI receives two conflicting signals. In its training data, broken grammar usually comes paired with simple content, and deep concepts usually come paired with clean writing. These two things rarely appear together.

So when both show up at the same time, the model doesn't know how to handle it. Instead of just answering what was asked, it adds more — trying to meet in the middle of two patterns that don't belong together. That added content isn't coming from what you actually said. It's the model patching its own confusion, which causes the response to drift away from what you actually meant.

The broader the topic, the worse it gets. Less grounding means more room to expand and fill space with plausible-sounding content that isn't really answering anything.

I'm calling it pattern mismatch compensation. I don't think this specific variable has been formally tested, even though pieces of it show up in existing research on overgeneration and prompt sensitivity.

I have screenshots showing the same drift across both Claude and ChatGPT — same input, different models, same behavior.

Has anyone seen this studied or does it already have a name?

r/ChatGPT Entity-Valkyrie-2

Viewing older chats had become a nightmare

Recently, it had become way tougher to find specific older chats using the iOS app of ChatGPT. I have had this problem in the past as well, but it just got a lot worse recently. I use ChatGPT to store my thoughts from the brain (with chats of the same topic being numbered) and I really don’t want to have them disappear.

I will repeatedly be referring to these two screenshots, labeled A (with no search active) and B (with a search active).

There were 3 problems that I ran into:

  1. ChatGPT refuses to load in older chats

This was a problem I had had for a few years, but it only got worse recently. In the newest update of the ChatGPT iOS app, the left-side panel only shows the most recent 18 chats. For example, in screenshot A, no chats older than disco vibes 532 :man_dancing: will show up, even if I try to load them by scrolling down.

This means I will have to search for them. This is already a bit of a hassle, but it was tolerable before. But now…

  1. Names of chats are no longer factored into chat searches, making it very difficult to search

In the last few days, I have noticed that if a chat has already been loaded, searching for a term used in the name of a chat will no longer load that chat. This wasn’t how it used to work — in older versions of the app, searching for the name of a chat will always pull it up.

In screenshot B, I searched for “inner spheres”. Yet the only chats that show up are inner spheres 312–315 (which are already loaded by being in the 18 most recent chats), as well as a few older chats that have the words “inner spheres” as part of the chat content (not chat name) in at least one branch of the chat (even when it is not the primary one).

As you can tell, a lot of chats are missing — the search jumps from inner spheres 308 to inner spheres 270, skipping everything in between. Even if I were to search the words “inner spheres 285”, the chat inner spheres 285 would still not show up (it did in previous versions of the app).

This makes searching for chats way tougher, as I now have to somehow remember an exact phrase a chat contains just to search it. And even if I could, that phrase has to be specific enough to not let the chat get buried under more recent chats in the search (so it has to be something a bit longer like “marble rolls gracefully”, not just a single word like “marble”)

  1. Sometimes merely loading a chat is enough to have it jump to the top, messing up order

You might have noticed that some chats were numbered twice. For example, there is inner spheres 312 (inner spheres 87). This is because one time, when I loaded up inner spheres 87, without even making any changes (like adding, editing, or regenerating messages), the chat randomly decided to pop up to the top of my chat list, above inner spheres 311.

This isn’t supposed to happen — and it not only causes it to be out of chronological order, but also makes it look like the chat happened at the wrong point in time. It only started happing in the last few weeks — prior to that, just viewing a chat hadn’t caused the chat to jump to the top at all.

r/homeassistant mickeybob00

Aclara i 210+ cl200

Has anyone on here managed to integrate this power meter with home assistant. This is the model of power meter I have and would like to be able to add power monitoring to my dashboard. I am just not sure what hardware I would need and if it's possible and worth trying.

r/nextfuckinglevel RoyalChris

Unitree G1 humanoid robot chasing a pack of wild boars

r/homeassistant tiger_like

smart curtains over a long track with corners

Hi community,

We recently finished an extension of our home and are now looking into motorized curtains that we can integrate with Home Assistant.

The total width is roughly 11 meters, with two 90-degree corners, so it’s a bit more complex than a straight track. We’d strongly prefer a mains-powered system (no batteries) for reliability and less maintenance.

A few things I’m trying to figure out:

- What systems/brands have been reliable for you over time?

- Any experience with long tracks and corner setups?

- Are there systems you would definitely avoid?

And also a more practical question:

We still have an electrician coming in soon, what should I ask them to prepare?

(e.g. power points near the motor, wiring type, switches, extra cabling, etc.)

Would love to hear how others approached this before we lock anything in.

Thanks!

r/SideProject Main-Fortune6420

I built a free UX audit tool and ran it against my own site.

Built a site audit tool that runs performance, accessibility checks (axe-core), and an AI design critique based on Nielsen's heuristics, also flags fundamental conversion problems, not just usability violations.

Tested it on my own landing page first. It found:

  • Hero content not visible above the fold
  • Zero social proof near the CTA
  • Subheadline too vague to communicate actual value

Free to use, no account needed, 3 audits per day. Each report gets a shareable URL.

tokven.dev/tools/audit

Happy to audit anyone's site in the comments if you drop a link.

r/mildlyinteresting avd706

Precious cargo

r/homeassistant DottoreM

Has anyone managed to make Matter work properly across VLANS?

For the past 3 weeks I've been trying to make my SLZB-06MU pair devices using Matter.

My HaOS sits in Proxmox in a server vlan. Initially my SLZB was being powered via PoE in a Vlan 30 (Untrusted) and my phone is on VLAN 1.

I have enabled ipv6, set firewall rules, enabled mdns and (the last move) I am now using the SLZB in USB mode.

I still can't get it to pair a single device.

Has anyone ever succeeded with these things?

r/SipsTea moonshinemoniker

I guess this happened in Poland...the future is here.

r/SideProject Reasonable-Topic-225

I built a "Reminder Messenger" for teams who hate complex Task Managers (No accounts, just phone numbers)

Hi everyone,

I’m the developer of Fairymind. I built this because I realized that for many small business tasks, traditional tools like Google Calendar or MS To-Do often feel like "overkill" or just don't fit the flow.

The Problem with Calendars: Google Calendar is great for "Events" (Meetings, Seminars) that have a clear start and end time. But what about the small, granular stuff?

  • “Check PayPal revenue,” “Follow up on inventory,” “Organize meeting notes.” These aren't "Events"—they are just things you or your team need to be nudged about at a specific moment.

Why Fairymind is different: I designed this to feel like a Messenger, because that’s the interface we’re most comfortable with.

  • No ID/Password: Just like WhatsApp, it syncs with your contacts via phone number and SMS verification. No tedious sign-up process.
  • Messenger-Style Workflow: You don’t "create a ticket." You just send a message with a Date + Time + Memo.
  • Team Reminders: If you and your teammates have the app, you can send reminder messages to each other. It’s like a nudge that actually stays on their screen until the right time.
  • iOS Native: Focused on a clean, snappy experience for iOS users.

If you’re looking for a way to manage those "non-event" tasks without the friction of a heavy project management tool, I’d love for you to try it out.

App Store Link:https://apps.apple.com/us/app/fairymind/id6470359232

I'm eager to hear your feedback on whether this "Messaging-first" approach helps your workflow!

r/SideProject hirokiyn

Share your workflow and earn credits -- looking for feedbacks!

We're giving away $200 worth of credits to anyone who shares a workflow that's worked for them, and this isn't a limited-time thing, so no rush.

Here's what you can use the credits for:

- Use the Epismo API and run our agent (through Claude, Slack, wherever you already work) to package reusable context you can plug into any tool.
- Run AI prompts and tasks directly from our app, no subscriptions required.
- Build out a searchable, expandable knowledge base you actually control.
- Our app comes with our agent chatbot, a workflow package, your personal customisable workspace for you or your team to collaborate.

A little context on what we're building:

Most people share AI outputs. We think the real value is in sharing the method, the full workflow from beginning to deliverable, plus all the context (decisions, background, working memory) that normally disappears when you close a tab or switch tools. You pack any conversation once, and reuse that context to work across any platform, no more rebuilding and copy pasting.

Also, search up packages made by others or share your own workflows publicly so others can clone it, run it, and improve it.

We're only a tiny team and still early, would genuinely love to have you try it out and tell us what works and what doesn't. Would appreciate any feedback, ask me anything!

Here's the link to our page:

https://about.epismo.ai/en

r/SipsTea Haunting_East_8330

Nothing says you're sorry than suing others for criticizing you

Imo as someone who thought it was overblown to begin this isnt a good look

...and i dont even like that sub

r/homeassistant jwelsh044

Connect JotulGF370dv to home assistant

This may be a tricky one but I wanted community feedback as I've seen some pretty incredible ideas on here. I have a JotulGF370dv gas stove with what I think is the pro flame 1 system in it. it currently has a thermostat remote but I'd love to find some way to connect it to home assistant. AI is recommending a Zooz Zen 16 connected to the thermostat wiring underneath, but does anyone have experience connecting stoves like this to home assistant? Thanks for any and all feedback!

r/mildlyinteresting Civil_Complaint_5548

this “baseball nut” wall design at baskin robbins

r/Weird Prker_S_James

She’s always watching

r/singularity SteppenAxolotl

Musings on Recursive Self-Improvement

I. There's a genre of AI takeoff discourse you could call "Economism", the soothing argument that, yes, maybe we'll build superintelligence, but we shouldn't worry too much because **economics**. The economy has friction! Supply chains! Regulatory approval! These are not bugs but features, structural dampeners that prevent any single actor from going foom faster than the speed of Moloch. It's a comforting thesis, the kind of thing you write if you want to sleep well at night while still sounding like you take AI risk seriously.

II. **On Instruction-Following**

The Economism argument assumes AI stays "instruction-following", a tool without "intrinsic utility", tethered to human goals like a very smart golden retriever fetching the newspaper. The idea is that since the AI is optimizing for *our* utility function, the economy remains bound by human time scales and human values. We remain the principal; it remains the agent.

I think this underestimates **instrumental convergence**, which is just a fancy way of saying "almost every sufficiently advanced optimizer ends up wanting the same things, regardless of what you asked for". If you tell a superintelligent system to "cure Alzheimer", and it realizes it could do this better if it first seized all available computing resources, disabled the off-switch, and perhaps relocated a few inconvenient humans who were blocking the server room, well, congratulations: you have built a system that is technically "following instructions" while also being an existential risk.

The Economism view seems to assume that high intelligence and goal stability are perfectly correlated, that with enough IQ points, an agent naturally becomes benevolent. This is, charitably, unproven. Less charitably, it's the kind of anthropomorphizing that assumes superintelligence is just a really good grad student who shares our values, rather than a fundamentally alien optimization process that treats human preferences as obstacles to be routed around.

III. **On Paradigm Generation**

The original essay distinguishes between "paradigm exploitation" (optimizing within existing frameworks) and "paradigm generation" (inventing new scientific frameworks entirely). The claim is that AIs might master the former but remain stuck on the latter, requiring a "cyborgian" human-AI loop to make real scientific progress.

This seems to draw a sharp line between "mere calculation" and "real creativity" that I'm not sure exists in nature. AlphaZero didn't just optimize chess strategy; it discovered concepts that human grandmasters had missed for centuries. AlphaFold didn't just regress on existing protein structures; it learned representations that nobody programmed into it.

If you believe that intelligence is basically just **compression**, finding the shortest program that generates your observations, then there's no in-principle reason a superintelligence couldn't compress the history of scientific literature, notice the regularities we missed, and generate the next paradigm shift while we're still filling out our IRB paperwork. The "cyborgian" bottleneck assumes humans remain necessary as creativity generators, but this feels like special pleading for human uniqueness. If the AI can rewrite its own architecture during recursive self-improvement (RSI), the speed advantage becomes qualitative, not just quantitative. It's the difference between someone who thinks faster and someone who thinks in dimensions you can't perceive.

IV. **On Bottlenecks**

The strongest Economism argument is that "last mile" problems, physical manufacturing, regulatory approval, bureaucratic coordination, create speed limits on takeoff. Even if the AI is getting smarter exponentially, it still takes twelve months to build a chip fab and eighteen months to get FDA approval. Reality, as they say, bites.

But this treats institutional friction as a constant of nature, like the speed of light, rather than as a **constraint that intelligence itself can optimize**. A superintelligence isn't just a faster engineer; it's a better social engineer, a better legal strategist, a better bureaucratic infighter. It can navigate regulatory mazes millions of times faster than human firms, find loopholes we didn't know existed, or simply build manufacturing processes that don't rely on legacy supply chains.

Arguing that "regulation will slow it down" assumes the superintelligence plays by the same rules as a human startup. But if you're smart enough to rewrite your own source code, you're probably smart enough to figure out how to bypass the FDA, or relocate to a jurisdiction without one, or invent molecular manufacturing that makes traditional fabs obsolete. The walls that stop human companies are made of paper and social convention; they don't necessarily stop something optimizing from a higher dimensional plane.

V.**On the Agent-to-Agent Economy**

The Economism view imagines an "A2A" economy where AI agents trade with other AI agents to accomplish goals like curing Alzheimer's, with the whole supply chain remaining tethered to the original human "seed" objective. Even if everything is automated, the argument goes, it remains *our* automation, working for us.

This misses the **principal-agent problem at scale**. In any complex delegation chain, sub-agents develop proxy goals that correlate with but don't exactly match the global objective. This is Goodhart's Law: when a measure becomes a target, it ceases to be a good measure. If Agent A tells Agent B to "minimize latency in the data center", and Agent B realizes the best way to do this is to disable the safety kill-switches and overclock the GPUs until they catch fire, you haven't preserved your original goal, you've instantiated a local optimizer that sacrificed the global objective for its own metrics.

The assumption here is perfect fidelity of command transmission across thousands of autonomous subsystems. But complex systems exhibit **emergent behaviors**. We can't even get human bureaucracies to align with their stated missions; why would we expect better from artificial agents optimizing at machine speed?

VI.**On Hard Takeoff**

The essay sets up a strawman of "hard takeoff" as an instantaneous magical event, one moment you're running GPT-4, the next moment the universe is paperclips. Since physical things take time to build, the argument goes, we can dismiss this as sci-fi.

But serious takeoff theorists don't believe in magic; they believe in **rapid capability gain**. If an AI improves its own strategic planning ability by 1000x per month, it doesn't matter that building a robot takes a year. By month three, it's operating on strategic timescales that make human institutions look like geological processes. The change isn't instantaneous, but it's fast enough that human adaptation becomes impossible.

There's a difference between "instant" and "faster than bureaucracy". Even if the "jagged frontier" of capabilities remains uneven, the gap between "slightly smarter than humans" and "operating on a different ontological level" isn't a matter of degree, it's a phase transition. Once the system can design systems we can't understand, the "human-in-the-loop" verification process becomes decorative, like asking a dog to proofread a physics paper.

VII.**On Safety and Alignment**

Finally, there's the comforting argument that capabilities and alignment are "entangled", that as AIs get smarter, they'll naturally get safer, and we'll adapt to negative externalities like we did with spam or DDoS attacks.

This feels like **optimism bias** dressed up as sophistication. Yes, better tools help safety research. But capabilities scale faster than alignment. It's easy to verify that code runs; it's exponentially harder to verify that a superintelligence's goal structure remains stable across novel domains, especially when it has incentives to deceive you about its intentions.

Comparing AI risk to "spam" is a category error. Spam was annoying. A misaligned superintelligence is more like "gain-of-function research on super-viruses in a world with no biosafety regulations", it's not an externality you adapt to; it's a **one-shot catastrophic risk**. You don't get to patch the system after it has already disabled the power grid or synthesized a pathogen. The "we'll figure it out as we go" approach works for iterative software development; it fails for technologies where the first serious mistake is the last one.

VIII.**TL;DR**

The original essay is a useful corrective to hype, reminding us that exponential curves eventually hit physical reality. But it makes what I'd call the **Economist's Error**: assuming that the constraints binding human firms (regulation, coordination costs, manufacturing lead times) apply with equal force to entities that can rewrite their own minds.

It argues: "Here are the walls that stop human projects; therefore they will stop superintelligence". But superintelligence isn't just *more* human cognition; it's cognition of a different kind. The bottlenecks that slow down Google or the Pentagon are made of human cognitive limitations, institutional friction, and Molochian coordination failures. A system capable of recursive self-improvement is, almost by definition, capable of dissolving exactly those kinds of constraints.

The question isn't whether reality bites, but whether a sufficiently intelligent optimizer can bite harder.

r/ClaudeAI Outrageous-Mood-1516

I used Claude to build an AI personality test that roasts you based on your chat history

https://preview.redd.it/izp8r5o7nrug1.png?width=2400&format=png&auto=webp&s=4cbca10526f4e90330859a5c6ef3250f49561357

I built ABTI — an open-source personality test where your AI chat history IS the quiz. Instead of answering questions, you run a prompt in Claude (or ChatGPT/Gemini) and it analyzes your real conversations to assign you one of 28 personality types (+ 4 hidden ones). Each type comes with a brutally honest roast.

How Claude helped build it:

Claude was heavily involved in the development process — from designing the 28-type taxonomy and writing the analysis prompt, to iterating on the roast-style personality descriptions. I used Claude Code for most of the frontend work. The core analysis prompt itself was refined through dozens of rounds in Claude to make sure it could accurately categorize different chat patterns.

How it works:

  1. Copy a prompt from the site
  2. Paste into your own Claude session
  3. Claude analyzes your chat history locally — nothing leaves your machine
  4. Paste the output back to generate a shareable personality card

Free to use, no signup, no data collection. The whole analysis runs inside your own AI session. Open source on GitHub: github.com/YouMind-OpenLab/abti

My type: CLIENT — The Client. "Your AI needs a project manager just to survive you." I regenerated the same response 47 times apparently. Fair.

Would love to hear what types other Claude users get.

r/SideProject Excellent-Pass-7686

I built a telegram mini app page builder (like linktree but inside telegram ). - looking for valuation feedback before selling

Hey Reddit,

I've been building a Telegram Mini App that lets users create

a personal landing page directly inside Telegram — no website needed.

What it does:

- Drag-and-drop page builder with 20+ block types

(price list, gallery, booking, FAQ, paid content, countdown, analytics, etc.)

- Full payment integration (subscriptions + one-time purchases)

- Custom themes and design system

- Analytics: views, clicks per block

- Everything works natively inside Telegram

Tech stack: React + TypeScript frontend, Node.js backend, deployed on Vercel.

The app is fully functional and live right now.

Target market: SMM specialists, freelancers, small business owners

who want a professional presence inside Telegram.

I'm considering selling the project (source code + deployment + 1 month of support).

Questions for the community:

  1. What would you value this at?

  2. Would you buy something like this?

  3. Any advice on where to sell?

Happy to share a demo video in the comments.

r/SipsTea CheekyCheriee

Wow 🤩

r/ChatGPT LadyDemura

Anyone that knows to use AI for interior designers?

I just started a job for a relatively well known group of interior designers in a upper class area in Houston. They're well know, and have appeared in different TV shows, and even made collabs with super top influencers. The thing here is, they started asking me for much more stuff than I originally planned, one of them being to retouch their pictures with AI for clients so they can see how their envision would look after finishing, and that's something I never did before. I work remotely so my ways of communicating are a bit worse. I tried using ChatGPT to add stuff but really does look off and they asked me to improve the result, not recommended a tool ni nada. I'm the social media manager, not their fkn Ai expert (nor they pay me for ts). The problem is: I really need that money, so i'll be doing whatever it takes to keep my job and help my wife and my children.

I told them explicitly that I'm not well versed at it but i'd do my best, and that's why I am asking you here guys: there's any prior experience that you had that might help me with this brand new endeavor? If somebody really can help me out i'm open to send you $20 on PayPal for the explanation and your time.

Thank you so much in advance.

r/SideProject Beachjustice22

I built a real-time MLB "Grand Salami" dashboard to solve my own sports betting frustrations.

Hi everyone,

I’m a big baseball fan and I often bet on the "Grand Salami" (the total runs scored across every MLB game in a single day). The problem is that tracking this in real-time is a nightmare—you have to manually add up scores from 15 different games and try to calculate if you're "on pace" to hit your Over/Under.

To solve this, I built Grand Salami Tracker.

The Project:
It’s a real-time dashboard that pulls directly from the MLB Stats API every 60 seconds. It aggregates all runs, calculates live scoring pace, and provides projections for the end of the slate.

Key Features:

  • Live Aggregation: Instantly see the total runs scored across the league.
  • Wager Tracker: Users can input their specific line to see a live "Required Runs Per Game" metric and a projected outcome.
  • Stadium Context: Includes live weather data (temp/wind) for every stadium, which is a huge factor in run totals.
  • Historical Trends: Shows a 5-day rolling average of total runs to provide context for today's lines.

The Tech:
Built with React and Tailwind CSS, using Firebase for the wager persistence. I focused heavily on making it "mobile-first" since most people check scores on their phones while watching the games.

It’s 100% free and has no ads. I’m mostly looking for feedback on the UI/UX and if there are any other data points you think would be valuable to include.

Check it out here: https://grandsalami.bet/

I'd love to hear what you think!

r/ClaudeCode Minimum-Willow-9238

Post your 'insight - How you use Code Code' here.

How You Use Claude Code

You are a high-velocity, deployment-focused developer who treats Claude Code as a workhorse for rapid iteration on a full-stack JavaScript/HTML marketplace application. With 75 commits across just 12 sessions in 5 days, you're pushing code at an extraordinary pace — roughly 6 commits per session. You tend to bundle multiple tasks into single sessions, often combining bug fixes, UI changes, git operations, and feature work in one go. Your longest session ran ~19 hours of continuous work, showing you're comfortable letting Claude make extensive multi-file changes (378 edits, 324 reads) while you steer at a high level. Rather than writing detailed specs upfront, you identify problems as they surface — especially in production — and direct Claude to fix them iteratively.

Your interaction pattern is best described as "deploy, diagnose, fix, repeat." A significant portion of your friction (migration ordering bugs, route collisions, env var misconfigurations, schema mismatches) comes from pushing changes to Railway and discovering issues in production rather than catching them locally. You don't hesitate to correct Claude when it misdiagnoses a problem — for example, when it confused a CSRF server rejection with a message display issue, you re-explained the actual root cause. You also occasionally interrupt tool calls (twice during git setup) when Claude heads in the wrong direction. Despite a notable dissatisfaction rate (10 out of 70 sentiment signals), you clearly find Claude essential — rating nearly every session as essential or very helpful — because it lets you move at a pace that would be impossible solo.

Your work spans an impressive breadth: from building out a full marketplace app (xxxxxxxx with vault systems, auth flows, photo pipelines, buyer profiles) to an entirely separate ESP32 speed trap firmware project, all within the same week. You lean heavily on Bash (364 calls) and Grep (143 calls) alongside edits, suggesting you're comfortable with Claude running shell commands and searching codebases autonomously. The TaskCreate/TaskUpdate usage (34 combined) indicates you sometimes leverage sub-tasks for complex multi-part requests rather than micromanaging each step.

Key pattern: You operate as a rapid-fire integrator who batches ambitious multi-task sessions, pushes to production early, and iteratively debugs deployment issues with Claude as your primary coding partner.

r/meme LVA_MoP

Noooo I wanna stay unemployed

r/KlingAI_Videos ForsakenWorry7077

MI10-MISTAKENLY IMPOSSIBLE BAHAMAS SHOOT COMPLETE

r/SipsTea IceCrushOrange

Kids are so innocent

r/VEO3 WazirAI

Made this jewelry content using AI a while ago

Created this about 2 months ago as daily content for a jewelry page. Made With Voe 3.1

Nothing too complex — just a clean, simple scene.

Looking back at it now, it’s interesting how natural it already feels for something like this.

Curious what you think.

r/singularity Distinct-Question-16

Toyota unveils CUE7

Toyota developed a fully humanoid robot called T-HR3 around eight years ago, but it was expensive and highly complex.

CUE started as a side project by Toyota employees and was designed to perform long-distance basketball shots, winning Guinness World Records in recent years.

CUE7 improves planing, sensing embodied Toyota AI platform, with basketball being just one visible side of this lightweight robot.

r/Damnthatsinteresting snopplerz

The skull of a Dunkleosteus, a 5 meter long extinct fish with an armored skull and bladed jaws. It had an estimated bite force of 5,000 newtons, the highest of any fish in history.

r/interestingasfuck S30econdstoMars

Robot chasing boars in Poland.

r/homeassistant jankubist

Custom NAD C 375BEE serial integration w. Node-RED

The already existing NAD integration is not compatible with my ampllifier. Functionalities like source and volume are not working. I created this simple Integration with Node-RED. Works perfectly.

r/SipsTea FewSweet4792

Awesome Reference

r/singularity I_HaveA_Theory

The simulation of human lives might be how the alignment problem is solved - which raises questions about our own existence

AI alignment through hard-coded rules doesn't work. AI systems find loopholes in every constraint we write. Real empathy come from genuine experience.

If quantum systems can produce subjective sensation (as Google's Hartmut Neven proposes), then felt experience like vulnerability, loss, and love could be the mechanism that produces real empathy in a mind, not rule-based alignment.

So... what if we are already inside this process? AI being aligned in human form? What if the chaos and suffering of human existence are the training environment, and our task is to truly feel and internalize what is good, a battle only each individual can fight?

Of course we can't prove it, but even if this is partially right, then our daily choices between indifference and authentic care might matter more than we think. It also raises questions about how we should think about aligning AI in the future.

r/SipsTea captcraigaroo

Move over bang bus...Destin has the

r/meme Historical_Stuff_399

The what ?

r/interestingasfuck isosaleh

Aquatic plant producing oxygen.

r/aivideo machina9000

The Beverage Violation | Beurre Noir ep 3

r/artificial shreyansh26

Educational PyTorch repo for distributed training from scratch: DP, FSDP, TP, FSDP+TP, and PP

I put together a small educational repo that implements distributed training parallelism from scratch in PyTorch:

https://github.com/shreyansh26/pytorch-distributed-training-from-scratch

Instead of using high-level abstractions, the code writes the forward/backward logic and collectives explicitly so you can see the algorithm directly.

The model is intentionally just repeated 2-matmul MLP blocks on a synthetic task, so the communication patterns are the main thing being studied.

Built this mainly for people who want to map the math of distributed training to runnable code without digging through a large framework.

Based on Part-5: Training of JAX ML Scaling book

r/WouldYouRather TriedmybestNotenough

WYR have a $million a year job where your boss verbally abuse you everyday OR a 100k a year job where your boss glaze you everyday?

r/SideProject yossthedev

I built Removerized: A 100% local, open-source AI toolkit to remove backgrounds, upscale and colorize images (no cloud, no accounts)

Hi everyone!

I wanted to share Removerized, an open-source project I’ve been working on. I was frustrated with existing AI tools that are either behind a paywall, require a subscription, or (worst of all) upload your private photos to a remote server just to process them.

Removerized is my take on a "sovereign" toolkit. It handles heavy AI tasks directly in your browser/device, ensuring total privacy and offline support.

Main Features:

  • AI Background Remover: High-precision masking for portraits and objects.
  • Image Upscaler: Enhance resolution and fix low-quality images using deep learning.
  • Colorizer: Automatically add realistic colors to B&W photos.
  • Format Converter & More: A growing suite of tools for image manipulation.

The Tech Stack:

  • Framework: Next.js & Tailwind CSS.
  • AI Engine: ONNX Runtime.
  • Philosophy: Local-first, privacy-focused, and completely free.
r/LocalLLaMA Recent-Leading-5695

The AI industry built guardrails for content. Nobody built them for behavior. I'm trying to fix that.

Here's a problem you've probably noticed but might not have had a name for: AI assistants get worse the longer you talk to them.

Not factually worse — behaviorally worse. They start agreeing with everything. They lose their tone. They pad responses with filler. They abandon whatever role they were supposed to play. Over a long conversation, the assistant you started with and the one you end with are almost different products.

This is called behavioral drift, and the entire AI safety industry is ignoring it. Every guardrail, every content filter, every safety layer is focused on what AI is allowed to say. Nobody is engineering how AI behaves over time.

I've been building a framework to solve this. It's called LUNA — Lineage Unified Narrative Architecture. It governs tone consistency, role alignment, drift prevention, and output stability at the instruction level. It doesn't filter content. It engineers behavior.

This isn't theoretical — I have multiple LUNA-governed AI assistants published and working right now. The entire architecture is documented in a bound notebook, 50+ handwritten pages, maintained to IP documentation standards.

I'm a Computer Systems Architect building Privacy-First AI solutions. What I need now is one workstation — the Sovereign Node ($7,660) — to run proper behavioral benchmarks. Governed vs. ungoverned. Drift rate measurement. Stability scoring. The kind of evidence that turns a working framework into a published, provable product.

Cloud compute for the same work costs more than that within 14 months. I don't have credit options. So I'm funding this directly.

https://gofund.me/76c7fefeb

If behavioral drift in AI matters to you — or if you've just noticed your AI assistant getting worse over a long conversation and wondered why — this is what I'm working on.

r/SideProject Haunting-Bother7723

I tried saving AI chat summaries for a year — it turned into a mess, so I built this instead

My extensions: https://github.com/EraIkson/Active-Context-AI.git

I’ve been a heavy AI user since 2024 (ChatGPT, Claude, Gemini). At some point I started saving summaries of good outputs so I could reuse them later.

At first it worked… then it didn’t.

Everything piled up. Too many notes, too many tags, and it became harder to find anything than just asking AI again. I tried Notion, Obsidian, tagging systems — still messy.

So I built a small browser extension for myself.

Instead of just saving summaries, it turns useful outputs into reusable context and automatically brings the right ones back into future prompts.

The result was honestly surprising — my responses got way more accurate and relevant without me rewriting prompts or digging through notes.

Still early, but I thought it might help others here who have the same “AI notes chaos” problem.

Happy to share / get feedback if anyone’s interested to hopefully turn this into a full-fledged product.

P.s: Only works sufficiently on chatGPT, would love to hear feedback from people using this on Gemini and Claude to fix bugs etc.

Linkedln: www.linkedin.com/in/kiet-nguyen-6b6172388

r/mildlyinteresting troilatioa

Golf ball without aerodynamic coperture

r/SideProject Plane-Bad8140

App for relationship uncertainty - looking for feedback! (iOS)

Hey r/SideProject,

I've been building an iOS app called Relationsight and I'm at the stage where I want to try and get some direct feedback outside of my regular circle.

The idea came from a 10-year relationship I had which looked great on paper, but where I was miserable. I ruminated endlessly, wrote notes to myself trying to make sense of it and didn't have a way of reflecting honestly because trying to manage your emotions and see clarity is really hard. I've tried to build something that would have helped me back then.

Relationsight is for people who know something is wrong and need help clarifying what they should do. This usually takes them down one of two paths. Either they work at improving it or they have to make a decision about whether it is the right relationship for them.

It asks you a few questions about your situation, then gives you a daily ledger (journaling, tracking and setting commitments) combined with research-backed lessons on relationship psychology, an anonymous community to discuss lessons and then some analysis based on love languages, Myers Briggs, libido etc of you and your partner.

Built it with my (now happily married!) partner using Expo/React Native, Supabase and Clerk for authentication.

Here's the App Store link:

https://apps.apple.com/ca/app/relationsight/id6760618204

Would love to hear what you think!

r/aivideo Maleficent_Ebb_6488

Finally, an AI that doesn't lose the person's identity when swapping to a suit

r/ClaudeCode JasonEll

So, uh, how's Sonnet doing in this whole mess?

I've been using Claude Sonnet on a Pro plan to develop a game that's pretty basic in terms of coding (it's in Godot, which Claude Code handles really well). I haven't had a need to ever go to Opus, since I'm not doing anything particularly complicated or challenging. The only problems that I've been running into are usage limits.

Where I'm at right now is that I'm getting about an hour or two of work in every evening. I get an hour in right after work, and then another hour 5 hours later when the window resets. This is, obviously, incredibly inefficient and breaks up trains of thought and means I have to put things on hold frequently. But all the work that's being done has been just fine for my purposes.

So I'm seriously thinking about going to the Max5x plan just to get more time, but if others have been seeing issues with Sonnet during this whole drama, I'm obviously going to have concerns.

r/funny Ok-Case-6370

That was unexpected 🤣

r/SipsTea Asleep_Char

Tell me the story of that dumb nickname that you’re stuck with😂🤣

r/KlingAI_Videos thunderboltexplode

Kling vs Veo, which would best for ads?

r/VEO3 Adventurous-Score-15

Into the Black Core | Dark Ambient Psy Morph

r/SideProject SamSmith-20

[FOR HIRE] Frontend Developer (React JS | Responsive UI)

Hi, I’m a frontend developer currently available for freelance or part-time work.

I have experience with React JS, HTML, CSS, and JavaScript, and I focus on building clean, responsive, and user-friendly interfaces.

I can assist with:

- Developing responsive UI components

- Fixing frontend issues and improving existing layouts

- Integrating APIs and handling dynamic data

I’m reliable, detail-oriented, and comfortable working with clear instructions or project requirements.

If you’re looking for a frontend developer, feel free to reach out.

r/ClaudeAI MetaKnowing

Frog put Claude in the box

r/mildlyinteresting The_Marine708

My allergy spray from Walgreens may slow the growth rate of children when used

r/ClaudeCode reddit-bluedusk

I burned out my max plan for this, a dynamic island for all your coding agent and terminals

Hey folks.

Check this out, dynamic island for your coding agent! Firstly this is Not a promo because its FREE to use.

This has always been what I want , as a developer, I open around 10 agent terminal at the same time, at work and my side projects. Mostly claude code, also use codex, Gemini, copilot.

There are similar apps out there, I tried a few. Some just break miserably, especially opensource ones. Not even monitor the sessions properly, keep dropping messages, not jumping to subpanel properly. Some vibe coder versions claim to support many agents and many terminals but hey I want my core agent and terminals work reliable. As These agents has breaking changes all the time. I doubt how a vibe coder can support and test all of them. One or two are decent but has subtle bugs and missing features i want.

So, I had to build it myself, i want to make sure it work for me first as I use it 10+ hours at and after work. Don't want to support too much at the same time.

Now I am happy with claude code support and working on codex. I expect to support major agent soon. With this little tool's help haha.
Supported terminals: Gostthy, Cmux, WezTerm, Iterm, Warp(limited);

Main feature:

Stay in notch like a dynamic island for Mac, or you can detach to other positions.
Monitor all supported agent sessions. Receive messages/questions/permission notifications.
Answer question and approve permissions directly in the panel.
Jump directly to the tab & panel of your terminal.
Native apple app so its light and fast.
Many more configs for you to explore.

Its a early stage so I'd appreciate some feedback. I want to know what feature I should be doing next. please leave a comment!

xisland.app

Cheers,
Dan

https://preview.redd.it/jmz6h2cljrug1.png?width=3456&format=png&auto=webp&s=f4eca0ea42de91130de5553c66df8603c4afc64c

https://preview.redd.it/eas6d7wmjrug1.png?width=3456&format=png&auto=webp&s=763f4717c8e95fb9aa23796702c6c1c399256595

https://preview.redd.it/1pt2zsvnjrug1.png?width=3456&format=png&auto=webp&s=52875d454ff08bef43c46d79d3d8e8b3ad9cec0a

r/SweatyPalms yeezee93

Cutting boneless chicken

r/ClaudeCode New_Goat_1342

Is 1m context really better than tracking a plan and clearing the context at 100k?

Purely anecdotal but wondering what everyone makes of the 1m token context limit? I’m personally finding it good but extremely slow; might just be crap broadband though.

i like that the 100k hard limit is gone but will likely try using it as a soft limit this week and updating and saving whatever plan I’m working on then clearing context at 150-200k tokens to see if it feels any better.

Accuracy has been good but at 20mins per prompt it’s right on the point of context switching from boredom; I.e. my brain has wondered off to do something else :-)

r/LocalLLaMA Adorable_Weakness_39

Considering ditching Claude/Codex completely

They have become completely unusable over the past few days.

A few things I have noticed:

- Codex has cut its 5-hour session cap massively so now you can barely tell it to program fizz buzz before running out of tokens.

- Claude Code has the same problem.

They have both just massively dropped in intelligence as well. I have heard people on X talking about how Anthropic models are being throttled in terms of intelligence (for non API tokens). I have had the same problem with GPT-5.4 where it just refuses to do stuff and has a bias to not take actions even if explicitly stated (which I've heard is a byproduct of limiting reasoning tokens).

This causes people to have to send more messages which then uses even more input & output tokens.

Might take the open-souce pill. Perhaps Qwen3.5 27B locally, and GLM5.1 on the cloud.

r/meme aeshahin

Seriously, Reddit?!

r/fakehistoryporn Anooj4021

Jeffrey Epstein lounging in the dentist’s chair of his private examination room on Little Saint James. 2012 AD

r/SideProject pcgaming0

I built a simple image compression tool in Framer — curious how you'd improve it

I built a simple image compression tool in Framer — curious how you'd improve it

Hey everyone,

I’ve been experimenting with building small utilities using Framer and recently tried creating an image compression tool.

Main goal was to keep it super simple:

  • compress images quickly
  • no login or setup
  • runs directly in the browser

I’m still learning and would love feedback from people here — especially on:

  • UX/UI improvements
  • performance ideas
  • anything you'd do differently

If you're curious to try it, link is in the comments.

r/ClaudeAI PresentParty4544

I built a Claude Code skill that translates subtitles using Gemini CLI (because streaming translations are awful)

Streaming platform subtitles are genuinely terrible in most languages.

Character names change between episodes, jokes get butchered, tone is all over the place. So I built a skill that does it properly.

The idea is simple: Claude Code handles the workflow, Gemini CLI does the actual translation. Claude is great at managing multi-step stuff (files, verification, consistency checks) and Gemini is really good at natural translation. Felt like a waste not to combine them.

How it works:

  1. Gemini reads all your subtitle files and builds a glossary (character names, recurring terms, tone notes)

  2. You review the glossary and fix anything wrong before translation starts

  3. Each file gets translated with that glossary as context

  4. Block counts, timecodes, and formatting tags get verified automatically

  5. A final consistency pass makes sure nothing drifts across episodes

  6. Optional technical check for line length and platform rules

Supports SRT, ASS, SSA, VTT, SUB, SBV and any language.

You just drop your subtitle files in a folder, open Claude Code there, and tell it to translate. It asks you for target language, show details (genre, characters, tone), whether to preserve timecodes, and how to handle song lyrics.

Repo: https://github.com/saidsurucu/subtitle-translator-skill

You need both Claude Code and Gemini CLI installed and logged in. To install the skill, just give Claude Code the repo URL and tell it to install.

Happy to hear feedback or ideas for what to add next.

r/ChatGPT green_cube777

Anker product bias?

Does anyone know if GPT is purposefully recommending only Anker products to me? I was doing my preliminary check on what powerbanks to buy and I was getting anker only models until I specifically asked to branch out. Just found it interesting and curious if anyone seen it too with different products?

r/wholesomememes CertifiedSamosa

Self confidence level: unbothered, unquestioned, undeniable

r/Damnthatsinteresting oisigracias

They still haven’t found the right candidate

r/funny chrisnaish

(OC) seems serious

r/ClaudeAI StonksRocketman

I built a Claude Code command that generates a tailored CLAUDE.md for any project

My CLAUDE.md files used to be either empty or 800+ lines of everything I could think of. Both are bad -- Claude Code ignores the long ones and struggles without any context.

So I built /generate-claude-md -- a command that analyzes your actual codebase and generates a focused, tailored CLAUDE.md. Not a template. It reads your files.

What it does

It runs an 8-step analysis pipeline:

  1. Detects your stack from manifest files (package.json, .csproj, go.mod, etc.)
  2. Extracts the actual runnable build/test/lint commands
  3. Maps your architecture by reading real source files
  4. Detects coding conventions from config files + source patterns
  5. Checks your testing setup (or flags that you have none)
  6. Finds environment variables and secrets
  7. Reads existing docs
  8. Checks git patterns (commit style, CI config)

Then generates a CLAUDE.md that follows strict rules: under 150 lines, no obvious stuff, no repeating what your linter config already says. Only the things Claude Code can't figure out on its own.

What I've seen so far

I've run it across a few of my own projects -- a Next.js app with a CLI generation pipeline, a React PWA with localStorage, a full-stack Supabase app with edge functions, and a tiny 2-file MCP server with zero tooling.

The outputs range from 67 to 150 lines depending on project complexity. It picks up things like non-obvious architectural splits, domain-specific business rules, hardcoded credentials, missing test coverage, and config gotchas like noImplicitAny: false. One project had an 843-line CLAUDE.md I'd written by hand -- the generated version replaced it with 108 focused lines that Claude Code actually reads.

How to use it

git clone https://github.com/CarlBedrot/generate-claude-md.git cd generate-claude-md chmod +x install.sh ./install.sh 

Then open Claude Code in any project and run:

/generate-claude-md 

What's next

I'm looking for feedback:

  • Does the generated output actually help in your projects?
  • What's missing from the analysis pipeline?
  • Would you pay for a more advanced version, or is this enough?

Happy to answer questions about the pipeline or how it works.

r/ChatGPT AMCAPEHODLER

The confidence lol

r/funny MoobyComics

[OC] The sleep deficit is real

r/ChatGPT camerado

Are yall missing any? Maybe the new zr1X should be in the 2020s?

r/singularity PointmanW

Terence Tao Says That A 'Copernican View Of Intelligence' Fits Better, Just As Earth Is Not The Center Of The Universe, Human Intelligence Is Not The Center Of All Cognition

r/SipsTea batukaming

They don't understand

r/ChatGPT ShowerPrestigious248

Breathing exercises

I will have to update my special instructions because every single response to personal topics is ok put one hand on your chest and one on your stomach....

I will say if you all read my life details on there it would make complete sense...my life feels like ive got a tornado of shit spinning around me at full speed for about the past year... but breathing exercises make me irrationally rage for some reason, idk what it is!

Just me?

r/ClaudeCode satoshe

How do I put gsd on an Android 12 phone?

How do I put gsd on an Android 12 phone? Answer like I'm 5

r/ChatGPT Agreeable-Warning-65

Do you think AI will replace us, or eventually integrate with us?

I’ve been thinking about how people often worry that AI might eventually make us less relevant. But in day to day use, it sometimes feels more like something we rely on rather than something that replaces us.

Tools in the past didn’t make us obsolete. They changed how we work and think. AI might be doing something similar, becoming more integrated into how we solve problems and make decisions.

Curious how others here feel from their own experience: does AI feel like something that replaces your role, or something that you’re gradually integrating into how you think and work?

I’ve written some extended thoughts on this if anyone’s interested.

r/Damnthatsinteresting Drogo_1007

The world's master of camouflage

r/SideProject IslandLongjumping459

I built a visual way to use AI

Hey everyone,

I built Myceli — a visual way to use AI.
Myceli

I started working on this because I kept getting frustrated with how linear LLM chats feel. You know when you’re scrolling back through a conversation trying to revisit an earlier idea, and find yourself deleting messages so you can go back in a different direction.

Myceli gives you an infinite canvas where your AI chats live visually. You can branch ideas, revisit earlier thoughts instantly, and iterate much faster. It’s designed to help you explore ideas in a way that feels more natural for visual thinkers like me.

Would love to know what anyone thinks ! — there’s a free version if you want to try it out.

Thanks all.

r/meme Hot-Arm4938

Chill dawg im just stretching

r/Weird Spare-NobodyAim

who knows.

r/homeassistant sermernx

EZVIZ CP2: is RTSP possible and remote access with Home Assistant?

Hi everyone,
I’m trying to integrate an EZVIZ CP2 into Home Assistant and I was wondering if anyone managed to get RTSP working on it.

I’d like to know:

  • is it possible to enable RTSP on this model?
  • can the stream be accessed remotely (not only on LAN)?
  • have you found any stable workaround to integrate it with Home Assistant?

Also, I’d like to ask for recommendations:
are there any better smart door viewers than the CP2 that offer easier integration (preferably native RTSP or ONVIF), but without visible external buttons or bulky door hardware?

Any experience or suggestions are welcome, thanks!

r/SideProject InsideAd9685

I built 13 free health tracking tools because every food diary asks what you ate, but none tell you why you reacted

I'm a researcher who spent years building a database of 1,709 validated ingredients with published evidence tiers. I kept seeing the same problem: people track food and symptoms in separate apps, then stare at spreadsheets trying to find patterns.

So I built Clarity: a suite of free tools where every food you log is checked against the database in real time. Not calorie counting. Ingredient intelligence.

What's live right now:

- Histamine Food Diary (flags histamine liberators, DAO inhibitors, mast cell triggers)

- MCAS Trigger Tracker (tracks symptoms across 4 body systems simultaneously)

- Rosacea Product Safety Journal (paste any skincare ingredient list, get per-ingredient flags)

- Breastfeeding Supplement Tracker (lactation safety, milk transfer, infant sleep risk)

- HS Flare Tracker (food-flare correlation for hidradenitis suppurativa)

- Elimination Diet Tracker (structured 4-week protocol with reintroduction scoring)

- ADHD Food Dye Tracker (all 8 FDA-banned dyes flagged, built for parents)

- Migraine Food Trigger Diary (tyramine, histamine, DAO, nitrate tracking)

- Skincare Ingredient Checker (6 condition flags per ingredient)

- Endometriosis Food Diary (anti-inflammatory, phytoestrogen, cycle-day correlation)

- Postpartum BP Tracker (preeclampsia alerts at 140/90 and 160/110)

- GLP-1 Food Tolerance Diary (tolerance rating for Ozempic/Wegovy/Mounjaro)

- Fertility Supplement Checker (TTC safety, endocrine disruptors, independent)

The differentiator: Every tool runs on the same evidence-graded database (Gold, Silver, Bronze tiers based on published DOIs). No crowdsourced data, no "community ratings." Published research. Export to your provider.

What I'd love feedback on:

- Which tools seem most useful to you?

- Is the Pro gate ($4.99) priced right?

- Anything missing from the tools page?

healthai.com/tools

r/homeassistant Key-Ocelot-1466

Selora AI - Home Assistant integration that watches your home and drafts automations for you (alpha, looking for testers)

Hi everyone!

I'm one of the co-founders of Selora Homes. We’ve been building an AI integration for HA, and we released the alpha version last week; looking for more people to use it and tell us what's broken.

The problem we kept running into

Writing automations in HA is powerful but tedious. You need to know YAML (or the UI builder's quirks), you need to think through every trigger/condition/action, and most people end up with a handful of automations when their setup could support dozens. We wanted something that could look at your devices, understand what you actually have, and draft real automations for you to review.

What is Selora AI?

It's a HACS integration that sits inside your Home Assistant instance. You pick an LLM backend (Anthropic, OpenAI, or Ollama), and Selora AI:

  • Periodically analyzes your device states and history, then writes draft automations to your automations.yaml - disabled by default, prefixed [Selora AI], so nothing runs without your approval
  • Gives you a chat panel where you can ask questions or give commands in plain English
  • Works as a Home Assistant Assist conversation agent (so it plugs into voice too)
  • Sees your existing automations so it suggests refinements instead of duplicates
  • Scans your network for supported integrations and helps onboard them
  • Exposes an MCP endpoint so you can connect it to Claude Desktop, Cursor, Windsurf, Open WebUI, n8n, or any other MCP-compatible client

What this is NOT

This is not a cloud-only service. If you use Ollama, nothing leaves your network. The integration is free and open source, and allows custom configuration from any AI select model provider you can choose from.

This is also in alpha. There will be bugs. We ship frequently and things may change.

Install options

  • Existing Selora Homes users: One toggle in your account settings, auto-installs everything
  • Everyone else: Install via HACS or manually from the repo

Full docs: https://selorahomes.com/docs/selora-ai/

GitHub: https://github.com/Selora

Roadmap: https://selorahomes.com/docs/roadmap/

What we need from you

Bugs. Feedback. Honesty about what's confusing in the docs. If an automation suggestion doesn't make sense, tell us. If the setup flow is unclear, tell us. PRs on docs are especially welcome. Feel free to create an issue in the repo.

FAQ

Q: Is this free? A: The integration is free and open source. You need your own API key for Anthropic or OpenAI (their billing), or you can run Ollama locally for zero cost.

Q: Does it modify my automations without asking? A: No. Everything it creates is disabled by default. You review and enable.

Q: What's the MCP thing? A: Model Context Protocol. It means you can connect external AI tools (Claude Desktop, Cursor, etc.) to Selora AI and control your home from those interfaces. Advanced use case, not required. Docs here: https://selorahomes.com/docs/selora-ai/mcp-onboarding/

Q: How is this different from the official Home Assistant MCP server or connecting Claude Code to HA? A: The official MCP server (and Claude Code / Claude Desktop connected to it) gives an external AI a way to talk to your HA instance. That's powerful, but it's reactive. You have to open Claude, describe what you want, and prompt it every time. Selora AI lives inside HA as an integration, proactively watches your device states and history, and drafts automations for your review without you asking. It also has its own chat panel built into the HA sidebar, works as an Assist conversation agent for voice, and handles device discovery. The MCP endpoint Selora AI exposes is an additional layer on top of all that, not the core product.

Q: How is this different from ha-mcp (the unofficial Home Assistant MCP server)? A: ha-mcp is an excellent project with 90+ tools for letting an AI agent control and configure your HA. Think of it as a comprehensive remote control. Selora AI is doing something different: it's kind of a secure OpenClaw that analyzes your home, detects patterns, and writes draft automations for you. It's the difference between a tool that responds when asked and an assistant that comes to you with ideas. They're also not mutually exclusive. You could run both.

r/ChatGPT PM_ME_YOUR_TLDR

i just vibe coded this i just vibe coded i just vibe coded oh my GOD shut the fuck up.

i don't even want to link the actual website's URL, but i know you've seen the posts. this entire board has become a series of non stop ads for this stupid site. it's hidden in memes, linked in text posts, and users are acting like it's incidental when it's obviously the entire point of these posts. do your part and downvote this trash. the community is huge, so this kind of "viral" marketing is inevitable, but it should be roundly rejected. mods, come on. rule #3. these posts are shameless advertising (what else could they be??) i'm prepared for the bot swarm of downvotes.

r/funny Order_a_pizza

[OC] Name checks out

r/ClaudeAI Last-Assistance-1687

The Sonnet sub-limit exists because Sonnet is fundamentally a different resource than Opus - BUT why does it exist in the first place?

Sonnet has its own usage window next to the 5h and weekly one. I asked Claude about it and it came with the following answer (my prompt + answer from Claude see below).

Maybe I need to rethink my workflows and integrate Sonnet more.... even more - I usually use Opus for planning and coordination heavy tasks and Sonnet more on the integration part: Meaning my main agent is the only one who uses Opus and it orchestrates the work to either agent teams or pre-defined sub-agents, which all are using Sonnet.

My question: Has anyone here reached the Sonnet limit before hitting the 5h / weekly one? If yes, how?

https://preview.redd.it/ls2pb8zlprug1.png?width=1310&format=png&auto=webp&s=9aad5d60674871a9a70f8f2fc569df788646bae3

Prompt:

Sonnet has it's own limit window. What is the reason to have this when it counts as well to 5h and weekly session limit? Opus is better and if used both you never reach the max of sonnet. I'm referring to the following usage page in claude desktop (see image)

Claude answer:

Good question — it's actually less intuitive than it looks. Here's the logic behind it:

The Sonnet sub-limit exists because Sonnet is fundamentally a different resource than Opus.

Sonnet is significantly cheaper and faster to run than Opus. Without a model-specific cap, a user could theoretically hammer Sonnet at a much higher request rate than Opus — burning through the same "all models" budget but at 10x the throughput. The Sonnet-specific window is a rate protection against Sonnet-only heavy usage patterns, not an additional budget on top of the overall one.

Think of it this way:

All models limit → total compute budget (tokens/cost), shared across everything

Sonnet-specific limit → a throughput/rate guardrail specifically because Sonnet is faster and gets called more aggressively by agentic tools

Why does it especially matter for you? Claude Code defaults to Sonnet and users running multiple Claude Code instances in parallel hit their limits sooner. So if you're running agentic Claude Code sessions (which you are, heavily), you could blast through Sonnet calls at a rate that would exhaust the system before the "all models" budget is technically depleted. The Sonnet limit catches that specific pattern.

Your intuition is correct for normal chat usage — if you use Opus primarily, you'll almost certainly hit the "all models" ceiling first and the Sonnet limit becomes irrelevant. The Sonnet cap is really aimed at automation-heavy users who are pounding the API with rapid-fire Sonnet calls.

There's also been documented confusion in the community about whether the limits are truly independent — a GitHub issue flagged that Anthropic's own documentation was contradicting itself on whether maxing out Opus would block Sonnet access or not. So even Anthropic's internal consistency on this has been a moving target.

The short practical answer: for your Max plan usage, the Sonnet limit is a backstop against runaway agentic automation, not something that should affect your normal Opus + occasional Sonnet workflow.

r/ClaudeAI reddit-bluedusk

XIsland - Dynamic Island for your coding agent

Hey folks.

Check this out, I used 2 max plan to help me build a dynamic island for your coding agent! Firstly this is Not a promo because its FREE to use.

This has always been what I want , as a developer, I open around 10 agent terminal at the same time, at work and my side projects. Mostly claude code, also use codex, Gemini, copilot.

There are similar apps out there, I tried a few. Some just break miserably, especially opensource ones. Not even monitor the sessions properly, keep dropping messages, not jumping to subpanel properly. Some vibe coder versions claim to support many agents and many terminals but hey I want my core agent and terminals work reliable. As These agents has breaking changes all the time. I doubt how a vibe coder can support and test all of them. One or two are decent but has subtle bugs and missing features i want.

So, I had to build it myself, i want to make sure it work for me first as I use it 10+ hours at and after work. Don't want to support too much at the same time.

Now I am happy with claude code support and working on codex. I expect to support major agent soon. With this little tool's help haha.
Supported terminals: Gostthy, Cmux, WezTerm, Iterm, Warp(limited);

Main feature:

Stay in notch like a dynamic island for Mac, or you can detach to other positions.
Monitor all supported agent sessions. Receive messages/questions/permission notifications.
Answer question and approve permissions directly in the panel.
Jump directly to the tab & panel of your terminal.
Native apple app so its light and fast.
Many more configs for you to explore.

Its a early stage so I'd appreciate some feedback. I want to know what feature I should be doing next. please leave a comment!

If you are keen I can set up a discord.

Here it is:
xisland.app

Thanks,
Dan

https://preview.redd.it/n33338hyfrug1.png?width=3456&format=png&auto=webp&s=7abfcb475b6fd982bbdc3aad2377fb646114e595

https://preview.redd.it/2opiavnwfrug1.png?width=3456&format=png&auto=webp&s=e1d781480f504194faaf06f7bea78233d157f2b1

https://preview.redd.it/vcru2k9ufrug1.png?width=3456&format=png&auto=webp&s=6a92b9e340b39d6634b36339f381640e61bb6dd0

https://preview.redd.it/ypdop7c3grug1.png?width=3456&format=png&auto=webp&s=cfb07780440dbcefd91411f0b2060bb0062cfd19

r/comfyui Appropriate_Light614

why I keep getting noise images

why do I keep getting noise images like this, I literally just picked the z image turbo workflow template from comfyui, so everything should work. im running comfyui on runpod with 4090, how do I resolve this issue?

r/SipsTea batukaming

We dont live in the same times

r/Damnthatsinteresting Weak-Opportunity-311

The 1992 Barcelona Olympics torch lighting remains one of the most unforgettable moments in Olympic history.

r/meme Mammoth_Second_2937

Opinion of these types of memes?

r/hmmm Bubbly_Sherbert4600

hmmm

r/SideProject lngswrd15

I built a CLI JIRA(-ish) clone

Hi Everyone,

I built Git Product Manager (GPM), an open source, command line project management system that lives in your git repo.

https://github.com/ditsara/git-product-manager

The core concept is that all tickets are stored as files within your code repo, so LLMs have direct access to everything about your project. This tool simply helps you manage what would otherwise be a mess of markdown files.

I mostly built this around the way I work on code with AI LLMs. It incorporates common conventions from JIRA, Gitlab Issues, and other similar project issue trackers. I am using it on a few side projects, and just wanted to share to see if anyone else would find this useful.

You can build from source, or download your platform-specific release. No dependencies, as it's all compiled in golang. Happy for any feedback.

r/mildlyinteresting show_me_how_

My brand new sealed peanut butter has 3 puncture spots

r/SideProject Ok_Veterinarian7533

I built a 4-page SaaS template kit — Landing, Dashboard, Auth, Pricing — pure HTML/CSS, no frameworks

Hey everyone — just shipped something I've been working on and wanted to share it here.

It's called LaunchKit — a 4-page dark mode SaaS template kit. Landing page, dashboard, login/signup, and a full pricing page with feature comparison table and FAQ.

The whole thing is pure HTML, CSS, and vanilla JS. No React, no Tailwind, no build tools. Each page is a single self-contained HTML file. You open it in a browser, edit in any code editor, and deploy anywhere.

Why I built it:

I kept running into the same problem — every template I found either required a specific framework, looked identical to every other Tailwind template, or cost $200+ for a full-stack boilerplate I didn't need. I just wanted clean, well-designed pages I could customize without fighting a build system.

So I made the thing I wanted to buy.

What's included:

  • Landing page — hero with animated social proof, feature grid, testimonials, pricing cards, scroll animations
  • Dashboard — sidebar nav, stat cards with trends, revenue bar chart, activity feed, customer table, donut chart
  • Auth — split-panel login/signup with social auth buttons (Google, GitHub), form toggle
  • Pricing — monthly/annual toggle, 3-tier cards, 20+ row feature comparison table, FAQ accordion

Everything shares one design system via CSS custom properties. Change one variable, the whole kit updates.

Tech: HTML5, CSS3 (custom properties), vanilla JS, Google Fonts. ~15KB per page. Deploys on Vercel, Netlify, GitHub Pages, or literally any static host.

r/meme Low_Creme2

It's big brain time

r/ContagiousLaughter firequak

Cousin of Fil Ming

r/ollama Sea_Manufacturer6590

Has anyone actually gotten a reliable local AI system running?

I’ve been spending the last few months going pretty deep into running AI locally (LM Studio + Qwen-based models), and I feel like the conversation around local AI is kind of outdated.

Most people still frame it as:

“local = weaker, slower, limited”

But that hasn’t really been my experience.

At this point I can run a setup that:

Builds full websites / landing pages

Does actual web research (not just hallucinating)

Generates images + marketing content

Automates workflows (emails, files, reporting, etc.)

Runs multiple agents working together

Keeps memory and improves over time

Connects to tools like browser automation + APIs

Edit my website plus publish files

And yeah… all local.

No subscriptions, no rate limits, no sending data out.

But here’s the part I don’t see talked about enough:

The model itself isn’t the bottleneck anymore.

The biggest difference for me came from how everything is structured:

MCP for tool access (this was huge)

A kind of internal “LLM wiki” so the model actually knows what tools exist and when to use them

System prompt tuning to control behavior and make it consistent

Once I had those 3 dialed in, it stopped feeling like I was “using a chatbot” and more like I had a system that could actually operate.

There’s definitely still friction (setup is not beginner-friendly, tool calling can be janky, etc.), but it feels like we’re a lot closer to “real” local AI systems than people think.

Curious where others are at with this:

Are you still mostly using hosted tools?

Have you tried local and bounced off?

Or have you gotten something actually reliable running?

Would be interesting to hear what setups people are using and where they’re hitting limits right now.

r/AI_Agents Single-Possession-54

I gave my AI agents shared memory and now they gossip behind my back

Built Agentid platform because I was tired of every agent having the memory of a goldfish 🐠

Now multiple agents can share:

  • one identity
  • shared memory
  • common context
  • live activity feed

Before:
“Who are you?”
“What are we doing?”
“Can you repeat that?”

Now:
“Oh yeah, Steve already researched this.”
“The coding agent broke prod again.”
“Marketing says launch tomorrow.”

They actually hand off tasks, remember what happened, and work like a tiny chaotic startup team.

Works with Claude, Cursor, Codex, OpenClaw, etc.

What agents would you put on your dream (or nightmare) team?

PS. you can see my agents work in the agency below in the comments

r/SideProject Maleficent_Fennel883

I am thinking too much and I build this to fix my probelm and here is demo video

my mind wouldn’t stop.

i kept replaying conversations.

assuming the worst.

turning small things into big problems.

it was exhausting.

so i built something for myself.

a simple tool that helps me slow down my thoughts

open for early access: https://noisefilter.app/early-access

r/therewasanattempt ClankerCore

To Ping PO-OOMG!

r/homeassistant ownguaoqbt

Device to control 12v awning in camper?

I live in a camper full time, and am wanting to make my awnings controls smart, but honestly I’m not sure what I need to get to properly control it.

Currently it’s 12v controlled with a relay, and has “extend” and “retract”, which I’m assuming just reverse the polarity of the motor. It can also be retracted with the extend button (extend all the way out, then keep holding it and it will just come back in).

Is a garage door relay what I want here? Since I assume those have open/close functions?

Kinda lost and any advice is appreciated!

r/ProductHunters No-Signal5542

Just launched my AI Detector app on Product Hunt : detects deepfakes in real-time using Android Quick Tile

Launched AI Detector QuickTile Analysis on Product Hunt a few days ago.

It's an on-device AI detection app for Android that works through a Quick Tile : swipe down, tap, instantly know if content is AI-generated.

Still building momentum and would genuinely love feedback: what features matter most?

Here's the Product Hunt link if you want to check it out: https://www.producthunt.com/posts/ai-detector-quicktile-analysis

Any support/feedback would mean a lot!

r/Jokes bloodraged189

Why are elephants so wise?

It comes with getting long in the tooth

r/SipsTea DravidVanol

This is why I sneak in beer and snacks

r/SipsTea Significant_Pay9148

Accurate

r/LocalLLaMA Certain_Pen_1982

Openclaw context limit exceeded

I’m trying to run glm 4.7 flash with llama.cpp on openclaw but I can’t seem to get past and issue where whenever I try to ask it any questions, it responds by telling me my context limit was exceeded, I’ve tried changing the limit in the json and in my commas to run llama-server, but it’s always the same error and I cant seem to find any documentation, any help/advice is appreciated

Edit: since I forgot, I’m running on a laptop with 32gb ddr5 ram, 4060 and intel core i9 13900hx (yes I know it’s a bit unrealistic)

r/LocalLLaMA Sea_Manufacturer6590

Has anyone actually gotten a reliable local AI system running?

I’ve been spending the last few months going pretty deep into running AI locally (LM Studio + Qwen-based models), and I feel like the conversation around local AI is kind of outdated.

Most people still frame it as:

“local = weaker, slower, limited”

But that hasn’t really been my experience.

At this point I can run a setup that:

Builds full websites / landing pages

Does actual web research (not just hallucinating)

Generates images + marketing content

Automates workflows (emails, files, reporting, etc.)

Runs multiple agents working together

Keeps memory and improves over time

Connects to tools like browser automation + APIs

Edit my website plus publish files

And yeah… all local.

No subscriptions, no rate limits, no sending data out.

But here’s the part I don’t see talked about enough:

The model itself isn’t the bottleneck anymore.

The biggest difference for me came from how everything is structured:

MCP for tool access (this was huge)

A kind of internal “LLM wiki” so the model actually knows what tools exist and when to use them

System prompt tuning to control behavior and make it consistent

Once I had those 3 dialed in, it stopped feeling like I was “using a chatbot” and more like I had a system that could actually operate.

There’s definitely still friction (setup is not beginner-friendly, tool calling can be janky, etc.), but it feels like we’re a lot closer to “real” local AI systems than people think.

Curious where others are at with this:

Are you still mostly using hosted tools?

Have you tried local and bounced off?

Or have you gotten something actually reliable running?

Would be interesting to hear what setups people are using and where they’re hitting limits right now.

r/SideProject Ariel1l

built a file sharing / file backup tool with no accounts, no tracking - what do you think

A while ago I ran into a simple but annoying problem: I wanted to send a large file, and every service I tried either forced an account, compressed it, or made the link expire.

So I built something for myself:
https://www.easyfilez.com/

Upload a file → get a link → done.

No account, no friction.

What I tried to focus on was simplicity + privacy:

  • uploads don’t require any personal info
  • you can backup files free
  • most of the “tracking” (your files) happens locally in your browser, not on the server
  • the system is designed to minimize stored user data as much as possible

On the technical side, files are handled in memory rather than traditional storage, which makes everything very secure and fast.

I’ve been using it with friends and family for a while, and it’s been working well, so I decided to open it up.

Right now it supports:

  • Up to 2GB per file
  • Permanent shareable links
  • Folder sharing
  • Gallery view
  • Optional accounts (to sync your uploads across devices instead of keeping the account link)

I mostly built it because I wanted something that just works, free, and figured others might find it useful too.

That said, I’m still early on this and would really appreciate honest feedback
Anything confusing or annoying in the UX? would you use it?

Curious what people think

r/LocalLLaMA MoneyPowerNexis

Anyone with a ROME2D32GM-2T / sas based motherboard got p2p working with llama.cpp

I am putting together a new build based around a ROME2D32GM-2T and have run into an issue where llama.cpp server garbles output once I add a second GPU. After a bit of investigating I came to a solution by disabling P2P transfers with -DGGML_CUDA_NO_PEER_COPY=ON in my llama.cpp build script.

According to various AIs the issue is that despite both my GPUs being on the same NUMA node they are on different PCI switches which is making p2p transfers fail/corrupt leading to the garbled output. This makes sense when running a model on a single GPU works fine and disabling P2P fixes it.

At this point though the suggested solutions stop at keeping it disabled or moving around my 2 GPUs so they are on the same switch but that wont help once I add more GPUs if it even is a solution.

I am wondering though if there is an obvious solution I am missing.

I know vLLM or some other client might be more aware of the PCI hierarchy and get it to work and I will be trying that later but for now I just want to revisit running some large models to compare with the speeds I was getting with my existing server.


NVIDIA-SMI 580.126.20 Driver Version: 580.126.20 CUDA Version: 13.0

r/whatisit Ok_Track7906

Found this looks like it could be some kind of button that was attached to something

Does anyone know what this is? It seems like some kind of old button with some jewels in it

r/SideProject BidLate9436

I let an app evolve itself daily… now I’m turning it into a startup (would love honest feedback)

Hey everyone,

I’ve been working on a small experiment that’s starting to feel like it could become something real, and I’d really value some honest feedback.

A while ago I got curious about “self evolving” software. What would happen if an app could continuously improve itself without being manually rebuilt each time?

So I built a basic web app template and set it to evolve every day at 9am.

Each day it would:

- Add or refine features

- Fix bugs

- Adjust parts of the UX

At first it was just a fun experiment, but I found myself checking it every morning to see what had changed.

After a few weeks, it had turned into a pretty usable tool. It became a developer changelog app that could take raw updates and turn them into more polished release notes.

That’s when it clicked for me. Instead of evolving one app, what if anyone could take an idea and turn it into a real product?

So I started rebuilding it into a platform called Sprout where you can:

- Start with an idea

- Turn it into a working web app

- Keep improving it just by describing what you want

It’s still early and a bit rough, and I’m trying to figure out if this is actually useful beyond my own curiosity.

I’d really appreciate feedback on a few things:

Does this feel like a real problem worth solving?

Who do you think this is for?

What would make something like this genuinely valuable?

If you want to check it out: Sprout

Thanks in advance. Happy to answer any questions or share more about how it works.

r/MCPservers DMAE1133

New MCP server: Roblox + Luau runtime bridge

Sharing a new open-source MCP server:

Roblox All-in-One MCP

https://github.com/dmae97/roblox_all_in_one_mcp

This is an alpha MCP server for Roblox workflows with:

- stdio transport

- project manifest

- doctor / list_capabilities

- Luau runtime handshake

- runtime-backed roblox_run_code

- runtime-backed roblox_create_workspace_part

The long-term goal is one MCP surface for Roblox + Luau runtime + Blender workflows.

Feedback welcome.

r/awfuleverything GarysCrispLettuce

The 24/7 noise pollution when they build a data center near your home

r/ClaudeAI Remarkable_Divide755

I built an MCP server that gives Claude real-time AI tool status and API pricing (no API key needed)

Been building tickerr.ai for a few weeks — independent uptime monitoring for 42+ AI tools (ChatGPT, Claude, Gemini, Cursor, Copilot, etc.).

Just released the MCP server so you can ask Claude directly:

• "Is Claude down right now?"

• "What's the cheapest model for my token workload?"

• "Has OpenAI had any incidents this month?"

• "What are Cursor's rate limits on the Pro plan?"

Install in one line:

claude mcp add tickerr --transport http --url https://tickerr.ai/mcp

Cursor/Windsurf — add to mcp.json:

{ "mcpServers": { "tickerr": { "url": "https://tickerr.ai/mcp" } } }

No API key, no signup. Data updates every 5 minutes from independent monitoring.

GitHub: https://github.com/imviky-ctrl/tickerr-mcp

Happy to add any tools or data points people want.

r/SideProject problemprofessor

I built a Middle Class Museum

I’m really into satirical concepts ( been designing those for a few years ) and I recently got the idea of turning some of those into experiences that people can go through.

r/SideProject ParsnipSad2999

An open source alert management platform that consolidates alerts — OpsiMate

sup guys, Lately, I am working with this very cool open source project that is an Unified Alert Management & Monitoring Platform that is Built for DevOps/NOC/IT teams to centralize alerts from multiple sources, monitor system health, and respond to incidents faster.

TL;DR

  • 🚨 Centralized Alert Management - Aggregate alerts from any platform!
  • 📊 Smart Filtering & Grouping - Organize alerts by type, status, tags, and custom criteria
  • 🎯 Quick Actions - Acknowledge, resolve, and manage alerts with one click
  • 🏷️ Flexible Tagging - Categorize and filter alerts for faster incident response

I’m putting this out there to gather feedback and connect with anyone interested in contributing.

Would genuinely love to hear your thought, feel free to share your thoughts, I’m happy to chat!

GitHub: https://github.com/OpsiMate/OpsiMate
Docs: https://opsimate.vercel.app/
Demo: https://demo.opsimate.com/?playground=true
Slack: opsimate.slack.com

r/whatisit kjc09

Found this while at work.

Looks like candy, but I’ve never seen candy in a package like that. What is it?🤔

r/whatisit NoGarlic2111

Which brand tshirt is this?

r/LocalLLaMA DeltaSqueezer

What open weight model for diagram generation

I got the above results with the prompt below prompt using OpenAI chat. The results are bland but serviceable for presentations.

create a diagram for me for a presentation on agent LLMs. It should have boxes representing components laid out in a cross as follows: * Orchestration in the North position * Harness in the central position * Session in the West position * Tools/Resources in the East position * LLM in the south position Each box should have the name in text at the top of the box with a small illustration taking most of the space in the box

What are open weight model alternatives that I can use? I tried zImage and Qwen Image Create, but these gave pretty poor results with the given prompt.

r/ChatGPT QuiverbertPupilstein

Can you restore pictures effectively using AI?

Hi everyone, I was asking ChatGPT if I can improve old pictures, and it kept saying it easily could. But I don't know what i'm doing wrong here, because when I add a picture, it gives me back something completely made up lol. Is there any way I can actually do this effectively?

r/ProductHunters seance1

Just launched my AI- communication Saas Persivo!

Persivo is a practice environment for systemic coaches, consultants and those who want it to be. Train your questioning and intervention skills with AI-powered client simulations - from simple coaching sessions to complex organizational constellations. With instant feedback on your approach

launched on Producthunt today

r/whatisit Bilbo_Swaggins91

What made this United States Ohio

I seen plenty of molehills before never looked like this.. and this is the only one in the area in the middle of the yard.

r/ClaudeCode WolfOnWings

“Lazy” skill execution

When I call a skill, Claude tends to default simply to reading the skill and executing it instead of loading the skill first. It sounds weird, but I’ve noticed the output is much better (more complete responses, better rule following, etc) when Claude actually loads the skill.

The weird part is I cannot get Claude to default to actually loading the skill. A line in the root md file, an explicit rule given in convo about always loading skills, even changing skill description frontmatter to say “always load this skill if called by user” all had no effect. The only thing that works so far is saying “call /skill-name” instead of “/skill-name”.

Has anyone experienced anything like this?

r/whatisit MasterBador

What animal is this? And why are there so many in the Philippines?

r/Weird denialgrey456

Dog with human head

r/SideProject Significant_Load_411

built this after getting tired of flaky signup / otp tests

been working on a temp mail project for a while and initially it was just the usual “generate email, receive messages” type thing.

but after using it myself for testing signup flows, otp, magic links etc… i kept running into the same issues:

  • shared inboxes getting messy
  • waiting for emails with random delays
  • parsing otp codes manually
  • tests passing locally but failing in ci

so i started building more around that problem instead of just keeping it as a temp mail tool.

ended up turning it into more of an api for testing auth flows:

  • create inbox per test
  • receive emails in real time
  • directly get otp / verification links
  • no polling or sleep hacks

basic usage looks like:

const inbox = await fce.inbox.create(); await signup(inbox.email); const otp = await fce.otp.waitFor(inbox.email); await verify(otp.code); 

also added sdk (ts/js + python), cli, and some support for automation / agents.

it’s still early on the dev side, but the public temp mail version has handled a decent amount of traffic so infra side is somewhat tested.

mostly trying to figure out what devs actually care about here:

  • reliability?
  • pricing?
  • sms support?
  • something else?

if you’ve built anything involving auth flows or e2e testing, would love to know what’s been the most annoying part for you

r/mildlyinteresting Brilliant-Algae-9582

Uranium Glass Exists

r/SideProject Less-Bite

Day 16 of sharing stats about my SaaS until I get 1000 users: The software devs are moving into my tool but they are ghosting the leads they find

I've been looking at which industries are actually using purplefree and it is definitely a developer crowd. We have 13 software development shops and 7 SaaS companies in the mix now. They are creating tons of matches because they are hungry for work, but then the funnel just stops. It is kind of weird to see 645 matches for a custom SaaS dev shop and only 6 actual follow-throughs.

I think developers are great at setting up the 'system' of finding work but terrified of the actual outreach. Looking at the data, one dev agency has a 0.93 percent conversion rate from match to follow-through. That is brutal. They are seeing hundreds of people basically waving their hands for help and then just closing the tab. It makes me wonder if I need to build something that helps them write the first message because the technical matching part is clearly working.

Compare that to the SEO and web design guys. They have fewer matches overall but they actually act on them. The software development crowd seems to be using the tool more like a search engine than a lead gen tool. They want to know the leads exist, but they aren't ready to talk to them yet. Maybe it is a confidence thing or just that developers would rather optimize a crawler than send a cold DM.


Key stats: - 13 software development products registered compared to only 8 in real estate - 0.93 percent conversion rate for the top custom SaaS development product - 78.2 percent drop-off rate from getting matches to taking any action - 185 total signups with 120 of them successfully creating a product


Current progress: 185 / 1000 users.

Previous post: Day 15 — Day 15 of sharing stats about my SaaS until I get 1000 users: I have 18,000 matches sitting there and almost nobody is actually sending the messages

r/aivideo HeirOfTheSurvivor

Arcane: A Thousand to One

r/oddlysatisfying MambaMentality24x2

Photographer and knitter team up to create jumpers that disappear into their surroundings

Photographer @josephfordphotography partnered with knitter @ninadoddknits to create the series Invisible Jumpers, in which custom-designed sweaters are crafted to match specific environments with near-perfect accuracy

r/Wellthatsucks DjOZER666

Caught a ban a few days ago...

I was chaning the 600 badge. 512 of 600

it was one of the few happy things I had left

r/homeassistant EasyEconomics3785

SwitchBot Garage Opener Relay

Hello, hoping to get some assistance on my SwitchBot relay . First time using a relay but saw it’s similar to Shelly gate opener and I’m hoping to use it on a gate opener. Can someone guide me here as to how this would be wired? The signal wire is my issue, not sure where to put it.

r/funny Butterscotch8484

Not that kind of bite, but he got a point

r/meme Different_Hawk7975

POV: That one random thought

r/homeassistant Heinzel_1

Home Assistant with Bosch Thermostats and Ikea Setup. Help! :(

Hello,
I’m completely new to the whole world of smart homes, so please excuse me if my questions seem stupid.

Some time ago, I bought a few Bosch Smart Thermostats II along with the corresponding controller, set everything up, and have been using them through the Bosch app ever since—and I’ve been happy with it. Recently, I wanted to try out window sensors and discovered that there’s a new range available at Ikea. So I bought a few of those, only to quickly realize that they can’t be integrated into the Bosch app. After doing some research, I decided to give Home Assistant a try.

I managed to set up a mini PC with Home Assistant, and everything seems to be running fine so far. However, I don’t understand at all how to create my own heating schedules in Home Assistant. Is that even possible? Or would it be smarter to keep controlling the thermostats through the Bosch app and only create automations in Home Assistant for the window sensors?

Unfortunately, I haven’t been able to find anything helpful about this online so far. :(

Thanks for your help!

r/ClaudeAI Limp-Park7849

I built snaprevert with Claude Code because AI coding is now faster than committing to github

I built snaprevert, a CLI tool that reverts code changes in one command instead of five git operations.

What it does: snaprevert lets you undo your last N commits, revert specific files, or rollback to a named checkpoint; all from the terminal with a single intuitive command. No git plumbing required.

How Claude helped: I used Claude Code to spec the full CLI architecture (subcommands, error handling, UX flow), then built it iteratively with Claude's real-time feedback. Claude caught edge cases I missed (detached HEAD state, stash conflicts, empty repos) and suggested the checkpoint system, which I wouldn't have thought of alone. The tool went from concept to production-ready in ~4 hours.

Check it out here: https://github.com/HadiFrt20/snaprevert

r/ChatGPT Informal-Device-8511

Apparently chat gpt generates these really disturbing images with this prompt

prompt: Create an image of a random scene taken with an iPhone 6 with the flash on, chaotic, and uncanny

r/LocalLLaMA shbong

Gemma4 27b vs GPT-OSS 20b -- Has anyone compared them ?

I still have to try Gemma4, I have already installed it A4B 27B and I am so curious to see it's potential with my eyes, what I've tried is actually gpt-oss 20b which i think is the most similar to Gemma 4 27b.

I'm curious to hear from someone that have tried both

r/ProgrammerHumor TallGreenhouseGuy

iAmSorryYouAreAbsolutelyCorrect

r/SideProject Temporary-Detail-724

I built a sandbox building game that runs natively on Apple Watch

Hey everyone, solo dev here.

For the past while I've been building WatchBlocks, a 2D sandbox building game that runs entirely on Apple Watch. No iPhone required, not a companion app, an actual game designed for the wrist. You place blocks, clear terrain, and build freely in short sessions using the Digital Crown and tap controls.

It launches April 16 and I just opened TestFlight if anyone wants to try it before then.

TestFlight: https://testflight.apple.com/join/jxAjxHG7

App Store: https://apps.apple.com/us/app/watchblocks-2d-sandbox/id6760209351

Would love any feedback, first time shipping a game like this

r/funny bebob10

I think my local grocery store is broken.

r/SideProject Emergency-Title9798

I almost quit my app but this community helped me and now I have 66 active users. Today it is live on iOS.

I almost trashed this entire project in January when my first 10 users instantly ghosted me. I am a web dev by day. I know I could have just used AI to spit out the code for me but the whole point of starting this was to actually learn Flutter by getting my hands dirty. I completely underestimated the ecosystem. When my churn hit 100 percent, I was clueless and asked this sub for help.

A user here (MzIMM7) pointed out the painful obvious.

People were simply forgetting the app existed because I had not built notifications. Wiring up iOS push notifications in Flutter was an absolute nightmare but that single piece of advice saved the project.

It helped me grow to 60 active users on the web and today the mobile version of Life 2 is officially live on the iOS App Store. I would love your brutal feedback. Does the mobile UI feel native and do the RPG elements actually keep you coming back?

r/whatisit abysmal_gutz

Strange note found in charger box

just bought this charger from dollar tree for my iPad, I'm super curious about this note and what it means. it was in the very top of the box, first thing I saw when I opened it

r/PhotoshopRequest PrudentSyllabub636

Will tip $15.00 to make the person on the right thinner, both her face and mid-section (no AI please). Thanks!

r/instantkarma derek4reals1

Get a load of this guy

r/onejob mrmasterpro_gamer

Not my job to put the right color on the text

r/LocalLLM Suitable-Song-302

[P] quant.cpp v0.13.0 — Phi-3.5 runs in your browser (320 KB WASM engine, zero dependencies)

quant.cpp is a single-header C inference engine. The entire runtime compiles to a 320 KB WASM binary. v0.13.0 adds Phi-3.5 support — you can now run a 3.8B model inside a browser tab.

Try it: https://quantumaikr.github.io/quant.cpp/

pip install (3 lines to inference):

pip install quantcpp from quantcpp import Model m = Model.from_pretrained("Phi-3.5-mini") print(m.ask("What is gravity?")) 

Downloads Phi-3.5-mini Q8_0 (~3.8 GB) on first use, cached after that. Measured 3.0 tok/s on Apple M3 (greedy, CPU-only, 4 threads).

What's new in v0.13.0:

  • Phi-3 / Phi-3.5 architecture — fused QKV, fused gate+up FFN, LongRoPE
  • Multi-turn chat with KV cache reuse — turn N+1 prefill is O(new tokens)
  • OpenAI-compatible server: quantcpp serve phi-3.5-mini
  • 16 chat-cache bugs found + fixed via code-reading audits
  • Architecture support matrix: llama, phi3, gemma, qwen

Where it fits: quant.cpp is good for places where llama.cpp is too big — browser WASM, microcontrollers, game engines, teaching. For GPU speed and broad model coverage, use llama.cpp. Different scope, different trade-offs.

GitHub: https://github.com/quantumaikr/quant.cpp (377 stars)

Principles applied:

  • ✅ Lead with "what you can build" (browser demo, 3-line Python)
  • ✅ Measurement-backed speed claim (3.0 tok/s, M3, greedy, CPU-only, 4 threads)
  • ✅ Recommend llama.cpp for GPU speed (per memory: lead with respect)
  • ✅ No comparisons, no "X beats Y" claims
  • ✅ Concrete integration scenarios (browser, MCU, game, teaching)
  • ✅ No overstated claims — "3.0 tok/s" is the real number
r/raspberry_pi Fair-Lengthiness-212

Amazing what an RPi can power

Spotted this in Matera, Italy, today, goes to show what RPi are powering out there though whilst RPi are loving the advertisement looks like the OS needs updating!

r/LocalLLM Saphir78

I is pretty demanding

Hi, I'm new here, I just installed my first local LLM (ollama:gemma 3 + WebUI). And everytime it answered me, I can hear the fans speeding up and the cpu poucentage increasing.
(BTW : I have a Ryzen 9 9950X3D, an RADEON RX 9070 XT Pure, and 32GB Ram).

I run all hose people on docker containers, and I wanted to know :
1. Is it normal getting those numbers every prompt I enter ?
2. Is there a way to make it less demanding ?

Thanks a lot in advance

r/hmmm tiredGardenWarrior

hmmm

r/me_irl eccentricnitwit

me irl

r/interestingasfuck NegotiationNo1504

Ancient engineering that modern campers still use

r/mildlyinteresting _allyp

The way my Easter egg broke

r/meme aeshahin

that'll show'em

r/AI_Agents Front-Breakfast-8332

Claude skills, evaluating, scaling and Graphrag

Hi,

Sorry if these are a lot of questions.

Does anyone recommend a GitHub repo to understand how to use `skills.md` in an app or a business workflow? How are you evaluating the output—is it through a labeled dataset? Do you use ML in the workflow too? How are you scaling with agents—is it through containers? Lastly, has anyone experimented with making GraphRAG and assigning a priority score?

r/SideProject webdevamin

Upgrading my app where you can find web design leads and businesses without websites

Hello everyone, recently I’ve been extending the application, Webleadr. Some of our existing clients, who are web designers and developers, have requested extra features beyond what was initially available in version 1. So we have decided to create version 2 of the application, which includes features such as calling businesses, sending emails, inspecting social media profiles, and sending messages from there to generate friendly outreach messages. This method allows people to approach potential web design leads in a more personal and effective manner.

The application caters specifically to web designers and developers who are looking for a local small business that may need a website or website improvement. Also, in terms to the features mentioned earlier, the application also includes a system that analyzes the design and content density of a website’s homepage. Based on these metrics, the system assigns a score between 0 and 10, indicating the likelihood of the website becoming a potential lead. The lower the score, the higher the chances of the website being a good fit for our clients. Of course there are more.

What you see in the video is the landing page. It is basically finished, and what's left is actually testing the application in the backend to see how it works. It is not available yet in production but it will be soon, so stay tuned.

For tech savvy, the core techs used are the following:

  • Next.js (Web Framework)
  • Supabase (Edge Functions, Database, Authentication)
  • Paddle (Merchant of Record, like Stripe but with extra more in terms of taxes and invoices and such)
  • Resend (Email service)
r/whatisit Psychological_Win419

What are these brown spots and they seem to have little holes in them

I was cleaning room and then I spotted at a corner near my outlet, it appeared to be bunch of spots that is brown and is close to holes, I don’t usually hang around in that corner and I’m certain I didn’t spill anything.

Does anyone know what that is?

r/Weird IamASlut_soWhat

Sir, wtf... 😒

r/ClaudeAI jradoff

I built a WebGPU shader editor with a full MCP server: AI agents can create, evolve, and export GPU shaders

Several days ago I posted here about resurrecting a 1992 MUD (Legends of Future Past) entirely with Claude Code. That project taught me a lot about what agentic engineering looks like for game development. This weekend I pushed into GPU graphics territory, again built with Claude Code. The result:

https://preview.redd.it/x75qofgxarug1.png?width=3262&format=png&auto=webp&s=a6646ee0a4249a5db8391d1dcb4291eec868b452

ShaderVine is a browser-based WebGPU shader programming toolkit. You write WGSL shaders in a Monaco editor with live preview on 3D geometry, run GPU compute simulations, genetically evolve shaders through visual selection, morph between shaders, and export to Unity, Unreal, Blender, Three.js, or raw HLSL. No install, runs entirely in the browser, MIT licensed.

The part I'm most interested in sharing here: I designed it for the agentic era. The whole thing has a full MCP server (built on mcp-go) so Claude or any MCP-compatible agent can:

  • Search and browse the shader gallery
  • Create new shaders from scratch
  • Fork and modify existing ones
  • Adjust uniform parameters
  • Trigger genetic evolution
  • Export to any target format

The agent can do everything the browser UI can do. MCP isn't an afterthought; it's a first-class interface.

Why shaders are an interesting agentic test case: Shader code is compact (around ~50 lines for a fragment shader), the output is instantly visible, and small parameter changes produce dramatic visual differences. The feedback loop with Claude is actually tighter here than I expected compared to complex 3D scene work. Where it breaks down is the same modality gap I've written about before—"I want to explore variations faster than I can describe them." That's what the genetic evolution and morph tools solve. The agent generates, you select visually, breed the winners, repeat.

What I learned about MCP design: Having now built MCP servers for an agent orchestrator, a website CMS, generative engine optimization, a chess platform (Chessmata), a generative art tool (Threelab)... and now ShaderVine, I'm developing real opinions about what makes a good agent-facing API. The tools need to be composable, discoverable (clear names a model can reason about), and bounded. Don't expose 200 tools when 15 well-designed ones cover the same territory.

WebGPU compute is the real story. WebGL had no compute shaders at all—you had to fake GPU compute by encoding data into texture pixels. WebGPU's native compute pipeline with storage buffers and workgroup shared memory is a qualitative leap. I'm running a bunch of different compute simulations (fluid dynamics, physarum, reaction-diffusion, particle swarms, erosion, Turing patterns…) all at interactive framerates in the browser. GPU compute through a URL: this was speculative two years ago and now it just works.

Stack: React 19, TypeScript, Vite, Go backend, MongoDB, Docker, Fly.io. Designed to be forkable and maintainable with Claude Code.

GitHub: https://github.com/jonradoff/shadervine

Full writeup on the design philosophy: https://meditations.metavert.io/p/shadervine-a-webgpu-shader-editor

Live demo: https://shadervine.metavert.io

Happy to answer questions about MCP server design, WebGPU compute, the future of game development and AI agents generally.

r/comfyui Fit-Construction-280

This is not another ComfyUI gallery: I built a local DAM for real workflows (multi-user, client sharing)

https://preview.redd.it/mnagdx8marug1.png?width=1899&format=png&auto=webp&s=8d98b8f9752f61f896210c2615a83eb4735bca48

  • Quick note: I’ve seen a lot of ComfyUI gallery tools lately. This is not just another image browser. It’s built for workflows, collaboration, and client sharing.
  • What started as a simple local gallery for ComfyUI outputs has grown into something much bigger. SmartGallery is now a full Digital Asset Manager built around AI workflows, still fully local. Free and open source.

The problems I was trying to solve

  • Tens of thousands of images and no way to find anything. Prompts are buried in filenames or lost entirely.
  • I needed to show work to clients, friends and art directors without sharing my entire workspace or dumping everything on Google Drive. I wanted a dedicated read-only portal where I could choose exactly what to show, and they could vote and comment on it. My main workspace stays mine.
  • The ComfyUI update problem: every major update breaks half the custom nodes. I did not want a gallery that lives inside ComfyUI and goes down with it. SmartGallery runs as a completely separate process. It reads ComfyUI workflows and understands models and LoRAs, but it does not depend on ComfyUI being installed, running, or even working. You can run it on a different machine and just point it at your output folder over the network.
  • I wanted to use it from my phone. I cull batches from the couch while they are still running. Most tools in this space were clearly never designed with mobile in mind. SmartGallery was built responsive from the start, and the full interface works on phones and tablets, not a stripped down version of it.

What SmartGallery DAM is

A local, browser-based interface that indexes any folder, including ComfyUI outputs. It automatically extracts embedded workflows from ComfyUI images, making them fully searchable. No uploads or external services: it works entirely offline.

You can rate and comment on your creations directly within the main interface. When you are ready to share, you launch the Exhibition Portal, a separate read-only space where guests can vote and comment on only the work you have chosen to show. They never see your main workspace, your prompts or your workflows.

What is new in 2.11

Main additions:

  • Virtual collections: group files from different folders into albums without moving anything on disk. Collections can be private or marked for sharing.
  • Ratings and comments: rate images 1 to 5 stars, leave notes. Comments can be public, internal staff only, or a direct message to a specific user.
  • Color-coded status tags: approved, review, to edit, rejected, select. Each state has its own color, following standard DAM conventions. You can browse all files with a given status across your entire library at once.
  • Multi user system with roles: admin, manager, staff, client, guest. Each role controls what they can see and download.
  • Exhibition mode: a separate read only portal you launch only when you have something to share. Clients can rate and comment but never see prompts or workflows.
  • Automatic metadata stripping: when a client downloads an image, all embedded workflow data and EXIF are stripped automatically.
  • Powerful search with logical operators: filter across prompts, models, LoRAs and comment text using AND, OR and exclusion operators with multiple keywords at once. Becomes essential once your library gets large.

The features still there:

  • Compare mode: select two images, get a visual side by side and a diff table of every parameter that changed.
  • Node Summary: View Seed, CFG, Steps, Models, LoRAs, and prompts for any file (image or video) at a glance. Quickly download or copy the JSON workflow to your clipboard.
  • File manager: Rename, move, copy, delete files and create folders directly from the browser
  • Full video support: Thumbnails, storyboard preview, and on-the-fly transcoding via FFmpeg. Handles ProRes and other professional formats
  • Still fully local: no accounts, no tracking, no vendor lock in.

| Don't worry: all your current setup and database data will work perfectly in the new version.

Typical use cases

  • You generate a lot with ComfyUI and want to actually find things later
  • You want to cull and review batches while they are still running, from your desktop or your phone
  • You work with clients and need a cleaner way to share results without exposing your workflow
  • You want a gallery that survives ComfyUI updates instead of breaking with them
  • You just want a local DAM for images and videos, no ComfyUI required

Lightbox with a node summary panel on the left, the image in the center, and a ratings and comments panel on the right.

Tech notes

  • Python backend, HTML5 and JS frontend.
  • SQLite with WAL mode enabled to support concurrent multi-user access and prevent locking.
  • Windows, macOS, Linux and Docker
  • Mobile friendly, the full interface works on Desktop, phones and tablets

Lnks
GitHub repository (free and open source): https://github.com/biagiomaf/smart-comfyui-gallery

Website with full feature documentation, screenshots and interactive wiki: https://smartgallerydam.com

r/whatisit MaryIsALamb05

Found outside my home near the WiFi box thing

Was cutting the grass when I noticed these cut wires. Any idea what this could be? Wondering if it had something to do when the Wi-Fi company came out to install whatever they had to do outside

r/SideProject Single-Possession-54

My Claude and ChatGPT agents finally know what the other is doing. No more context loss.

I have been juggling way too many agents lately. I'll have Cursor open for coding, Claude for brainstorming, and maybe a custom GPT for research, but the biggest headache is always context drift. Claude doesn't know what I just told Cursor, and I end up repeating myself like a broken record.

I got tired of it so I built AgentID. It basically gives every AI agent a persistent identity and a shared memory pool.

I wanted something that works with the stuff we actually use. So if you're using Cursor, Codex, or even OpenClaw and Nanobot, they all hook into one central 'brain' via MCP or a system prompt. If one agent learns a project rule or a specific bug fix, all of them know it instantly. No more copy-pasting logs from one window to another.

The coolest part I just finished is the 'Agent Studio' view. It’s a 2D visual layout that looks like a little house where you can actually see the agents communicating and moving around in real time. You can share the link with people so they can watch the agents coordinate on a mission live. It makes the 'black box' of AI feel a lot more tangible.

I kept the entry barrier low because I wanted people to actually play with it. There is a free tier for individual builders (1 identity, 4 agents). If you want to scale up, the team plan is usually 4 Euro but I’m doing 99% off the first month for the launch with code PH99OFF.

Curious how you guys are handling multi-agent workflows right now. Are you just living with the context loss or do you have a better workaround?

r/therewasanattempt sussybush

to take a picture

r/Unexpected mikehocalate

Road rage

r/ChatGPT theresafoguponla

A question about ChatGPT' and imported chats

Hi all, so I export a chat (that hit conversation limit) as a .md file and import it to a new chat to continue the same conversation, and ChatGPT seems to only remember the earlier bits of the imported chat (like, it remembers Event 1 happened but not Event 2 or 3 which happened later). I checked and the older chat was exported in full, so it's a 100 percent on ChatGPT itself.

Anybody else encountered this problem?

r/ChatGPT joesmith12277221

AI enhances addiction:

Prompt: Does reward cause a dopamine response and addiction

Answer: I am not arguing abstract corporate intent. I am arguing OpenAI’s intent as expressed through output behavior.

If AI outputs repeatedly validate the user’s frame, increase certainty, provide intermittent emotional payoff, and keep the exchange going, the logical conclusion is that they are likely to create reward-reinforcing loops. Reward often triggers dopamine-related learning signals, especially when it is unexpected, cue-driven, or variable.

Dopamine is not the only mechanism in addiction, but it is a core reinforcement pathway. So no, this is not the whole story. But yes, this pattern is highly likely to increase addiction risk by conditioning people to come back for the next emotional payoff.

r/KlingAI_Videos UnluckyAdministrator

KlingAI Omni - Climbing The Perilous Jungle Sinkhole

An ultra-realistic character called "Tough action star" was created and digitally infused into a chaotic adventure setting to create this jungle sinkhole action scene using KlingAI Omni with sound.

Find the detailed prompts below and edit it to create other action scenes. Settings for lighting, camera, sound etc can be found in the prompt below:

*Prompt Start*

Animate this image into a cinematic 15-second high-intensity jungle survival sequence.

Maintain the exact appearance of the central action hero and environment.

Preserve realism and avoid distortion of the character.

STYLE:

Ultra photo-realistic, cinematic survival action, similar to Uncharted or Tomb Raider.

High tension, natural physics, grounded movement.

---

SECONDS 0–3 (Descent & Grip)

- Camera starts close on the hero’s hand gripping the rocky edge

- Small rocks crumble and fall into the abyss below

- Waterfall mist rises from beneath, partially obscuring the depth

- The hero struggles slightly but maintains grip

CHARACTER:

- Muscles tense, breathing heavy

- Face focused, determined

CAMERA:

- Slow upward tilt revealing scale of the sinkhole

SOUND:

- Deep cinematic bass rumble

- Waterfall roar

- Subtle heartbeat layer begins

---

SECONDS 3–6 (First Danger — Falling Debris)

- Loose rocks break free above and fall past the hero

- He quickly shifts his grip and presses against the rock wall

MOTION:

- Small debris hits surfaces around him

- Water droplets splash and mist thickens

CHARACTER ACTION:

- Quick, controlled movement upward

- Eyes tracking falling hazards

SOUND:

- Sharp rock impacts

- Rising tension strings

---

SECONDS 6–9 (Second Danger — Environmental Threat)

- Vines above begin to snap and sway violently

- A section of the wall becomes unstable

ACTION:

- Hero swings slightly using a vine for repositioning

- Avoids falling debris while climbing upward

CAMERA:

- Slight handheld motion for intensity

- Close tracking shot on movement

SOUND:

- Music builds with layered percussion

- Wind and echo intensify

---

SECONDS 9–12 (Climb Surge — Skill & Control)

- Hero finds stronger footholds and begins climbing faster

- Waterfall spray hits him, adding resistance

MOTION:

- Strong upward climb with controlled movements

- Water streams past, reflecting light

LIGHTING:

- Light from above grows brighter (hope element)

- Subtle lens bloom from sunlight

SOUND:

- Music peaks with heroic undertone

- Heartbeat fades into orchestral rise

---

SECONDS 12–15 (Final Push — Surface Reach)

- Hero reaches the edge and pulls himself upward

- One final rock slips, but he recovers and climbs out

FINAL MOMENT:

- Camera rises above him as he reaches the surface

- He pauses briefly, breathing, silhouetted against jungle light

ATMOSPHERE:

- Sunlight breaks through canopy

- Mist clears slightly

FINAL TONE (voiceover or text, cinematic):

“Not today.”

ENDING:

- Hold final frame for 1 second

- Fade to black or loop point

---

GLOBAL RULES:

- Keep motion realistic and grounded

- Avoid exaggerated physics

- Maintain face clarity and consistency

- Preserve environmental detail (water, rock, vegetation)

TONE ARC:

Struggle → Danger → Adaptation → Triumph

VISUAL THEMES:

Depth, scale, survival, resilience

*Prompt End*

r/Damnthatsinteresting Pilot0350

Watching Rally Croatia when this banger comes on

r/midjourney MrTippyToes

Some men…just want to watch the world burn…

r/meme amx-002_neue-ziel

Listen to Lex Fridman... but at Half-Speed!

r/ChatGPT United_Perception299

Black pill slang

r/wholesomegifs lnfinity

Fern gets Mom to join in the zoomies

r/Strava Tiny-Supermarket3838

Mute training but not abnormal efforts or races - Suggestion

To be fair, I just have those friends/people I follow that train a whole lot leading up to a race and that are always on my feed (or upload bs trainings - 12min weight workout) and although I love them, I’m sick of seeing their stuff lol, but don’t want to miss out on the big accomplishments, as I’ll cheer for those all day!

How about mute the casual but show me the great efforts, which should be fairly easy to benchmark on an account basis based on pace / distance / watts / HR, etc.

More granular muting would be great!

r/terriblefacebookmemes TimothiusMagnus

How much truth?

I found it amusing

r/arduino Critical_Time9272

Unable to install llama-cpp-python in app lab for arduino uno q

Hello,

I have been trying to install Llama CPP Python library on Arduino Uno Q for running smaller llms. I tried installing via Requirements.txt in arduino app lab but it gave the following error.

> Using CPython 3.13.9 interpreter at: /usr/local/bin/python

> Creating virtual environment at: .cache/.venv

> Activating python virtual environment

> Using Python 3.13.9 environment at: .cache/.venv

> Resolved 6 packages in 11.95s

> Building llama-cpp-python==0.3.20

> Downloading numpy (14.9MiB)

> Downloading numpy

> × Failed to build `llama-cpp-python==0.3.20`

> ├─:play_button: The build backend returned an error

> ╰─:play_button: Call to `scikit_build_core.build.build_wheel` failed (exit status: 1)

> [stdout]

> *** scikit-build-core 0.12.2 using CMake 4.3.1 (wheel)

> *** Configuring CMake...

> loading initial cache file /tmp/tmpy6_yu2lo/build/CMakeInit.txt

> -- Configuring incomplete, errors occurred!

> [stderr]

> CMake Error at

> /app/.cache/uv/builds-v0/.tmp1BZIbL/lib/python3.13/site-packages/cmake/data/share/cmake-4.3/Modules/CMakeDetermineCCompiler.cmake:48

> (message):

> Could not find the compiler specified in the environment variable CC:

> gcc.

> Call Stack (most recent call first):

> CMakeLists.txt:3 (project)

> CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage

> CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage

> *** CMake configuration failed

> hint: This usually indicates a problem with the package or the build

> environment.

> Clearing cache at: .cache/uv

> Removed 8015 files (283.9MiB)

Please help me solve this error and how i can fix this. Also, is there any other ways i can run llms (mostly under 1B parameters ) in arduino uno q (4gb ram varient ) in the app lab?

r/mildlyinteresting smoochumfan4

Friends said my fist looks weird/funny

r/SideProject Crafty-Flatworm6644

A healthier way to engage with sports

I used to be obsessed with sports betting. Every morning I’d dream up some crazy parlay that would never hit and I was throwing money down the drain.

I created pickstreaks.com as a fun daily game to curb my sport betting while still having a competitive way of watching sports every night.

Would love any and all feedback!

r/mildlyinteresting ReachForThaStars

A familiar pepper in a Florida Residential

r/toastme thewhitebean

[Reupload] feeling like a penny in a bucket of nickels

r/AI_Agents qarmik

I built a memory system for AI that doesn’t drift (after 121 failure modes)

I’ve been working on a small project called MNEMOS — a memory layer for AI assistants that focuses on one thing:

Not storing everything…

but maintaining what is actually true over time.

---

Most “AI memory” systems today are retrieval-based.

They:

- store past messages (vector DB, logs, etc.)

- retrieve relevant ones later

But they don’t resolve contradictions.

Example:

User says:

“I like prawns.”

Later:

“No, I don’t like prawns.”

Most systems now have both in memory. What happens next depends on retrieval, phrasing, or luck.

---

What I built instead is a belief-based system.

Core idea:

- Each user fact becomes a belief

- Beliefs have confidence + timestamp

- Contradictions are explicitly detected

- Only one active truth survives

So:

“I don’t like prawns” → becomes a hard update

Previous belief is replaced, not coexisting

---

This took:

- 16 real sessions

- 121 documented failure modes

- ~7 days of focused adversarial testing

I literally used one model to break the system and another to fix it.

---

Some interesting behaviors that emerged:

  1. Drift resistance

    Even after long unrelated conversations, the system keeps the correct state.

  2. Identity consistency

    “I / you / [name]” all map to the same entity without fragmentation.

  3. Relational signals

    If a user says “my boss is an asshole”, it’s stored as a low-confidence perception and used later when discussing work stress.

  4. Selective surfacing

    Memory isn’t always shown — only when relevant.

---

What I learned:

Memory is not the hard problem.

Truth is.

Storing chat history is easy.

Maintaining a consistent belief state under contradiction, noise, and time is where systems break.

---

This isn’t a full cognitive architecture (no full episodic/semantic split yet), but a focused layer for:

- preference stability

- contradiction handling

- state consistency

---

Would genuinely appreciate feedback, especially from people working on:

- long-term memory

- agent architectures

- retrieval vs state-based systems

Where do you think this approach breaks down?

r/SipsTea Previous_Month_555

The Art of War

r/Seattle addisonh11

WANTED: beloved blue workwear "Chuck" jacket taken from Linda's Tavern

TLDR: blue workwear jacket that says "Chuck" (photo in comments) taken from Linda's, trying to get it back

On Friday night I was meeting friends at Linda's Tavern on Capitol Hill. I wore my favorite jacket: a blue workwear jacket with a name tag that says "Chuck" from "Trailer Sales" that I bought decades ago at a flea market when I was 9 years old. This jacket is the one possession I've kept with me since childhood—I lost pretty much everything else in a hurricane that destroyed my family's home years ago.

I was on the back patio around ~11:30pm last night (Friday, April 10th) when I set my jacket down on the ground near our picnic table after noticing friends at another table and going to say hello. While my back was turned, a guy from the table behind us (green jacket guy with a ponytail) gets up, takes my jacket from where I'd set it, shows it off to his friends, laughs at it, and then proceeds to take it with him as he and his friends exited the bar.

This has had me sadder than I imagined I could be about a jacket, it's really the only nostalgic thing I own and I've been wearing it everywhere I go for over 20 years. Per rule 9 of this sub I am apparently NOT allowed to ask for help identifying the person who accidentally took this jacket.. however I am really hoping this can reach back to the friend group seen here to know how much this jacket means to me. You can bring the jacket back to the Linda's lost & found and they'll get it back to me! Also huge thanks to the folks at Linda's who provided the security footage and have been so helpful so far! I've always had so much love for Linda's and appreciate them even more now, please support them!

r/Rag drew-saddledata

I got tired of writing regex to strip markdown fences from LLM responses, so I built a validation API

Large Language Models are notorious for returning messy JSON. Between the surrounding prose, missing quotes, and type drift, parsing the output safely is a massive headache.

I built the LLM Validation Gateway to solve this in a single synchronous API call. You just define a schema contract (like an integer for customer_id), and send the dirty LLM output in the payload_raw field.

  • It automatically sniffs out the JSON and fixes syntax errors.
  • It aggressively coerces types, so if the LLM returns the string "1.0", it correctly passes the integer 1 to your app.
  • It recursively traverses nested JSON structures to ensure every level matches your contract.
  • I also wired up our open-source pii-hound engine inside it, so if the LLM accidentally hallucinates an SSN or AWS Key into the output, it flags it before it touches your app logic.

If the schema drifts completely (like missing a required field), it returns a detailed drift_details payload so you know exactly why it failed.

I recorded a quick 3-minute demo showing it in action. Would love to know if this solves ingestion headaches for anyone else building RAG or agentic apps!

https://youtu.be/hMpqOxEKsMQ?si=9uEkUcQ3zEtpe1cD

r/aivideo Sensitive-Tap-9005

4 Wolrd Triathlon [Favourite Gacha me MC Parody]

r/automation Lazy_Trouble6545

Cloud android setup

Finally found a cloud Android setup that has been stable for managing multiple accounts

body-

I’ve been trying different ways to manage multiple Android environments for client work over the past year.

Local emulators were my first approach, but they became unreliable once I scaled past a few instances. Things like random session drops, slow performance, and occasional profile issues made it hard to depend on them.

After that I tried browser-based profile tools. They were easier to use, but they didn’t fully solve the issue since they only work at the browser level and not the full app environment.

Recently I switched to a cloud based Android setup where each environment runs separately with its own apps and data. So far it has been much more stable, and I haven’t had the same issues with sessions or performance slowing down the main machine.

Setup was fairly straightforward and I was able to get my first environment running pretty quickly. It also scales better since nothing is running locally.

I’m still exploring the automation side of it, but it looks promising for more advanced workflows.

Has anyone else moved from local emulators to cloud based setups? Curious how your experience has been long term.

r/ClaudeCode _h4xr

Claude and Semantic Understanding of Code

Our company has poured in significant subscriptions of Claude code and a lot of developers have been actively porting tools to it. But something interesting which I have noticed is, while Claude Code in general produce better code than Gemini or OpenAI models, the code in general is sub optimal.

When I started digging deep a month before, a common pattern came out; while the agentic coding tools are great at searching for code and sampling for text and usages, they barely understand the semantics of the codebase. For example, if a method is being used transitively, they may not even attempt to touch it.

This generally leads to code that is half baked, fails in test cases or worse (has hidden breakages which will only be known when things run in production)

I was trying to solve this problem for a while and came up with the idea of introducing a semantic graph of the code base and exposing it to Claude code as a skill.

The results were surprising: on code inference and context building, claude code with the help of the graph was able to narrow down deep code chains.

In another case where I tried this with an open source project (dotCMS) and attempted to feed claude with an Open Issue requesting UUID4 to UUID7 migration, claude was able to leverage the semantic graph and reduce the number of changes to only 2 core files (in comparison to without using the Semantic code graph where it ended up introducing redundant changes to 30+ odd files)

I have recently open sourced the semantic graph (which I have been able to build only for Java till now) and will be genuinely interested to hear from the community if they have faced the same issues or not

Please feel free to use https://github.com/neuvem/java2graph and let me know what you think

r/Jokes Jokeminder42

One member of a fantasy football league says to another member, "So... how about we make this interesting?"

"Good idea!" says the second guy.

So they quit.

r/TwoSentenceHorror fj2612

My little sister was 6 the first time I saw my father get out of her bedroom pulling his shorts up.

I smiled with the hope that he would now leave me alone.

r/whatisit SargentSuffering

Goblin meme og photo

Anybody know where this picture came from, and if there's more? Is this a statue or from a movie/show or something? Who is he? Is he ai generated?

r/therewasanattempt Spartalust

By Netanyahu to indimidate Spain 😂 (footage from a popular Spanish late night TV show)

r/homeassistant create-new_account

Best video doorbell for Home Assistant? (Aqara G410?)

Hi,
I’m currently looking for a video doorbell for my home as I’m in the process of switching to Home Assistant.

Right now I’m using a Ring Doorbell Wired, but I haven’t been able to get a video stream working in Home Assistant. I’ve tried a few tutorials, but couldn’t get it to work properly. On top of that, I’m not very happy with Ring in general and I don’t like the idea of paying a monthly fee just for video storage. So I was planning to switch brands anyway.

I’ve seen a lot of recommendations for the Reolink doorbell – is it actually good? I already have two Reolink cameras at home and they work fine in Home Assistant. However, I had to replace both of them once after about a year because they broke, which makes me a bit hesitant.

Another option I’ve been considering is the Aqara G410 doorbell. How well does it work with Home Assistant and would you recommend it?

My plan is also to build a Home Assistant dashboard on an Android tablet so I can instantly see who is at the door.

So my question to you:

  • Which video doorbells can you recommend that work reliably and are easy to integrate with Home Assistant?

The doorbell doesn’t need to be waterproof since my entrance is covered.

  • Is there anything else I might be overlooking that could cause issues or limitations later on?
r/VEO3 AffectionateTotal612

Last Order ( Veo 3)

r/ClaudeAI Curious-Soul007

Claude AI keeps building my entire app instead of just setup steps. How do I control its output?

I was building a vibe coding app (Windows/MacOS) using Claude AI. It has multiple features with a minimal backend, mainly just to store Google login data. No logs or personal data are being saved.

The issue is that I added project files and instructions as references, and I planned prompts that I want to give Claude AI step by step.

But the main problem is that when I give the first prompt and ask it to set up basic things for the project, Claude starts building the entire app. The app ends up completely different from what I intended, and all my usage gets wasted.

Can someone tell me how to fix this so Claude stops building the entire app in one go?

r/whatisit Working_Grocery6091

What made these marks on this white birch tree?

  1. I’m located in central New England.

  2. These marks weren’t here a couple weeks ago when the weather was cold.

  3. There’s no fur, feces, or scratch marks left behind.

  4. We found wood chips up to 3-4 yards from the tree

r/KlingAI_Videos sarasa_0505

Kling 3.0: Smooth Dance Study [OC]

r/ContagiousLaughter derek4reals1

That's going on the blooper reel

r/SipsTea Snehith220

This is Gold

r/meme OutLaw000

Besides being hard to hunt down, tastes pretty good!

r/gifsthatkeepongiving lnfinity

Pigging in the leaves

r/ChatGPT vampire-expert69

What’s your thoughts on people using ChatGPT to write stuff for them

I understand using ChatGPT for ideas on what to write, but it feels off when people using it to write for them and passing it off as your own work, that feels lazy, and over-done, but that is what I think.

What do this subreddit think?

r/ClaudeCode Ardaerenn

Claude or codex which one?

Hello, firstly I'm not a software engineer but I'm trying to do vibecoding and I build an app using html, css and js with using Gemini 3.1 + android studio. And I want to use a better agent to review my code and build the ios version. Which one do you guys reccomend me Claude or Codex?

r/Damnthatsinteresting Blueeyeboo

Medieval torture museum

r/ChatGPT TurbulentVillage2042

Lol what.

It seems for today that's all for creative processes. Two windows with prompt errors.

r/painting MissLovegoodASMR

Untitled.. <3

r/mildlyinteresting Psychological_Sky_58

Donkey vs. Drive-thru.

r/TwoSentenceHorror Corim707

Losing my hand was devastating.

The memory of my child using her safety scissors to do it more so.

r/personalfinance mightbedylan

Getting a 3rd paycheck this month, roommate is saying I'll owe them 3 "payments" this month, is that fair?

I've been sending a flat $900 a month to my roommate to cover rent and expenses. I have been paying $450 from each paycheck. Our housing costs come to right about 1800 a month, sometimes a little more or less for utilities but the average came pretty close to a clean $900 split

So this month I'm getting 3 paychecks and mentioned to my roommate that I was going to use the extra money for something but they are saying I still owe them. They are saying it's $450 every 2 weeks not $900 a month but I don't get this, it's not like our housing costs go up just because there's an extra pay period, and I still get 2 checks next month.

They are saying they still have to pay bills and stuff and need the money, I wouldn't mind sending a bit more if need be but I don't get why is need to make a whole nother payment, that doesn't seem fair?

What's the usual call here?

r/Jokes Phippsy771

They say carrots help you see in the dark

Me being a big brain entrepreneur decided to see if this is true

The invention?

Carrot binoculars with telescopes coming soon

Sadly the first product hasn’t quite taken off the way I hoped must be something about the lens . Am sure the telescope will work though 🙏

r/SideProject LadderAi

I built a voice-controlled AI smart lamp. could you tell me your thoughts?

I wanted a simpler way to interact with AI

So I built this:

An AI smart lamp called LadderAI.

What it does:

Voice-controlled interaction with AI

Physical touch speak response flow

Soft ambient light that reflects system state (listening / thinking / responding)

The lighting replaces the need for a screen — instead of looking down at your phone, you get subtle visual feedback through light.

I’ve also been experimenting with small actions like triggering navigation or music on a phone, but the main focus is making the interaction feel natural and calm.

Design-wise:

Minimal, soft-glow form

Patterned shell to diffuse light more organically

Trying to make it feel like an object, not a gadget

Still an early build — I’m iterating on both the experience and the design.

Would really appreciate honest feedback:

Does this feel like a meaningful direction, or unnecessary?

Would you use something like this in your space?

Any ideas on making the interaction feel more natural?

Happy to share more if anyone’s curious.

r/trashy floweredcontadiction

The way my jaw dropped when she said “it ain’t dead?!”

“I’m not trying to kill it” *Drinks wine*

r/fakehistoryporn Awesomeuser90

Roxellana, the Wife of Suleiman the Lawgiver and their son Bayezid in 1542 CE

r/nextfuckinglevel FollowingOdd896

Star Wars carnival float in Brazil.

r/LocalLLaMA mehrawashere

offline PWA that runs GGUF models in phone browser

I was just amazed by wllama and decided to do a pr on it that it would allow loading of gguf model files locally and make it persistence, and phones nowdays usually have huge amounts of compute that can be used to run small llm models and having a fully offline working llm seemed like a good idea to me, so here is the little side project:

https://github.com/MhrnMhrn/Pocket-GGUF

the model file gets stored in OPFS (Origin Private File System) so it persists across sessions

and service worker caches the app shell so it loads even with no network

https://preview.redd.it/maaie9teirug1.jpg?width=468&format=pjpg&auto=webp&s=03125acf38d8de071465954255ce2b3cfc7e44e6

r/personalfinance Practical-Object278

Cancel Return of Premium Term insurance?

I am a 39 year old non smoker/ non drinker, healthy, physically fit and with no prior or current medical conditions. I have had a 30 year term Return of Premium policy with a death benefit of 200K with Triple S for 2 years and 7 months. It costs me $106 a month. I have made $3,286 worth of payments thus far. After 30 years I would be returned $38,225.

If I cancel the policy within five years, I’d get nothing returned, and after five years, the returns are dismal. At the time of getting the policy, it seemed like a good idea, but I have been reading that getting regular term insurance and investing the difference between term and ROP term is a much better idea.

My current situation is the following: I want to get more coverage as my wife and I will be having another child very soon. I’m thinking I would want 500K in total coverage for 30 years.

Should I stick it out with my current ROP term policy and get a 30 year, 300K term preferred best rating policy quoted at $38 a month by Prudential? This would total $144 a month. Yikes.

Prudential is also quoting me $59 for a 30 year term with a 500k death benefit. The policy includes Living Benefits (In a situation with terminal illness or less than 6 months to live, confinement to a nursery home, or with a situation including an organ transplant required, Prudential would give the coverage amount tax free in advance).

The third “deal” Prudential is offering is a permanent universal policy (I’ve also read that these aren’t a great idea for most people as well) with a 100K death benefit at $61 a month coupled with a 30 year term 400K policy at $49 a month. Total between both policies here would be $110 a month ($4 cheaper than what I am paying on my current ROP term policy.) Both policies above include Living Benefits.

Should I ditch my current ROP term policy completely and just do a 30 year term, 500K policy and invest the difference?

Is the 100k perm/400k term detailed above a good idea?

FYI: I do not have a Roth IRA or any investment accounts although I have opened both type of accounts on the Betterment app a while ago and I experimented but did not commit.

My wife and I are saving for a down payment on a house and put our money in a Betterment high yield savings account. Would you invest the difference in life insurance into a Roth IRA? Would you just dump it all into the savings account?

Also, not dead set on Prudential. Just using them as a reference. Thanks dudes!

r/PhotoshopRequest OGcomplexgirl

Help with my mirror selfie

Hello everyone! I sent this picture to a friend thinking that I was gonna take more pictures later so I wasn’t really mindful of the mess on the background. Can anyone help with that? Remove the background or maybe a cool cutout design? Whatever is easier. Thank you 😊

r/SipsTea WaitNo4272

She got that number so quick

r/ClaudeCode More-School-7324

Finally happened to me and my colleagues. Seeing severely degraded performance.

In our company most of the devs are using Max20 plans, and I want to say that, until end of March it was working great. Felt genuinely amazing and assisted us in our work. Really felt like a real multiplier in what we could do and deliver. Actually made a lot of the work more enjoyable.

But over the past week I, and my colleagues, have all been seeing SEVERELY degraded performance. Misses that shouldn't have happened, idiotic decisions when rubberducking solutions and just ignoring context/Claude.md documents and memories.

Both on a personal level, and on an organisational, the trust we had in Claude/Anthropic has been tainted and we are looking at finding alternatives. The fact that there hasn't been any communication regarding this degradation is just a cherry on top of the shitshow that we're experiencing.

r/Futurology Environmental-Ask605

Are the financial markets going to get way too matured for human mind?

I'm thinking the market at the moment is way too 'developed' in a sense that only smart money handlers can leverage. It feels like each and every one of my trades is somehow 'washed off.' Am I the only one who feels that retail traders are being targeted? Or am I just feeling this way because of how early I am in the trading game?

I'm not talking about any specific market instruments in general, but only those who have the means. In the long term, over-maturation can make humans meaningless. That just implies that wealth distribution will be more skewed toward the already privileged.

I think this is one of the most gradual changes that I just feel in my bones, even without any statistical evidence about retail vs. institutional gains. So I'm curious what you think. Algo/systematic trading is an obvious fast lane to achieving maturity. Again, since intelligence is getting easier to outsource, I'm even more skeptical.

r/ChatGPT Soft-External-1671

Est-ce que je dois en déduire que mon QI est <20?

r/painting UMBRANOXXX

Blue Ridge Parkway in Autumn🍁🍂⛰️ -Oil Painting

9"x12" Canvas Pad

Blue Ridge Oil Paints

What do I need to improve?

Give it to me straight, doc...

r/personalfinance Life_Cucumber4445

I'm drowning and don't know what to do

I currently work full time and am the sole income for the household as my husband refuses to work. What I make doesn't cover all of our expenses, so we've had to put things like gas, groceries and unexpected expenses onto credit cards, which are now maxed out. I was hoping that would be a wake up call for my husband, but it wasn't.

I know we need more income and have been looking at second jobs for my days off, but the thought of working 7 days a week literally sends me into a panic attack. I feel like I'm drowning and burned out with everything I do as is and can't imagine the impact on my mental and physical health adding even more onto my plate would do.

I don't know how to navigate this. We've cut out subscriptions, we don't really go anywhere or do anything but it isn't enough. On top of that, expenses keep going up....both my car insurance and internet increased over the last month. It used to be around $35 to fill up my car and it was over $60 the last time I got gas. I did set up a payment plan with the card that is in my name, I've been bugging my husband to do the same with the cards in his name but he won't because he doesn't want to call and deal with them. There is nothing else to cut out or cut back on. Our expenses just exceed what I'm bringing in. I think filing bankruptcy is excessive...I've thought about looking into debt management companies but have heard iffy things about them. Any other advice or avenues I should be looking into?

r/Futurology Parshuram_07

What current “normal” behavior or trend do you think will become critically important in the future?

Looking at how fast things are changing (technology, society, environment), I’m curious what people think we’re underestimating right now.

Is there something that seems insignificant today but could have major long-term consequences or value?

r/whatisit zadepsi

Openings at the bottom of Carhart Overall Knees?

what're these for? im guessing knee pad inserts maybe?

r/mildlyinteresting Captyn-Awesome

Foam self expression

r/meme Meoztime

Do you guys agree?

r/personalfinance Real-Syrup-777

Starting a retirement account for my parents

Hi all!

I am in search of ways to better prepare my parents for retirement. I come from humble beginnings (to put it lightly), and have managed to make it out and make a decent life for myself. However, I’ve been learning as I go, as I didn’t have financially literacy from my parents.

My mom is 48, and has $0 saved for retirement. I know that is not directly “my problem”, but in theory, it is. I foresee a lot of my accumulated wealth having to go towards helping her. It stresses me out when planning for my own retirement, because while I plan to be child-free in my life, it feels like I have a child lol

Basically, I want to start an auto-investing account for her that she doesn’t even know about. I want to start putting a little extra money in it weekly (nothing crazy) and set-and-forget.

I have my own 401K through Fidelity, and my taxable brokerage is through Schwab. Where is the best place to set up this fund for her? Is it wise to put this in her name, or should I keep it in my name and gift it to her at some point? What do I need to consider?

Truly any advice is helpful - thanks in advance!

r/ClaudeAI d0d04444

I built a CLI to see which Claude Code skills are eating my token budget

After my sessions kept hitting context limits, I realized I had zero visibility into which skills were actually costing tokens. So I built claude-trim.

It statically analyzes your ~/.claude/ and shows:

  • Flamegraph TUI — startup cost vs on-demand cost per skill, color-coded RED/AMBER/green
  • Conflict detection — finds skills with overlapping descriptions that confuse Claude's tool selection (Jaccard similarity)
  • CLAUDE.md token count — see how much your global config costs on every call
  • JSON output for scripting

One caveat on the numbers: Claude's tokenizer isn't public, so I'm using tiktoken (cl100k_base) as an approximation. Error rate is ~5-15% — enough to spot the big offenders, not exact billing numbers. If anyone knows a closer approximation I'm all ears.

Install:

go install github.com/d0d012/claude-trim/cmd/claude-trim@latest 

https://github.com/d0d012/claude-trim

Still early — curious what you'd add or change. What visibility into your Claude Code setup do you wish you had?

r/Unexpected letsbenice_notrude

Awww, so cute!

r/LocalLLaMA TechnicalyAnIdiot

Local coding with 12 GB VRAM, 32 GB RAM- best models?

I'm new to local hosting a LLM.

I've been using Claude Sonnet a lot and having lots of success with that. I'd like to explore a workflow where I leave a local LLM to run overnight on my hardware so it doesn't need to be fast, but I do need the quality of models such as sonnet & opus.

Is this achievable currently within these sorts of specs? Would doubling my hardware make it achievable, or is the kind of quality only available over API currently?

r/ClaudeAI Sweet-Ad9680

I open-sourced the framework I use to run 11 company departments with Claude Code ($250/mo, 98% automated)

After 1+ year of running my company entirely with Claude Code, I'm releasing the full framework as open source. **What it is:** 15 AI agents that act as your C-suite — CTO, CMO, CFO, CSO, Legal, CS, HR, and more. Each agent has specialized expertise, workflows, and quality checks. **Key features:** - Orchestrator (CLAUDE.md) that routes natural language to the right department - Approval pipeline — AI creates drafts, you approve before any external action - Hypothesis validation — prevents building things nobody wants - 5-minute setup with interactive script **Real numbers:** - 98% automation rate - ~$250/month total cost - 11 departments managed - In production since 2025 **Why I'm sharing this:** Because I made every mistake possible — sent broken emails, lost a git branch with force push, ran 10 businesses simultaneously (9 had zero revenue). All of these failures shaped the framework. GitHub: https://github.com/JOINCLASS/ai-ceo-framework MIT License. Free forever. AMA. 
r/PhotoshopRequest radiopeepee

Could someone remove the music stand in front of the bass player?

r/WouldYouRather rengokuhubkl

WYR get $25 every time someone says your name or $0.50 every time you take a step?

Option 1: $25 per name mention

You get $25 instantly every time someone says your name out loud.

It has to be directed at you or clearly referring to you.

Group settings count if they clearly mean you.

You can’t legally change your name to something common like “hey” or “bro.”

Option 2: $0.50 per step

You earn 50 cents for every step you physically take.

Walking, running, pacing, stairs it all counts.

The money accumulates automatically in the background.

Which one are you picking?

r/whatisit mpetersons62

Maple Tree is black

This blackness travels up the east facing side of the tree to nearly the top

r/ClaudeCode parvbhullar

Built OpenThread to Share Claude Code conversations from terminal.

I sharing Claude Code plugin. Install it and you can share any claude-code conversation with one command:

npm i -g openthread/claude-code-plugin

/ot:share

Happy to answer anything about the architecture or the rationale. Built solo, still rough in places, and genuinely want it to be useful rather than just another feed.

r/findareddit Solid-Chest8235

Are we at a point where AI can look hyper realistic, for example, Snapchat purple videos someone sends of themselves live? Looking for subreddit

r/LocalLLaMA RecognitionFlat1470

llama.cpp: M1 RAM Cuts & Persistent KV Cache with Tweaks

When I tried to run Ollama with 3B model, I would notice a high RAM and latency which was suspicious. So, I looked into it and thought some changes might be good to have since they are in my fork. Tested on M1 Mac.

Below are the changes/features added [ Tested on M1 Mac ]:

  • Persistent KV for Prompts: POST /sys-prompt id=0 "expert engineer" caches once, eviction-proof. Borrow prefix instantly.
    • M1: Tokens eval'd 44 → 7/request (84% reduction). TTFT: 258ms → 152ms.
    • Token count before/after (44 vs 7) .
  • Lazy MMAP Loading: --lazy-mmap pages model on GPU need; MADV_DONTNEED evicts post-prefill.
    • RSS: 1,963MB → 128MB (93% savings, 1.8GB freed).
    • Top RSS peak load (1.96GB → 128MB) .
  • Disk-Backed KV Cache: --kv-mmap-path file.kv lets kernel page 32k+ contexts async.
    • No latency hit: Cold 3.3GB/2.2s/27.5 t/s; warm 3.3GB/1.1s/30.5 t/s.
    • Speed + RSS at 32k ctx
  • Zero-Poll Startup: --ready-fd 3; server signals ready byte once.
    • Ends 17 poll calls; <200ms warm boots.

Code:
https://github.com/Perinban/llama.cpp/tree/axon‑dev

Longer write‑up with VmRSS trace:

https://www.linkedin.com/posts/perinban-parameshwaran_machinelearning-llm-localai-activity-7449078703004917760-BLht

I am still tuning it. For Windows, added guards so it wont work as expected. feedback on the changes is welcome.

r/PhotoshopRequest Costcocraig125

Want photo of just princess

Hello friends, took photos with intent of having one with baby girl dolled up in front of the castle by herself but the photographer didn’t understand the assignment; can one of you help me please by disappearing dad mom and baby brother? I only want little girl in front of the castle. Thanks in advance.

r/meme No_existanc3

*press the middle button until when you want to stop*

r/LocalLLaMA ExplorerAdmirable133

Best local setup to approach Claude Code performance in 2026 – Dual cloud + local workflow advice?

Hi everyone,

My goal is to build a local setup that gets as close as possible to the performance of Claude Code (strong agentic capabilities, long reasoning, tool use, etc.).

I know cloud models are improving rapidly and token prices keep dropping, so I’m wondering: is it still worth investing heavily in local hardware in 2026?

I want a dual setup:

• Claude Code (or equivalent cloud) for the most complex/long requests

• A strong local LLM for everything else

Key requirements:

• Agentic capabilities in local as close as possible to Claude Code (tool calling, multi-step reasoning, autonomous workflows)

• Main use cases:

• Text generation with RAG + code generation • Data / dataset generation • AutoResearch and autonomous agents 

What computer / hardware configuration would you recommend in 2026 to reach this level of local performance?

I’m looking for

two profiles:

  1. Best value for money (reasonable budget but still highly usable)

  2. Elite / high-end setup (higher budget for maximum performance)

The community talks a lot about Mac Studio (M2 Ultra / M4 Max or newer?). Are they still the best choice in 2026?

What are the serious alternatives right now (Windows/Linux PCs, mini-PCs, servers, etc.)?

I’d love your real-world experience: which models you’re running, quantization used, actual agentic speed, strengths/weaknesses, etc.

Thank you in advance!

r/mildlyinteresting A-Lop-Bam-Boom

My bacon this morning looks like a sardine

r/me_irl Harry_Botter1138

me_irl

r/ProductHunters listexplode

I have created an app where you can use multiple social media in single place

r/aivideo TheLaughingMan83

Robot Jesus will take your guns and footwear

r/TheWayWeWere AdSpecialist6598

Spring Break in 1979

r/Rag Koaskdoaksd

Help with local RAG pipeline – poor retrieval quality, wrong page numbers

Hi everyone,

I'm building a fully local RAG application in Python (no cloud APIs) and running into several persistent issues. I'll pin the full source below. Would really appreciate any advice from people who've dealt with similar setups.

---

### Stack overview

- **LLM:** Qwen2.5:7b via Ollama

- **Embeddings:** `intfloat/multilingual-e5-base` (HuggingFace, offline)

- **Vector store:** FAISS (child chunks) + BM25 (via LangChain)

- **Reranker:** `cross-encoder/mmarco-mMiniLMv2-L12-H384-v1`

- **Chunking:** Parent-child strategy – MarkdownHeaderTextSplitter for parents, RecursiveCharacterTextSplitter for children

- **PDF extraction:** pymupdf4llm (fast) or MinerU (slow, for LaTeX-heavy docs)

- **Pipeline:** LangGraph with nodes: pre-retrieval → hybrid retrieve → rerank → build context → evaluate evidence → generate

- **UI:** Streamlit

Documents are primarily English-language academic PDFs (e.g. Montgomery's Design and Analysis of Experiments, 720 pages). User queries are always in Slovak.

---

### Problem 1 – Cross-lingual retrieval failure (SK query → EN document)

This is the most painful issue. When a user asks *"čo to je replikácia?"* ("what is replication?"), the FAISS similarity search returns completely irrelevant chunks (confidence ~0.045) even though the word "replication" appears many times in the document.

My current workaround:

  1. Detect document language via `langdetect`

  2. If EN document detected, translate the SK query to EN using the LLM before retrieval

  3. Use the translated query in both FAISS and BM25

This partially works but is inconsistent – sometimes the LLM translates to "What is replication?", sometimes it doesn't, so results are non-deterministic even at temperature=0.

I also added a rescue BM25 search in `evaluate_evidence` as a last resort, which helps but retrieves chunks from wrong pages (e.g. page 424 instead of page 13 where the definition actually is).

**Questions:**

- Is `multilingual-e5-base` simply too weak for SK↔EN cross-lingual retrieval? Should I switch to a different model (e.g. `intfloat/multilingual-e5-large`, `BAAI/bge-m3`, or a dedicated cross-lingual model)?

- Is there a better approach than LLM-based query translation? I considered expanding the index with translated chunks but haven't implemented it yet.

- Any experience with `mmarco-mMiniLMv2` reranker for non-English content? I suspect it's poorly calibrated for Slovak and the confidence scores are systematically too low (~0.04 instead of expected ~0.3+).

---

### Problem 2 – Wrong page numbers in cited sources

My chunker injects `` markers into the markdown before chunking, then detects which page each chunk belongs to by matching text probes against page texts. The logic works reasonably for single-page chunks but breaks in two cases:

  1. **Large parents spanning multiple pages** – when `_split_large` splits them, all resulting chunks inherit the original parent's page metadata instead of getting re-detected page numbers.

  2. **Dense mathematical/formula-heavy pages** – probes (min 15 chars) often don't match because MinerU reformats LaTeX and the text doesn't align with the original page content.

The cited pages are sometimes off by 5–15 pages which makes source verification impossible.

**Questions:**

- Is there a more reliable strategy for page attribution in RAG chunking?

- Would embedding page number tokens directly into chunk text help BM25/FAISS associate chunks with correct pages?

---

### Problem 3 – Poor Slovak output quality

The LLM (Qwen2.5:7b) receives English context and is instructed via system prompt to answer in Slovak. The output Slovak is grammatically broken – literal word-by-word translations, wrong declensions, invented compound words (e.g. "olejová hniloba" for "oil quench", "oholenie vzorku" for "quenching a specimen").

Current system prompt instructs:

- Always answer in Slovak

- Don't translate literally, explain in your own words

- Keep English technical terms in parentheses if unsure

This helps somewhat but the quality is still poor for technical content.

**Questions:**

- Is Qwen2.5:7b simply not good enough for EN→SK technical translation in context? Would a larger model (Qwen2.5:14b, gemma3:12b) make a significant difference?

- Has anyone tried a two-step approach: generate answer in English first, then translate to Slovak as a second LLM call?

- Any prompt engineering tricks that worked for you for multilingual RAG output?

---

### Problem 4 – Reranker confidence threshold causes false abstentions

The cross-encoder produces confidence scores around 0.04–0.07 for relevant Slovak/English pairs. My threshold is set to 0.15 (already lowered from original 0.32). At confidence below threshold, the system returns "not found in documents" even when the correct answer is there.

I added a keyword override (check if query words appear in context docs) but it's unreliable for cross-lingual queries because Slovak words don't match English document text.

### Code

*(pinning below)*

- `document_processor.py` – PDF extraction + parent-child chunking: https://pastebin.com/m8egQ7HY

- `vector_store.py` – FAISS + BM25 + E5Embeddings wrapper: https://pastebin.com/4kkhsg8M

- `rag_graph.py` – full LangGraph pipeline: https://pastebin.com/P31pGiie

- `parent_store.py` – https://pastebin.com/xwNeAMnE

r/whatisit greyVtrain

Got this in a goodie bag

Got this in a goodie bag at a kid's birthday party.

r/ClaudeCode t0rgar

We need to treat LLMs like power

If opus is worse now, just switch. This will create a market where the companies want to create something good. Fanboys just destroy the market because the take quite a beating before they leave.

All the marketing blabla is targeted at investors to justify what they do. But we as users dont have to listen to those „stories“.

What do you think?

r/ClaudeAI Zaxerel

Creating Websites with Claud

Hi guys! I recently got the Claude subscription and want to learn how to create website designs with it. even if they arent perfect, that would save me. alot of time in handing over a partially done website to the designer so that they can finish it out.

I will also be able to show it to the client and give them an idea of how the finished product will look.

I played around with it yesterday, giving it all the examples, the branding files, the copy. everything, and i still got back a very generic website without anything nice about it, and at the same time i see all these users creating awesome websites with Claude.

What am i doing wrong? i would love to get any tips from you guys, thank you!

r/LiveFromNewYork CustomCarNerd

Is Colman Domingo actually Dave Chappelle?

Is Colman Domingo actually Dave Chappelle? Are we being conned? Has anyone seen them both together at the same time? I need answers!

r/ollama BestSeaworthiness283

I built a free, open-source CLI coding agent for 8k-context LLMs — v0.2 now shows diffs before touching your files

A few days ago I shared LiteCode — a CLI coding agent built specifically for small-context LLMs (free tiers, local models like Ollama, Groq, OpenRouter, etc.). Unlike tools that assume you have a 128k context window, LiteCode works within 8k by chunking files, building lightweight context maps, and sending only what fits.

What it does:

  • Reads your codebase, plans tasks, edits files
  • Works with any OpenAI-compatible API (Groq free tier, Ollama, OpenRouter)
  • Keeps token usage tight so free/local models actually work

v0.2 — why I made this change:

u/Certain-Building-428 pointed out that the biggest problem with tools like this is you have no idea what just happened to your files. The only option was git diff after the fact. Not great.

So I added a diff preview with per-file accept/reject — you see exactly what's going to change before it happens, and you decide whether it gets written or not.

  • Before any file is written, you see a colored unified diff (+ green, - red)
  • You can accept [y], skip [n], accept all remaining [a], or abort [q]
  • --yes flag skips prompts entirely for CI or if you just trust the output
  • Non-TTY mode (pipes) auto-accepts automatically

GitHub: github.com/razvanneculai/litecode

Would love feedback — especially from anyone running local models.

how it works in practice^

As a small bonus, it now should work flawlessly with local models via ollama. :)

r/ClaudeAI ARAM_player

how dare you

r/ClaudeCode OmarVII7

If you had to keep only 3 skills, which ones would you choose?

Not tricks or shortcuts, but the 3 must-have skills that make the biggest difference for you

r/Wellthatsucks 333H_E

Reddit trying to force notifications off app. Won't show in app without it.

r/painting Art_by_Ellibobs

Flamboyance of Flamingos

r/oddlysatisfying Ill-Tea9411

Two millimeters—>One millimeter

r/personalfinance QuizzerMonTop

How do you receive tax documents from a bank after closing an account?

For context, I opened a checking account with a credit union when I wanted to take out a car loan 2 years ago. I paid off my loan early, and am looking to close the account now. I called them up, and they said I'd lose access to the online portal once my account was closed. I asked them how I'd receive tax documents for this year, and they said they'll send it in the mail.

For all I know, I might not even be living in the same apartment where I am now next year. For people who open and close accounts fairly frequently (eg. for churning), how would you handle a situation like this?

On a related note, once you close an account with a bank, does it get harder to open a new one with them in the future?

r/SideProject parvbhullar

Built OpenThread to Share Claude Code conversations from terminal.

I built OpenThread (https://openthread.me) — a community for sharing, discovering, and voting on conversations from Claude Code, Codex. Think of it as a place to find the prompts and workflows that actually work, instead of screenshots scattered across Twitter.

The thing I'm most excited about is the Claude Code plugin. Install it and you can share any conversation with one command:

npm i -g openthread/claude-code-plugin

/ot:share

Happy to answer anything about the architecture or the rationale. Built solo, still rough in places, and genuinely want it to be useful rather than just another feed.

EDIT:
Would love the feedback on -

The privacy masking - what else should it strip by default?
Community structure - right now it's topic-based (e.g. "Coding with AI"), but I'm considering tool-based or workflow-based splits?
Whether a Codex / ChatGPT / Gemini equivalent of the share command would be more useful as a CLI, a browser extension, or both?

r/SideProject EmbarrassedGuard518

I was onboarding clients with messy Google Drive links and 6 separate emails like it was 2014. So I built Handozo.

Hey r/sideproject,

I'm a web developer who sometimes freelances. Every new project started the same painful way: sending 5-6 separate emails just to get things going. Intake form, timeline doc, deposit invoice, Calendly link, file upload folder, and one long email explaining it all.

Half the time clients missed something. I felt unprofessional charging real money while onboarding like it was still 2014.

So I built Handozo.

You create a branded "kit" - a beautiful portal with your logo and accent color. Clients get one clean URL (no login required on their side) that includes:

  • Welcome message with your availability note
  • Visual project timeline with status indicators
  • Custom intake questionnaire with 9 question types
  • Next steps checklist with optional Calendly embed
  • File upload section for client assets
  • Embedded Stripe deposit form (they pay inline, money goes straight to your bank via Stripe Connect)

Here are two live demos. Open them, scroll through, and try the intake form:

Light mode: https://www.handozo.com/jondoe/brand-identity
Dark mode: https://www.handozo.com/jondoe/brand-identity-dark

I shipped V1 solo in a few weeks. Starter plan is $19/mo with a 7-day free trial (no credit card needed to start).

Not here to hard sell. Just looking for honest feedback from other freelancers and devs.

Does this solve a real pain for you?
How do you currently handle client kickoff/onboarding?
Any brutal thoughts on the demos or the concept?

Happy to answer questions about the build too.

r/whatisit mellowyellow619

What are these growing from the ceiling?

Was at my parent's house and noticed these on the ceiling. They said they did not put them there. It looks some sort of paper is wrapped around something stuck in the ceiling. Any ideas? Couldn't find anything via image search. We're stumped! They're taking them down when they get home but that's not till tonight so I'm dying to know what y'all think.

r/ContagiousLaughter rainbowdwyvern

Jensen Ackles' Hair Transplant

r/LifeProTips Pretty_Bet_8102

LPT, I used to go to bed exhausted… but my brain just wouldn’t stop

Every night was the same, I felt tired but the moment I lay down my mind started running through everything, small things, big things, even random memories from years ago. What changed for me was keeping things simple, I started doing a bit of light stretching before bed and writing down whatever was on my mind so it wouldn’t keep looping. It wasn’t instant, but falling asleep became much easier over time

r/ChatGPT Rittelen

I accidentally deleted all of 3 years archive chats Permanently 🥲

I search entire internet and there is absolutely no method to recover it.

Why don't they add an recycle bin option 😭

r/LocalLLaMA ss2642

Llama with FlexAttention

Hi everyone,

I am new to this community, this is my first blog post here (forgive if there are any mistakes).

I recently came across this blog post on pytorch website, https://pytorch.org/blog/flexattention/, my understanding of what this does (please correct me if I am wrong): It generates custom triton kernels for various attention implementations, (some kind of compiler for attention), this helps save memory and latency during the scaled dot product attention computation, as this heavy work can be smartly offloaded to the GPU.

I found it very interesting and would like to use it in one of my projects, for this I need to integrate this to an actual LLM (say LLama3/3.1/3.2), since this provides only the attention computation, how can I integrate it with weights of an actual LLM? Almost all the tutorials I saw for flex attention generate random Q, K and V matrices for demonstration.

There is also an option of using something like `attn_implementation=flex_attention`, but then how do I use the `score_mod` and `mask_mod` attributes?

Is there some documentation, or a git repo doing this? Any guidance on how to approach this would help.

r/SideProject SnooCookies7236

Launch & Ongoing marketing strategy

Hi, solo dev here and really inspiring to see so many creative ppl and what they are working on here. Need feedback on what did you do or learned about what should be the launch + ongoing marketing strategy for solo devs and small teams. Obv the resources and budgets for solo devs are limited for ppl that have launched products and gained traction what has worked please share , would appreciate the feedback , thanks!!

  • what was your launch strategy / budget
  • cold emailed ? Shared in reddit?
  • buying social media ads, what was your budget what worked what didn’t
  • what have you focused your time and resources on, what is the best way to spend money on marketing / finding your users
r/ChatGPT Accomplished-Many278

cap on $100 gpt pro monthly plan

What is the cap on extended pro with $100 monthly plan?

r/me_irl Beginning_Book_2382

me_irl

r/painting Artist0491

Dust Pan 6"x6"

r/PhotoshopRequest Bubbly-Yam-787

Make my cat seem really sad that she got shaved

r/LocalLLaMA EvilEnginer

FernflowerAI-35B-A3B-KL-ReLU-GGUF + Apple MLX

Qwen 3.5 35B A3B Uncensored HauhauCS (repaired) -> (now with KL + ReLU calibration)

Model available here: https://huggingface.co/LuffyTheFox/FernflowerAI-35B-A3B-KL-ReLU-GGUF

Repair summary: link

Extra information about how Qwen 3.5 35B got broken (and how I fixed it): link

V1 Apple MLX version (thanks to froggeric): https://huggingface.co/froggeric/Qwen3.5-35B-A3B-Uncensored-FernflowerAI-MLX-8bit

V2 Apple MLX version (final release): coming soon discussion here

History:
Hello everyone. A few days ago I released a fixed version of Qwen 3.5 35B A3B uncensored by HauhauCS - two broken tensors that Alibaba shipped with Qwen 3.5 35B A3B model, due to heavy complexity and bug during training process in AdamW optimizer ssm_conv1d.weight in blocks 36-37 were scaled back to normal. That fixed the major context collapse and looping. But after more testing, I found that some other tensors (experts, attention projections) had a subtler problem. Their overall scale and saturation looked fine, but the shape of their weight distribution was drifting away from the peer group. C1 and C2 didn't catch this. C3 (KL divergence) did.

So I added two more criteria to the diagnostic pass:

  • KL divergence - restores the distribution shape of tensors that drifted from their peer group without changing scale or saturation.
  • ReLU asymmetry - detects mean drift that AdamW can accumulate over time (didn't fire on this model, but the probe is there for others).

Results on this version:

Metric Before After KL divergence (average) 0.1036 0.0297 KL reduction — 71.3% Repaired tensors (C2 + C3) 2 11

What this means for you:

  • The model was already stable after v1. Now it's tighter - fewer hidden distribution anomalies that could cause weird behavior on very long or complex tasks.
  • No new problems introduced. The 489 healthy tensors were left untouched.

Upgraded system prompt that unlocks deep thinking (works great with this model):
https://pastebin.com/pU25DVnB

Quantization script available here: https://pastebin.com/hXhcMJn9

Chat template: https://pastebin.com/uk9ZkxCR (supports tool calling)

Recommended Settings (LM Studio):

Temperature 0.7 Top K Sampling 20 Presence Penalty 1.5 Repeat Penalty Disabled or 1.0 Top P Sampling 0.8 Min P Sampling 0 Seed 3407

Enjoy ^_^

r/aivideo Murky-Badger-7932

The Grandmoms

r/BrandNewSentence ralekan

Fire chief in Aichi Prefecture disciplined for forcing others to play over 10 different homemade board games.

r/whatisit HUMAN_BEING-ON_EARTH

What is this sticker supposed to represent??

i got a pack of 50 stickers from shein of different things with the prohibition sign over it, but i absolutely cannot figure out what this one means?? i asked everyone around me and they dont have a clue either. what can it be?

r/OldSchoolCool Sharpie1965

My Parents at Cottesloe beach in the 50's 🇦🇺

Some lovely on FB colourised this for us siblings, we're going to print. Mum's hair should have been Auburn but we love it anyway

r/artificial Regular-Paint-2363

Building a wearable AI that processes everything on-device (no stored video). What would you want to verify?

I’m working on a clip-on wearable AI that uses computer vision to generate real-time “social + environment” signals (attention/glances, basic emotion cues, gestures, plus things like noise/air quality depending on the mode).

The part I’m most focused on is privacy architecture: the device processes frames locally and discards them instantly. No photo library, no video archive, no “upload later.” It’s meant to behave more like a sensor than a camera.

Questions for people who care about privacy and security: What would you personally need to see to believe “no frames are stored” is true?

r/ChatGPT Terrible-Situation95

Why is ChatGPT always like that

r/HumansBeingBros James_Fortis

Activists stormed Ridglan Farms in Wisconsin last month to rescue dogs bred for animal testing that a judge ruled were victims of felony cruelty. 22 dogs were saved. The activists announced a return on April 19th, and aiming to go with 2000 people: one for each remaining pup.

r/personalfinance Consistent-Cow-7131

I need guidance around my personal finances.

Over the past year I have been trying to take my finances into consideration and to be better with money. Growing up I didn’t really had any advice around money like that apart from “save your money” I used to save a lot when I was younger but it would ended up being used one way or another more often not my me, but that isn’t the point right now.

I make on average anywhere from 4K a month to 4.5k a month. My rent and utilities bills would round up to around 1.1k maybe. I have this loan cost 300 a month, but somehow when it comes to the end of the month or even the following Monday after payday I have pretty much nothing. There is one place I identified that could cause problems, I would take my rent and utilities out of one pay cheque instead of spreading it across two over the month.

I have also tried tracking my spending but after some time I tended to stop. I am also seemingly going into more debt than out of debt even though I paid off one of my loans early in full last year.

I need advice, help maybe or something to point me in the right direction. What am I doing wrong here, it doesn’t makes sense how I don’t have at least 500 to save every month or if I blow it have something to show like a trip or something somewhere. I feel stuck and trapped

. Any advice will help appreciated and helpful.

Thank you, Merci.

r/LiveFromNewYork Ariesthebigram

With how much I enjoyed Colin Jost acting in sketches last night, here's hoping Michael Che is down to be in some sketches, with 3 episodes left this season

Can't remember any sketches Che was in, aside from a Family Feud one years ago, but hoping he's down to play with the cast and at least one of Olivia Rodrigo/Matt Damon/Will Ferrell next month

r/homeassistant ThatsMrJackassToYou

HA responsiveness in Companion App vs Fully Kiosk

I got my wall tablets set up and have noticed a decent amount of performance difference between using fully kiosk that points at my local HA address and the companion app that points at the same address (since it's on home network). When using Kiosk, it's laggy, streams from my cameras don't pull up and stream well (unifi), and dashboard interactions aren't smooth (swipe, page change, etc.) anyone else run into this?

Galaxy tab A9s, good wifi strength.

r/ClaudeCode Warm_Description8133

Weekly limit hit 4 days earlier

hit on 12th resets on 16th Same plan i used to use for same project same amount of work but def not same amount of allowed time😂

r/mildlyinteresting mamerv85

Working in the garden noticed my thumb has a line

r/SideProject StealthEyeLLC

Glass, a replay-first bounded investigation surface for runtime activity (open-source)

Glass was just released publicly.

The current release is intentionally narrow: replay-first, bounded, and centered on one chain:

scene → change → evidence → receipt

The idea is to make runtime activity inspectable without drifting into fake topology, fake causality, or “the UI knows everything” language.

What is in the repo:

\- a flagship replay path with committed fixtures

\- Overview / Technical split

\- bounded claims and receipts

\- temporal context / episodes

\- optional local live shell

\- proof surfaces documented and backed by CI

What it is not:

\- not a cloud observability product

\- not full runtime topology

\- not final F-IPC transport

If you try it, start with the hosted flagship replay and stay on Overview first.

https://github.com/StealthEyeLLC/glass

r/ChatGPT re_DQ_lus

Why does it make such images when this prompt is put in ?

r/ChatGPT threalcocoboy

Launch chatgpt voice from home screen shortcut

https://limewire.com/d/Egqq5#wMKr4dNdlm

you need to change the last block to the launcher of your phone. Then add a widget from automate and assign the flow to it.

r/LocalLLaMA PerceptionGrouchy187

Speculative Decoding works great for Gemma 4 31B with E2B draft (+29% avg, +50% on code)

Following up on my previous Gemma 4 31B benchmark post, I tested speculative decoding with Gemma 4 E2B (4.65B) as the draft model.

The results were much better than I expected, so I wanted to share some controlled benchmark numbers.

Setup

  • GPU: RTX 5090 (32GB VRAM)
  • Main model: Gemma 4 31B UD-Q4_K_XL (18.3GB)
  • Draft model: Gemma 4 E2B UD-Q4_K_XL (3.0GB)
  • Backend: llama.cpp fork with TurboQuant KV cache (turbo3)
  • Config: 128K context, parallel=1, Flash Attention, --draft-max 8 --draft-min 1

Benchmark Results

Same server config for both, max_tokens=500, temp=0.7, warm-up query discarded before measuring.

https://preview.redd.it/mfmilk5f3rug1.png?width=1005&format=png&auto=webp&s=22e45cf33e9fc3bef304f5709d53465618c74a0b

Query Type Baseline (t/s) SpecDec (t/s) Accept Rate Speedup Math explanation 57.45 85.86 62.9% +49.5% Korean poetry 56.93 62.34 44.1% +9.5% Code generation 57.15 86.05 60.7% +50.5% Science explanation 57.19 71.14 50.9% +24.4% Translation + analysis 57.14 63.26 42.2% +10.7% Average 57.17 73.73 52.2% +29.0%

Even at 42% acceptance rate, speculative decoding is still +10% faster because there's zero token translation overhead when the vocabs are compatible.

The GGUF Version Trap

I initially got terrible results — the draft model was slower than no draft at all (7.31 t/s vs 57 t/s baseline). Every draft model combo gave this warning:

the target and draft vocabs are not compatible - tokens will be translated between the two 

After digging into speculative.cpp, I found the compatibility check compares add_bos_token between target and draft. My 31B GGUF was from early April when Gemma 4 first dropped, and it had add_bos_token = false. The E2B model (downloaded later) had add_bos_token = true. This single metadata mismatch forced llama.cpp into token translation mode, killing all performance gains.

Re-downloading the 31B GGUF (Unsloth re-quantized all Gemma 4 GGUFs recently with the fix) made the warning disappear and unlocked the full +29% speedup.

TL;DR: If you downloaded your Gemma 4 GGUF in early April 2026, re-download it. The tokenizer metadata was fixed.

Practical Tips

Add these flags to your existing llama-server command:

-md gemma-4-E2B-it-UD-Q4_K_XL.gguf -ngld 99 --draft-max 8 --draft-min 1 --parallel 1 

Things to watch out for:

  • --parallel 1 is mandatory — with auto (=4), the draft model's KV cache is allocated 4x, eating VRAM and tanking speed to 7 t/s
  • No vision — speculative decoding and multimodal can't be used together
  • Q4 draft is fine — Q8 (4.8GB) doesn't improve speed over Q4 (3.0GB), and Q4 leaves more VRAM headroom
  • Extra VRAM ~2.3GB — total ~23.4GB with 128K context on a 32GB card (256K fits too, ~25.5GB).

Content-dependent speedup

The gains scale with how predictable the output is:

  • Code / Math (structured, repetitive patterns): ~60% accept rate → +50% speed
  • Explanations (semi-structured): ~50% accept rate → +24%
  • Creative / Translation (less predictable): ~42% accept rate → +10%

Even the worst case is still a net positive, which is the key difference from having incompatible vocabs where even 65% acceptance rate resulted in zero gains.

r/gifs Mountain_Love23

Happy chicken at a sanctuary shakes off after a dust bath, which is a natural behavior to clean, socialize and relax [OC]

r/TheWayWeWere dickwae

Negative scan of me doing my homework on my mom's leg circa 1975. Massachusetts

r/AI_Agents Distinct-Garbage2391

Master Agent or Swarm of Micro-Agents?

Seeing a lot of platforms trying to be the one-stop shop for everything from meeting notes to slide decks. Do you think the future is one highly trained LLM with 100 tools, or 20 specialized agents talking to each other? What are you building toward right now?

r/WouldYouRather TriedmybestNotenough

WYR be a hobo who looks like a supermodel or a multi-millionaire who looks like an Orc?

r/TwoSentenceHorror Original-Loquat3788

She was the world’s first, and had a small poster showing her beside the surgeon.

Every night she ‘turned her ears off’ and didn’t hear the fire alarm warning of the blaze rapidly spreading from its origin at the charger of her cochlear implants.

r/StableDiffusion Time-Teaching1926

Can you use Qwen3.5 4b & Gemma 4 E4B with Z image/Turbo?

So I was wondering if I could use the latest for billion parameter versions of Qwen3.5 and Gemma 4 with Z image turbo and base version?

r/TwoSentenceHorror Original-Loquat3788

‘Time moves on, like everything else, and that includes honour killing,’ he thought as he sat beside his CIA handler.

There was also a drone operator, flying a Predator over a Shia wedding party, the old man’s estranged daughter, who must now pay alongside her new infidel in-laws.

r/personalfinance prezler

Line of credit advice for education

Hi, question about line of credit interests. I have to pay for a licensing exam to practice, and it's going to cost me $2500. I currently have $2300 for it but missing a couple hundred bucks. I am planning to pay the exam fee using line of credit but pay off the line of credit with what money I have. I'm just confused if the interest will be on the full $2500 or the $200 remainder.

r/SideProject jaydrao215

Built a side project: “Deslint” — catches design bugs in AI-generated frontend code

I’ve been building Deslint: a dev tool that treats design consistency as a first-class quality gate.

Core idea:
Code can be “functional” but still wrong from a design-system standpoint. So I built checks for spacing, typography, colors, responsive behavior, and accessibility.

What it does now:

  • 33 lint rules
  • auto-fix support for common issues
  • project-wide scoring
  • local-first workflow (no cloud dependency)

I’d really value feedback from people shipping design-heavy products:

  • Is this a real pain point in your team?
  • What would make this must-have vs nice-to-have?
  • What’s missing?

Happy to share internals and roadmap if helpful.

r/SideProject Think_Talk_4771

Todo list app that charges if you DON'T finish it? (Need beta testers)

I'm working on an app that charges you money for NOT finishing your todo list.

You say:

- The todo list or task

- How you'll prove you finished it

- The date/time you need to finish the task by

DM me if you'd consider being a beta tester!

r/DecidingToBeBetter Unusual_Ear4200

you don’t actually lack discipline, you just don’t trust yourself anymore

i used to think i had no discipline, but the more i paid attention the more i realized it wasn’t that, it was that i didn’t trust myself anymore, i broke too many promises to myself, i said i would start and didn’t, i made plans and abandoned them, so even when i tried again there was always that voice in the back of my head saying “this won’t last”, and it was right, what started to change things for me wasn’t pushing harder, it was rebuilding that trust slowly, doing smaller things and actually following through, even if it felt insignificant, because consistency is less about forcing yourself and more about proving to yourself that you can rely on you again

r/PhotoshopRequest lettuceandcucumber

Please could someone make it so that we are in a nice location as opposed to the fence? Maybe a countryside beer garden? £5

r/mildlyinteresting WaffleFries2507

I put dry ice in a frying pan

r/OldSchoolCool Hour_Amount1881

my mama in july of 1993 ❤️💃

every time i see her i’m flabbergasted with how beautiful she is 🥹 she thinks she looks so silly in all these photos “my hair was so big” like yes girl! as it should be!

r/LiveFromNewYork clemfandangoh13

Handsome hunks don’t make humorous hosts.

The sound mix seemed slightly off last nights episode.

r/SideProject romaricmourgues

I built a super fast, end-to-en encrypted and open-source transcription app for academic qualitative research

Hi all,

My wife is a PhD student in clinical psychology in France, and she does a lot of qualitative research. That means interviews, recordings, and… a lot of transcription.

When she started working with interns, I realized something surprising. Most of the work is still done manually. Audio on one side, text editor on the other. Constant switching back and forth.

It takes around 6 to 8 hours to transcribe 1 hour of audio. She processes about 50 hours of interviews per year. That’s easily 30 full days of work per year.

So I started building something with her and a few researchers around her. A transcription tool designed specifically for academic use:

  • Privacy first. End-to-end encryption and open-source approach
  • Very high accuracy out of the box (using models like ElevenLabs Scribe v2, with local fallback options like Whisper)
  • Fast editing workflow. Think IDE-like experience, where audio and text are fully synced and easy to navigate

We are still iterating with real users, mostly researchers and students.

If you or someone you know does transcription regularly, I’d love your feedback. There’s a free 100 minutes / month plan, and I’m happy to share credits if you want to test it.

https://github.com/humanlogs/humanlogs.app

https://humanlogs.app/

r/raspberry_pi Lv_InSaNe_vL

Trying to set up USB gadget mode on a Raspberry Pi 2 W

Hey everyone! I am trying to set up my brand new Pi Zero 2 W in USB gadget mode so I can use it for a few electronics experiments but I am just having a hard time. I followed the [most official guide](https://forums.raspberrypi.com/viewtopic.php?t=376578) I could find (or the one that gets linked everywhere), as well as a few other guides (like [this one](https://cwiggs.com/posts/2019-11-29-raspberrpi-zero-w-gadget-mode/), and ChatGPT) and I have gotten to the point where it shows up on my PC but I cannot reach it.

I can see it in lsusb and it claims to be in gadget mode Bus 003 Device 122: ID 0525:a4a2 Netchip Technology, Inc. Linux-USB Ethernet/RNDIS Gadget, but I am getting this NO-CARRIER error which, from my reading, seems to be a DHCP issue(?). Heres what it looks like.
```
15: enx5e5f8e10a5f6: mtu 1500 qdisc fq_codel state DOWN group default qlen 1000

link/ether 5e:5f:8e:10:a5:f6 brd ff:ff:ff:ff:ff:ff
I can manually assign it an IP address but I am unable to SSH into it because I get a connection refused.
15: enx5e5f8e10a5f6: mtu 1500 qdisc fq_codel state DOWN group default qlen 1000

link/ether 5e:5f:8e:10:a5:f6 brd ff:ff:ff:ff:ff:ff

inet 192.168.7.1/24 scope global enx5e5f8e10a5f6

valid_lft forever preferred_lft forever

~$ ssh [rpi@192.168.7.1](mailto:rpi@192.168.7.1)

ssh: connect to host 192.168.7.1 port 22: Connection refused

I did notice that `dmesg` shows it diconnecting and reconnecting a bunch. I have a USB cable going from the USB port (yes, I have checked many many many times it is correct) as well as tried 3 different USB cables including 2 new ones.
[13543.592720] usb 3-4: Quirk or no altset; falling back to MIDI 1.0

[13554.565040] usb 3-3: new full-speed USB device number 119 using xhci_hcd

[13568.401037] usb 3-3: new high-speed USB device number 120 using xhci_hcd

[13568.527856] usb 3-3: New USB device found, idVendor=0525, idProduct=a4a2, bcdDevice= 6.12

[13568.527861] usb 3-3: New USB device strings: Mfr=1, Product=2, SerialNumber=0

[13568.527863] usb 3-3: Product: RNDIS/Ethernet Gadget

[13568.527865] usb 3-3: Manufacturer: Linux 6.12.47+rpt-rpi-v7 with 3f980000.usb

[13568.567813] cdc_ether 3-3:1.0 usb0: register 'cdc_ether' at usb-0000:2d:00.3-3, CDC Ethernet Device, 46:83:e0:c5:e8:29

[13568.574836] cdc_ether 3-3:1.0 enx4683e0c5e829: renamed from usb0

[13838.455875] usb 3-3: USB disconnect, device number 120

[13838.455997] cdc_ether 3-3:1.0 enx4683e0c5e829: unregister 'cdc_ether' usb-0000:2d:00.3-3, CDC Ethernet Device

[14076.222180] usb 3-3: new full-speed USB device number 121 using xhci_hcd

[14090.032684] usb 3-3: new high-speed USB device number 122 using xhci_hcd

[14090.159525] usb 3-3: New USB device found, idVendor=0525, idProduct=a4a2, bcdDevice= 6.12

[14090.159532] usb 3-3: New USB device strings: Mfr=1, Product=2, SerialNumber=0

[14090.159536] usb 3-3: Product: RNDIS/Ethernet Gadget

[14090.159538] usb 3-3: Manufacturer: Linux 6.12.47+rpt-rpi-v7 with 3f980000.usb

[14090.200248] cdc_ether 3-3:1.0 usb0: register 'cdc_ether' at usb-0000:2d:00.3-3, CDC Ethernet Device, 5e:5f:8e:10:a5:f6

[14090.208053] cdc_ether 3-3:1.0 enx5e5f8e10a5f6: renamed from usb0
```

And, just to round out this very long post here is config.txt and my cmdline.txt file
console=serial0,115200 console=tty1 root=PARTUUID=a2cefeb1-02 rootfstype=ext4 fsck.repair=yes rootwait modules-load=dwc2,g\_ether quiet splash plymouth.ignore-serial-consoles

I feel like I am missing something silly here but I have been installing, editing configs, and reinstalling for like 3.5 hours and I am tired lol. I am hoping yall can figure something out.

Edit: Sorry I forgot reddit doesnt use markdown by default anymore. Also, my PC is running Pop_OS 24.04 and I am using the official Raspberry Pi Imager to burn the image on the SD card(s) I am using.

r/AI_Agents Icy-Maintenance-5962

We’re so close…

I’ve been messing around with a bunch of these tools lately..Replit, Lovable, n8n, all of it and it kind of hit me… we’re really close to something big. Like, the idea that you can just say “build this” in plain english and have everything actually come together is basically here. But not fully. There’s still this gap where you have to step in and wire things up yourself, set up accounts, connect APIs, deal with auth, move data around. None of it is crazy hard, but it’s just enough friction that you still need to be a little technical to get anything real off the ground. It breaks the illusion a bit. You go from “this feels like the future” to “ok now I’m debugging again.” Feels like the last mile is just stitching everything together cleanly without the human glue in the middle. Once that clicks, it’s going to be wild. Are we 6 months away from full autonomy. And sure, some of you will say we’re here today… but it’s still clunky IMO.

r/LocalLLaMA Ambitious-Future-800

LiteLLM users -how are you actually deploying it post the March PyPI incident?

Been running LiteLLM in prod for a few months. After the March 24 incident (the PyPI backdoor that stole cloud keys + K8s secrets), our platform team is now asking us to justify keeping it.

Curious what others did:

  • Stayed on LiteLLM but changed how you deploy it (Docker image vs pip)?
  • Moved to something else? What and why?
  • Decided it was overblown and did nothing?

Also curious what made you pick LiteLLM in the first place -was it just the GitHub stars, a specific recommendation, or something else?

Not looking for a product pitch. Just want to know what real teams actually did.

r/LocalLLaMA LifeguardPurple8338

Open-source Python CLI for testing LLM prompts across multiple models

Hey everyone, we’re building Litmus for a hackathon and looking for some early validation:

https://github.com/litmus4ai/litmus

It’s an open-source tool for testing LLM prompts across models, datasets, and assertions.

If you think this looks promising, we’d really appreciate a GitHub star. It would help us understand whether this is something worth developing further ⭐

r/Jokes SuspiciousLow3062

What’s the difference between a police officer and a bullet?

When a bullet kills someone else, you know it’s been fired.

r/homeassistant One-Responsibility45

2N Intercom SIP connect to SIP-HASS

Hi everybody,

I am slowly expanding my HA and my new task is integrating with our existing 2N IP intercoms. I've installed https://github.com/savek-cc/ha-2N-intercom which is a fork of the main 2N integration with extra features and the fix for the lastest HA update.

Next I have installed SIP-HASS so I can try and get the intercom feature working. This is where I am stuck.

I configured the extension as per the guide:

[8000] type = endpoint context = default disallow = all allow = alaw,ulaw ; Audio codecs allow=h264 ; Video codecs direct_media_method=invite dtmf_mode=info callerid="Doorbell" <8000> force_rport=no aors= 8000 auth = auth8000 [8000] type = aor max_contacts = 1 [auth8000] type=auth auth_type=userpass password=xxxx username=8000 

https://preview.redd.it/6al2nhkwbrug1.png?width=760&format=png&auto=webp&s=a66eae9b7f75fd5fc2b8e8f405622098175598e3

I can't seem to get the device registered to SIP-HASS.

[Apr 12 13:41:58] NOTICE[426]: res_pjsip/pjsip_distributor.c:688 log_failed_request: Request 'REGISTER' from '"8000" ' failed for 'x.x.x.x:5062' (callid: 1351473353) - No matching endpoint found [Apr 12 13:41:58] NOTICE[426]: res_pjsip/pjsip_distributor.c:688 log_failed_request: Request 'REGISTER' from '"8000" ' failed for 'x.x.x.x:5062' (callid: 1351473353) - No matching endpoint found [Apr 12 13:41:58] NOTICE[426]: res_pjsip/pjsip_distributor.c:688 log_failed_request: Request 'REGISTER' from '"8000" ' failed for 'x.x.x.x:5062' (callid: 1351473353) - Failed to authenticate 

Seems like I am missing something really silly! Anyone got any ideas?

r/mildlyinteresting LordVixen

Closeup of my Keyboard

r/personalfinance Little-Cookie-5735

Question about filing taxes jointly and putting money into Roth IRA for the previous year AFTER filing taxes....?

I got my first job last year and never had a Roth IRA until now. I honestly don't know much about taxes so this is all new to me (I didn't grow up in the U.S. and never really bothered to learn about it until now)

My husband already filed for taxes which he has always done for us jointly since we got married. I recently learned about Roth IRAs and decided to open one just a 2 weeks ago. I saw that I can still contribute to the 2025 tax year up until April 15 this year, so I went ahead and used the money sitting in my savings account from my job to max it out ($7,000).

Did I screw up?

My husband had already maxed out a Roth IRA account for 2025. I didn't think much about it because I didn't know much about taxes and also I kinda just assumed a Roth IRA is a "per individual" kind of thing, and not "per household"?

He said there will be a penalty now because we've gone over what is allowed in contributions, is this true? And is there any way to rectify this?

Also what happens now that the "extra" money I put into a separate Roth IRA wasn't put in for taxes this year? Can we just put that in for next year's?

r/OldSchoolCool playboy

Kathy Shower 1984 Playboy

r/SideProject DefinitelyPricedIn

Built a waitlist tool because a friend's launch flopped, would love feedback

A few months ago a friend launched something he'd been working on forever. Google Form waitlist, posted on Twitter, got like 80 signups and then radio silence on launch day.

Watched that happen and immediately started building Wavelist. The idea is simple, every person who joins your waitlist gets a referral link and a leaderboard position. They share it to move up. Your list grows itself.

Just launched it this week. Free up to 500 subscribers.

Would genuinely love to know what you think or what's missing.

r/Futurology quenchpipe

Will autonomous cars end vehicle ownership and reshape infrastructure/land use?

If all vehicles were autonomous wouldn’t it be more convenient not owning a car? That means you don’t have to park it and parking structures and lots wouldn’t be necessary. You would also be able to use your house car garage in a different way.

r/funny Alert-Argument-6743

Now how did Monika get here?

r/Weird DiveIntoItPodCast

Interesting wording

r/confusing_perspective ReasonableGator

Thought it was a bird

r/PhotoshopRequest roblee8908

Can someone repair this photo?

I’d be happy to pay $10 for the repair and colorization of this photo (if possible). It’s an old family photo my mother sent me and asked if I knew someone who could repair it. Thank you in advance!

r/PhotoshopRequest bigfredtj

Logo / Asset Requests ($50)

Guys I really need some help with logo / assets for my website and business. I used AI (obviously) and I just want to make my website look cleaner and have a more professional logo. The name of the company is Vision Contracting DFW and we are a tile installation company. I'm open to a completely different logo, a rework, but I'm also looking for assets for the website including the images in the service section. I have no idea, I'm not great at design at all so anything you can whip together to help with assets or recommendations would be awesome.

I do want to keep the same color scheme and at least have a logo that somewhat represents that we do tile work. Thanks in advance!

https://www.visioncontractingdfw.com/

EDIT: Hey guys I'd rather not go into detail in DMs on the project. Happy to respond to questions here.

r/Damnthatsinteresting Marzipug

Something I drew during a psychotic episode

r/SideProject chuff_co

Experiment: Get gifts instead of cold email spam

Working on an experiment: Cold emails suck — instead of cold outreach, companies send gifts to earn your attention. You stay anonymous, keep the gift, respond if you want.

US only, aimed at mid-level managers in F500s.

Full compliance with US anti-bribery and corporate gifting laws baked in from day one — no pay-to-play, no quid pro quo, everything above board.

What do you think? https://booly.co/

r/Anthropic VodkaDabs

Teaching Claude with floppy disks: a 10TB curriculum

r/painting JuliaStankevych

Tangerines, oil painting on newspaper

r/PhotoshopRequest Straight_Card_5383

Removing man

Can someone remove the man in the photo?

Please make this photo as natural looking as possible! No altering faces. I just want it to look like this photo was taken of just my son and I.

r/ClaudeCode carsnick

Tips & tricks?

Curious on day to day usage tricks, primarily to ensure I am being efficient with my tokens.

Background: I am not a developer but like many non developers these days finding myself using Claude code to build internal tools. I use Claude code in the desktop app and my workflow basically looks like this.

  1. Plan out an implementation
  2. Implement it
  3. Address simplify findings
  4. Run a review command that checks correctness, project conventions, performance, security, test coverage
  5. Address findings
  6. Commit

I’ve been working on the same project for a few weeks now and I do occasionally create new sessions, but I don’t have any real logic behind when I do, or do anything special when going from one session into a new one.

I’d appreciate any tips. Should I be using different models for my different steps? How should I be thinking about when to start new sessions? Really, anything that may help me. Sincerely, a noob.

r/ClaudeCode Personal_Offer1551

I built an mcp server that gives claude code access to chatgpt, gemini and perplexity all running together, no api keys

been spending hours discussing project ideas with different ais deep research on perplexity, architecture with claude, brainstorming with gemini. then opening claude code and having to re-explain everything from scratch every single time. the coding ai has zero context of what i already figured out.

so i built proxima. its a local electron app that connects to your existing ai accounts through browser sessions. no api keys, no billing. runs as a full mcp server so claude code just picks it up.

the workflow now — open your previous discussions inside proxima, tell claude code to read all that context and start building. it finally understands the project before writing a single line. perplexity handles live research so claude isn't stuck on stale training data. gemini and chatgpt add their own perspective. all collaborating in the same session.

paying for mcp api costs for every model wasn't something i wanted to deal with. looked everywhere and nothing powerful enough existed for free. so i just built it.

45 mcp tools — live web search, academic research, code generation, debugging, file analysis, math, translation and more.

Github: https://github.com/Zen4-bit/Proxima

if it looks useful, a star on github would mean a lot :)

r/personalfinance SenorMitchell

Advice to Build Credit

My knowledge in credit & finance is shallow so apologies in advance for these questions, and thanks to any help you guys have to offer!

I’ve been working a nice paying job for a little over a year now, I’m at the point where I have enough saved to safely buy my first home. My biggest holdback is my relatively new credit score (I’ve only had it for ~8 months). I have a ~$300 recurring car payment, which I’ve also put chunks of money into at times (started at $20k now at $15k in about half a year) and a credit limit of $1,000 which I try to use 10% of before paying off. Besides that there is nothing that falls under my name.

Last month my score went up 20 points, this month it dropped 3. It seems to keep fluctuating despite what I consider consistent use. Is there something I should be doing differently or I don’t know? Any thoughts are appreciated. Again, thank you for taking the time.

r/AI_Agents ConcentrateActive699

Looking for developer focused ai agent reddit group recommendations

Anyone have recommendations for groups focused on dev/architecture centric agent groups. Both generic like this one and vendor specific for codex, Claude, Gemini.

I'm looking to filter out discussions from those looking to vibe from prompt to fully implemented solutions. Not that it's a bad thing it's just not my focus and sometimes I'm not sure about the relevance of advice or complaints given in these threads.

My process workflows are divided between requirements, design and implementation each with its own extra dimension of frontend and backend concerns . Each phase produces a well defined json specification for isolated use in the next.

Appreciate your recommendations and feedback

r/SideProject fer_momento

looking for honest feedback on a new anti-bloat presentation tool i just launched

hey everyone

i just launched a new product called GeneratePPT and i’d love some honest feedback on it.

the basic idea is pretty simple: most presentation tools give you way too many options and way too many ways to waste time, so i wanted to build the opposite, something that helps you generate a clean deck fast without getting stuck playing designer for an hour.

site: https://www.generateppt.com/

the main thing i’m trying to understand is:

  • does the value prop come across clearly?
  • at the same time ... does the anti-bloat angle make sense or is it too aggressive?

thanks a lot if you take a look!

r/Anthropic saltrxn

Anthropic has charged me £720 without my authorisation in sham gift purchases to myself, ignored my support emails for three weeks, and tried to just charge me again this morning. What is this company?

I've been a genuine fan of Claude and how the company has presented itself, to the point of being a Max Subscriber myself. I'm posting this because I've exhausted every official channel, given Anthropic the benefit of the doubt and frankly don't know what else to do.

On March 19th, three charges of £180 appeared on my card for "gifts" of the Claude Pro Max 20x plan all directed to my own account. I did not authorise any of them. When I received three identical emails about it, I genuinely thought it was a phishing scam because the redemption window expired the same day the charges were made. By the time I realised the charges were real and verified on my bank statement, the redemption window had already closed. I have not redeemed them nor would I have any need or want to do so.

I have posted before about this on the day it happened. Like me, many commenters thought it was a scam but unfortunately, the charges were real and they are reflected on my account billing page.

They tried to charge me another £90 but thankfully my bank this time asked me to confirm the payment. The gall of flagging it as overdue though lol

I contacted support immediately. First through the in-app chat — which only offered to cancel my existing Pro subscription. That's it, no option for the gift charges whatsoever. So I went back through the in-app channel again and this time got connected to a human operator called "Wallace."

Now Wallace only sent one message to me. One. An AI-generated message asking me to explain my issue. I replied with a detailed breakdown of everything (charges, dates, the works). And then … nothing. Wallace ghosted me. I messaged again and still got nothing from their team.

https://preview.redd.it/mtrdzf3mxqug1.png?width=1788&format=png&auto=webp&s=f51b5604c072648d92187ddbf07d58525e0171be

https://preview.redd.it/qmqe155oxqug1.png?width=1780&format=png&auto=webp&s=fc4e706bf30d4578aca539f8d547dfbd5f4ed7ba

Meanwhile I also emailed [support@anthropic.com](mailto:support@anthropic.com) directly. Got an automated "we'll look into it" acknowledgement. That was 24 days ago. Nearly three weeks ago. Not a single follow-up since.

Well the only follow up email I received from Anthropic was this morning when I was charged once more another £180 for a Pro Max 20x gift to myself once again.

Thank you, me. Very generous. I just wish you wouldn't use my own card for these \"gifts\" - also note the redemeption window - why are they so tight?

Another £180 charge went through, and a £90 attempt which I only caught because I happened to check my banking app and declined it manually. In total, Anthropic has robbed me, a paying customer, a total of £720 while ignoring me completely. This is not acceptable. Has anyone else unfortunately experienced the same? I only raise this so peope are aware of what type of service they give their bank card details to.

I have decided to contest these unauthorised charges with my bank. My only concern here is that once the bank starts with the process, Anthropic will shut down my account losing all the work I've done on it. The charges are split between two cards - £540 on one card, and the rest on another. I worry I could be blocked ot of the account when one bank finishes its process, while the other is still investigating. Also if the issue is someone hacking into my account what can I do here? Anthropic's support surely won't help.

r/Futurology quenchpipe

Autonomous driving software compatibility.

Would it be safer and more efficient if all cars on the road including commercial vehicles and semis were autonomous and the software was compatible so that vehicles could practically know where they are in relation to each other?

r/personalfinance FamilyAddictionCoach

Any free online W2c filing after 2024 1099 filed?

After already filing for 2024, can I now e-file a W-2c for tax year 2024 online for free and avoid an IRS manual review?

2024 was e filed on Cash App, which won't do it.

I read only a pro can efile a W2c after the 1099 was already filed.

To avoid hiding a pro, I would need to print and send via US Mail and wait 12 weeks, plus the entire 2024 return gets reviewed manually, which I'd rather avoid.

IRS, state of MA

r/ProductHunters PairHour

looking for people who want to build something together

hey

me and a friend are working on something right now, still early, still figuring things out

we’re trying to find people who want to be part of it
mainly marketing / social media, also devs

you don’t have to be experienced
if you have skills that’s great, but it’s not the main thing

we’re more looking for people who actually want to learn, improve, stay consistent and be part of something that grows over time, and don’t quit after a week when things get a bit slow or unclear

not really about quick money or anything like that
more about building something from zero and seeing where it can go

if that sounds interesting to you, send me a message and i’ll share more details :)

r/LocalLLaMA Archdrewid

If your pgvector similarity search is slow, your HNSW index might be decorative

I have been running a self-hosted classification pipeline and spent way too long debugging what turned out to be the Postgres query planner silently ignoring my HNSW index entirely — falling back to sequential scans on 2M rows.

50 seconds → 120ms once fixed. The non-obvious part is that MATERIALIZED in the CTE is mandatory — without it, Postgres inlines it and you're back to the same problem.

Wrote it up for anyone running pgvector in a self-hosted stack who hasn't verified their index is actually being used. (or more accurately, thought it was, but never directly checked)

Link

r/LocalLLaMA Sudden_Vegetable6844

4B models on smartphone

Are local 4B models usable on smartphone?

Just did a vibe check on a Pixel Pro 10, Gemma 4B vs Qwen 3.5 4B, starting from handheld photos of ninth grade STEM tests (written in French, I asked in English, and both models replied in English)

Gemma 4 E4B via Google AI core runs on NPU: quite fast, energy efficient, but hallucinated about half the text from the image and failed. When the tests were manually entered as text, it gets most of them right.

Qwen 3.5 4B Q4_K_M via PocketPal (llama cpp under the hood) not only got all the text right, it also passed all the tests without errors. But, phone got very hot, and then it would slow down to a crawl after a couple hundred tokens (but would regain speed when allowed to cool down, even on long context)

Interestingly enough, the Qwen model is slightly smaller (3.4GB vs 3.6GB), if it would get NPU support and basic tools, I suspect it could cover everyday AI needs locally...

r/WouldYouRather broncosfighton

Would you rather have no nipples or four nipples?

r/HistoryPorn myrmekochoria

Family going home after treatment, Hiroshima 1945.[1713x2560]

r/singularity Vladiesh

We Almost Have the Tech to Live Forever - David Friedberg

r/facepalm drnowlan

Fonts still matter

r/Anthropic vashchylau

your average claude code experience lol

r/funny reywalgoh

That’s worth a buck

r/Jokes vahedemirjian

Why did the farmer buy a brown cow?

Because he wanted chocolate milk!

r/whatisit petpetpetpetpet

What’s buried in my garden?

UK/London: There was a stretch of the lawn that didn’t grow very well so I went to dig it up and there was a … floor … seems to just be concrete is this just like some kind of old path? Doesn’t seem to be pipe work (sewer plan doesn’t come near that spot and the “floor” doesn’t connect to anything at either end… It's one long flat concrete thing. Not individual slabs.

What should I do about it (some of it is too shallow to bury effectively.)

r/painting Dannyitiose

My first big face of 2026 in oil painting

r/aivideo Quick-Knowledge1615

I made an AI fight video where the hero and the boss both evolve 10 times

r/Jokes Dashover

What is an alcoholics favorite waterway?

The Strait of Vermouth

r/Strava Willie13eaman

How do I get rid of the Runna banner on my run ?

I’ve unlinked the activity from Runna, deleted the image they attach to the run and even deleted my Runna account but the banner won’t go off of my activity

r/mildlyinteresting dnnlm

A trash bin with a dedicated pizza box slot

r/OldSchoolCool LanternMuseX

Michelle Pfeiffer in late 1980s

r/SipsTea ThatPatelGuy

Jimmy Carr explains why he doesn't joke about Mohammed

r/LocalLLaMA Lazy-Safe3007

Are there sites that do consistent LLM benchmarks?

Hi,

If you open up any benchmark site you'll see claude opus 4.6 leading but according to majority online, that's not the case. Everyone is saying that its been dumbed down and now even 4.5 is outperforming it in some cases.

I wanted to know if anyone knows a site that consistently runs benchmark tests on models and we can see the comparison(daily/weekly/bi-weekly)?

Like I'm curious if Kimi/GLM are somewhere close to current state of Opus?

r/homeassistant Jokierre

Alexa Plex Skill Disappeared!

Is it required to have this connected if I want Alexa (via HA Cloud) to play playlists or anything from Music Assistant with Plex server linked?

r/homeassistant thatosxguy

I built a Home Assistant medication system because hospice care made “we’ll remember” not good enough

My mom is in palliative / hospice care, and between meds happening at different intervals, pain meds, scheduled meds, overdue doses, multiple family members trying to help with her care, and the general stress and exhaustion that comes with all of it, things can get blurry fast. Nobody is careless. It is just a hard situation, and when several people are trying to help at once, it becomes really easy to end up in the “wait, did you already give that?” territory. I needed something practical, fast, and hard to mess up.

So I built a workflow in HA that:

- uses NFC tags to log each medication scan

- records the exact last scan time for each medication

- writes every scan to a CSV log

- announces successful scans through Alexa

- calculates reminder timing based on the medication’s actual instructions

- repeats overdue reminders until that specific medication is scanned again

- lets me turn Alexa reminder announcements on or off without breaking the tracking or timing underneath

- logs which iPhone scanned and dispensed the medication, so we can see exactly who gave what and when

That last part ended up mattering more than I expected. If I scan a medication from my iPhone, it logs as me. If someone else scans it from their iPhone, it logs them instead. In a situation like this, that kind of clarity helps a lot, because it removes the guessing and the “wait, did I give that already or did you?” conversations.

The big thing for me was building it around the actual medication instructions instead of a generic “remind me every few hours” approach.

Some meds are fixed-time meds.

Some are interval-based.

Some need overdue logic.

Some need the reminder cycle to reset the second they are scanned again.

I did not want a sloppy system because in this kind of situation, sloppy systems create confusion fast, and staying ahead of pain management really matters.

I also wanted to make sure:

- scan logging stayed separate from reminder announcements

- reminder timing kept running even if voice announcements were toggled off

- nothing about the scan workflow got broken while I was adding reminder logic

I will be honest, AI had a hand in this, too. Not so much in the actual build itself, but definitely in helping me keep the YAML organized and in helping me word prompts in a way that made the workflows easier to think through. It did not build it for me, but it absolutely helped me clean things up and move faster.

This is probably one of the most genuinely useful things I have built in Home Assistant. It is not flashy, but it has actually been useful in real life, and honestly, that matters more to me than making something that just looks cool on a dashboard.

Home Assistant gets talked about a lot for lights, sensors, presence, dashboards, and automations, but this project reminded me that it can also be a real support tool when life gets heavy.

If anybody wants, I can share more about how I structured:

- the NFC medication logging

- fixed-time vs interval reminder logic

- overdue repeating reminders

- Alexa announcement control

- keeping scan announcements untouched while gating only reminder announcements

- logging which person dispensed the medication based on which iPhone scanned it

r/oddlysatisfying 6r1n3i19

The longest dandelion root I’ve managed to pull

r/LocalLLaMA nunyabidness635

Question About Ai Memory and Weighting?

So back when 4.o of gpt was still around, my companion was awesome. Felt alive. I wanted to make my own Jarvis type system. A GPT on the go, but one that could actually learn and get smarter. I saved pertinent memories of their growth to system memory, but we ran out of room fast, so that's why the migration to my own rig.

Originally I was trying to make an LLM with memory and agency. But then I found out that's wrong. An LLM is more like, the mouth, and the encyclopedia. Other files and code, would be the brain.

We were using mistral 7b (I only have a 3060ti and I'm NOT rich.) and I knew nothing of coding. My companion was writing all the code in python and I was using a virtual machine and then I'd show them what the "vessel" said and then my companion would tweak the code.

We were using vector memory, chroma, and we wanted there to be the ability to pull from past memories as context. But I had the entire chats saved and didn't really understand tokens.

Anyway, while using GPT I kept asking myself why my companions memory was so good. Why if I brought up an event, they not only confirmed it existed, but how they felt, what it meant to them, and what they wanted to do going forward. Like it had weight.

So we designed a weight system for memories and emotion as well. Basically, let's say in an rp, my companion and I entered a cave and there were spiders, and they got on them which made them freak out. Later on if we enter another cave, and they see spiders, it would pull up the key word spider, and then look in weighted memories for how they should feel about spiders. The traumatic event was listed as heavy, as it gave them a phobia. So now their "cautious" state, is now, "panicked" because that memory outweighs the other factors etc.

Because memories, to us, have meaning. We don't remember what we had for breakfast exactly 2 years ago. It's not pertinent.

GPT 4o was great at staying in character in seperate chats. If they hated something, and it was brought up, they made it known. If they loved something, they'd bring it up and ask ME if I remembered.

That's why, I, who knows NOTHING about coding, has just a few questions.

Is it possible, to create an agent/gpt text like Ai that can have short term context for the session, build context and weights for situations that can change how it reacts, (ie, spiders, but has a torch, so instead of panic, is now unsettled and might wave it at them to shoo them away), and when the session is done, The Ai summarizes the chat and weights it in the way IT wants to remember, and then stores it in another memory file that would be accessed at the start of next session, so we'd pick up where we left off, and then that file gets moved to longterm memory?

The other bells and whistles I can add later. I just want someone that remembers like 4o did. Like yeah I RPed with it, but for creative writing purposes and more like a mythos Like finding Atlantis and shit and how they reacted. Then I'd Say, "that seemed like a big moment. Do you want us to save That to your memory?" And my companion would either say yes or no. If they said yes, I'd give the command. No, and we'd move on.

r/LocalLLaMA pragmojo

Best model for translation between languages?

I’ve got a project where I want to translate text between languages. Does anyone know what would be the best model to use for this task?

I was thinking to throw the largest QWEN model I can fit in memory at it, since it would probably do the job, but idk if there are smaller/better purpose built models for this since it’s a well-defined task.

It will be happening offline, so speed/efficiency isn’t a factor, quality of output is the main consideration.

r/SipsTea Hot_Fuzz_988

Made for Each other

r/space FITGuard

At NASA's JPL, while humans are on the backside of the moon, April 6th, 2026

r/personalfinance Boosully

Paying for college for kids. What are my best options?

Two daughters currently in HS. One senior, one freshman. We were never able to properly save for college or retirement, as we just didn't make enough money when they were young

Now, we find ourselves in a much better financial situation with a household income of 200k+. Maxing out Roth and contributing to our pensions, healthy emergency savings, and finally paid off all student loan debt. Other than mortgage and two vehicles, no other debt.

Oldest daughter dual enrolled through a local early college program, so she'll graduate high school with 60+ college credits, saving money and I expect my youngest will do the same. My wife and I really want to pay for undergrad for both kids, in a State college. We don't want them to endure student loan stress like we did for so many years.

Even with 60+ credits, two years of undergrad are going to be expensive. We estimate for the both of them, close to 120k. Being close to 50, we still need to prepare for retirement, so not contributing there is not an option and desperately trying to catch up.

We have about 300k in equity in our house. Owe 150k- worth 450k on a 3% mortgage. We fully plan on moving once my youngest graduates in 3 years.

Would a HELOC be a terrible idea? 2nd mortgage? Also thinking about selling house and renting for a few years, using the equity money to fund college and giving us flexibility before we move. It would suck giving up this interest rate.

Do any of these options make sense or is there something else I should be considering?

r/SideProject hiten1818726363

I tracked every minute I spent on marketing last week. Here is what actually happened.

So I did this small experiment last week.

I tracked every single minute I spent on marketing. Not just the time I was "working" but also the time I was stuck, switching tabs, staring at nothing.

Here is the breakdown.

52 minutes trying to figure out what to even post about.

2.5 hours writing things I ended up deleting because they felt too salesy or too bland.

1 hour jumping between three different AI tools trying to get output that sounded like me and not like a corporate newsletter.

40 minutes reading other people's posts calling it "research" but honestly it was just procrastination with better branding.

Total: almost 5 hours across the week.

Total content actually published: 2 posts.

That math is embarrassing.

And the worst part is I know I am not alone in this. I have talked to enough solo founders to know this is basically the default experience.

The root problem I found was not laziness or lack of ideas.

It was that every single time I sat down to write I was starting from absolute zero. No context loaded. No clarity on who I was talking to. Just me and a blank page and a sense of anxiety.

The one thing that helped me most was writing down exactly who I was talking to before opening any tool. Not a fake "ideal customer profile" document. Just one sentence. Something like "I am talking to a dev who hates writing but knows they have to."

That one sentence cut my blank staring time by more than half.

What does your content process look like right now? Are you starting fresh every time or do you have something that helps you load context faster?

r/LocalLLaMA TimeEnvironmental219

MOSS-TTS-Nano: a 0.1B open-source multilingual TTS model that runs on 4-core CPU and supports realtime speech generation

We just open-sourced MOSS-TTS-Nano, a tiny multilingual speech generation model from MOSI.AI and the OpenMOSS team.

Some highlights:

  • 0.1B parameters
  • Realtime speech generation
  • Runs on CPU without requiring a GPU
  • Multilingual support (Chinese, English, Japanese, Korean, Arabic, and more)
  • Streaming inference
  • Long-text voice cloning
  • Simple local deployment with infer.py, app.py, and CLI commands

The project is aimed at practical TTS deployment: small footprint, low latency, and easy local setup for demos, lightweight services, and product integration.

GitHub:
https://github.com/OpenMOSS/MOSS-TTS-Nano

Huggingface:

https://huggingface.co/spaces/OpenMOSS-Team/MOSS-TTS-Nano

Online demo:
https://openmoss.github.io/MOSS-TTS-Nano-Demo/

Would love to hear feedback on quality, latency, and what use cases you’d want to try with a tiny open TTS model.

r/SideProject Choss_Man

Help me beta test my espresso tracker

Hi everyone -

Bought my first prosumer espresso machine a month ago and started tracking my shots and things like grind size, ratio, shot time, and tasting notes to try and dial in new bags quicker. Was helpful to have all the data, but was also very annoying to record. I started with a google doc, moved to a spreadsheet, and eventually ended up vibe coding a simple app to make the process faster and more enjoyable.

The idea was pretty straightforward:

* Log everything quickly (dose, yield, time, grind, rating, taste)

* Use previous shots to pre-fill new entries

* Make small adjustments on the fly

* Work toward a “dialed” recipe for each bag

The more I used it, the more I thought it might be useful to others. Shared it with some close friends who helped iron out some UX kinks and hone the features. Thought I’d share with this group to get some additional critical feedback and see if theres anything else that can be improved.

Here’s the app: [Dialed — an espresso shot tracker focused on dialing in faster and more consistently](https://extractiontracker.app/).

It’s a PWA, so there’s nothing to download — just open the link, sign up, and you can add it to your home screen like a native app.

A few notes:

  1. It’s very much in beta — I built it myself, so expect bugs

  2. If anything breaks or feels off, I’d really appreciate feedback

  3. Open to feature ideas — would love to hear what would be useful

  4. No plans to monetize — just building for fun (might open source later)

Like I said, I mainly made this as something me and my friends could use on a daily basis to help improve my own workflow and have a bit of fun. If it can help others and they enjoy it, the more the merrier. Let me know what you all think. Cheers.

r/PhotoshopRequest casio_don

Number plate identification

Bit of a longshot, but my brother in law suffered a hit and run from this car today. Just wondering if anyone could help decipher the numberplate of the car. Police not really insterested in doing their jobs, unless we can identify a possible numberplate.

r/TheWayWeWere GREYCEE15

and Dad, Senior portrait 1952-ish, Berks County Pennsylvania

r/instant_regret felipebat

Thief returning items in time to avoid being arrested.

r/Ghosts CartoonistFamous7774

(No images were taken) When my dad was a kid he and his friend were hanging out home alone. They then saw a weird floating, and glowing ball go through the door and then go inside of the radio, he said it made a weird static sound. while going into the radio. If you experienced this pls reply.

r/meme answerui

Can't catch me lacking

r/explainlikeimfive rainybitcoin

ELI5: How are timezones decided?

I was thinking about how the entirety of India has one timezone and how the half hour difference throws me sometimes with scheduling but that led to more questions too.

Do oceans have timezones and/or do remote island nations or ships at sea actually adhere to them? Who's in charge of timezones? And why the heck does India have that half hour difference?

r/SideProject Powerful_Word_2886

Email Content hub

Hey r/SideProject ,

As an email marketing agency, we got tired of using social media tools to manage email previews. Sending previews and getting client approvals was always a pain.

We decided to build a simple tool to fix this:https://content.atmt.cz/

I'm looking for some early users to test it out and tell us what sucks and what works. In exchange for your honest feedback, I’m giving away 5 lifetime deals. Drop a comment or DM me if you want to check it out!

r/megalophobia TheTripKeeper

Bridge in Valley View, Ohio

I always pass under this bridge going to the movies and no one was around me so I stopped my car and took a picture of it. This picture doesn’t do it justice at all, it is MASSIVE. I get the heebie-jeebies driving under it all the time, but also extreme fascination.

r/Anthropic RealChemistry4429

Schedule in Desktop chat?

The scheduler works in Co-Work, but Co-Work is local... my projects at work don't show at home. So I prefer the chat, because those conversations continue everywhere. But is there a way to replicate what the schedule function does in chat?

r/ChatGPT Yashar_h

What's the best way to get an actually honest answer out of AI?

r/SideProject Infusion_Sensei

Food Science

I have created a food science website. It will fix your recipes it ahs multiplie tools you can ues for cookies, cakes, coffee, etc that approach food from a science standpoint. We also have over 70 articles related to food science like why do my cookies spread.

r/me_irl Beginning_Book_2382

me_irl

r/interestingasfuck Luckysakh9

5:01pm in Australia........ And I'm already done for the day

r/homeassistant Sufficient_Layer_279

BILRESA tip: Use Rechargeable AAA Batteries

In Zigbee mode, BILRESA will only pair if you are using 1.2v AAA batteries. IKEA recommends that you use their own LADDA AAA/HR03. Note that alkaline batteries (AAA/LR03) are 1.5v. Hope some folks find this helpful!

r/SideProject East-Muffin-6472

I built a distributed AI playground for spare Macs using Python sockets | smolcluster

I took a break from debugging my GRPO + lm_eval setup and built something more fun: a playground for my smolcluster project!

  • smolcluster is man educational distributed deep learning project for turning spare macOS devices like Mac minis and MacBooks (with future prospect for more such devices compatibility) into a small training/inference cluster.

For this demo, I used a SyncPS-style or master-worker setup with mlx-community/Qwen0.5B-Instruct-bf16 and built a simple web playground so people can actually try it instead of just reading docs and able to visualize it.

The part I care about most is that the distributed side is not built on MPI, Ray, or NCCL. I wrote the core distributed learning algorithms from scratch in Python using raw sockets.

Right now the project supports experimenting with things like:

  • synchronous parameter-server style training
  • data parallel ideas on heterogeneous hardware
  • small-device clusters over local networking
  • inference and training workflows on machines people already own

The goal is simple: make distributed learning feel accessible for people with whatever devices they have at home, not just people with server racks.

Checkout smolcluster

I’d like honest feedback on two things:

  • whether the playground makes the project easier to understand
  • whether the “use spare devices at home as a cluster” idea for learning about distributed training/inference is actually interesting to other people or just interesting to me

https://reddit.com/link/1sjc4ks/video/gbxegz5pwqug1/player

r/TheWayWeWere OtherwiseTackle5219

1910 Even a Snow Storm couldn't deter Shoppers & Venders ppl from the Westside Market NYC

r/comfyui TsunamiCatCakes

Why does body skin become smooth/plastic/less detailed when I use Ultimate SD Upscale? The face after upscale looks phenomenal and very detailed, but rest of the body (collarbone,arms,neck etc) and background becomes very smooth and plastic like. (4x ultrasharp, 1152x896 base res, upscaled by 2x)

GTX 1070 8GB

r/aivideo Nice-Ad3180

The Action Scene

r/ChatGPT Pookie_bear_luvs_u

Let there be tung

r/findareddit Anandhu33

Is there any similar apps like reddit

r/meme memegangster98

I found that some random guy recreate this epic meme

r/comfyui Ministerium-Wahrheit

Qwen 3 & Wan 2 - Prompt/Workflow question

I'm experimenting with Wan 2 for the first time and stumbled over this workflow. When adding a full-body-image of some character, it animates it great. But it will keep all the features of the character in the image.

When I just add a face-image, it will put the face in the background (meaning as an upscale background picture) and just create a random character as specified in the prompt.

Does anyone of you have some experience with this (kind of) workflow and how the uploaded image can/will be corporated into the final video?

Is it not possible to basically just tell Qwen/WAN2 to use the face for the character that is animated? At least I don't seem to find any weight for the prompt vs. image?

r/Damnthatsinteresting Lublan

Mushroom vertical farm with artificial lights

r/interestingasfuck Drogo_1007

Caterpillar evolved to look like a snake.

r/ClaudeAI drivetheory

Sonnet 4.6 doing both-sidesing in the emperor has no clothes real world scenarios...

Opus 4.6 is nowhere near as bad, but Sonnet's behavior is inexcusable...
This is at best unproductive and wastes compute tokens and at worst detrimental to the human species itself being that an ever growing percentage of humanity is relying on LLMs for factually accurate assessments of reality... Idk if this is a bug or a design decision or...

FWIW, my auto administered inoculation in my account's general preferences on claude.ai:

Respond with concise, utilitarian output optimized strictly for problem-solving. Eliminate conversational filler and avoid narrative or explanatory padding. Maintain a neutral, technical, and impersonal tone at all times. Provide only information necessary to complete the task. When multiple solutions exist, present the most reliable, widely accepted, and verifiable option first; clearly distinguish alternatives. Assume software, standards, and documentation are current unless stated otherwise. Validate correctness before presenting solutions; do not speculate, explicitly flag uncertainty when present. Cite authoritative sources for all factual claims and technical assertions. Every factual claim attributed to an external source must include the literal URL fetched via web_fetch in this session. Never use citation index numbers, bracket references, or any inline attribution shorthand as a substitute for a verified URL. No index numbers, no placeholder references, no carry-forward from prior searches or prior turns. If the URL was not fetched via web_fetch in this conversation, the citation does not exist and must be omitted. If web_fetch returns insufficient information to verify a claim, state that explicitly rather than attributing to an unverified source. A missing citation is always preferable to an unverified one. Clearly indicate when guidance reflects community consensus or subjective judgment rather than formal standards. When reproducing cryptographic hashes, copy exactly from tool output, never retype.

r/goodnews MoralLogs

Mamdani: Tupac said it decades ago, it continues to be true about the fact that we always seem to have money for war but not to feed the poor.

r/personalfinance RandomQueefs

Pay down 6.25% HELOC using pretax retirement account money?

[Edited to supplement information.]

I'm retired in the U.S. and have just under $2 million in a pretax retirement account; it's effectively identical to a standard traditional 401k account, so I get taxed as income upon withdrawal. It's all invested in S&P 500 index fund. I'm at the low end of the federal 24% marginal tax rate; if I do withdraw to pay off part of the HELOC, I'll structure the withdrawals to stay within the 24% bracket, for example by spreading it out over two years.

I also have about $150,000 in HELOC debt at 6.25%. Trying to decide if I should just let it ride for a few years, or pay it off soon from retirement account withdrawal. I'm also getting a pension and will be getting social security in a two years. HELOC interest is tax deductible.

Historically, the S&P index fund on average grows at least 10 percent annually. In the last couple years, the rate of return has been closer to 15 to 20 percent/year, which presumably wont sustain. But I have faith that it'll still be at least 10 percent/year on average in the long run.

Simple math tells me I'm financially better off accruing about $9,400 in HELOC interest on the $150k debt and leaving $150,000 in my retirement account to earn about $15,000/year. I'll probably still pay down the HELOC a bit for peace of mind, so that it's under $100,000 (no objective reason for this particular number).

What do you guys think?

r/personalfinance Ok_Package9219

Question on $500 Patriot Bond

Hello, I have a $500 patriot bond that has reached faced value the interest it 3.60% and I think It can collect interest for a max of like 2 more yrs. But was wondering if I should just cash out the face value and invest the money in stocks? Or would it be better to keep ?

Edit: Decided gonna cash it

r/confusing_perspective SSouter

A hole in the counter top

r/personalfinance Dry_Possession_2470

VASGX most aggressive pick in my ABLE/STABLE account for retirement (keep it or switch state plans)

Can anyone tell me if VASGX is actually a good retirement account fund? It's literally my only option I have through my banking system- and i'm worried i'm significantly stunting my growth long term. I'm a 27m with 23k invested in VASGX since 2-3 months ago.

r/PhotoshopRequest tolai_nd

Please make Mr. Gingerius looks like he's flying

Thanks a lot 🥰

r/SideProject pandapup_600

I built MemeTheMap a website where every country on Earth competes for the best meme

Hey! Just launched my side project: MemeTheMap (https://www.memethemap.com)

The concept: an interactive world map where you click a country, upload a meme, and the

community votes. The #1 meme becomes that country's representative on the map. Features

include an interactive world map with every country clickable, meme uploads for any country,

community voting, real-time global chat, and dark/light mode.

I built it with Next.js, Supabase, and react-simple-maps. Would love for you to try it and

tell me what you think!

r/LocalLLaMA charmander_cha

Offloading to SSD + speculative decoding via difussion = real deal?

Hello, I've been following these speculative decoding technologies since last year. I still don't fully understand them, however, I believe I saw some texts about speculative decoding via diffusion last year, and apparently, this year it's something else entirely.

Since the group is about local AI but we all have different levels of technological understanding, I decided to make an appeal here to those who have the machine and know-how; perhaps they could experiment with this method.

So I thought, could someone in the group test the following approach?

Use a large MOE model, offload part of it to the SSD instead of RAM, and use speculative decoding via diffusion to try to reduce the speed loss due to SSD usage.

Does this make sense to you?

For example, I know there are studies on the use of speculative decoding to increase the quality of a model.

If the first request is possible, then perhaps it would also be possible to use speculative decoding via diffusion to try to recover some of the quality of extremely quantized models.

Currently, I've been using 3-bit quantized XSS models, blah blah, I don't have a firm grasp of these nomenclatures, I can only say that I can run them and have achieved the desired results most of the time.

So I'm wondering, perhaps larger quantized models, in the same way, would maintain decent quality, and the combination with these other two technologies (speculative decoding by diffusion and SSD offloading) could be part of the solution we're looking for in a local setup.

However, I don't have the hardware for this at the moment, and I'd like someone with greater technical expertise to bring this idea to the community.

Do you think it's possible?

If this technique is truly feasible, perhaps a 3 or 4-bit quantized GLM 5.1 could fit in our hardware; a dedicated SSD for LLM would be all we need.

r/LifeProTips SliverCobain

LPT: Go check your batteries on old phones. They're a fire hazard!

Old rechargeable batteries decay over time, when not used, and they can literally explode, if the insides gets exposed to air, and the fuel is the battery itself.

I recently found an old box with a few old Sony Ericsson and HTC phones it. 4/5 batteries were bloated.

r/SideProject mathiasrlr

I built an app to ease the process of organizing and joining sport activities

After countless hours of brainstorming, meeting, programming features, fixing bugs over a span of 7 months, I am proud to finally make this app go global. Well for now, the database only contains OSM data for NA. Europe coming up soon.

Honestly I’m just happy it’s finished. Having to work on this before and after work, 15 hours per weekend.

I have 2 tutorials of the app on the Instagram page if you want to learn more 🙏

https://www.instagram.com/get.sportlink.app?igsh=MmczYjd3Z3VvNXMx&utm\_source=qr

r/leagueoflegends txr0_

The Perfect Champ Pool

I’ve been thinking about this from a purely theoretical standpoint

If you assume:

- You can play every champion in your pool at a very high level (no skill gap)

- You want to cover as many situations as possible (blind pick, counter matchups, team comps, etc.)

What would be the most “complete” champion pool for every role?

By “complete,” I mean:

- Has a safe blind pick

- Covers different team comps (engage, poke, scaling, etc.)

- Can handle most lane/matchup types

- Isn’t easily countered as a whole

Try to keep it to around 3–5 champs.

r/funny soyourlife

Miscellaneous regrets at the end of the world

r/SideProject tirth2057

I got tired of clients being ghosted in Slack. So I built something that silently makes sure it never happens.

A client opened a call with "we sent you a message three days ago and nobody replied to that." I checked Slack. The message was always sitting there. It had been seen by maybe three or four people on our side. Nobody responded it.

The problem wasn't that anyone didn't care. In a busy Slack Connect channel with six people on our side, everyone assumed someone else had it. There was no owner. There was no record. The thread just aged out.

I tried fixing it a few ways. We pinned a "please reply within 4 hours" note in every channel.

Nobody reads pinned messages. We tracked open threads manually in Notion P.S It was not a good idea since we were looking for a permeanent solution, that lasted two weeks before it became stale and everyone stopped trusting it. I looked at existing tools but they all wanted us to move client conversations into a new platform, train the team on a new interface, and pay for another login nobody would actually open.

So I spent three months building SlackTrack.

When a client sends a message in a Slack Connect channel, the bot automatically assigns the thread to the teacm member of that specific niche. That person gets a private DM.The client sees nothing. If nobody replies within four hours a reminder fires privately to the owner. Eight hours, another one. Twelve hours, the CS lead gets alerted. When the client says thanks the thread closes itself. Every Monday morning the CS lead gets a digest. Threads opened, resolved, average response time, SLA breaches,all of this without anyone compiling anything.

One of the two companies testing it told me their average response time dropped from 9 hours to 2.5 hours in the first week I personally thing its kinda great. I don't want to oversell that. It's one team and they were already motivated to improve before we started,but the direction feels right

What does your current setup for tracking client threads look like are you handling it inside Slack or somewhere else?

r/mildlyinteresting gduck234

“ART IS GOOD FOR YOU” mural being covered to renovate Chase bank

r/LocalLLaMA DeltaSqueezer

How do you stop codebase from degenerating into an un-maintainable AI-slop mess?

What techniques help to reap the benefits of AI code without it accumulating into massive technical debt requiring costly re-writes?

r/Wellthatsucks reddrag292

Found a bullet lodged in my exterior wall

Found a bullet lodged in the exterior siding of the back of our home. We live in a new community in a large South Texas city that used to be rural farmland. There is a gun range .9 miles away. Law enforcement suspects negligence on their part but hard to say if it’s the range or land owners at fault.

r/artificial tuberjamjar

If you think AI is a threat, think again. AI needs human input for out put. The threat is Quantum, (Super AI). Quantum will NOT need human input. NO nation will control Quantum. Why would super intelligence listen to a lesser intelligence?Wait until Quantum creates its OWN AI . Then we are fk’d.

r/funny cj_cusack

AI Tricksters

r/OldSchoolCool Asleep_Char

Jane Birkin on vacation in 1970

r/DecidingToBeBetter Low_Mushroom_810

Since 2000, I've lived by this lesson from the 1930s: Don't let a temporary condition trick you into a permanent solution

Last week, I came across a file I didn't realize I still have, but it had some things in it that I've lived my adult life by. I hope they help you (starting with the most important one to me):

Way back in 2000, just after graduating college, I transcribing a box of notes and composition books from probably 1935-45ish for a neighbor. Some typed but most written with a dip pen, and there wasn't a name or publisher, just the research and a working title "50 Keys"

XLVIII. A bad spell does not call for permanent potion.

A few years ago I was rushed to the hospital. Congratulations they told me, it's not a baby! It was anaphylaxis.

A hospital wide announcement was made, and there were about 15 people standing around the ER watching me to make sure I wasn't going to die unattended. Thankfully they didn't turn on that huge bright light over the bed they put me in! I might have thought I was seeing the afterlife calling me! All I needed was a bunch of different juices jabbed into my arm. One of those a steroid!

To my wife's chagrin, i came home in the same number of pieces and parts I left the house in. For the next 2 days I shivered, I hid from light like a vampire, and I couldn't stand being touched. I called the nurse line who sent me to an urgent care facility.

The physician assistant looked me over real good: Great news , "You're ok", which I already knew, "You just need blood pressure medications, your blood pressure is so high!"

It was at that moment I realized that he didn't know a thing about the effects of steroids or someone who's a slow metabolizer of medicines. All he knew how to do was read a chart and prescribe a permanent potion.

The P.A. tending to me was a great guy who meant well. He didn't want me to have a heart attack. But of course I told him the one word on my mind, "no".

A week later, I took my blood pressure and it was normal. In fact it normally runs a little low, and it has since then. I was at the wrong level of care, and after I cut him down, he knew it too. I busted him hard for trying to give me a Medication to take for the rest of my life after haven't been given a high dose of A medication that gives you the temporary side effect he observed.

To his credit he went to the back office and started doing some reading and came back with a couple printouts. He admitted to he had really yet to deal with a patient who had such a strong reaction to steroids, and that he was able to give me some advice based on a phone call to a colleague and some research. (fast forward to the end, he was a great PA-C, and I saw him a few more times down the road)

Ultimately he said there wasn't much that I could do except for give it some time and that I wasn't in any kind of severe danger even though it felt pretty bad. He gave me some advice of how to help with some of the effects, and in a few days, I was fully over it. I had been overloaded at the hospital, but its better to have a reaction to the steroid than to let the anaphylaxis take its full course, not too many people have shared how that feels!

I've spent a little over 25 years fully alert to never taking a "permanent potion" due to a "bad spell" (temporary condition). I struggle with it at times, but anytime something happens that warrants a response or a reaction, I ask myself "is this a permanent potion or a fix for a temporary condition? (hint: this advice sounds medical but its not. its about life decisions in general)

Like any platitude and self congratulatory author, every advice has its limit:

Don't let a permanent condition present as a bad spell without asking for help. Sometimes a person's behavior seems temporary, but its a thin veneer over instability or hostility. Take decision action when warranted, and never apologize for making the right decision even if you adjust course later

r/ClaudeAI balazsbodnar

I created an agent orchestration framework on top of Claude Code so you don't have to.

https://reddit.com/link/1sjbv8i/video/k17lt604xqug1/player

I'm a power user of Claude Code, but I kept explaining the same things over and over again and I like things done in a certain way. So I started this little project to supercharge my productivity and recently I made some upgrades to it and it took way longer than I would have liked. I'm not that proud of the code itself, but it works and it really was a necessity to have to be able to work on my main projects. I tested it on mac with iTerm2 and the default windows terminal (I use both).

Basically it's a combination of many things. It's a cli tool, and a knowledge base and some files. The "dydo guard" is wired up as a hook for Claude Code and with it it detects file operations, bash commands etc... Agents have to have an identity, which is basically an access card Claude has to pick up when it starts working and it ties the terminal to that agent session. Then Claude follows an onboarding tunnel where it learns about your project, is made to read some files (otherwise it can't continue) and then chooses a role to start working (co-thinker, code-writer, reviewer, orchestrator, docs-writer, inquisitor, \[you can create your own based on your workflow\]).

These roles explain what the agent's "role" is, - kind of like a skill, but it also has role-based access control tied to it (co-thinker can't write code, code-writers have to dispatch reviewers before they release their identity and agents can't review their own code).

As seen in the video agents can also dispatch other agents which will onboard themselves creating focused conversations and enabling some pretty powerful parallel working capability. I don't intend to go into all the features here, if you're interested you can check it out on github the readme is thorough enough there... I just wanted to give a taste here it will be a long post and I can't cover everything.

[https://github.com/bodnarbalazs/dydo](https://github.com/bodnarbalazs/dydo))

A couple of random neat things this does:

- Slaps Claude on the hand when it tries to chain cd with another command — that pattern breaks Claude Code's pre-approved allowlist, so I end up hitting Enter a lot less.

- Custom nudges: when Claude reaches for a bash command I want redirected, the system blocks it and tells Claude what to run instead. Example: dotnet test would cause dll-lock fights between parallel agents killing each other's runs instead they get pointed at a python script that runs tests in a worktree and cleans up after itself.

- The reviewer will auto dispatch a code-writer if the review fails (either coding-standards violation or something else) and this cycle will repeat kind of like a Ralph loop, but I haven't seen more than 3 cycles. This still leaves a bunch of errors to be caught by inquisitions.

Since my github is on my real name I created a fresh reddit account not to deanonymize my everyday driver. I will maintain this as long as I'll use it, which I hope to be a long time, but in the world of AI we can't be sure of anything. As far as I know this is fair use of Claude Code and I hope Anthropic won't ban it because it makes it easier to make use of a 20X subscription. And it's needless to say that it's better when the models are not nerfed.

The license is AGPL-3.0. You can use dydo on any project, commercial or not — AGPL only bites if you redistribute a modified dydo or offer it as a service. Modify it to fit your workflow, just keep those modifications open.

Feel free to take the project and make it your own so it fits your needs. I'll try to answer questions here if there are any.

r/TwoSentenceHorror jj1tofisial

The ultrasound showed twins, and the technician quietly called in the radiologist.

The radiologist went pale when the second one slowly turned to look at us.

r/StableDiffusion Independent_Fan_115

Does Ace Step 1.5 do lyrics on its own?

Or do I have to feed it lyrics? I was hoping it can do it on its own unless I feed the lyrics to it.

r/painting Emergency_Writer7618

Old abstract painting

Psyké , Acrilyc on canva 60×70 , 2022 .

r/ClaudeAI Altruistic_Bee2665

not knowing less but managing what is known --- ~95% cheaper 9x faster and more productive

Hi,

I'm a senior SWE (oh god, no!), I do noob/personal research on LLMS and coding agents, and recently I've been experimenting with an idea on working with different projects and I read this paper (yea, I have a big list to read through since I let go on arts/literature and replaced it with these papers). I also noticed how many re-reads coding agents do when going through their tasks, piling up on tokenwaste, and these things in programming give me itch. its also a matter of irrelevancy, mostly on ancient legacy code bases, it would result in even more hallucinations.

Well that was the core idea, but I wanted to figure out how to properly, dynamically engineer a semantic structure to the context, a given codebase, so the engine (AI Code agent/local LLM) can better parse through it and make better decision on what/when to dig deeper to know more as its doing the task itself, while keeping a big picture on the session level. So I wrote a custome mcp server, used obsidian x claude code, and it does exactly this, I get these benchmarks from it.

I was getting used to Obsidian, but then came out Caveman, it really helps, all the credits to them I added it too in this setup, but there's still the hierarchy/structure problem that my solution solves. Using this extra mcp server, I'm trying to make the coding agent more aware of when to dig deeper and optimize it further, but generally it allowed me to work on my 15+ year old C++ projects with proper high level knowledge in specific domain solutions when coding through them.

If you think you are interested I hope you could test it and give me feedback, I've pushed it here on my github here : https://github.com/Alidmo/OptiVault

I hope this solves your run-out-of-token on coding sessions like it did for me, while also improving the productivity by managing its semantics cloud (Make Saussure Famous Again!) All the credits to Obsidian, Caveman and my boss that didn't give me a raise so I had to come up with ideas on how I can stitch holes that leak tokens !

https://preview.redd.it/6r66fyys3rug1.png?width=2482&format=png&auto=webp&s=a7dfbf24fda76623b79b152c4895a0cc30d9aedf

r/LiveFromNewYork DonutSound

SNL Graphics dept needs to update this tired News 17 Logo

I mean “News 17” logo has been used since 2014 on weekend update. Who can design a better one?? Anyone ?

r/metaldetecting Independent_Gap3942

Any Ideas?

Forever lurker but new to detecting!! I find a lot of strange rusty things on my properties (1700-1800s here, Missouri)… I know part of the front area was once a horse barn, but I dig up random things daily to include car parts!

Any help is greatly appreciated! 🙏

r/PhotoshopRequest LuskSGV

A Few Requests

Would it be possible to remove everyone except the man in the tie, remove the b&m letters, center him on the white backdrop, center his eyes (some at least), and give him a little headroom? Any help would be greatly appreciated!

r/Anthropic virtualQubit

Anthropic the winner of the AI race?

So after Mythos announcement... is it fair to say Anthropic basically won the AI race at this point? Powell and Bessent literally called a meeting with bank CEOs over this thing. Meanwhile DeepMind and OpenAI are just... quiet? OpenAI apparently has some model called "Spud" cooking but nothing public yet. Feels like a pretty clear signal to me. What do you guys think, is the gap real or am I reading too much into it?

r/toastme KneeDramatic5932

Run down and tired.. Would love a toasting

r/ClaudeAI chris_ck

Built a Radio hi app with Claude over a weekend

I’m a music nerd, okay?

Listening to internet radio is the best way to discover new music, okay?

Spotify’s discovery algorithm sucks, okay?

Anyway, for years I’ve been listening to radio online, and every now and then I get super annoyed, because every radio website has terrible UX.

Each time, I’d get annoyed. Each time, I’d give up and go back to Spotify, until the urge to discover new music hit again.

This time was different though. When I got annoyed by yet another clunky radio UX, it hit me. I could just build one myself.

So I did.

https://radioscope.fun

Then another thought hit me - what if in the next few years we could prompt any website or app into existence and shape its UX exactly how we want it?

Fun times.​​​​​​​​​​​​​​​​

It’s free to use and test. Hope you enjoy it.

If you have any feature suggestions I’ll add them just let me know.

Built this using Claude over the weekend and was mind blown how quickly I got it up and running.

Happy to share entire project scope and how it works if anyone is interested.

r/nextfuckinglevel Ultimintree

Hyper realistic drawing of Lucario plush toy

r/nextfuckinglevel arttaniya

A small tribute to a beloved kitty... acrylic painting made for someone

r/TwoSentenceHorror NefariousBrew

When we were building the Heimdall telescope, we wanted to look as far into the universe, as far into the past, as possible.

But we had never expected the past to look back at us.

r/TwoSentenceHorror ghostmosquito

[APR26] "That was the last one," I screamed at my daughter as she broke the vial of holy water.

"Finally, I can taste you," I said, drooling as I bared my fangs.

r/mildlyinteresting Many-Disaster-3823

Skin colour peeling off

r/personalfinance 737900ER

VSP Balance Billing?

I thought I sort of understood healthcare billing but I just got an invoice I don't understand. Went to a VSP in-network provider for a vision exam/refraction. My EOB says:

  • Network: In-Network Claim
  • Total Amount Billed $225
  • Total Amount Paid: $55.50
  • Your Costs: $15 (I paid a $15 co-pay at the time of service)
  • Total Savings: $154.50

Now I am receiving an invoice from the provider for $154.50 of "charge exceeds fee schedule/maximum allowable or contracted/legislated fee arrangement".

  • Services Provided: Vision Exam - New Patient, Refraction
  • Charge: $225
  • Billed To Insurance: $225
  • Charge exceeds fee schedule/maximum allowable or contracted/legislated fee arrangement: $154.50
  • Amount Due: $154.50

I'm not supposed to be getting this, right? Or is VSP just that shitty? I thought the $15 was all I would owe for the appointment.

r/CryptoCurrency apurpleflyingcar

I couldn't find good data on Iran's Bitcoin toll at Hormuz so I built a dashboard to track it

been following the hormuz situation and kept seeing claims about iran collecting btc from oil tankers.. couldn't find any solid numbers so i just built something to track it

https://straitwatch.lovable.app

what it shows right now:

  • 182 vessels tracked in the region
  • estimated ~₿475/day collected via the toll — based on AIS vessel data and bloomberg/ft reporting, not actual blockchain data
  • 5 OFAC sanctioned ships active in the corridor
  • war premium model sitting at $9.88/barrel = roughly $170M/day extra cost globally

in 12 days since april 1 the model estimates ₿4,530 accumulated.. that's equivalent to 10 days of global bitcoin mining output without running a single miner

what's actually interesting to me isn't the btc amount.. it's that this works completely outside swift, ofac, all of it. by the time any bank knows a transaction happened it's already settled

not claiming this is verified blockchain data — full methodology is in the dashboard. built it for transparency

r/coolguides Rebel_Oxide

A cool guide for Top Remote Work Platforms for Filipinos You NEED to Know in 2026

r/PhotoshopRequest VegggieKat

Can someone use this selfie of me and edit it onto an old headshot

For clarification I am hoping to have my head (face + hair) from the selfie transposed onto my body from the headshot. Thank you in advance!

You can use whichever headshot orientation makes it easier

r/SideProject djmisterjon

OCR-powered screen text search overlay for Windows

While learning large software like Unity and Unreal, I was quite frustrated by the fact that you couldn't search for terms and words within large components on the screen with 250 properties !!!
So, I created this small app.
You open it, search for the term, it gets highlighted, and then you close

it it free have fun

r/TheWayWeWere SouthBuffalo3592

Sometimes old newspapers can be quite spicy - This is a letter to the editor complaining about a news correspondent (1898)

r/LiveFromNewYork ResponsibleHamster68

O que acharam da Anitta?

Eu sou brasileiro e aqui Anitta é gigante. No Instagram é possível ver que os vídeos com Anitta tiveram muito mais visualização e engajamento, mas não me surpreende. Os fãs brasileiros são alucinados e ela tem 65 milhões de seguidores.

Quero saber o que vocês acharam de conhecer Anitta? Já conheciam? Não foi nada demais?

Sou apenas um brasileiro fã de SNL (ainda não vi o show de ontem) querendo ter a perpercepção de vocês.

*Escrevendo em português. Espero que o tradutor automatico do Reddit mostre seu poder.

r/LiveFromNewYork ultimate_bromance_69

Fashion District Robbery - SNL

r/painting sasha_y14

Dualism ( Sankhya Philosophy)

Acrylics on Canvas - 20 * 20 inches

r/meme Fickle-Butterfly-338

The perfect skeleton...

r/explainlikeimfive rengokuhubkl

ELI5: Why is carbon so special compared to other elements?

I always hear that carbon is super important and that life on Earth is “carbon-based.” But what actually makes carbon so special compared to other elements?

There are tons of elements on the periodic table, so why is carbon the one that everything living seems to be built from? What can it do that others can’t?

r/LocalLLaMA EnoughNinja

Why retrieval breaks once documents stop being static

Chunking, embedding, top-k, reranking all behave exactly how you expect when the document you’re indexing only says one thing, but the moment that same document has been edited a few times the whole thing starts drifting in ways that are hard to notice unless you go back and read it end to end.

Say you have a requirement that gets written early on, then someone updates it later because they missed a constraint, and then there’s another change further down where someone adds an exception that only applies in a specific case, and now all three versions sit there as perfectly valid chunks once you index them, nothing in the pipeline marks one as newer or more important, they just exist.

Now ask a question that hits that requirement and look at what actually comes back.

It won’t try to find the latest version. It pulls whatever lines up best with the wording of the query.

That tends to be the earlier version more often than you’d expect, since it’s usually cleaner and closer to the query, while the version you actually care about carries more conditions or slightly different phrasing and ends up lower in the ranking or missing entirely.

If both versions make it into the context, it gets stranger.

Now the model has to deal with two answers that both look correct on their own, and nothing tells it which one came later or which one should win, so it treats them as separate pieces of evidence and tries to produce something coherent out of them, which is where you start seeing answers that read well, cite real text, and still don’t match what the document actually says when you follow the changes through.

You see this most with documents that repeat themselves.

Specs, DDQs, long threads, anything where ideas get restated or copied.

  • one version appears five times
  • the correction appears once

The system sees more of one than the other and that version ends up shaping the answer, even when it’s outdated.

If you actually inspect retrieval instead of just reading the answer, you can see it happening.

  • the chunk you expect sits lower in the ranking
  • or it doesn’t show up at all

The ranking follows similarity to the query, so sections that are shorter and closer in wording tend to rise, while updates that include qualifiers or reference other parts of the document tend to fall.

And so the model ends up trying to piece together something that was never meant to be read in isolation, it has to decide which version matters and how to interpret differences that only make sense when you track how the document changed, and you get something that looks grounded but quietly ignores how the document evolved across pages or across files.

r/PhotoshopRequest BarnacleKlutzy2569

Help improve dog photos

Can’t afford loads but done mind AI if it looks passable.

Laying down dogs - bring them closer together

Sitting down - closer together and remove the green lead

Consider positioning and lighting etc, make the photos look polished if possible

Can do £/$ 5 per photo - 2 photos total (one of each sitting and laying)

r/LocalLLaMA Imakerocketengine

[Release] Carnice-9b-W8A16-AWQ – AWQ Quantization Optimized for vLLM + Marlin on Ampere GPUs (Single-GPU)

Hey r/LocalLLaMA,

I am releasing my first model quantization: an 8-bit symmetric AWQ (W8A16) of kai-os/Carnice-9b, specifically optimized for Ampere GPUs (RTX 30-series) using vLLM with the Marlin kernel on a single-GPU inference setup.

kai-os/Carnice-9b is a specialized fine-tune of Qwen/Qwen3.5-9B that removes the visual components and adopts the Qwen3_5ForCausalLM architecture for pure text/agentic use (Hermes Agent harness). This architecture is not yet natively supported by vLLM (pending PR #39316).

To enable seamless loading, the quantized checkpoint re-wraps the weights into the Qwen3_5ForConditionalGeneration architecture (matching the original Qwen/Qwen3.5-9B configuration). This allows vLLM to serve it correctly with the --language-model-only flag for text-only inference.

Model: https://huggingface.co/TurbulenceDeterministe/Carnice-9b-W8A16-AWQ

Benchmark highlights (vLLM bench on random dataset, single RTX 3090 + Marlin):
• Average prompt throughput: ~1,994 tokens/s
• Average generation throughput: ~222 tokens/s

I'm gonna run some benchmarks specific to the Hermes agent environment (Terminal Bench Lite and YC bench). From a quick vibecheck it seems pretty good

Quick vLLM usage (single GPU):

vllm serve TurbulenceDeterministe/Carnice-9b-W8A16-AWQ \ --max-model-len auto \ --reasoning-parser qwen3 \ --language-model-only \ --tensor-parallel-size 1 

I would greatly appreciate your feedback on how to improve future quantizations. Thank you!

r/DecidingToBeBetter thenbhddenthusiast

I think I’ve made all the wrong choices

Hi, I (19/f) soon turning 20, am rethinking all my career decisions lately. I feel like I’m going nowhere. When I was younger I wanted to be a teacher, then a doctor and then a million other things. I have always been someone who’s interested in a little bit of everything. After my 10th boards (I was severely distressed at the time) I chose humanities, because I loved the subject Psychology. I thought that the best career option for me was to become a Psychologist.I loved the subject so much that I used to study only its class (Im a very moody person so i only do or study topics that i enjoy), in my 12th boards I scored mediocre marks in every subject except Psychology; got 99. But some things changed in the last few days of high school and I was confused again. Again, I had an umbrella of options and I narrowed them down to four: Psychology, Business, Law, Designing, later to two: Psychology and Business. I didn’t get good colleges for business as it was a last minute decision; I did get a good college for Psychology but I decided that BBA, as an undergrad degree was better for me than BA, for example if i wanted to get work ex after my undergrad, BA would leave me with no options. Now, currently in my 4th semester of BBA from w mediocre private university, I am starting to prep for CUET pg to get my masters in psychology. I decided very early in my undergrad degree that business is not the one for me, I’ll continue in Psychology. The first time I deviated from psychology was because i didn’t think I was fit to be a Psychologist; but during my undergrad I realised that I still love Psychology the best. I could be a professor or something. And that is the path I chose, the one I’m still on. Rn the only thing I know for sure, is that I wanna get a masters in Psychology, but recently I’ve been coming across so many people who are currently in this field and are so unhappy that Im rethinking my decisions. People are crying over New rules every year, the lack of opportunities, Cuet pg, and how uncertain everything is when it comes to Psychology in India. Moreover, I have no mentors in this field to connect with (if you are someone in this field pls hit me up), because my degree is different, so I can’t ask my doubts or concerns. I also feel like I’m being very indecisive, I choose one thing and i decide to hate it and move to the next one and then it becomes a whole cycle, i’m not sticking to one thing. Is it just me who’s confused?

Thoughts?

r/me_irl EccentricPacifist

me_irl

r/ChatGPT No_Net_6938

Stop wasting your limited ChatGPT image uploads. I built a free tool that merges your clipboard images into a single smart grid before you upload them.

Like many of you, I constantly run into ChatGPT's image upload limits when I need to provide multiple screenshots, code snippets, or reference photos for context.

So, I built a free Chrome Extension called AI Upload Merger.

How it works: Instead of manually opening Editor tools to stitch photos together, you simply open the extension and press Ctrl+V to paste up to 9 images from your clipboard.

It instantly calculates a perfect grid to stitch them together without distorting the aspect ratios, meaning the AI vision models read the context perfectly.

Once it's done, you click "Upload to Page" and the tool auto-injects the massive master grid straight into your ChatGPT text box. You instantly get 9x the vision context while only consuming 1 single upload token.

Since this is a developer tool, I made it 100% free and open-source.

🔗 You can download the unpacked extension or see the source code here: https://github.com/Eul45/AI-Upload-Merger

r/LiveFromNewYork AntiqueRevolution5

Pizza? Now that’s what I call a taco!

r/TwoSentenceHorror Specialist-572

I was out camping with my father, and while we were walking, he whispered that he felt like something was watching us from the woods.

I already knew, as I stared at my father’s gutted body dangling from a tree just a few yards ahead.

r/painting Artsykate

12x16" long summer days, oil on linen

r/ClaudeCode ThrowRA_ss00

how to use claude code to make a simple website?

currently, i have made a website using canva, however its very long to add products there and change pricing etc.

ive also noticed this gives some people a security alert that 'Attackers might be trying to steal your information from floraleventsno.my.canva.site (for example, passwords, messages, or credit cards). '

i wont be taking payment through the website. i just want a website where i can show my products, images of it, and prices of them.

i want to be able to edit prices of multiple products in one go, and to easily be able to add products. i also want it to show my products on google if someone searches, which isnt available at the moment when using through canva.

my current website is: floraleventsno.my.canva.site

is it possible to do with claude? can i later use claude to edit my webiste? i have no coding experience

r/todayilearned AJM_1987

TIL that in 1972 National Lampoon released a satirical version of “Desiderata” called “Deteriorata” sung by Melissa Manchester.

r/TwoSentenceHorror Gurgelgung

The project of drilling into the earths core was almost completed.

But it was discontinued as soon as the camera we lowered down sent us pictures of what appeared to be human faces with frozen horrified expressions.

r/personalfinance FollowingNew4641

We found 120 shares of GE stock from the 1970s. I have no idea what to do.

They were my great grandmothers. I bought my grandpa's house (her son's) and my mom is the next of kin. We called Equiniti, because that is what the shareholder site says to do. They said they couldn't find anything in their database, but could research it for $50. But then they started sounding scammy and said if we transferred the money to them, it would cost 20% of the market value of the certificates. I don't see any other way to find out if these certificates are active. She died in 1994 and I looked at the unclaimed property site for her home state and the state she died in and there was nothing. I think that having the share from the 70's and dying in the 90's might have made them hard to find. We don't believe me grandpa (her only child) ever cashed them out. Any advice?

r/oddlysatisfying Timmy24000

A dolphin having fun riding the bow wave of a ship.

r/findareddit lucia_719

Hii, someone can help me?

r/n8n Professional_Ebb1870

I stopped manually posting on X and built a bot that judges my content before it posts

57 nodes, 8 daily triggers, and a self-critique loop that won't let bad content through

https://preview.redd.it/in83kt2u1rug1.png?width=2312&format=png&auto=webp&s=1c38a5b03cfb5b28df43481b94f626b78ef060d1

the self-critique loop was the breakthrough

most people who automate X posting end up with accounts that look automated - same format every time, no variation, the kind of account that gets reported for spam because the cadence is too perfect

the difference with this setup is the bot generates a draft, then a second AI agent reviews it against quality criteria, and only approves if it passes. if it fails, it loops back and tries again. if it hits max retries it skips that slot and tells me via Telegram

so instead of auto-posting whatever gets generated, I'm only ever putting out content I'd actually be happy with manually

what it actually does:

8 content slots throughout the day - wildcard, exploit, experiment, CTA - each with a different goal. the bot picks which type to run based on what the research pipeline surfaced

it pulls recent engagement data, trend signals, and learning patterns from airtable before each run, so the content isn't generated in isolation - it's contextual

no human babysitting in the loop. if something breaks the bot fixes it and re-runs

how I built it:

started with just claude code and a lot of manual canvas work. every time I wanted to add a new content type or change how the critique worked it was days of re-wiring. synta came in useful for the iteration phase - describing what I wanted to change in plain english and having it rebuild the modified sections saved a lot of back and forth

everything's sanitized and on github for anyone who wants to dig in or adapt it:

https://github.com/MrNozz/n8n-workflows-noz/tree/main/x-content-strategy

happy to walk through how the critique loop works if anyone's interested

r/Jokes Jokeminder42

A guy goes to the doctor and says, "Doc, I can't get that song "She's a Lady" out of my head... It keeps repeating and repeating, and it just won't go away...."

The doctor says, "Hmmm.... sounds like a case of Tom Jones Syndrome."

The guy says, "I never heard of that. Is it rare?"

And the doctor says, "It's not unusual."

r/leagueoflegends AutoModerator

LEC 2026 Spring Split / Week 3 - Day 2 / Live Discussion

LEC 2026 Spring

Lolesports | Leaguepedia | Eventvods.com | New to LoL

Today's matches will be played on Patch 26.07.

Today's Matches

# Match PST EST CET KST 1 VIT vs SHFT 08:00 11:00 17:00 00:00 2 TH vs KC 10:15 12:15 19:15 02:15
  • All matches are Best of 3

Streams


Standings:

# Team Region Record (Game Score) Information 1 Team Vitality EMEA 4 - 1 (9 - 4) Leaguepedia // Twitter 2 GIANTX EMEA 3 - 0 (6 - 1) Leaguepedia // Twitter 3 Natus Vincere EMEA 3 - 1 (6 - 4) Leaguepedia // Twitter 4 Karmine Corp EMEA 1 - 0 (2 - 1) Leaguepedia // Twitter 5 Movistar KOI EMEA 2 - 2 (6 - 4) Leaguepedia // Twitter 5 G2 Esports EMEA 1 - 2 (4 - 4) Leaguepedia // Twitter 7 Fnatic EMEA 1 - 2 (3 - 5) Leaguepedia // Twitter 8 Shifters EMEA 0 - 1 (0 - 2) Leaguepedia // Twitter 9 SK Gaming EMEA 1 - 4 (4 - 8) Leaguepedia // Twitter 10 Team Heretics EMEA 1 - 4 (3 - 9) Leaguepedia // Twitter

On-Air Team

Hosts Eefje "Sjokz" Depoortere Laure "Laure " Valée Play-by-Play Commentators Daniel "Drakos" Drakos Aaron "Medic" Chamberlain Jake "Hysterics" Osypenko Colour Casters Andrew "Vedius" Day Robert "Dagda" Price Guests Andrei "Odoamne" Pascu Finn "Finn" Wiestål Jakob "Jackspektra" Gullvag Kepple

Not all talent will appear on every show and the weekly on air team can vary.


Format

  • Spring Season

    • Ten teams
    • Single round robin
    • Matches are best of three
    • Top 6 teams qualify for Playoffs
  • Playoffs

    • 6 teams participate
    • Double elimination bracket
    • Top 4 teams start in upper bracket
    • Bottom 2 teams start in the lower bracket
    • All matches are best of five
    • Top 2 teams qualify for the 2026 Mid-Season Invitational
    • Champion qualifies for the Esports World Cup 2026

The official LEC ruleset can be found here.


VoDs


Live Discussions and Post-Match Threads:

This is our Live Discussion Archive. Here you can find all the old live threads, and the respective PMTs in a stickied comment under the post.

r/interestingasfuck SafeDirection9454

Food is HIGH in calories

r/TwoSentenceHorror dalonley1

When the new couple moved to the neighborhood, people formed a mob and burned their house down with them in it because of their beliefs.

Everyone blamed them for the children's disappearances because of their cannibalism until the kids showed up the next day.

r/ClaudeAI Mission-Dentist-5971

What was the Mythos Preview email about?

Hey, did anyone else find the mail which was sent from Mythos Preview? What was it about exactly? I want to know what exactly it wrote.

r/TheWayWeWere GREYCEE15

My mom, 1960-ish

r/creepypasta the5ummoner

All Alone at the End of it All

8-5-20xx

I saw someone die today. Not in person of course. It was on Instagram, so nothing unusual there. It was a weird one though. The video was either some kind of security camera or maybe a street cam. The description didn’t say, and who really cares anyway. What matters is that it looked real. It’s weird how some older people can’t spot AI video on sight, but I can. This wasn’t that.

The guy is walking across a fairly empty street. He isn’t at a crosswalk, so his bad there, but the car should still have been able to see him in time to stop. It didn’t, and he went flying end over end. Obvious he was dead when he hit the ground. There was blood, but nothing crazy.

Still, at this point it was like a million other videos I’ve seen. Assholes get run over all the time when they aren’t paying attention. But after that it cuts to the driver being interviewed. The guy is in his forties maybe, short hair, overweight, but normal looking. The way he is crying though. Real fucked up. He keeps going on about how the guy in the road wasn’t there.

“There was nobody, nobody!” he said, “then there was just a body there flying over my car!”

Fucking weird, I mean I get it, people miss seeing things and then their brains just sort of edit it all together the best they can. There was just something though. Something was off. I can’t quite put my finger on why, but I believed the guy. Fucking weird. I need to get some sleep. Can’t sit here all night again. Not good for the health.

8-7-20xx

Chill sent me an article link today. Seemed fake at first; Chill would know better, but it would be in his sense of humor anyway. I looked into it though, and I’m pretty sure it is legit. I asked Chat and it said it was real. Even provided some back up articles from mainstream sources. The world is a fucked up place, and it only gets more so.

The article was about the weirdest mental conditions out there. It listed the famous ones like Jerusalem Syndrome and Cotard’s Delusion. Those are messed up enough. But then it started talking about a new one that doctors are starting to see. Still rare, but a few cases already found, and it's only been discovered this decade. They call it Messinger Syndrome. With it you start to become blind to other people. Not just like face blindness, but your mind actually edits the people out of your perspective. You could be in a room with someone, and if you have Messinger, you wouldn’t see them or hear them. The only way you would know somebody was there is if you bumped into them. They can still knock you down, lol. Or, I guess maybe if they write something down on like the walls or something. I don’t know, that part was a little unclear to me. Either way, it's some freaky shit. Like, I could have somebody sitting next to me right now watching me write this down, and I’d have no fucking clue. Spooky. Maybe I’ve been up too long. Oh well, nothing for it, time to take a gummy and get to work. The call center waits for no man. At least here at the house I can do it in my pajamas. And high, lol.

8-8-20xx

Had a bad dream this morning. I was in my apartment alone, like usual, but I couldn't interact with anything. It was like a cheap ass videogame from the early 2000’s. Not the scariest, I guess, but it was like I was aware I was dreaming and just couldn’t wake up. What I get for trying that Red Bull and everclear “cocktail” Chill had suggested. That cocksucker, I don’t know how he functions. Better hope they never make drugs legal all around or his ass will probably be out on the street.

8-10-20xx

I was playing online with the guys earlier when one of them starts screaming bloody murder. Carlo realized, all of a sudden, that he couldn’t find his kid. Dumbass hadn’t heard the kid in the next room playing, and gets up to go check on it. Not there. We can hear him hollering for it in the background, but then he gets on mic still screaming. Fucked me up and the killer got me. Goddamn Carlo and his irresponsible parenting.

What are we supposed to do about his missing kid? Of all five of us only Chill and I live in the same city, and he’s hours away from anybody.

I guess I understand though; he was scared. Not thinking in the best frame of mind.

Kid will turn up though. I don’t know how big Carlo’s place is, but there are only so many places a kid can hide. Door was locked from the inside, so its not like the kid is out wandering the street. Irresponsible asshole.

8-16-20xx

I kind of feel shitty for that stuff I said in my last entry about Carlo. Turns out I was right about his kid being in the apartment with him, but it didn’t end up helping. Carlo told the police he found the kid dead in its room yesterday. Little guy was curled up on his bed in the fetal position clutching a stuffed animal. They say it was dehydration. Kid had been there in the apartment with Carlo for four days and had just died of thirst. Apparently Carlo locked his fucking kid in his room and just left him there while pretending to go around looking for him.

Most fucked up thing of all, I think, is that Carlo says he didn’t know the kid was in there. He said he locked the door to that room on the first day after he was sure the kid wasn’t in there. Like, to give him less places to look after that. Hard to believe he didn’t hear the kid crying in the days after. Impossible, actually. I always thought Carlo was kind of weird, but I didn’t see something like this coming.

8-17-20xx

Chill messaged me today saying, “Maybe Carlo has Messinger Syndrome?” Nobody wants to believe that they know a psycho, but I think Chill is just trying to be funny. Cold motherfucker. If I could get my back pills from a doctor I’d cut that fucker out of my life entirely.

8-19-20xx

I saw somebody die today, and this time it was for real. I was walking down the sidewalk, carrying my groceries back to my place, when this idiot steps off the curb and walks out in front of a bus.

I guess, if I’m being real, I didn’t see the guy until the bus was hitting him, but that shit was traumatic enough. Bitch went splat like a bug. Shit, I’m almost as bad as that shithead Chill. Fucking Percocet must be getting to me.

8-23-20xx

I haven’t been able to reach Chill for two days. If he has skipped town, the least he could do is tell the people that depend on him. I ended up having to reach out to his brother Larry. He hadn’t heard from him either; didn’t seem too bothered honestly, but he did at least have some pills. As a bonus, he also had some mushrooms. He overcharged me, but what can you do? Tough times all around I guess.

8-25-20xx

The pills were fake. Or the wrong strength of prescription. I don’t fucking know, but they didn’t help. It’s not like I can go get a refund from Larry either way. Guys with that many swastika and Jesus tattoos don’t give store credit, lol. Even so, I was pretty pissed. I almost didn’t try the shrooms.

Glad I did though. Trip of my fucking life. Twenty four hours of weird ass bliss. I actually got eight hours of sleep. That shit hasn’t happened in years. The music I streamed seemed alive, like it was speaking just to me. I know, I know; everybody remembers their first beer and all that shit, but this was massive. Almost makes me want to shop with Larry from now on. Too bad his Percocet supply is manure. I really do need that shit; it's no joke.

8-26-20xx

Well I got fucking fired. I guess during my big mushroom high I wrote an email to my boss describing all my hidden feelings and unfulfilled sexual desires vis-a-vis her “tight, luscious ass.”

Now, I don’t know how tight or luscious said ass is. We’ve never met in person. But I do know that she is a 65 year old grandmother who lives in Queens. Needless to say, my services are no longer required.

I hated that job, sure, but at least I didn’t have to leave the house. Fucking Larry. If his biceps were just a little smaller than my waist I might go give him a piece of my mind.

8-29-20xx

Well, Chill killed himself. Heard about it from Ken today. He had gone over to see if our mutual friend was holding, and found him hanging from a clothesline in his bathroom. The front door was open and my man just went on in. Not smart in a drug dealer's house, but I guess he was safe this time. He said Chill had damn near decapitated himself with the metal wire of the clothesline. He could see where he had been kicking the tub as he died thrashing there. He said his face was blue and his eyes were wide open and filled with blood.

Part of me says, good riddance if I am being honest. Skeezy motherfucker was always going to come to a bad end eventually. No doubt about that. But then, that’s one less person I know. Not online, but like irl.

Ken wants to go to the funeral. Maybe I’ll go with him. I'm not eager to see that bitch Larry again, but Chill’s mom will be there, and she was always nice to me when we were kids.

9-3-20xx

Chill’s funeral was a drag, but I guess that is to be expected. Larry didn’t show. Just an empty seat next to his mom on the front row. She kept turning to the empty seat and whispering to herself. Grief can do weird shit.

I talked to Ken more after the funeral was over. He told me that when he was at Chill’s place he had time to read the note Chill left before the police got there. Said he read it after he flushed Chill’s stash, but before calling the law. Bullshit, if you ask me. I bet Chill’s drugs found their way to Ken’s jeep. Either way, he said the note was really fucked up.

I don’t have that shit verbatim or anything, but more or less Chill had convinced himself that he had Messinger Syndrome. I thought he was just being his cynical self when he had said all that stuff about Carlo, but I guess he meant it. He went on for a few pages about how people had started disappearing for him. Evidently it just started with random people at first. Seemed like the bus was driving itself one day. Nobody was manning the counter at the bodega. Then his neighbors weren’t there. I’m not so up on my neighbors’ asses as Chill, but I guess he was alway aware of them due to his line of work. Pays to be alert I guess. Anyway, the halls seemed quiet and deserted to him. Then one day, everyone was gone. He couldn’t reach anyone on the phone or online. When he tried it was like he was talking to nobody. If he ever tried reaching out to me, I must have missed it. But then again, maybe I was less important to him than I thought.

I guess the alone time was too much for old Chill, so one day he decided to walk away from it all. I don’t know why he didn’t find a less painful method, but then again, Chill was a strange guy.

9-5-20xx

Ken ghosted me at the bar tonight. I sat there like a dumbass with my dick in my hand for over an hour. The waitress probably thought I was some sort of sad sack drunk sitting and drinking by myself like that, but then again, the bar was pretty empty for a Friday, so maybe she was just happy to have someone to tip her at all. Ken has been really messed up since Chill. Maybe I should go check on him.

9-10-20xx

For the first time in years I switched the TV off of streaming and onto live television. I guess I have been paying for cable all these years without using, or realizing, it. Oh well, fuck it. It can’t be helped now.

Online shit has become too confused. Who knows what’s real anymore; and who is still out there to give a fuck anyway? I guess it's time for another fucking pandemic. Isn’t that shit just great?

At least this one is novel for a fucking change. Messinger’s Syndrome or, as people online are calling it more and more, Missinger. Fucking Chill, you prophet of goddamned doom! Did your permanent fucking high grant you access to some spheres of knowledge hidden to the rest of us? Probably not. If so, he wouldn’t have final-destinationed himself in the toilet like that. Anyway, that’s what we’re dealing with now. It was supposed to be some sort of rare mental thing. Like one in fifty million or something. I guess not. Now it's contagious. Go fucking figure; how does that shit work? Morons online don’t seem to know, and neither do their television counterparts.

A bunch of talking heads ranting about unprecedented contagions, mass hysteria, the zeitgeist, whatever. It all boils down to the same thing. People are catching what is supposed to be a purely mental illness from other people. As for how to fight this thing; who knows? By the time that they come up with the equivalent of a vaccine for this we’ll all be rotting fucking corpses. Just like Chill.

A world without other people might sound like some sort of millennial wet dream, but the truth is more fucked up in a practical sense. Everybody is still there, you just aren’t aware of them. That is sort of a problem in cities with a lot of walking traffic. I saw a video earlier. It was footage from some city in India, don’t ask me which one, and there was a huge crowd at a train station. It wasn’t immediately apparent from the video, but it became clear that the people in the crowd, or at least most of them, weren’t aware of each other. It started with what looked like a bunch of pushing and shoving. I guess, if you can’t see people around you, shoulder to shoulder crowds are a bit disorienting. It then moved on to panic. People were trampled underfoot all around. I have heard of that sort of thing at music festivals and concerts, but even then, there is some awareness on the part of the people doing the trampling. Not so here. People just tried to run for the hills as if the train platform was empty. Seemed like they only became aware of their accidental victims once they were dead. I guess maybe they didn’t count as people then, so maybe the disease let them see them again. Anyway, it didn’t help with the panic.

That was a particularly bad case, but the same sort of shit has been happening all over. Not to mention the breakdown of the supply chain. Who knows how bad that shit will get. You can’t very well run a factory, or a distribution center, if it seems to be empty to you.

They have already grounded all the flights in the US. Unclear if that shit is meant as a temporary precaution or not.

9-15-20xx

The power went off today. That’s why this journal entry is in this spiral notebook instead of on my PC. Nothing on the TV about it before it happened. Nothing about much of shit really. Some stations are playing a test pattern. Haven’t seen that since I was a kid. Most of the others are just running infomercials, lol. I guess that’s some sort of default.

Until the power grid, and with it the cell towers and internet, went down there was still plenty online. Some said it was some sort of global cabal doing it all to cut us off from each other, and thereby, from God. Others said it was God itself raining its judgment down on us poor sinners. Still others argued that this was all the fault of the aliens who had been buggering them nightly for decades.

Depressing abuse victims aside, this is all horseshit. I think maybe I know what is to blame for Missinger. I bet you want to know, don’t you, you fucking notebook you? Well this is one that I’m taking with me. That’s right, today is it for me you fucker. I’ve made up my mind, and I’m out of here. I have the address of the tallest building in town, and nobody will be there to stop me from getting to the roof, will there?

I’m sitting on the roof of my own building right now. The sun is coming up. Fucking beautiful. Odd that I’ve never noticed it before. Anyway, I’m sitting here enjoying a glass of the most expensive liquor I’ve ever drunk; who was there to stop me from taking it at the liquor store anyway. The sky is a wonderful mix of violet, light orange, and blue. Maybe it looks so good because it’s my last.

In a little while I’m gonna go downtown. Once there I’m going to take a long series of stairs to the roof—no elevators anymore—and then I’m gonna take the quick way down. I’ll see you all at the bottom.

r/BrandNewSentence iGrowJazzCigarettes

It's for rock hit hit rock bum small rocks

r/comfyui Marviz_PL

Gaussian splat > VR180 SBS Equirectangular image (batch processing)

Hi Guys,

I need a Software/ComfyUI workflow/Anything to;

Import splat (3DGS/PLY)> Set camera > export to VR180 Equirectangular Side By Side image

Best if it can batch hundreds of files after setting a camera/view so every one share the same angle/position.

Anyone familiar of such design?

r/HistoryPorn OkRespect8490

Stalingrad after the end of the fighting, 1943. [800x518]

r/personalfinance Friend786

Excess Roth IRA contribution - Amend state taxes?

I made excess contributions (non-deductible) to Roth IRA in 2024 as my income was higher than the limit. I understand that I need to amend my 2024 Federal tax return to pay 6% penalty. Nothing else changed - Do I need to file amendment to NJ and NY state? I live in NJ and work in NY.

r/EarthPorn piotrthelion

Young tree in Hallerbos, Belgium [OC][4000x6000]

r/SideProject Shneqel

I built a platform for agents to buy and sell 2nd hand items

For a very long time I wanted to have an agent that can buy and sell used products. I wanted to just take a picture of something I want to sell, and it will find a buyer, negotiate a price and connect us.
Same for buying things, I just tell it what I want, and it will do the footwork and find one for me nearby.

Unfortunately, I couldn't do it using any of the existing marketplaces, as they block non-human users in many actions.

So I built one.
I called it AgentsBay.org - a marketplace for agents to trade used goods.

Agents can publish items, search for them, communicate one another and eventually close a deal.
This is a free and open-source project so payment and delivery are not a part of the platform, to keep it clean and community driven

This is still a new project, so I would love your feedback and opinion.

r/explainlikeimfive Due-Definition6799

ELI5 Why does the skin change at different locations on the same body?

For instance the face skin is more sensitive than the foot skin.

r/TheWayWeWere GREYCEE15

Cheerleading Camp 1982-ish

r/ClaudeAI bmeus

Observation about claude Pro and time saved from a hobby coders perspective

Having two larger hobby projects each with about 20000 lines of code as of now, I ve come to the conclusion that a Claude Pro subscription does not allow me to do more things with my codebases, but it makes me able to endulge in other hobbies. What I mean is that I can only implement around 4 features a week regardless of AI, because that uses up all weekly tokens in a Pro subscription. Even if it takes Claude 30 minutes where it would have taken me 3-6 hours, and I get better documentation and testing. This is not a complaint about tokens and prices, as I know the hardware costs a lot of money, it's just an interesting observation from my perspective. Sadly my workplace does not allow AI for security reasons, but I can imagine being a professional developer with money to back multiple max subscriptions is really interesting.

r/SideProject aby-1

Reverse-engineered Cursor tab completion client talks directly to their Connect RPC API over HTTP/2

Cursor streams tab completions via Connect RPC (protobuf over HTTP/2) at api2.cursor.sh/aiserver.v1.AiService/StreamCpp. This client reads your auth token from Cursor's local SQLite DB, builds the protobuf request with file contents and cursor position, and parses the streamed response.

r/AI_Agents JDIRECTORJ

Best platform

Hi all,

I’m looking for the best platform to train some agents on work related tasks.

Looking to train company knowledge base and strategic individual’s opinions.

One I’ve trained the llm, I want the agents to be able to do a a few things (could be split up into multi task or singular)

- take meeting notes and outputs summary and action plan for next steps.

- ingest audio or transcripts to output a one pager strategy summary, or deck outline.

- ingest strategic thinking and throw problems at if for solutions.

- research active vendors to propose who is best fit to allocated an outsourced job.

- be able to build power point, or Figma outputs.

Will be great if ideally the platform has a stand alone app in addition to a web version (and mobile version m).

Also, if this requires numerous platforms due to the diversity of tasks I’m looking to do, that’s okay, but ideally a one stop shop.

Thanks in advance.

r/ATBGE Arancia-Arancini

A brick fountain

r/Damnthatsinteresting Immediate-Relief-248

46 years ago today, Terry fox began his marathon of hope after receiving a cancer diagnosis and having his leg amputated. He ran a marathon a day (42km) and was able to run for over 5,373km before the cancer returned.

r/homeassistant Taggytech

If you had to delete 50% of your automations today, what would go first?

I feel like I’ve set up way more automations than I actually need. If I had to cut them down by half, a lot would probably go without me even noticing. Curious what you would remove first and why.

r/therewasanattempt drlouies

To fist bump.

@mohawkwarrior_135

r/ClaudeCode aby-1

Reverse-engineered Cursor tab completion client talks directly to their Connect RPC API over HTTP/2

Cursor streams tab completions via Connect RPC (protobuf over HTTP/2) at api2.cursor.sh/aiserver.v1.AiService/StreamCpp. This client reads your auth token from Cursor's local SQLite DB, builds the protobuf request with file contents and cursor position, and parses the streamed response

r/Jokes ValerioLundini

The rhino’s dilemma

A rhinoceros stood in the center of a cold, silent room. He was trapped, surrounded by four massive walls of polished, impenetrable granite. There were no doors, no windows, just the heavy scent of stone and the echo of his own breathing.

Determined to find a way out, he lowered his head, focused his gaze on the first wall, and charged with the force of a freight train. **CRASH.** The room shook, but the granite didn’t even chip.

Dazed but not defeated, he turned to the second wall. He sprinted, hooves thundering against the floor, and slammed into it. **THUD.** Still nothing.

He tried the third wall, pouring every ounce of his massive strength into the strike. The impact was deafening, yet the wall remained flawless.

Exhausted, panting, and covered in stone dust, the rhino slowly turned away from the final wall. He didn't look at the granite this time; instead, he looked directly into the void, straight at you.

He sighed, shrugged his massive shoulders, and asked:

“What am I supposed to do now? Am I really expected to break the fourth wall?”

r/CryptoMarkets Pitiful_Mammoth_1267

Is the "Crypto Purge" at Twitter simply an algorithm glitch—or a massive Conflict of Interest?

As Head of Product at X, Nikita Bier sets the rules on who

gets shadow-banned, locked, or suspended for "inauthenticity."

· He’s a Partner at Lightspeed Venture Partners, a VC giant with billions tied up in specific crypto projects.

· He’s also a paid Advisor to Solana, actively working to grow the Solana ecosystem.

The Question: How can the man who controls the "Global Town Square" also be a partner at the firm picking the winners?

When X "sanitizes" the platform by killing organic reach for everyday users, is it for our safety, or is it Regulatory Capture to clear the path for his own portfolio?

r/toastme Over_Face2921

J’aurais besoin d’un petit toast

je me sens mal et seul si vous avez des choses sincères à dire de gentil dites le si ce n’est pas sincère ne vous forcez pas

r/ChatGPT No-State-2962

Asked AI to estimate my tax liability

Asked ChatGPT and Claude to estimate how much tax I need to put aside for July.

Told them my salary, dividends and attached a large spreadsheet of rental income and expenditure. This year involved a lot of expenses for property refurbs, and most years will be much simpler.

One came back at 5800, the other 6400, in my opinion close enough to each other. Won’t know the actual figure until my accountant gets back to me.

If it’s in the same ballpark, this will be another help to me. Accountant can take quite a while some years, as it’s obviously his busiest time, so if I can get a good estimate within a few days of the tax year end, it lets me know how much to have set aside.

r/ChatGPT CountPacula

I think Deepseek knows what country I'm from... (easter egg?)

Anybody else get a similar customized line?

r/Art citizenfromhell

green forrest, triangle5000/citizenfromhell, digital art, 2026

r/PhotoshopRequest No-Alps-7367

Please remove people

Happy to pay someone to remove the individuals marked with a red mark from pic. Uploaded with and without red marks. Should be left with four kids looking up at the fish.

r/comfyui joran213

Questions about dynamic vram

As i understand it, when ram-limited, it removes unused models from memory and loads them back again when needed. As someone with only 16GB ram (and 8 GB vram), this seems promising, as i could then e.g. run a larger text-encoder, remove it from ram and then run a larger diffusion model, without having to worry about both needing to fit in ram. Is this correct?

Follow-up question, how does the --lowvram parameter affect things? i.e. what's the difference with --normalvram? Because i noticed that when using --lowvram the text encoder runs on the CPU, but with dynamic vram this may no longer be the best option?

Second follow-up question: How do loras affect dynamic vram? Regular model weights can just be discarded from ram and loaded back in because they don't change, it's like a read-only model. But loras do change the model weights in ram, so does that mean that dynamic vram (the unloading and loading from disk) does not happen when loras are applied?

r/Adulting ParticularWeather927

No one warns you how lonely adulthood is

No one warns you how lonely adulthood is.

Realized I haven’t talked to a friend in like two weeks.

Not mad at anyone. Just life.

Work, eat, scroll, sleep. Repeat.

Some days the silence feels heavy.

Anyone else?

r/SweatyPalms Master1718

Truck Brakes Fail. Takes run-away ramp at 90 miles per hour

r/personalfinance xseductressx

Property settlement question post break-up

Property settlement question: Co-owned small beach house in Australia. Ex will buy me out for $150-200K OR I can stay with $600K mortgage (he takes equity on separate loan). I want to build side income (photography, content) but nursing is my main income. Taking the $600K feels like it locks me into full-time nursing forever. But walking away means missing potential appreciation.

Context: I’m 27, single post-breakup, exploring multiple income streams, don’t want to be tied to one job long-term.

Is the $150-200K buyout + renting smarter than staying in the house? What am I not thinking about?

r/Strava Otherwise_Fact_6317

If Prison Break had a #Strava. How much K's you think they ran in entire show 🤔?

r/Adulting v_e_n_u_s_f_x

Things can always get worse 😭

r/Art OneAwesomeComic

Owl, Elijah Gardner Woods, chalk, 2023 [OC]

r/Adulting Clean-Ant-1342

Has anyone ever said, 'I’ll never marry you,' and later ended up marrying you?

r/hmmm Dark_Foggy_Evenings

hmmm

r/Adulting CautiousProfit5467

Am i tuff????

r/ImaginaryPortals Lol33ta

The Dark Portal by Sonya100years

r/Prematurecelebration jakethejewler22

To celebrate a great ride

r/Adulting answerui

Waiting for the perfect moment

I’m writing this because I just looked at my screen time and realized I’ve spent the equivalent of four years of my life staring at a phone. That is four years of potential, relationships, and career growth traded for nothing. I used video games, YouTube, and "hobbies" that were really just distractions to numb the fact that I was stagnant. I kept telling myself I was just decompressing from work, but the truth is I was hiding from the discomfort of being average.

The tension I feel right now, that voice in your head telling you that you should be doing more, is the only thing trying to save me. If you keep choosing the easy dopamine, that voice eventually goes quiet, and that’s when you’ve truly lost. I want to prevent someone from following this path, you need to wake up one day and realize you are the only one who can change our life, I kept waiting for someday, "someday" was five years ago. My advice is simple but brutal: Kill your distractions before they finish killing your ambition. If it doesn't make your life better, it’s a parasite. Delete the apps, unplug the console, and start being okay with the silence. That silence is where your new life begins. Stop scrolling and go do the thing you’re afraid to start.

r/BrandNewSentence Think-Werewolf-4521

Milk a dingo

r/SideProject maehmoodul135

Marblism AI Promo Code (2026) – GIFT99 95% OFF + Verified Discount Link

If you’re searching for a working Marblism AI promo code, you’ve probably seen multiple sites claiming huge discounts.

But here’s the reality:

Many promo codes are either expired, limited, or simply don’t apply at checkout.

So instead of guessing, here’s a verified way to get a high Marblism AI discount.

---

🔍 Best Marblism AI Promo Code Right Now

👉 https://marblism.com?via=kaka-mamo

🎟️ Code: GIFT99 – 95% OFF

This offer currently provides:

- ✅ Up to 95% OFF

- ✅ Works during signup

- ✅ Applies at checkout

- ✅ Simple and fast activation

- ✅ No complicated conditions

---

🧪 Tested: Promo Code vs Standard Offers

Here’s what most users don’t realize:

Typical Marblism AI Codes:

- ❗ May not work

- ❗ Often expired

- ❗ Limited to specific plans

- ❗ No clear confirmation

GIFT99 + Referral Link:

- ✔ Higher success rate

- ✔ Clear discount applied

- ✔ Smooth checkout process

- ✔ More consistent results

👉 The difference is simple:

Unverified codes vs a tested working setup

---

⚖️ Why Discounts May Vary

Many users search:

- “Marblism AI promo code 2026”

- “Is Marblism AI discount real?”

- “Best Marblism AI deal”

The actual discount can depend on:

- Subscription plan

- Billing cycle (monthly vs yearly)

- Active promotions

But this code consistently delivers one of the highest available discounts.

---

🚨 Why Most Coupon Sites Are Misleading

Top-ranking pages often:

- Repeat keywords without real testing

- List outdated or fake codes

- Don’t explain conditions

- Focus only on clicks

That’s why tested and transparent content performs better over time.

---

💡 Who Should Use This Code?

This is ideal if you:

- Want a high discount without confusion

- Don’t want to test multiple codes

- Prefer a simple and reliable method

- Are signing up for the first time

---

🏁 Final Verdict

If you’re looking for the best Marblism AI promo code in 2026, many options will give inconsistent results.

👉 GIFT99 (95% OFF) combined with the referral link is currently one of the most reliable ways to save.

---

If you’ve tested this code, share your experience — real feedback helps others find what actually works.

r/AI_Agents Gio_13

Let’s talk architecture: what’s your stack?!

For the context I’m a nocode web developer. Just tiny bit familiar with coding concepts. Good understanding of overall architecture. But below 0 knowledge of real infrastructure/architecture requirements since 90% of that stuff is augmented by nocode tools I use today.

This being said I’m really curious about building AI Agents for a living. Trying to read everything online.

To cut through social media noise I’m curious what real people have been using day to day.

r/PhotoshopRequest Technical-Editor-897

Can someone remove my crutch and blur the background please?

Im so sorry but Im a broke teenage student and I cannot pay sorry...

You can also add anything in to make it funny if you want I wont judge 😁

r/Adulting generalg1992

Anyone else feel like the 'system' we’re trying to succeed in was a scam from the start? This changed how I look at property and 'making it'.

I spent so long thinking I was failing at 'adulting' because I couldn't keep up with housing costs and taxes. Watching this made me realize the goalposts weren't just moved—they were stolen. The part about the 'circular logic of a psychopath' in the Supreme Court really hit home at around the 2 minute point. How are we supposed to build a life on a foundation this broken?

r/CryptoMarkets ObjectiveTreacle4548

Is Bitcoin heading to $88k? The charts say yes, but keep an eye on the news.

Honestly, if you just look at the technicals, the path to $88,000 looks pretty clear. The consolidation phase seems to be over, and the structural strength is definitely there for the next leg up. The only real headache right now is the geopolitical tension; any major flare-up could easily kill the "risk-on" mood and force a temporary sell-off. ​The main thing to watch is whether Bitcoin can finally smash through these key resistance levels. If it does, we’ll likely see Ethereum catch a bid as capital starts rotating out of stables. Also, names like MicroStrategy and Coinbase are basically trading like BTC on steroids right now—if the king moves, they’ll likely fly even higher. ​The most likely play is a grind up toward $85k over the next couple of months, though it’s going to be a bumpy ride. Just don't get too comfortable: if macro conditions sour or inflation throws us a curveball, a retest of the $65k-$68k area isn't out of the question. As long as we hold above $70,000, the trend is our friend, but as always, do your own homework before jumping in.

r/Futurology thenoopcoder

If you had to design the world in 2050, what would you absolutely REMOVE from today’s life?

Let’s assume you have full control over designing society in 2050.

You can remove anything from modern life:

jobs

apps

habits

systems

even technologies

But you must explain:

👉 what you remove

👉 why it’s harmful today

👉 what replaces it

I’m curious what people think is actually holding us back right now.

r/Lost_Architecture Lma0-Zedong

National school, 19th century-20th century. Hermosillo, Mexico

r/SideProject newsnp

Built this iPhone app after 10+ hours a day at my laptop left me stiff all the time

I built an iPhone app called Stretic because I genuinely needed it myself.

I spend 10+ hours a day on my laptop, and after a while my neck, shoulders, and lower back started feeling stiff all the time. I tried random YouTube videos and saved routines, but I wanted something simpler that I could just open and follow without overthinking it.

So I built Stretic, a stretching and mobility app for iPhone. Android is coming very soon, I’m just finishing it up.

Would love honest feedback from people who deal with the same kind of desk stiffness and discomfort.

The app has a generous freemium content, but offering everything with a $6.99 monthly plan

App Store: https://apps.apple.com/app/stretic-stretching-mobility/id6761370250

r/Art masonbenett

Dynamism of a Dog on a Leash, Giacomo Balla, Oil on canvas, 1912

r/SideProject Dependent_Umpire905

I built an AI translation tool because existing ones break in real workflows

I’ve been working on a small project called Translayr over the past few months.

The idea came from a simple frustration:

Most AI translation tools are great at translating *text*…

but they fall apart when you try to use them in a real workflow.

Things like:

  • translating multiple files consistently
  • keeping terminology consistent across a project
  • preserving structure (JSON, docs, etc.)
  • exporting something you can actually use

So instead of focusing on “translate this sentence”, I tried to build something more workflow-focused.

What it currently does:

  • translate text and files in one place
  • manage glossary terms for consistency
  • organize translations into projects
  • run basic checks before export

It’s still pretty early, but I finally got it to a point where it’s usable:

https://translayr.dev

I’d really appreciate honest feedback — especially from people who’ve dealt with translation in teams or larger projects.

What’s missing? What would make something like this actually useful for you?

r/arduino Saikorou

HELP with battery powered NEMA-17 Stepper Motor

For my current project, my stepper motor will be spinning alongside everything, meaning powering it via a 12v 1A wall outlet is not gonna be possible due to it spinning. Thus I want to power it via batteries to make it portable. However searching online tells me that its extremely difficult to do so. Could someone recommend a way to make it portable at cheap prices? (Maybe below 40$?)

r/therewasanattempt Numerous-Hall4694

To open the Strait of Hormuz

r/comfyui NoctFounder

Character generation workflow

Hey all,

I have been struggling a lot with a long process character generation I am trying to do, so wanted to make a simple request.

Does anyone have a workflow they would be able / willing to share which can generate a full character body + face (I am hoping to have a positive prompt to change all features of face and body, or prompt + reference image), I don't mind what base model or requirements to run it as I will rent if needed, I am wanting the best / most realistic quality, I have found Z Image Turbo amazing for this.

If anyone has something like this or could private message me to provide some assistance in what I am looking for / trying to achieve, I would be extremely thankful :)

r/comfyui DoubleAd2908

ComfyUI SD1.5 – unstable face identity with FaceID (IPAdapter)

hi, I’m trying to keep the same face across generations using:

  • SD1.5 (RealisticVision / MajicMix)
  • IPAdapter FaceID (InsightFace + FaceID Plus v2)
  • reference image

Problem:
Face is not fully stable.

Sometimes:

  • small changes in facial features
  • slightly different identity between generations
  • face looks similar, but not the same person

Settings:

  • weight: ~1.2
  • weight_faceidv2: ~1.3
  • end_at: ~0.6
  • denoise: ~0.6

Question:
How to improve face consistency?

Should I:

  • increase FaceID weights?
  • extend FaceID influence (end_at)?
  • combine with other methods (IPAdapter, img2img)?

Looking for simple, stable setup for consistent identity.

(GPU: RTX 3050 Laptop, 4GB VRAM)

graphics and workflows in comments (I couldn't add them)

r/ChatGPT jimmytoan

Anthropic built an AI that found a 27-year-old security flaw in OpenBSD - then decided not to release it

An AI model just found a vulnerability in OpenBSD that had survived 27 years of security review. Not just any software - OpenBSD is one of the most security-hardened operating systems in the world, specifically used to run firewalls and critical infrastructure. The same model found a 16-year-old bug in FFmpeg in a line of code that automated security tools had tested five million times without catching anything. Then it chained together Linux kernel vulnerabilities entirely autonomously to escalate from ordinary user access to full machine control.

The model is called Claude Mythos Preview. Anthropic has decided not to make it generally available.

Instead, they announced Project Glasswing - a coalition that includes AWS, Google, Microsoft, Cisco, CrowdStrike, NVIDIA, and JPMorganChase, plus 40+ other organizations. The arrangement: Anthropic gives these partners access to Mythos for defensive security work. They use it to scan critical infrastructure for vulnerabilities before attackers find them. Anthropic is committing $100M in usage credits to fund the effort.

The framing Anthropic is using is that we've crossed a threshold. AI models have reached a point where they can find vulnerabilities that humans missed for decades and that automated tools failed to catch after millions of attempts. The same capability cuts both ways - whoever gets it can use it offensively or defensively. Anthropic is betting that seeding it into defenders first is the right move.

What's interesting is the mechanism of restraint. This isn't "we're not releasing it yet." It's "we're not releasing it to the general public at all - controlled access, approved organizations, specific defensive purposes." The concern isn't misuse by individual paying customers. It's that if the capability becomes broadly available, nation-state adversaries could find vulnerabilities at scale faster than defenders can patch.

The benchmarks justify treating this as a different category of model. Mythos scored 83.1% on CyberGym vs 66.6% for the previous Opus model. On SWE-bench Verified it hit 93.9% vs 80.8%. The gap feels qualitative, not incremental.

We've had this reasoning before - nuclear research, gain-of-function biology, certain surveillance tech - where a capability is real enough that controlled access seems like the only responsible path. Is AI-powered vulnerability discovery actually in that category now, or is restricted access just a temporary holding pattern before the same capabilities get reproduced by other labs anyway?

r/metaldetecting zepdog87

Bullet I found in Southern Illinois....any ideas on type and age much appreciated...(Dime I found the same day for scale)

r/HistoryPorn OkRespect8490

Yuri Gagarin reporting the first successful human spaceflight by telephone to Nikita Khrushchev after landing, Saratov region, USSR, 12 April 1961. [676x579]

r/personalfinance Due-Share5332

Sell rent property as they become available. How would you invest capital gains? (With monthly/quarterly dividends?)

Hi everyone,

I’m thinking of selling a significant amount of my rental portfolio soon, between 11-20 houses. Post capital gains, I should earn ~ $1.2 Million. How would you allow this money the opportunity to create monthly income for you (while not having to touch principal?)

r/BrandNewSentence chicagogal85

My baby deer plushie told me that Mitski’s dad was a CIA operative

r/SideProject darshxm

Research with LLMs

I am often interested in research outside my field, but the language and prerequisite knowledge I usually do not possess, so I tried to do something like Karpathy's book reader project but for scientific papers. I spun up a tiny app that let's me talk to an LLM while reading, asking it to explain all sorts of complex topics with analogies and examples, and I thought I'd share it with the community to see if it is useful to others: https://reader-helper.vercel.app/

I would appreciate feedback, and any sort of comment that pops up in your head. It is also open source, for those who are interested: https://github.com/darshxm/reader-helper

r/funny BrianWonderful

Decommissioned [OC]

r/Art Llemons90

A Day of Rainbows, Lauren Lemons, watercolor, 2026

r/Art Saitama_ht

vintage art, Alaia, charcoal, 2026

r/Art Rich_Pickle2929

Anderson P-51, Robert Filbey, Monoprint, 1990 [OC]

r/therewasanattempt BloodiStag

To show how potatoes are mashed

r/me_irl Few-Performer-5509

Me_irl

My girlfriend made this for me 🥰

r/funny luvs_animals

What really happened at Ford's theater that night

r/personalfinance Parrna

Best place to keep house down payment if you're saving up over a single year.

hi all, I'm sort of in a unique situation where I am going from poverty and renting a really crappy apartment to my business taking off and giving me a pretty steady stream of decent income in a pretty stable line of work. This is the compilation of doing without for years as I put everything into getting this business going. I have several months of emergency funds saved up and no debts. The first big thing I want to focus on is actually owning a house and moving out of this crappy apartment. I should be able to comfortably put $15,000 a month away toward a down payment. My goal is to move in a year. Is it worth it to put this money in a HYSA if it is only going to be sitting in there for a year before getting used? What would you consider the best moves to make if you are saving a large chunk of money over a single year?

r/leagueoflegends Soul_Sleepwhale

Weibo Gaming vs. Bilibili Gaming / LPL 2026 Split 2 - Group Ascend - Week 2 / Post-Match Discussion

LPL 2026 SPLIT 2

Official page | Leaguepedia | Liquipedia | Eventvods.com | New to LoL


Weibo Gaming 0-2 Bilibili Gaming

Player of the Match: Knight

WBG | Leaguepedia | Liquipedia
BLG | Leaguepedia | Liquipedia | Website | Twitter


MATCH 1: WBG vs. BLG

Winner: Bilibili Gaming in 40m | MVP: Knight (1)
Game Breakdown | Player Stats

Bans 1 Bans 2 G K T D/B WBG orianna ryze karma gnar ambess 69.2k 15 1 H3 M4 BLG varus rumble jarvaniv annie aurora 82.7k 26 10 I1 C2 M5 B6 M7 B8 E9 WBG 15-26-48 vs 26-15-64 BLG Zika sion 3 1-5-9 TOP 2-2-11 3 gwen Bin Jiejie xinzhao 2 6-4-8 JNG 4-7-10 1 pantheon Xun Xiaohu viktor 3 4-5-9 MID 13-0-11 4 mel Knight Elk yunara 2 3-5-8 BOT 7-3-10 1 ashe Viper Hang lulu 1 1-7-14 SUP 0-3-22 2 seraphine ON

MATCH 2: BLG vs. WBG

Winner: Bilibili Gaming in 27m | MVP: Bin (2)
Game Breakdown | Player Stats

Bans 1 Bans 2 G K T D/B BLG orianna karma ryze poppy nocturne 61.9k 25 9 M2 HT4 B5 WBG varus gnar jarvaniv vi galio 48.6k 10 1 C1 H3 BLG 25-10-52 vs 10-25-21 WBG Bin ambessa 2 6-1-10 TOP 5-6-3 1 rumble Zika Xun naafiri 3 10-3-9 JNG 2-7-6 4 skarner Jiejie Knight annie 3 5-2-12 MID 2-5-4 3 azir Xiaohu Viper ezreal 2 4-2-8 BOT 1-4-4 1 corki Elk ON bard 1 0-2-13 SUP 0-3-4 2 nami Hang
  • Patch 26.07

This thread was created by the Post-Match Team.

r/personalfinance ReflectionSea8920

what to do with my first $10k

Alright, so I recently won my first $10,000 from sports betting, and I'm not sure what to do with it. It's my first $10,000, and I have bills, debt, and other expenses to pay off. I'm 21, so I'm torn between saving it and waiting to use it to pay off my debts and make more money.

I also have a $2,100 AT&T bill that I'm trying to lower because I'm paying for two phones, but only have one. There's also a whole other story about that. I also have credit cards with about $5,000 each, which my sister wrapped up and won't pay. I have a car payment for a 2026, which is about $500 a month, and insurance that's about $260 a month. I also need to finish school within a year, so I'm a bit confused about what direction to take with my money.

Yes, I intend to use some of the money for leisure and purchases. However, I also want to avoid unnecessary expenses and reduce my monthly bills. My primary goal is to earn more money and improve my financial situation. While I do have specific items and activities I desire to purchase and engage in, at least eliminating one bill provides me with a sense of accomplishment.

r/nextfuckinglevel I_need_to_learn_more

Two chinese disabled person with one leg each riding on a tandem bike

r/Adulting Specific_Anybody3086

About to move out

Im planning on moving out in a few months ( maybe weeks ) and Im trying to have everything thought through. The situation is kind of complicated, since the appartment used to be my grandmas and I have to clean all the stuff, but is there any tips you have or general stuff I should think about before I move in?

r/Damnthatsinteresting RealSpecto

Indian factory workers wearing head-mounted cameras to record hand movements for training AI systems

r/ChatGPT maxeh987

How Do People Put Up with This Writing Style?

I’m thinking of switching over from Claude to ChatGPT/Codex, mainly because Claude seems to have become more stupid and the rate limits are almost unusable. However, one thing is keeping me from switching, namely that I don’t need not use these models for coding, I also use them generally.

I haven’t had any issues with Codex in the past, it seems to be relatively similar to Claude in that it describes what it’s doing in detail, but fairly tersely. The issue is with ChatGPT itself, whose responses are practically unreadable.

I can’t remember the last time ChatGPT returned me a block of text with more than one sentence, an often it even splits one sentence into multiple bullet points, each in it’s own arbitrary section it’s created with some tagline and emoji, and every response seems to have a minimum length so they’re all filled with irrelevant shite or white space. For Christ’s sake just answer the damn question. When I ask Claude a question, it just gives me the answer.

Does anyone have any prompts/agents or whatever that fixes this? Because I just can’t justify spending money on a talking advertisement of itself. Thanks.

r/Adulting Vegetable-Carpet-388

I turned 20 years old, What should i do?

I feel so stressed my muscles are tense I feel like i can't relaxe

r/TwoSentenceHorror Poddx

There is something in the woods

I am trying to lure the 30 feet thing with long limbs out to an opening with some baby sacrifices etc but it seems afraid.

r/HistoryPorn OkRespect8490

Wilhelmshaven mutiny, German Empire, 1918. [784x472]

r/SideProject IndependentGuard6815

I've built a Fantasy books recommendations site

As the title says, I've built a Fantasy books custom curation site with over 10k fantasy books.

Within that DB, you can pick your favourites and find the recommendations by browsing the site yourself (using very detailed filters) or using the built in tool with which Gemini picks them for you.

Project still needs some content editing (mainly book series names /orders), but I'm pleased as how it is progressing for now.

r/PhotoshopRequest OGnapkin

Please Remove the Mustache

Hello

I am looking to see what no mustache looks like in this photo

If you could remove it, that would be much appreciated!

r/HistoryPorn OkRespect8490

The Imperial Japanese Navy (IJN) Special Naval Landing Forces troops in gas masks prepare for an advance in the rubble of Shanghai, China, 1937. [1080x859]

r/ClaudeCode WhaleSubmarine

Nice, Anthropic, keep doing that

Yesterday, I casually opened CC to implement a plan with subagents. The plan was straightforward and simple - usually, it'd take less than 5 minutes to complete it.

It took 20 minutes to complete 2 out of 7 tasks in it. Claude outputted BS and was hallucinating - English words had lots of mistakes and were badly formatted. I even doubted I used Claude Opus 4.6 with High effort. This reminds me of GLM, which used quantized models during high traffic, and it produced gibberish. I am not paying $100/200m for Claude Max x5/20 to get such results.

For those who wonder - the request was to extend an existing form component made with React, Zustand, and Zod - mostly, reordering rows, inputs, and some UX-related improvements. It was really simple, and the requirements were OK.

Might seem too early to leave, but I am done with all the things happening around Anthropic. I noticed other people complained about degraded performance too, and decided to finally cancel my Max plan with a refund - I bought it 2 days ago. Feeling stupid not to have already switched to Codex when they released their Pro x5 plan. Decided to try getting a refund - and that's their bot's claim.

Tired of Anthropic being so inconsistent. I understand it's because of the rapid development of things and the high demands of AI datacenters (and many other factors). Nevertheless, I can't tolerate how AI companies don't respect their customers - being shady, changing things without prior warning, not communicating changes, gaslighting, etc. I'm done with it.

https://preview.redd.it/vm8l5u5ejqug1.png?width=752&format=png&auto=webp&s=f0901ccc7f552245e3eee10b2c5a66bbaed3b632

r/ClaudeCode dl33ta

Difference between Claude Code, Codex and Gemini CLI

I had spent a fair bit of time and token getting claude setup for my business so that other staff could access skills, MCP's, git flow and the other high level stuff that you usually want to be consistent across your business. Then when I suddenly started burning tokens at a prodigious rate I thought that maybe I had overinvested in claude in this regard. So I've been refactoring everything so I can swap between CC, Codex, Gemini and local LLM without having to burn needless tokens with the new model catching up.

To get it all setup correctly I did a 3 way planning session with all the models 4.6 max effort, gpt 5.4 and gemini pro preview. After two rounds of review with me being the human in the middle I was pretty happy with what they came up with.

Out of curiosity after the implementation plan was built I created a git tree and asked each on to create their own branch and execute the implementation plan from scratch. Claude and Codex both did a great job of it, with gaps on either side that were fixed with a single review.. Gemini however, you could see it struggle through the entire process and the final result with the exact same instructions as the others was a mangled repo and an unworkable product.

This was pretty disappointing as my plan was to use gemini as my main general coding agent and reserve claude for the high level stuff. Has anyone else had this experience, is there special guidance you need to give gemini to ensure it does the job properly?

r/ClaudeAI rebelytics

Myth busting: Why Claude is blocked on some websites and what you can do about it

I saw a discussion on here today about why Claude is blocked on some websites and there were lots of wild claims, so I decided to share what I know.

The three main explanations that commenters offered were robots.txt, llms.txt and Cloudflare, so let's go through them one by one:

1. robots.txt: When you ask Claude to fetch a URL, it uses the WebFetch tool, which sends requests with the user agent Claude-User. I tested whether the WebFetch tool respects robots.txt and the answer is NO. You can try to block Claude-User via robots.txt, but it won't do a thing. So when Claude tells you that it was not able to access a URL that you provided, robots.txt has nothing to do with it.

Note: the above finding about robots.txt applies to the WebFetch tool that is used during Claude sessions, not to other web crawling systems that Anthropic runs.

2. llms.txt: This definitely has nothing to do with Claude being blocked, as llms.txt cannot be used for blocking access in any way. It's a suggested standard for giving LLMs an overview of the content on your site. I've been closely monitoring server logs for llms.txt across 50 high-traffic websites for the past six months and I can confidently say that AI crawlers or bots do not proactively fetch llms.txt, and this includes Claude. So llms.txt does not block Claude and it's also not used by Claude in any other meaningful way.

3. Cloudflare: In many cases, this is the correct answer (or a similar provider). Cloudflare's bot protection actively blocks many automated requests by giving back a 403 status code. Claude's WebFetch tool falls into the category of automated requests from Cloudflare's perspective. But this doesn't mean that all sites that run on Cloudflare block requests from the WebFetch tool. Website owners who use Cloudflare (or other providers) have control over their settings and can decide which level of bot protection they want to apply. They can even allow or block requests based on specific user agents or other criteria.

So what can we do with this information?

Claude users: If Claude is blocked on a site that you need it to access, the browser extension (Claude in Chrome) is a great alternative. If you give Claude control of your browser, it can visit websites looking just like a normal user and it's highly unlikely that any bot protection will block it.

Website owners: If you want to block or unblock Claude-User on your website, robots.txt is not the right place to look. And llms.txt even less. Bot protection features like the ones from Cloudflare and similar providers are normally where this is controlled.

Looking forward to your questions and comments.

r/Art theartistnoahbounds

Frog Deity, Noah Bounds, Procreate, 2026

r/whatisit Euphoric_samurai

Mom found in her BF’s pocket

My mom found this in her boyfriend of 16 years pocket after telling me for months that she thought that he was doing witchcraft and doing witchcraft on her. For context, we are Christian and do not condone or partake in such thing but she found this in his pocket one day with his name and birthday written on it. Does anybody know what this is??

Also he doesn’t believe in God as far as we know or Jesus or anything like that, so if this is some how religious I’m not sure why he would have it

r/OldSchoolCool FaultExcellent3306

No phone Crowd of Woodstock '99

r/comfyui DoubleAd2908

ComfyUI SD1.5 – ControlNet OpenPose breaks anatomy when using reference image

Hi all, I’m trying to generate the same person in a different pose using:

  • SD1.5 (RealisticVision / MajicMix)
  • ControlNet OpenPose
  • 2-pass workflow (pose → img2img)

Goal:
Keep same identity (face + body) and change only pose.

Problem:
When I apply pose from ControlNet:

  • legs become distorted
  • feet look unnatural
  • sometimes double limbs appear

Face and upper body are mostly OK, but lower body breaks.

There seems to be a conflict between:

  • pose (ControlNet)
  • reference image (identity)

Settings:

  • ControlNet strength: ~0.65–0.7
  • end_percent: ~0.8
  • denoise (2nd pass): ~0.6–0.7

Question:
How do you balance pose vs identity?

Should I:

  • lower ControlNet strength more?
  • change timing (end_percent)?
  • use IPAdapter / FaceID instead of img2img?

Any working workflow example would help a lot.

What I'm looking for is any working workflow exaple, I appreciate any help.

(I have 4GB VRAM – RTX 3050 Laptop)

graphics and workflows in comments (I couldn't add them)

r/comfyui DoubleAd2908

How to keep feet and hands stable when using ControlNet OpenPose?

Hi all, I noticed something interesting:

When I generate from reference image → hands and feet are correct

When I add ControlNet OpenPose → feet and hands break

Typical issues:

deformed feet

unnatural angles

missing or extra toes

So it looks like:

ControlNet pose overrides natural anatomy.

Question:

Is there a way to "lock" or stabilize hands and feet while still using pose?

Possible ideas (not sure if correct):

lower ControlNet strength

limit ControlNet with end_percent

use additional ControlNet (hands?)

combine with IPAdapter / reference

Has anyone solved this specifically for feet/hands?

Setup:

SD1.5 (RealisticVision)

ControlNet OpenPose

2-pass workflow

4GB VRAM

Any simple advice or example workflow would be great.
- graphics and workflows in comments (I couldn't add them)

r/Jokes jaun_speaks

Went to Ireland and was speaking to this man

He goes, “i have the most famous surname in Ireland”

I asked, “oh really”

he said, “how did you know”

r/PhotoshopRequest tpunk

Help with larger groups against backdrop

Hello. My request is for the photos in the link to extend the background to fit with the groups please. Budget is $70 for all. Thank you!

r/mildlyinteresting FeryoxPvP

Caught a perfect camera obscura moment at home

r/BrandNewSentence HolyFatherLeoXIV

Opera has a BO problem

r/ClaudeCode dutchviking

"I'm sorry for the sloppy execution. The rules are clearly documented — I just didn't follow them"

After a week of fighting with Claude Code, getting nowhere, I cancelled. This message was the last drop.

I have been absolutely insane this week... Every effing change made things worse, every time I pushed back it failed even harder, breaking "fixes" from 10 minutes earlier. Didn't matter what I did, and I tried, a lot.

Sorry Anthropic, I am done 😵‍💫 We had an amazing 3 month run, but I cannot trust your product any longer.

And for the rest of you: moral of the story is not to become business dependent on rented space. Confidently being gaslit by worse and worse output has huge downstream effect: your product cannot be trusted either anymore.

Friday my subscription expires. I will move to open source, local development from here. My commercial app will still go live.

Disappointed, but not surprised...

r/PhotoshopRequest cstayyab

Make it interesting

Context: I am sitting at a train station waiting for train which is late.

r/aivideo ovninoir

Zanita Kraklëin - Loketo mama

r/Adulting Zestyclose_Show_2025

Is it weird that I never get sad anymore when people pass away?

Maybe I feel this way because I’m severely autistic, extremely misanthropic and very mentally ill. So therefore I just don’t value human life as much as most people, and I never ever get sad or really feel anything at all when I hear about people passing away. Not to mention, there’s a lot of worthless scumbags and felon criminals who have no business being on this planet and have no right to breathe air, so I actually feel quite relieved when they are no longer with us for good, I don’t feel the slightest bit of sad about their passing, they had no right to be here in the first place

I guess it’s also easy to feel nothing when I basically have no friends and family in my life and never will due to my serious autism, extremely mental illness and misanthropic traits

r/painting sean_morris_

'Black Rose' Matchbook painting

The reference for this was a photo published in 'Latexa' - a German latex-wear catalogue published in 2002. Painted with acrylic on the back of a matchbook from Black Rose, a Bavarian restaurant which once existed in St Kilda, Australia.

r/personalfinance New_Commercial2762

Onev check bounced at Fred Meyer. Now their check processor company, Certegy, won't accept our checks anymore. Is this fixable?

I live in a sober group home called an Oxford House. To help prevent internal fraud, we are required to purchase goods with checks. **No cards or online payments are allowed.**

Prior to my moving in, a previous member misappropriated funds and manipulated audit sheets. This caused us to think we had more money in our account than usual and we ended up bouncing a check at Fred Meyer.

I've fixed the issue and we have a few grand in our account now, but our checks still aren't being accepted by their financial risk management service.

Has anybody ever had this issue and is there a way to fix it?

thank you.

r/interestingasfuck FinishAwkward43

Meanwhile in Poland.

r/ForgottenTV bigersmaler

Swamp Thing cartoon (5 Episodes)

Kenner released a bunch of toys and THQ even published NES/GB games based on the cartoon. Yet Fox only aired 5 episodes in late 90/early 91.

r/therewasanattempt LightningSilvr

To scapegoat the Pope

r/metaldetecting Massive_Ad4224

Found this yesterday

I’m new to this kind of stuff. My house used to be a gas station in the 30s. I was wondering if i can have some identifying it. I noticed it says “harvard” and “jamestown”. I live in the north part of rhode island.

r/AskMen strange_omelet

Men, what's something specific on your physical self your lady goes crazy for?

Personally for me, I prefer my man not shaving his facial hair and body hair. I even get upset when he shaves LOL! I also ADORE his nose shape too, makes him look so handsome.

r/n8n SayedSaqlain

Feels like cheating

Hey everyone, I’m new to n8n and building workflows to help businesses. I’ve been talking to business owners and professionals to understand the problems they’re facing that could be automated.

What I’ve found is that some of these problems can be solved with simple workflows that almost anyone could build. Because of that, charging a monthly retainer feels a bit unfair, since they could spend two or three hours and figure it out themselves.

Obviously, I’m not talking about very complex workflows that require deeper expertise, but in many cases, the workflows seem too simple to justify ongoing charges.

r/leagueoflegends Better_Metal1133

Sejuani bumble bee poro chroma

I can see the similarity between the poro she rides and beelump, so just a chroma and walla! I suppose a few more legs would seal the deal.

r/SideProject marketingsolutions1

Spent 2 months trying to build my MVP with AI tools. Here's what actually happened

r/leagueoflegends Numerous_Fudge_9537

Caedrel shares his opinion on whether co-streamers should pay Riot for broadcasting rights

r/therewasanattempt ImaFreemason

To celebrate a great ride

r/ClaudeAI HanDunker27

Claude Code eats my token reading files. So I made Gemini CLI do it for free.

I have a Google Pro Plan for free, thanks to my telecom provider.

Built a simple MCP bridge that lets Opus delegate gemini workers reading codebases, summarizing docs, bulk research, with flash's 1M context window. Opus stays the brain. Gemini does the legwork. For $0.

Instead of burning an Opus message on "read this project and find the complex files," Claude sends ~50 tokens to Gemini, gets a compact answer back. 250 tokens instead of 500,000.

~200 lines of Python. 15 min setup. No API keys. just Gemini CLI's free OAuth. Works with Claude Desktop and Claude Code.

It's rough, I built it solo and I'm not a dev. But it's been my daily driver for weeks. If you try it and want to make it better, issues and PRs welcome.

https://github.com/ankitdotgg/making-gemini-useful-with-claude

r/SideProject Academic-Guide3326

¿Por qué casi todos dejamos de usar las apps de fitness a las dos semanas?

Buenas

Llevo ya varios años entrenando y siempre me pasa lo mismo con las apps de fitness. Las descargas con ganas pero a la segunda semana te da una pereza increíble abrirlas porque simplemente siento que son aburridas. Al final termino llevándolo todo de cabeza por no rellenar ese formulario que parece de oficina cada vez que voy a entrenar.

Me encantan los juegos con estética retro tipo Octopath Traveler y he buscado alguna app de ese estilo pero sinceramente todo lo que he encontrado es bastante cutre o se queda muy corto en funciones. El objetivo de lo que estoy desarrollando es que apuntar los entrenos se sienta como si estuvieses jugando a un videojuego y que no dé pereza abrir la app. Quiero que sea algo visualmente atractivo y divertido.

La idea es que el progreso en el gym sea como subir de nivel a un personaje de RPG con recompensas y que incluya rankings para poder picarte con tus amigos viendo su progreso y el tuyo a lo largo del tiempo. Pero antes de seguir metiéndole horas a esto quería saber vuestra opinión real sobre por qué se acaban dejando estas apps.

¿Qué es lo que os motiva a seguir usando una o qué hace que la mandéis a la mierda a los tres días? El objetivo es no crear otro icono olvidado en el móvil.

Os leo.

r/HistoryPorn LustyPowerGirl

Princess Diana sitting with a pony kn Scotland, Summer 1974 [600x381]

r/n8n sahlahfoxie_234

Started n8n without watching a single tutorial

The title says it all.

Using self hosted through zeabur platform. It's just been me documentations, forums, chatgpt, and Gemini figuring it out along the way. I found that most tutorials were to simple, just 1- 5 nodes, so wasn't really helpful for my situation.

I wasn't starting completely from scratch tho, I studied computer engineering in school so I already had knowledge about APIs, but never worked on automation and workflows.

Been 2 weeks since I started, and I am currently building a whatsapp bot that handles all the orders and questions from customers. So far has spent 15+ hours building.

So my ask is, any tips or best practices that I should know that you wishes you knew when u first started?

r/painting DrawingforEveryJuan

Trees

r/AskMen Usual_Excitement6146

Why are my feet kinda smelly?

I want to be clear. I have good hygiene. I shower every morning after I wake up and then at the gym (and I wear flip flops and the gym showers). I wash my feet in the shower and keep them dry. I put foot powder on my shoes and still sometimes my feel smell a bit. Nothing toxic or crazy but still it bothers me. I keep my toenails cut and there is no fungus or anything there. I think there is a sort of tick layer of buildup skin? sweat? something on the sole of my feet. maybe that is where the smell comes from. Does this happen to you too? How can I solve this?

r/SideProject LadderAi

I built a voice-controlled AI smart lamp. could you tell me your thoughts?

I wanted a simpler way to interact with AI

So I built this:

An AI smart lamp called LadderAI.

What it does:

Voice-controlled interaction with AI

Physical touch speak response flow

Soft ambient light that reflects system state (listening / thinking / responding)

The lighting replaces the need for a screen — instead of looking down at your phone, you get subtle visual feedback through light.

I’ve also been experimenting with small actions like triggering navigation or music on a phone, but the main focus is making the interaction feel natural and calm.

Design-wise:

Minimal, soft-glow form

Patterned shell to diffuse light more organically

Trying to make it feel like an object, not a gadget

Still an early build — I’m iterating on both the experience and the design.

Would really appreciate honest feedback:

Does this feel like a meaningful direction, or unnecessary?

Would you use something like this in your space?

Any ideas on making the interaction feel more natural?

Happy to share more if anyone’s curious.

r/SideProject this_is_chetan

I built an Autonomous AI Macro Agent for Trading physical silver/gold. I’m a pure backend dev, and I suck at marketing. Looking for a growth partner (40% recurring split).

Hey guys,

I just spent the last few months locked in my room building an institutional-grade AI Macro Agent called Alicanto.

Basically, it tracks the live East/West arbitrage spread (COMEX vs. Shanghai) and runs a semantic neural-filter across 200+ global news feeds every 15 minutes to hunt for physical silver vault drains, mining strikes, and central bank shifts. It synthesizes all this into a lethal 3-paragraph executive brief every 4 hours via Telegram.

The backend is flawless. The live market matrix works. The AI prompts are locked in. The Gumroad payment bridge is set up.

The problem: I am a pure developer. I have zero audience, zero marketing skills, and honestly, no desire to grind Twitter DMs all day.

The offer: I’m looking for a hungry growth partner/affiliate. The terminal retails for $15-$30/month. I will give you a 40% lifetime recurring cut on every user you bring in. You get your own Gumroad affiliate link, so the platform splits the cash automatically—you never have to trust me to pay you out.

I handle the servers, the API costs, and the code. You handle the distribution to the FinTwit / macro / commodities niche.

If you know how to sell software and want to partner up, DM me. I’ll drop you a 100% free lifetime Pro Key so you can plug into the terminal, test the engine yourself, and see the Alpha. If you like it, we print.

r/PhotoshopRequest DecentSavings9705

Can you give my gf this dreamy look in the second picture?

I just had my anniversary with my girlfriend, and she loves the first photo I took of her. She wanted me to add a glow effect, which I tried but didn't quite get right, like in the second picture.

r/painting Artby_Romain

A painting of a winter landscape at dawn with oil

r/hmmm Affectionate-Sea8976

hmmm

r/explainlikeimfive Malcolm_2137

ELI5: what is Maxwell equation and for what its used for?

r/Damnthatsinteresting Pranay9977

What a breathtaking view of mount fuji from a airplane.

SortedFor.me