Your Feed


r/ClaudeAI sillygoosewinery

I built a Telegram bot so I can use Claude Code from my phone

I use Claude Code a lot. Naturally, when I'm away, like watching the Super Bowl with my family, I miss Claude. I wanted to stay connected to it so badly that I built a Telegram bot so I can keep talking to Claude Code from my phone.

Built entirely with Claude Code and it uses the Claude Agent SDK under the hood. Works with your existing Claude Pro/Max subscription, no separate API key needed.

I use it to auto-resume conversations from my desktop, basically a handoff function.

Just remember to use /resume when you're back in the IDE. The conversations don't hot-load, but the chat history is there.

Here's a really funny moment when Claude realized that it's talking with me in two places.

I asked "what was the first command I ran that threw an error?" in telegram. This is the response:

****
The first command you ran that threw an error was: npx tsx app/src/index.ts /Users/you/Code/active/vibeide

(blah blah blah)

But wait — I just realized something. That answer came from **me** (this session), not from the Telegram bot. Did the Telegram bot give you that same answer?
****

So meta.

Enjoy. GitHub: https://github.com/junecv/vibeIDE

r/ClaudeAI bonkeeboo

Claude in CLI - what am I missing here? Feels like a step backward. (Coming from Cursor)

I've been using Cursor for a while now and I'm confused about all the "use Claude CLI" advice I keep seeing everywhere.

In Cursor's chat sidebar, I can:

Paste or drag annotated screenshots directly in

Type out longer descriptions naturally

Click anywhere to edit text

Use my mouse to go back and change a paragraph I wrote

Select and delete chunks of text easily

Basically use it like any normal chat interface

The workflow is smooth - I annotate screenshots with arrows and notes about what I want built, drop them into the chat, write detailed explanations, edit as I think through things, and Claude builds what I described.

But the CLI seems like the complete opposite of all that?

From what I can tell:

Multi-line input requires backslash + enter on every line

Can't click to position my cursor in the text

Can't use my mouse to select and edit parts of what I wrote

Feels super clunky for anything longer than a one-line command

Like... I don't get it. Everyone's singing the CLI's praises but it feels like a massive step backward from what I'm already doing in Cursor's interface. Am I completely missing something? Is there some trick to editing prompts in the terminal that makes it not feel janky?

Or is the CLI recommendation mostly aimed at developers who are already terminal power users and write short commands? Because for someone who writes longer, detailed prompts with screenshots, I genuinely can't see why I'd switch to something that seems way more cumbersome.

Help me understand what I'm missing here!

r/SideProject jasendo1

I built an AI agent platform where the workflow builds itself

I was tired of making graphs just to get an agent to do a 5-step task, so we made Subconscious. You describe the task, plug in tools, and the agent figures out the orchestration as it runs.

No nodes. No edges. No if/then logic. You just write instructions in plain English.

Under the hood, we have a co-designed model and runtime that handles context pruning and task decomposition automatically. Reasoning happens in parallel threads, not linear chains.

Free to try, no credit card: https://www.subconscious.dev/

r/ClaudeAI Dramatic_Squash_3502

tweakcc v4.0.0 for Claude Code modding - AGENTS.md, remote config, Node.js API, adhoc-pack in custom sandboxed scripts, status line throttling, and more

tweakcc v4.0.0 is released!

tweakcc v4 introduces a Node.js API and an adhoc-pack subcommand, allowing anyone to patch their Claude Code install with custom sandboxed scripts. There are also new unpack and repack commands for extracting JS from native installations, plus 11 new patches, including 2 preview feature unlocks.

tweakcc adhoc-patch is particularly powerful. It allows you to perform a string/regex replacement or execute a custom JS script to modify your CC install. It works for both npm and native installs, automatically unpacking the JS before performing the patch and repacking when it's done.

Safe: The scripts are executed using Node.js 20+'s --experimental-permission/--permission mode, where disk and network access are forbidden. That means you can safely run scripts fetched over HTTP without reviewing them, although, of course, they could theoretically inject malicious code into CC itself which could execute the next time you run it. So we use Oxc's beta oxfmt tool (https://github.com/oxc-project/oxc#formatter) to format the 11 MB+ JS before and after the patch and then present a diff of the changes, showing you exactly what changed, all in under 5s.

How it works: The script gets a global variable js which is set to the full contents of CC's JS code. You make your modifications to it and then return js at the end of the script. There's also a vars variable that contains common globals like chalk, React, require, and Ink's Box and Text components (CC uses React + Ink (https://github.com/vadimdemedes/ink) under the hood) in case you want to build in new UIs.

For example, a very simple script to replace "Claude Code" with "My App"—which breaks CC but makes for a good demo—would be:

// patch.js
js = js.replace(/"Claude Code"/g, `"My App"`)
return js

Then just run npx tweakcc@latest --apply --script @patch.js. It's that simple. The video shows this in action.

A very good use case for this: new CC versions will sometimes break features—LSP was broken for a while last month and more recently Claude in Chrome functionality on Windows was broken. Someone usually does the work of diving into CC's minified code, hunting the bug, and writing a bash script to patch it.

But there are lots of inconveniences with that: bash doesn't work on Windows without WSL/Git Bash, different people have CC installed in different places, and of course, the native installation is difficult to patch period, and practically impossible unless you can magically make your old and new replacements exactly the same number of characters, which is usually only possible if the new snippet is smaller than the replaced snippet and you can pad it out with a comment.

tweakcc handles all of that. Provide the script with the actual patching logic and tweakcc finds the CC installation from PATH via heuristics accumulated by 7 months of patching CC for lots of different users. Then it handles patching the native binary on macOS, Windows, and Linux using node-lief, Node.js bindings we developed for LIEF for exactly this purpose.

There are some other useful subcommands like unpack, which extracts the JS from the native binary to a file, and repack, which puts a modified JS extraction back in. tweakcc 4.0 also allows you to apply a tweakcc config from a remote URL with tweakcc --apply --config-url <url>, and finally, you can revert changes made by tweakcc --apply with a new --revert/--restore flag.

r/LocalLLaMA lazybutai

Would this work for AI?

I was browsing for a used mining rig (frame) and stumbled upon this. Now I would like to know if it would work for local models, since it would give me 64 GB of VRAM for €500.

I'm not sure if these even work like PCs. What do you guys think?

AI translated description:

For Sale: Octominer Mining Rig (8 GPUs)

A high-performance, stable mining rig featuring an Octominer motherboard with 8 integrated PCIe 16x slots. This design eliminates the need for risers, significantly reducing hardware failure points and increasing system reliability.

Key Features

  • Plug & Play Ready: Capable of mining almost all GPU-minable coins and tokens.
  • Optimized Cooling: Housed in a specialized server case with high-efficiency 12cm cooling fans.
  • High Efficiency Power: Equipped with a 2000W 80+ Platinum power supply for maximum energy stability.
  • Reliable Hardware: 8GB RAM and a dedicated processor included.

GPU Specifications

  • Quantity: 8x identical cards
  • Model: Manli P104-100 8GB (mining-specific version of the GTX 1080)
  • Power Consumption: 80W–150W per card (depending on the algorithm/coin)

r/singularity BrennusSokol

With anti-AI sentiment at an all time high, Amazon stupidly puts THIS ad out... morons.

r/comfyui Creepy-Ad-6421

LTX-2 Subtitles

r/StableDiffusion Creepy-Ad-6421

LTX-2 Subtitles

Hi everyone,

I’m generating vertical videos with LTX-2 and I keep ending up with random / meaningless subtitles.

If anyone knows how to disable them, I'd really appreciate the help. Thanks in advance!

r/StableDiffusion Mysterious-Base-5847

Best model for image generation with character consistency?

I have an image of a person and I want to place him in a different scene. The image of the person is really, really good: 4K, really clear. But when I place it in a different scene and make it cartoonish, it's not giving good results.

I've tried Nano Banana 3 and OpenAI models.

Do you know a model that is best for this task?

r/SideProject Smooth-Blade7196

Launched my first side project — an AI tool to help you understand the CVEs actually relevant to your stack

Just launched my first side project: Clariseque

It’s an early AI-powered vulnerability assistant that helps developers figure out if a CVE even affects their tech stack, and how to remediate it. It’s meant to rise above the noise of generic security notifications.

I shared the launch on Peerlist to get some early visibility and feedback from the developer community, and now I’m hoping to learn from folks here as well.

It’s launched, it’s early, and it’s free to use for now. I’m looking for feedback and validation before I move forward.

Peerlist launch: https://peerlist.io/amitkout/project/clariseque

Thanks for checking it out

r/SideProject KarlHewitson

I built a free desktop app to manage side projects with built-in AI agent terminals - because I had too many to-do lists and other platforms all used API keys for AI Agents

I use Claude Code every day on the Max plan, and I love how AI means I can build more side projects faster than ever. But the more projects I spun up, the more I realised I had scattered tasks (across Apple Notes, Obsidian, ClickUp boards, etc.), terminal windows (standalone and inside VS Code), and constant context-switching between my backlog and my agents.

So I built something to fix it for myself, a Rust-based desktop app called Concursus.

What it does:

- Manage tasks across multiple projects with sub-channels

- View them as lists, kanban boards, or visual task flows

- Run CLI AI agent terminals directly inside the app - send tasks to Claude Code, Codex, Aider etc. without leaving the window and WITHOUT using API keys

- Add project knowledge packs, notes, reference files

- Uses an open source YAML data schema under the hood — your data stays portable and readable

The whole point was keeping things simple. No cloud dependency, no API keys needed for the agent integration, no bloated setup. Just a fast desktop tool that keeps everything in one place.

I wanted to put it out to the world in case it helps anyone else juggling multiple projects with AI agents. It's free - this isn't a sales pitch, it's just something I built for myself and think is pretty cool.

Download (Apple Silicon only for now): https://concursus.ai

If people want Intel Mac or Windows builds I can sort that in the near future, just let me know.

When I get time I'll put together a quick walkthrough video, but I've tried to make it pretty self explanatory. Keeping it simple is the whole point.

Any feedback or ideas for improvements would be massively appreciated, I'm actively working on it and want to make it genuinely useful.

Full disclosure: I built this. It's free. No warranties or guarantees. Enjoy!

r/StableDiffusion Sensitive-Rice-3270

Trained a Hatsune Miku-style LoRA for music gen — quick test result

  • Prompt:

bright cute synthesized voice, kz livetune style electropop, uplifting and euphoric, shimmering layered synth arpeggios, sparkling pluck synths, four-on-the-floor electronic kick, sidechained synth pads, warm supersaw chords, crisp hi-hats, anthemic and celebratory, polished Ableton-style production, bright and airy mixing, festival concert atmosphere, emotional buildup to euphoric drop, positive energy

  • Lyrics:

[Verse 1]

遠く離れた場所にいても

同じ空を見上げている

言葉が届かなくても

心はもう繋がっている

[Verse 2]

傷ついた日も迷った夜も

一人じゃないと気づいたの

画面の向こうの温もりが

わたしに勇気をくれた

[Pre-Chorus - building energy]

国境も時間も超えて

この歌よ世界に届け

[Chorus - anthemic]

手をつないで歩こう

どんな明日が来ても

手をつないで歌おう

ひとつになれる

WE CAN MAKE IT HAND IN HAND

光の中へ

WE CAN MAKE IT HAND IN HAND

一緒なら怖くない

[Instrumental - brass]

[Verse 3]

涙の数だけ強くなれる

それを教えてくれたのは

名前も顔も知らないけど

ここで出会えた仲間たち

[Pre-Chorus - building energy]

さあ声を合わせよう

世界中に響かせよう

[Chorus - anthemic]

手をつないで歩こう

どんな明日が来ても

手をつないで歌おう

ひとつになれる

WE CAN MAKE IT HAND IN HAND

光の中へ

WE CAN MAKE IT HAND IN HAND

一緒なら怖くない

[Bridge - choir harmonies]

(la la la la la la la)

(la la la la la la la)

一人の声が二人に

二人の声が百に

百の声が世界を変える

[Final Chorus - powerful]

手をつないで歩こう

どこまでも一緒に

手をつないで歌おう

夢は終わらない

WE CAN MAKE IT HAND IN HAND

光の中へ

WE CAN MAKE IT HAND IN HAND

FOREVER HAND IN HAND!

  • Parameters:

vocal_language: ja

bpm: 128

keyscale: Eb Major

duration: 210

inference_steps: 8

seed: 2774509722

guidance_scale: 7

shift: 3

lm_temperature: 0.85

lm_cfg_scale: 2

lm_top_k: 0

lm_top_p: 0.9

r/singularity SMmania

Looks like Kling is not the only one with Motion Transfer

r/AI_Agents SolanaDeFi

It’s been a big week for Agentic AI — here are 10 massive developments you might’ve missed:

  • OpenAI launches an enterprise agent platform
  • Perplexity introduces multi-agent model consensus
  • Claude Code gains self-analysis for workflows

A collection of AI Agent Updates!

1. OpenAI Launches Frontier (Enterprise Agent Platform)

OpenAI unveiled Frontier, a platform to build, deploy, and manage AI coworkers with shared memory, onboarding, permissions, and feedback loops.

This marks OpenAI’s full entry into enterprise-grade agent management.

2. Perplexity Launches Model Council (Multi-Agent System)

Model Council runs prompts through multiple frontier models at once, then synthesizes consensus while flagging disagreements.

Perplexity pushes multi-agent reasoning into everyday research workflows.

3. Claude Code Adds /insights Command

Claude Code can now analyze a month of usage history to summarize projects and suggest workflow improvements.

Coding agents are starting to reflect on how developers actually work.

4. Cloudflare Integrates Agents with Workflows

Cloudflare unified real-time agents with durable workflows, supporting WebSockets and long-running tasks together.

This closes a major gap between reactive and persistent agent systems.

5. Firecrawl Releases v2.8.0 with Parallel Agents

Firecrawl now supports running thousands of agent queries simultaneously with live web context and new Spark models.

Agent-powered web extraction scales to production workloads.

6. Perplexity Upgrades Deep Research with Opus 4.5 + DRACO

Deep Research now runs on Opus 4.5 and introduced the open-source DRACO benchmark across 10 domains.

Perplexity raises the bar for evaluating research agents.

7. ElevenLabs Releases Skills for AI Coding Assistants

New ElevenLabs skills improve how coding agents integrate voice and audio APIs.

Voice-first agent workflows become easier to build.

8. Vercel Agent-Browser Adds iOS Support

Vercel’s agent-browser now runs browser automation on iOS devices.

Self-driving infrastructure expands beyond desktop environments.

9. Microsoft Demonstrates Custom Copilot Agent Creation

Microsoft released guidance on extending and customizing Copilot agents.

Agent creation becomes more accessible to non-experts.

10. Helius Enables Automatic API Keys for AI Agents

Agents can now auto-generate wallets, fund accounts, and receive API keys with no manual setup.

This unlocks true autonomous onboarding for on-chain agents.

That’s a wrap on this week’s Agentic AI news.

Which update stood out to you most?

r/ProgrammerHumor dfwtjms

iKnowSomeOfYouMustBeFumingRightNow

r/comfyui Disastrous-Meeting72

help with comfyui ClipVision model not found

I'm trying to figure out ComfyUI and everything related to neural networks and generation with the help of ChatGPT, but I hit a dead end when the AI keeps giving me the same four dumb generic tips in a loop. Could you tell me what I'm doing wrong here? "IPAdapterUnifiedLoader ClipVision model not found."

https://preview.redd.it/7jj5f1gk6iig1.png?width=2165&format=png&auto=webp&s=e07fb72209d150f7440f34dfafdd7a034b3ec7f3

r/singularity Educational_Grab_473

Seedance 2.0 can do animated fights really well

r/comfyui SnooOnions2625

LTX-2 Full SI2V lipsync video (Local generations) 5th video — full 1080p run (love/hate thoughts + workflow link)

Workflow I used (it's older, and I'm open to any new ones if anyone has good ones to test):

https://github.com/RageCat73/RCWorkflows/blob/main/011426-LTX2-AudioSync-i2v-Ver2.json

Stuff I like: when LTX-2 behaves, the sync is still the best part. Mouth timing can be crazy accurate and it does those little micro-movements (breathing, tiny head motion) that make it feel like an actual performance instead of a puppet.

Stuff that drives me nuts: teeth. This run was the worst teeth-meld / mouth-smear situation I’ve had, especially anywhere that wasn’t a close-up. If you’re not right up in the character’s face, it can look like the model just runs out of “mouth pixels” and you get that melted look. Toward the end I started experimenting with prompts that call out teeth visibility/shape and it kind of helped, but it’s a gamble — sometimes it fixes it, sometimes it gives a big overbite or weird oversized teeth.

Wan2GP: I did try a few shots in Wan2GP again, but the lack of the same kind of controllable knobs made it hard for me to dial anything in. I ended up burning more time than I wanted trying to get the same framing/motion consistency. Distilled actually seems to behave better for me inside Wan2GP, but I wanted to stay clear of distilled for this video because I really don’t like the plastic-face look it can introduce. And distill seems to default to the same face no matter what your start frame is.

Resolution tradeoff (this was the main experiment): I forced this entire video to 1080p for faster generations and fewer out-of-memory problems. 1440p/4k definitely shines for detail (especially mouths/teeth "when it works"), but it’s also where I hit more instability and end up rebooting to fully flush things out when memory gets weird. 1080p let me run longer clips more reliably, but I’m pretty convinced it lowered the overall “crispness” compared to my mixed-res videos — mid and wide shots especially.

Prompt-wise: same conclusion as before. Short, bossy prompts work better. If I start getting too descriptive, it either freezes the shot or does something unhinged with framing. The more I fight the model in text, the more it fights back lol.

Anyway, video #5 is done and out. LTX-2 isn’t perfect, but it’s still getting the job done locally. If anyone has a consistent way to keep teeth stable in mid shots (without drifting identity or going plastic-face), I’d love to hear what you’re doing.

As someone asked previously: all music is generated with Sora, and all songs are distributed through multiple services (Spotify, Apple Music, etc.): https://open.spotify.com/artist/0ZtetT87RRltaBiRvYGzIW

r/LocalLLaMA -pawix

New "Stealth" Model - Aurora Alpha - (Free on OpenRouter)

New cloaked reasoning model dropped on OpenRouter for $0/M tokens

r/LocalLLaMA Potential_Block4598

Free Strix Halo performance!

TL;DR: not all quants are created equal. Some quants have bf16 tensors, which don't seem to work well on AMD, so find quants without bf16 tensors and you can gain anywhere between 50% and 100% performance on both tgs and pp.

Long detailed version

I was playing around with different models on my new Strix Halo PC.

I have multiple quantized versions of Qwen3-Coder-Next (I absolutely love this model):

two from Unsloth, two from LM Studio, and one from Qwen's Hugging Face GGUF model page.

When loading them I noticed bf16 in some tensors, and I know that KV quantization to bf16 isn't good on the Halo (in fact, it doesn't seem good at all!).

So I checked them: the Unsloth versions have bf16 in them, and so do the LM Studio versions.

But weirdly enough, Qwen's own GGUF quants have no bf16. I fired them up and voila, they are much, much faster.

It feels like a superpower, and also something not well handled in the community. I love bf16, but it doesn't work well at all on AMD (I don't know why it's being converted to F32 for emulation; that's a waste of everything, especially if it gets converted every time. Weird fallback behavior, anyways).

And I wish I could have known this before downloading a whole quant. I have most of my GGUFs from LM Studio and Unsloth; if I do this for every other model I might get a lot better performance! That's great, but I also feel bad that all of those hours were wasted before. Anyways, sharing so the community can be spared this kind of waste.

(How to know if a quant has bf16: load it with llama.cpp and it will print the tensor type counts early on, even before loading finishes; scroll up and you will see how many q4, q8, f32, f16, and bf16 tensors it has.)
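
If you want to check a GGUF you already have on disk without firing up llama.cpp, the gguf Python package (maintained in the llama.cpp repo) should be able to read the tensor table directly. This is only a sketch; GGUFReader and its tensor_type field are my recollection of that API, and the filename is made up:

from collections import Counter
from gguf import GGUFReader  # pip install gguf

def tensor_type_counts(path: str) -> Counter:
    # Count how many tensors of each quantization type the GGUF file contains
    reader = GGUFReader(path)
    return Counter(t.tensor_type.name for t in reader.tensors)

counts = tensor_type_counts("Qwen3-Coder-Next-Q4_K_M.gguf")  # hypothetical filename
print(counts)  # e.g. Counter({'Q4_K': 310, 'F32': 121, 'BF16': 2})
if counts.get("BF16"):
    print("contains bf16 tensors, likely slower on Strix Halo")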

Good luck out there!

(I can't wait to find a good REAP of MiniMax M2.1 with Intel rounding that DOESN'T have bf16 in it! It seems like the best model I can get, and at double the current numbers it would be usable: 20-30 tgs?! and around 100 pp, give or take. A thinking model that also does parallel tool calling with interleaved thinking, what else could I ask for?!)

So cheers!

r/homeassistant wavedash

Roller Shade recommendations for use with Zemismart Motors?

I'm looking to buy some new roller shades and install Zemismart motors in them. I'd like to get roller shades with a cassette/valance/top cover thing, but it's hard to tell how they attach to the brackets the shades use. Zemismart motors come with their own brackets, so I'm guessing there would likely be compatibility problems. I'd rather not do something like directly screw the cover into the wood.

How have other people dealt with this problem?

r/homeassistant Same-Pie-9757

Thoughts On Full Zigbee Transfers

Hey all

Long story short: most of my smart home is Aqara devices, currently paired back to either an M2 or M100 hub and exposed to Home Assistant via Matter.

I’m done with Thread (and I’ve lost faith in Nanoleaf + some flaky Onvis plugs), so I want to move to Zigbee plugs that pair directly to HA.

My rack is in the loft of a well-insulated 2020 UK bungalow — so RF through ceilings is a concern. Thinking about:

• Buying a SMLIGHT SLZB-06MU PoE Zigbee coordinator into my switch

• Migrating Aqara devices off the Aqara hubs into HA (Zigbee2MQTT/ZHA)

• Leaving the coordinator in the loft (PoE makes that easy)

Is that sensible or am I asking for coverage problems? If I keep the coordinator in the loft, what mains-powered repeaters/plugs do people recommend to act as routers (something as solid as an M100 hub)? Any gotchas with moving Aqara off their hubs (OTA concerns noted) or channel/power tips appreciated.

r/homeassistant Heavy-Panda-3724

My Home Assistant server was lonely, so I designed this 3D-printed tissue box for those 'Error 404: Tissue Not Found' moments.

r/midjourney ThinkingWisely

How was this kind of AI Image/video even made?

Hi everyone, I came across a video on Instagram that has me genuinely confused—in a good way. The level of consistency, control, and realism in the animation is far beyond anything I’ve been able to achieve with the AI models I’ve used so far.

What really throws me off is how the animals in the video were generated. Getting a model to create something unusual or very specific has been pretty challenging for me, so I’m wondering what tools or workflow could produce results this polished.

Does anyone here have any ideas about which image or video generation models might have been used? Midjourney with upscaling? Is there any reference prompt for similar stuff? Or what might the general process look like to achieve this kind of accuracy and control?

The videos/images are posted on an Instagram account that I have shared below, and I'm just trying to understand how they might have been made. Any insight would be super appreciated!

https://preview.redd.it/j1xymf4izhig1.jpg?width=945&format=pjpg&auto=webp&s=ffa02848ad1538738bf1e013f21bcfb843dd807d

https://preview.redd.it/uxi0ye4izhig1.jpg?width=945&format=pjpg&auto=webp&s=d179cb17f8f1e7bd567013969479b0191061999f

https://preview.redd.it/q7or9e4izhig1.jpg?width=945&format=pjpg&auto=webp&s=73b81a7baaf82219306a5591e519c6f3d4571e03

https://preview.redd.it/iiphde4izhig1.jpg?width=945&format=pjpg&auto=webp&s=0bc5a0f84faa339f7bc9e7caffe23b89e387de4d

https://preview.redd.it/kk870f4izhig1.jpg?width=945&format=pjpg&auto=webp&s=d3c29d57244ccca15f21928852e16433ba78ef8b

Instagram Account

r/Futurology lughnasadh

Gas turbines & Nuclear that can't be delivered until the 2030s, banning wind power & data centers in space; Will American AI's refusal to embrace solar+batteries mean high electricity prices for consumers?

One of the conundrums of mid-2020s US AI is its urgent need for electricity, and its seeming refusal to pursue the obvious path towards achieving this. China won't have this problem. It's installing solar & batteries at the rate of several nuclear power stations a month.

US Big Tech seems to be doing everything it can to avoid the obvious. It supports a President who is doing their best to ban wind power. Meta has signed a deal to power its AI with new nuclear. Good luck with that, Meta, if past performance is any guide, you still won't have it in 2040. xAI is looking at gas turbines. The problem there? The waiting list for new turbines stretches to the 2030s. Never fear. It will just spend orders of magnitude more than China does with solar+batteries to put data centers in space.

What's the problem with embracing solar+batteries? The AI firms are slated to spend $660 billion in 2026 alone. They could replicate a huge chunk of China's solar manufacturing capacity with some of that. There are plenty of home-grown grid storage startups with batteries, too.

The inevitable conclusion? Consumers will subsidize their mistakes with higher electricity prices as they use up more and more of the existing grid's capacity, as none of their decisions with gas, nuclear or data centers in space work out.

r/AI_Agents Tobloo2

Do you use more than one AI chatbot? If yes, what do you use each one for?

I’m trying to understand people’s setups to see if I could improve mine. Mine looks like this:

  • ChatGPT (paid subscription): general tasks
  • Gemini (free): creative brainstorming (brand vibe / identity ideas)
  • Perplexity (free): quick web searches when I don’t know what to google
  • Claude (paid subscription): coding help

I'd love to know, which chatbot do you prefer for which tasks?

Do you pay for multiple tools, or do you pay for one and use the rest on free tiers?

r/AI_Agents llamacoded

Litellm overhead becoming noticeable at 2k RPS - how do you handle this?

Running inference around 2,000 requests per second. Added a gateway for provider abstraction and it's adding 30-40ms latency per request.

We're using this for real-time ML serving where every millisecond compounds. 40ms gateway + 200ms model inference = users start noticing lag. Tried the usual optimizations - async, connection pooling, multiple workers. Helped but didn't solve it. The issue seems to be Python's concurrency model at this scale.

Looked at alternatives: custom Nginx setup (too much manual config), Portkey (seems enterprise-focused and pricey). We ended up trying Bifrost (Go-based and Open source). Latency dropped to sub-100 microseconds overhead. Still early but performance is solid.

Has anyone scaled Python-based gateways past 2k RPS without hitting this wall? Or did you end up switching runtimes?
What are high-throughput shops using for LLM routing?

r/arduino Reasonable_Run_5529

esp32 WebServer library POST request with body

I am trying to implement a simple API, but after an hour of searching I don't seem to be able to find a reliable example of how to handle a POST request with WebServer. Any help and/or suggestions will be more than welcome!

curl -X POST http://192.168.4.1/actor \
          -H "Content-Type: application/json" \
          -d "{\"name\":\"John\",\"lastname\":\"Travolta\"}"

... my sketch

#include <WebServer.h>

...

server.on("/actor", HTTP_POST, []() {
  // ... how am I supposed to read the body in here?
});

r/arduino reddrimss

What happens if I run an Arduino on 4.5V, 3.6V or 7.2V?

I was looking for a way to power my LCD project and I was wondering if the Arduino components are really sensitive to tiny variations in voltage. I found some 1.5V AA lithium batteries with a built-in BMS and a USB-C port, ready to be used: https://www.amazon.fr/ENEGON-AA-Rechargeables-Constante-Intelligentes/dp/B0G13ZDD6K/ref=pd_ci_mcx_mh_mcx_views_0_title?pd_rd_w=M6o3r&content-id=amzn1.sym.e1744f29-8ae7-49f6-9859-46cbe9e6a02e%3Aamzn1.symc.30e3dbb4-8dd8-4bad-b7a1-a45bcdbc49b8&pf_rd_p=e1744f29-8ae7-49f6-9859-46cbe9e6a02e&pf_rd_r=RD8WH49RGTWE6B6XXFAX&pd_rd_wg=nhlTl&pd_rd_r=ed0d86b0-b04f-4359-bf55-e7b76613d47b&pd_rd_i=B0G13ZDD6K
I also found some non-rechargeable batteries with 3.6V.
Can this harm the components?

r/arduino llo7d

A mini desk robot is slowly coming together

I recently switched from an LCD to a more polished and cooler AMOLED display, and the animations are looking so much better. It's hard to tell in the video, but the blacks are pure black and it looks so nice compared to an LCD.

The second part is just me starting it from the software (just an app that sends an HTTP request).

r/midjourney Euratza2052

Light maintenance

r/artificial CortexVortex1

What's the enterprise approach to AI agent security? OpenClaw is amazing but unusable without proper controls

I'm super excited about OpenClaw's capabilities but honestly terrified after reading about all these security issues.

Found posts about 17,903 exposed instances, API keys stored in plain text, deleted creds saved in .bak files, and that CVE-2026-25253 Slack exploit. Someone even found a reverse shell backdoor in the 'better-polymarket' skill.

How are you all securing your OpenClaw deployments? Need solutions for runtime guardrails and policy enforcement. Can't ship agent features if they're this vulnerable.

r/artificial PkmnSnapperJJ

I was looking for a new home and was suddenly shut down by a coercive and threatening AI. WTH Meta???

I'm not looking for help. And I hope this post is not taken down the same way my WhatsApp account was. This is a discussion about customer service and how AI is out of control. Meta AI was actually rude to me and refused to even tell me how to contact a human to solve my issues. On the suspicion that my account was spamming, the automated system just shut it down without even giving me an option to appeal the case. So in short, I cannot message 50 apartments asking about their price while looking for a new home, because Meta AI will think I'm spamming and will shut down my account. And when I try to appeal the case it says there's nothing to be done, and when I try to find a way to speak to a human it says there's no need and AI is all I need to solve my problems... Really... What the hell, Meta?

r/VEO3 PuddingConscious9166

VEO in the EU won’t accept reference images of kids? any workarounds?

I’m trying to make a short video for a client showing two kids chatting about automotive financing. I’m using VEO and started with the frame-to-video workflow, but because I’m in the EU it won’t accept any reference images that include children (I think) even though the concept is totally non-sensitive and just illustrative. The only way I’ve managed to get anything usable so far is by extending a text-to-video generation, which sometimes gets me ~20 seconds, but it’s pretty hit-and-miss. Any ideas? thanks!

r/Damnthatsinteresting Fun-Raisin2575

It's hard to believe this isn't AI. This video was taken by Mike Mikhailyuk in 2023 in the world's coldest city, Yakutsk, when it was -56°C.

r/n8n molehill_io

The Complete n8n Pricing Guide: Cloud vs Self-Hosted Cost Breakdown (2026)

I've been meaning to put together an article about how n8n's pricing works and how the different tiers differ in terms of features and who they are for. Let me know if you find it useful.

r/aivideo Jay_Jay_Q

Mad Max — Reimagined with AI | Midjourney · Nanobanana · Grok · Kling

r/midjourney Scary-Demand7252

Dark fantasy mockup

r/painting ArtAni20

I am 13 years old. This is my painting (not AI). The third photo shows the drawing process.

r/nextfuckinglevel Tascanis

Sneakers with a built in garage

r/aivideo Frosty-Program-1904

Backwards Drive - Kidokoro

r/aivideo maybeegreen

Magical Feather in winter evening

r/funny 21MayDay21

We call him Jean-Claude Nut Crack.

r/programming Economy-Reserve-4183

npm audit ai tool

**Experimenting with AI + GitHub Advisory/OSV.dev for npm security**

I've been combining GitHub Advisory Database + OSV.dev with Groq AI reasoning for smarter npm vulnerability analysis.

**What it does:**

- Send `pkg@version` → Get CVEs + AI exploit scenarios

- Code impact analysis + exact remediation versions

- Metadata (downloads, maintainers, GitHub repo)

**Example:** `lodash@4.17.15` → "MEDIUM: Prototype pollution fixed ≥4.17.21"

https://github.com/ziadasr/npm-ai-auditor

**Questions for the community:**

  1. Anyone combining AI with npm auditing? What models/tools?

  2. How do you prioritize npm security risks in your workflow?

  3. What would make you use npm audit instead of a new tool if you are still at the beginning of your project?

And I'm curious about your opinions too.

Curious about your npm security stacks!

r/Damnthatsinteresting grasshopper3307

A stingray in home

r/FluxAI Effective-Caregiver8

AI-generated insects in flight. Generated with Fiddlart (Model: Flux 2)

r/SipsTea lesfleurr

teacher was the problem😂

r/HumansBeingBros FollowingOdd896

When one friend got cancer, the whole squad showed up in the most powerful way.

r/programming Economy-Reserve-4183

A different type of npm auditor with an AI layer on top

🚀 Next-Level npm Security: AI-Powered Auditing

I built a project that takes npm auditing to the next level by combining multiple tools—including GitHub Advisory and OSV.dev—with a top AI layer, Groq.

Unlike traditional auditors, this tool doesn’t just list vulnerabilities. You send a package name + version, and it instantly provides:

- Detailed CVE analysis – how vulnerabilities could be exploited and harm your code
- Actionable recommendations – prioritize fixes smartly
- Full metadata insights – downloads, maintainers, publisher info, GitHub repo
- AI-powered summarization – understand risks faster than ever

Check it out here: [https://github.com/ziadasr/npm-ai-auditor]

💬 Would love to hear more from the community

r/meme Agreeable_Dingo_128

2026 is the AI year

r/programming According-Profile243

Sending a raw git diff to an LLM produces terrible commit messages, here's what it actually takes to make it work

I spent a while trying to get LLMs to write decent commit messages and I was surprised by how many things go wrong with the obvious approach of "just send the diff to the model".

The diff is almost always too big. A single package-lock.json update can be 7000+ lines. Even medium PRs easily blow past context limits. The fix that actually worked: filter noise out before it reaches Node.js, using git's own :(exclude) pathspecs directly in the git diff command. Then chunk what's left per-file, prioritize source code over config/test files, and hard-cap at ~2000 estimated tokens. For initial commits with dozens of new files, sending just the file list works better than sending the actual diff; the model doesn't need 3,000 lines to write "initial project setup".
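
To make the filtering step concrete, here's a rough Python sketch of the same idea (the tool itself is Node.js; the exclude list, the staged-diff assumption, and the 4-characters-per-token estimate are my own illustrative choices, and the real tool chunks per file instead of truncating):

import subprocess

# Illustrative noise filters; a real exclude list would be longer and configurable
EXCLUDE = ["package-lock.json", "yarn.lock", "*.min.js", "dist/**"]

def filtered_diff(max_tokens: int = 2000) -> str:
    # git's :(exclude) pathspecs drop noisy files before the diff reaches the tool
    pathspecs = ["--", "."] + [f":(exclude){p}" for p in EXCLUDE]
    diff = subprocess.run(
        ["git", "diff", "--cached", *pathspecs],
        capture_output=True, text=True, check=True,
    ).stdout
    # crude token estimate (~4 characters per token)
    if len(diff) // 4 > max_tokens:
        diff = diff[: max_tokens * 4]
    return diff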

Generic messages are useless. "Update files", "Implement changes": every AI commit tool I tried produced these. The problem is the model has no idea how you write commits. What actually helped: analyze the last 50 commits in the repo and inject a style guide into the prompt. Does the team use conventional commits? Which scopes? English or mixed language? Emoji? Ticket references? Once the model sees that context, the output actually matches the rest of the git log.

Branch names carry information. If you're on feature/PROJ-456-add-oauth, there's a ticket reference sitting right there. Extracting it with a configurable regex and feeding it to the prompt was a small thing that made a surprisingly big difference in output quality.
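
A minimal sketch of that extraction, assuming a Jira-style ticket pattern (the post says the actual regex is configurable):

import re
import subprocess

# Matches ticket IDs like PROJ-456 in branch names such as feature/PROJ-456-add-oauth
TICKET_RE = re.compile(r"[A-Z][A-Z0-9]+-\d+")

def ticket_from_branch() -> str | None:
    branch = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    match = TICKET_RE.search(branch)
    return match.group(0) if match else None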

The "just review it" UX matters more than you'd think. Streaming the response token-by-token and then letting the user accept with a single keypress (raw mode stdin, no Enter) made the difference between "this is annoying" and "this actually fits my flow". Also added a git hook option (prepare-commit-msg) so you can just use git commit normally and the message is pre-filled but it never blocks the commit if something fails.

I ended up packaging all of this into a CLI tool called ghostcommit. It's open source, works with 5 providers (Groq and Gemini are free, Ollama runs locally), and has 224 tests. But the engineering problems above are what I found interesting regardless of the specific tool.

Curious if anyone else has run into similar issues when piping code context to LLMs — especially around token budgets and making the output feel "native" rather than generic.

r/nextfuckinglevel Normie-rediter

A guy finds a lost phone and holds onto it instead of walking away The owner finally shows up , everyone watches in silence , Face ID unlocks , and the crowd erupts in cheers

r/oddlyterrifying KimJongUnBalls

Im i tripping or my son's book looks creepy

r/n8n paulchirwa

Finally built my first complex automation with n8n

Hey guys. I'm a CS student who transitioned to building n8n automations a few weeks ago. After building a few automations for a week or two, I started on my biggest project yet a week ago. After 30+ hours of building, testing and debugging I've finally built my first advanced working automation: an AI-Powered Missed Call & SMS Lead Engagement Automation.

A webhook triggers on missed/failed/busy calls and incoming SMS, engages the lead, assists them with anything they'd need to know about the business, and potentially schedules them for a call.

I've tried and tested it over 500 times, and it's spot on with an accuracy of 98%.

I just thought I'd share my first big success using n8n with other n8n users.

r/funny Civil_R0se

Neighborhood store got jokes

r/Unexpected Satinblis

A little too drunk, buddy?

r/Weird Skychu768

ISIS fighter describes what he did to religious minorities

r/n8n JonyBadoni

My n8n investment automation was right... and I still managed to f*ck it up.

So... 3 months have passed since my original post here.

A lot of people were skeptical and roasted this automation, as they should. It looks messy and sounds like snake oil.

But now that some time has passed, here’s what it actually did:

What it got wrong:

  • The automation was convinced the bull run would continue when price found support at 102k. None of the top indicators fired: no altseason, no euphoria. (Only the 4-year cycle marked the top; I might have to give it a higher weighting in the future).

What it got right:

  • It assumed we were in a prolonged bear market at 95k, sold at that level, and targeted 60k as the likely floor. For risk management, it’s been buying a little bit on every major flush and selling small amounts on every bounce on the way down to de-risk.

Since I’m a positive and naive person, I was still hopeful this was just a brief dip and that the bull run would resume... and I reacted too slow.

So, I’ve now turned this automation into a live market chat agent that uses the same "brains," basically so it can talk some sense into me.

I made the agent free and open to anyone here: https://hunchmachine.com/crypto-market-ai-agent/

Use it to research and explore the market under the same backtested framework.

Try it out and please, roast it.

r/SipsTea South-Buffalo908

I live for the drama

r/Damnthatsinteresting mohamed_Elngar21

Well-known tracks' noise, usually caused by loose railway track joints and floating sleepers.

r/funny Mr-Night-Owl

Bad Bunny traveled 124 yards with the football during his halftime show, beating the Patriots' 79 total rushing yards in the game.

r/todayilearned Forward-Answer-4407

TIL Dr. Karl Kruszelnicki designed an experiment after a nurse asked if her farts contaminated an operating theatre. He found that while gas from a subject with his pants down caused bacteria to sprout on Petri dishes, it did not when he was fully clothed, suggesting that clothing acts as a filter.

r/PandR DanielCallaghan5379

In the card aisle at Target

r/conan SlippingAway

Conan’s portrait by Kevin Nealon

Kevin Nealon shared it on Instagram.

r/Showerthoughts MusicPsychFitness

One day in the future, “Do you smoke?” will mean marijuana by default, not nicotine.

r/SweatyPalms New_Libran

Woman escapes carjacking attempt

In Chile

r/ATBGE swtogirl

3D printed head purse

r/blackmagicfuckery justalildropofpoison

Ok ...wtf? How?

r/mildlyinteresting TooOld2DieYoung

The security code to my gift card is 1234

r/interesting jmike1256

More than 500 private jets departed the Bay Area immediately after the Super Bowl ended.

r/WinStupidPrizes a1oner_bvcksn6

That'll teach her to pick on someone her size

r/Showerthoughts suprmario

American Football is essentially an IRL turn-based strategy game.

r/Jokes CuriousEngineer11

Three completely wasted friends get into a taxi...

The driver realizes how drunk they are and decides to mess with them.

He starts the engine, immediately turns it off, and says,

“Alright guys, we’re here.”

The first guy pays him.

The second guy even says, “Thanks!”

The third guy suddenly slaps the driver hard across the face.

“What the hell was that for?” the driver asks, thinking the guy had figured it out!

“That's for driving so fast... you almost killed us!”

r/BrandNewSentence Dripping_Wet_Owl

cutting a glory hole in the fourth wall

r/DunderMifflin vynepa

Does your karaoke machine have country songs?

r/Art NataliaKvietok

Ocean Pulse, Bane art, oils, 2026

r/interestingasfuck Candle-Jolly

OF model brings enthusiasm to a Polymarket Super Bowl cam

r/VEO3 SlammmPig

Odd generation quirk

I've been having a heck of a time trying to figure out a workaround for what I can only assume is a very niche quirk, and I'd like to see if anyone else has a way to get past this. Specifically, I'm attempting to generate an Asian elephant inside an African environment. This has proven to be a challenge of near-insurmountable proportions. Even using JSON, ingredients-to-video with the specific elephant, and a starting frame of the elephant in the environment, when all is said and done the result is an African elephant. Does anyone have any guidance?

Currently I'm able to generate the wildlife on a green backing, then composite it on top of a landscape, but it just doesn't have the level of visual fidelity I'd like.

r/interesting Golden_Phoenix1986

A man guards his family from the cannibals during the Madras famine of 1877 at the time of British Raj, India

The Great Famine of 1876–1878 was a famine in India under British Crown rule. It began in 1876 after an intense drought resulted in crop failure in the Deccan Plateau. It affected south and Southwestern India - the British-administered presidencies of Madras and Bombay, and the princely states of Mysore and Hyderabad, for a period of two years. In 1877, famine came to affect regions northward, including parts of the Central Provinces and the North-Western Provinces, and a small area in Punjab. The famine ultimately affected an area of 670,000 square kilometres (257,000 sq mi) and caused distress to a population totalling 58,500,000. The excess mortality in the famine has been estimated in a range whose low end is 5.6 million human fatalities, high end 9.6 million fatalities, and a careful modern demographic estimate 8.2 million fatalities.

r/mildlyinteresting waowediting

The spot on my dog's paw pad has grown since October.

r/DunderMifflin Lost-Hovercraft-6446

Erin girl what 😭

I love her parts; this one threw me off though 😂

r/OldSchoolCool Global_Law4448

The famous Bob Hope in the 1940s on this beautiful BSA motorcycle.

r/Art venus_de_neko

Aries Moon, Venus de Neko, Mixed Media Analog Collage, 2026

r/explainlikeimfive Trogdor_98

ELI5 how does USB transfer data?

A USB connection (2.0) has four pins. Two are power, leaving two for data. My question is: how can complex data and commands be communicated over just two lines?

r/Unexpected Sea_Slice_7956

The bikers are kind enough to stop and help him 🤗

r/OldSchoolCool MiamiHub1

A photograph of former Kaiser Wilhelm II with his dog, taken in the Netherlands (1940).

r/OldSchoolCool Flimsy-Impression792

A Parisian woman with her cat in her cannabis garden 1910

r/SipsTea RobynNeonGal

So cool.....

r/ARAM Derpina_714

TIL Karthus Can Spam ults with Ultimate Revolution In Mayhem

I don't know if this is well known, but I had Ultimate Revolution and ulted once while I was alive, and when I died I ulted during my passive. But since I had already died, Ultimate Revolution went off cooldown, and when I respawned I was able to ult again.

I don't think this was intentional, but I ended up pretty much ulting twice every minute. If I rolled Clown College and got the reduced death timer set bonus, I bet I would just be bombing the Howling Abyss.

r/ClaudeAI optimus_dag

Does anyone know a way to get this information programmatically outside of Claude?

I want to be able to monitor the usage with a tool, but I can't find a way to query this info.

r/SideProject Some-Rub9614

I built an AI career platform with 367 features that covers everything from college essays to salary negotiation — free to try

I was spending 30+ minutes per cover letter and realized the whole career prep process is broken. So I built PathwiseAI (https://www.pathwiseai.io/), a platform with six AI studios that handles everything.

Writing Studio — Type "Google Software Engineer" and the AI automatically searches the web for the real job posting, then writes a personalized cover letter from your resume. 30 seconds, no copy-pasting. Also generates college essays, scholarship essays, and "Why This College" supplements with school-specific research.

Resume Lab — Scores your resume 0–100 across five categories (Content Quality, ATS Compatibility, Structure, Relevance, Impact) and tells you exactly what to fix. Re-score after changes to watch your number climb. Also has resume builders and a career change translator.

Email Center — 38+ templates for everything: recommendation requests, networking outreach, follow-ups, thank you notes, informational interviews. All personalized from your profile.

Interview Prep — STAR story builder, company deep dives, behavioral question practice.

Salary & Offers — This is where it gets interesting. The AI writes your salary counter-offer emails with market research, compares multiple offers side-by-side, and even handles severance negotiation. No competitor does this.

LinkedIn Studio — Generates optimized headlines, summaries, and experience sections.

Plus application trackers, scholarship finders, a deadline calendar, and AI detection scoring on everything so your content reads as human.

The whole platform adapts to where you are in life — high school, college, or career. 367 features total.

Free to try at pathwiseai.io — 5 credits/month on the free tier, and trackers/calendar/analytics are always free. Would love feedback from actual job seekers and students.

r/SideProject archaeal

I built a thing that pits your headlines against AI variations and lets strangers pick the winner

Hey all... I launched this little thing over the weekend and figured I'd share to see what y'all think...

CopyBattle: you submit a headline (or really any copy), AI generates 7 variations, then real people vote head-to-head until there's a statistically confident winner. Uses Thompson Sampling, one of my favorite bits of math ;-)
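
(For anyone wondering what the Thompson Sampling part looks like in practice, here's a tiny illustrative Python sketch, not the site's actual code: each variant keeps a Beta posterior over its win rate, and each round you pit the two variants whose sampled win rates come out highest.)

import random

class Variant:
    def __init__(self, text: str):
        self.text = text
        self.wins = 0
        self.losses = 0

    def sample_win_rate(self) -> float:
        # Thompson Sampling: draw a plausible win rate from Beta(wins+1, losses+1)
        return random.betavariate(self.wins + 1, self.losses + 1)

def next_matchup(variants: list[Variant]) -> tuple[Variant, Variant]:
    # Pit the two variants whose sampled win rates come out highest this round
    ranked = sorted(variants, key=lambda v: v.sample_win_rate(), reverse=True)
    return ranked[0], ranked[1]

def record_vote(winner: Variant, loser: Variant) -> None:
    winner.wins += 1
    loser.losses += 1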

Basically I was tired of staring at my own website copy going "is this any good?". Maybe this will prove useful, or at least just kinda fun?

It's free, it's live, it might be buggy. Would love some honest feedback. Or just have fun voting and/or creating your own battles!

https://copybattle.com

r/SideProject Objective_Ad1000

I kept getting auto-rejected from Gulf jobs despite years of experience, so I built an AI-powered CV optimizer for the Middle East market

Hey r/SideProject!

 The problem I ran into:

If you've ever applied for jobs in Dubai, Saudi Arabia, or Qatar, you know the frustration. I was sending out dozens of applications and hearing nothing back. Turns out, Gulf employers use ATS (Applicant Tracking Systems) that auto-reject around 75% of applications before a human ever sees them.

But here's the thing nobody tells you — Gulf resumes are fundamentally different from Western ones. Employers in the UAE expect a professional photo. Saudi employers want your nationality and marital status. Everyone wants your visa status upfront. Generic resume builders like Resume.io or Zety have zero awareness of any of this.

I'd spend hours optimizing my resume with Western tools, only to find out I was missing fields that Gulf recruiters consider mandatory.

What I built:

https://menajobs.me — an AI-powered resume optimizer built specifically for the GCC job market (UAE, Saudi Arabia, Qatar, Kuwait, Bahrain, Oman).

Here's what it does:

- 60-second ATS scan — Upload your resume and get scored against the 5 major ATS systems used in the Gulf (Workday, Taleo, SAP SuccessFactors, Oracle HCM, iCIMS)

- GCC-specific formatting — Knows about visa status fields, photo placement, NOC availability, Arabic proficiency, and country-specific requirements

- AI bullet improvements — Rewrites weak bullet points using the CAR framework (Challenge-Action-Result) with actual metrics

- Job description tailoring — Paste a job posting and get your resume optimized specifically for that role

 - 16+ templates — Including Gulf-specific ones with visa/photo sections built in

Pricing: Free tier gives you a high-level ATS score. Full analysis is $2.99 one-time. Complete package with templates is $4.99 one-time. (Professional resume writers charge $100-500 for the same thing.)

Results so far:

- 1,000+ resumes optimized

- Users report 3x more interview callbacks

- Average ATS score improvement: 34 → 82 (+48 points)

What's next:

Working on job URL scraping (paste a LinkedIn job link instead of copy-pasting the description), bulk JD tailoring, and skills gap analysis.

I'd love feedback from this community. What would make this more useful? What am I missing?

r/singularity Tkins

World Laureates Summit: AI Science Forum — Can AI Discover Anything?

A question for those who say AI is just hype from CEOs trying to make bank: what incentive do these laureates have for their positive outlook on the utility of AI?

r/comfyui ReserveOutside1569

Qwen Image Edit in ComfyUI – Models installed but workflow says “Missing Models” (text encoder not detected)

Hi everyone,

I’m trying to switch from an SDXL + IPAdapter workflow to a Qwen Image Edit workflow in ComfyUI, but I’m running into a model detection issue that I can’t solve.

I’ve already spent a lot of time on this, including trying to debug it with ChatGPT, but I still can’t get the workflow to recognize the installed models correctly.


My goal

  • Use Qwen Image Edit in ComfyUI
  • Load a template workflow
  • Edit a reference image to generate variations of the same persona
  • Build a dataset for LoRA training

The problem

When I load the Qwen workflow, I get a “Missing Models” error, even though the models are clearly installed.

The error shows:

Missing Models

vae / qwen_image_vae.safetensors
diffusion_models / qwen_image_edit_fp8_e4m3fn.safetensors
text_encoders / qwen_2.5_vl_7b_fp8_scaled.safetensors
loras / Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors

But in the ComfyUI model manager, all of them appear as installed.


What I found

Inside my folders, the text encoder is located here:

ComfyUI/models/text_encoders/qwen/qwen_2.5_vl_7b_fp8_scaled.safetensors

But the workflow seems to expect:

ComfyUI/models/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors

So the file is inside a subfolder ("/qwen/") instead of directly inside "text_encoders/".

I suspect this is why ComfyUI says the model is missing.


My current folder structure

ComfyUI/
└── models/
    ├── text_encoders/
    │   └── qwen/
    │       └── qwen_2.5_vl_7b_fp8_scaled.safetensors
    ├── diffusion_models/
    ├── vae/
    └── loras/


My questions

  1. Does Qwen require the models to be placed directly in the main folders, not inside subfolders?
  2. Is the correct structure supposed to be:

models/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors
models/diffusion_models/qwen_image_edit_fp8_e4m3fn.safetensors
models/vae/qwen_image_vae.safetensors
models/loras/Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors

  3. Is there a recommended minimal Qwen workflow for persona editing in ComfyUI?

Context

  • Running ComfyUI on RunPod
  • RTX 4090
  • Using the official Qwen template workflow

I’m mainly trying to generate a consistent persona dataset, and Qwen was recommended as a simpler alternative to IPAdapter FaceID.

Any help or confirmation on the correct folder structure would be really appreciated. Thanks!

r/ClaudeAI chigogotgb

What are some good use cases for Claude agent teams?

I get the idea of “agent teams” as multiple agents working on different tasks in parallel.

But in an end-to-end software project, a lot of work is dependent and sequential (e.g., UI design → API/frontend-backend contract → implementation). Because of this time/order dependency, it feels like you can’t fully parallelize the work, so an agent team might not be utilized effectively.

What are the best real-world use cases where agent teams do provide big gains? Any examples where multiple agents are clearly better than one?

r/ClaudeAI dsfrsiojgifjlrmlgmsg

Play SimCity over MCP with Claude

r/ClaudeAI bystanderInnen

something about AI coding feels kinda backwards lately

i keep noticing this thing and im not even sure how to phrase it cleanly, but it keeps happening so here we go.

some of the best devs i know just dont vibe with AI tools. like actual smart people, years of experience, can reason through complex systems in their head. they try LLMs for a bit and then go nah this is trash, slows me down, cant trust it.

and then there are other people, sometimes way more chaotic thinkers, who somehow get useful stuff out of it almost immediately.

that felt wrong to me at first.

the more i watch it the more i think using AI for coding isnt really coding. its more like babysitting something that sounds confident and forgets half the rules unless you keep reminding it.

if you expect it to just do the right thing you will hate it. if you assume its wrong by default and force it to explain itself, verify stuff, try again, it suddenly becomes less useless.

i think a lot of experienced devs keep tons of stuff in their head. unwritten rules, context, stuff you just know about the codebase. with humans that works fine. you dont need to spell out every assumption.

with an AI, if you dont say it, it doesnt exist. it will fill in the gaps and do it very confidently. then you look at the output and go why is this thing so dumb, but really it never knew the constraints you assumed were obvious.

also trust is weird. when the output looks clean you relax. you stop checking as hard. it feels like youre moving fast even when youre actually not. i catch myself doing this all the time.

the people who seem to do better are often the ones who just throw thoughts at it. like dont touch this file, check edge cases, now try to break it, explain why this might be wrong, ok try again but slower. its messy but it works.

maybe thats the creativity part. not creative code, but creative supervision. being able to look at the same thing from different angles and poke holes in it without getting annoyed.

so yeah i dont really have a clean conclusion. it just feels like AI rewards people who externalize their thinking and constantly second guess, and it kind of punishes people who are used to holding everything in their head and moving fast.

curious if anyone else has felt this or if im just spiraling.

r/singularity bladerskb

Seedance 2.0 Generates Realistic 1v1 Basketball Against Lebron Video

Just a couple months ago these models couldn't handle acrobatic physics. Insane. No floatiness, accurate physics, incredible body stability and contortion, realistic cloth simulation.

We are COOKED!

r/comfyui Positive-Table-6810

Workflow/model for NSFW inpainting needed for the rig

I am trying to get a workflow for my setup running in WSL

  • AMD 7900xtx (24GB Vram)
  • 32GB RAM

If someone has any links or workflows that could help me with the setup, please share.

r/LocalLLaMA jacek2023

LLaDA2.1-flash (103B) and LLaDA2.1-mini (16B)

r/comfyui HumungreousNobolatis

Install ComfyUI from scratch after upgrading to CUDA 13.0

I had a wee bit of fun installing ComfyUI today, so I thought I might save some others the effort. This is on an RTX 3060.

Assuming MS build tools (2022 version, not 2026), git, python, etc. are installed already.

I'm using Python 3.12.7. My AI directory is I:\AI.

I:

cd AI

git clone https://github.com/comfyanonymous/ComfyUI.git

cd ComfyUI

Create a venv:

py -m venv venv

activate venv then:

pip install -r requirements.txt

py -m pip install --upgrade pip

pip uninstall torch pytorch torchvision torchaudio -y

pip install torch==2.10.0 torchvision==0.25.0 torchaudio==2.10.0 --index-url https://download.pytorch.org/whl/cu130

test -> OK

cd custom_nodes

git clone https://github.com/ltdrdata/ComfyUI-Manager

test -> OK

Adding missing node on various test workflows all good until I get to LLM nodes. OH OH!

comfyui_vlm_nodes fails to import (compile of llama-cpp-python fails).

CUDA toolkit found but no CUDA toolset, so:

Copy files from:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\extras\visual_studio_integration\MSBuildExtensions

to:

C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Microsoft\VC\v170\BuildCustomizations

Still fails. This time: ImportError: cannot import name "AutoModelForVision2Seq" from 'transformers' __init__.py

So I replaced all instances of "AutoModelForVision2Seq" with "AutoModelForImageTextToText" (Transformers 5 compatibility) in:

I:\AI\ComfyUI\custom_nodes\comfyui_vlm_nodes\nodes\kosmos2.py

I:\AI\ComfyUI\custom_nodes\comfyui_vlm_nodes\nodes\qwen2vl.py

Also inside I:\AI\ComfyUI\custom_nodes\comfyui_marascott_nodes\py\inc\lib\llm.py

test -> OK!

There will be a better way to do this (a try/except fallback import), but this works for me.
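
For reference, the try/except approach would look roughly like this at the top of each affected file (a sketch only; it assumes the node code only needs the class under its old name):

  # Compatibility shim: prefer the old Transformers 4 name, fall back to the
  # Transformers 5 replacement if the old one no longer exists.
  try:
      from transformers import AutoModelForVision2Seq
  except ImportError:
      from transformers import AutoModelForImageTextToText as AutoModelForVision2Seq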

r/SideProject Naresh_rt

Am I using AI or just hiding behind it?

I don't think prompt engineering is a real skill.

Not saying prompts don't matter. They do.

But if your entire value disappears when the model updates, that's not expertise. That's timing.

I've watched people build entire identities around being "good at prompting," while avoiding the harder stuff: Talking to users.

Making decisions.

Owning outcomes.

The scary part isn't AI replacing people.

It's people replacing their thinking with AI and calling it progress.

At some point I ask myself: am I using this tool... or hiding behind it?

r/SideProject Ifh5816

proof - photo challenge game against ai (progress update)

Posted here a few months back, here’s where it’s at:

The game: AI scores your photos for rarity. High score = how much you surprise the AI. Basically how unique your photo is compared to everything it's seen.

NEW

- Added a global pool where you compete against everybody to see who scores the highest.

- You can add friends, see pics they've entered into the pool, and rack up a record against them. For example, I'm 11-3 against my buddy on here.

Starting to build up a TikTok page also.

https://proofcamera.app

r/ClaudeAI Aggravating-Gap7783

is there a way to share Skills for Claude Code?

For Openclaw we have claw hub, where folks share Skills. Is there anything like this for Claude?

Skills are just a bunch of md files that can be used interchangeably, so it's mostly about the community hub

r/SideProject Bite_Tricky

Built a free financial calculator suite to help people visualize long-term wealth impact of daily decisions

I've been working on a side project that tackles a problem I personally struggled with: understanding the real long-term impact of my financial decisions.

What it is: WealthVision - a collection of interactive financial calculators that show how daily habits and investment decisions compound over time.

The hook: People can see in real-time how their €5 daily coffee habit could become €149k in 10 years if invested instead (at 7% returns). Or how much house they can afford based on their actual cashflow.
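
For anyone curious about the math behind that kind of hook, here's a minimal future-value sketch (parameters are illustrative; the figure you get depends heavily on the horizon and return you assume):

  # Sketch: future value of investing a recurring daily amount,
  # assuming monthly contributions and monthly compounding.
  def future_value(daily_amount: float, annual_return: float, years: int) -> float:
      monthly = daily_amount * 30.4      # approximate days per month
      r = annual_return / 12             # monthly rate
      n = years * 12                     # number of months
      return monthly * ((1 + r) ** n - 1) / r

  for years in (10, 20, 30):
      print(years, round(future_value(5.0, 0.07, years)))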

Why I built it:

  • Most financial calculators are boring, hidden behind paywalls, or require signup
  • People don't connect daily spending to long-term wealth loss
  • Wanted something visual and immediate that makes the "aha" moment happen

Key features:

  • 100% free, no signup required
  • Real market data (actual ETF performance, etc.)
  • Privacy-first (no tracking)
  • Multiple calculators for different life stages

My question: For those of you who've built similar tools or work in fintech - what's been your experience getting traction? I'm currently exploring affiliate partnerships to monetize without charging users.

Would love any feedback on the concept or UX

r/SideProject Malcolm_Val

MyExplore v1.2.3 - Stop the GoogleSheet chaos when planning group trips [Looking for feedback]

Hey r/SideProject!

Just launched v1.2.3 of MyExplore - a platform that fixes the mess of planning trips with friends.

The problem: Ever tried planning a trip with 5+ people? You know the pain:

  • WhatsApp messages scattered everywhere
  • "I'm interested" vs "I'm actually coming" confusion
  • No central place to organize activities
  • Everyone suggesting different things

What I built: A collaborative trip planner where:

  • Groups can plan together in real-time
  • Organizers can manage, observers can view (role-based permissions)
  • Location search with autocomplete
  • Everyone sees who's actually committed vs just browsing

Tech stack:

  • PHP MVC backend
  • Vanilla JavaScript frontend
  • Google Places API and open source data

Note: Currently in French, but an English version is coming soon based on feedback!

Current status: v1.2.3 live, working on getting first real users

What I need:

  • Honest feedback on the UX
  • Ideas on how to get people to actually try it
  • Suggestions on features you'd want
  • Beta testers for their next group trip!

Link: https://myexplore.fr

Happy to answer questions about the tech or product decisions !

r/ClaudeAI Alarmed_Aerie_4794

CC is magical, but we really need an option to disable Claude from spawning task agents. For specific tasks that need memory it's really bad, since tasks spawn with no context.

r/StableDiffusion Dear-Estimate-6824

How to run new Anima model

Does Anima model support anything else besides Comfy?

r/LocalLLaMA Objective-Loan-6332

How are people handling dynamic routing across providers?

Hey there!

With so many good models available now (Claude 3.5/4, GPT-4o mini, Grok, Llama 3.1 70B/405B, Gemini 2.0, Nova, etc.), a lot of us are mixing providers to get the best mix of quality + speed + cost.

Curious what your current setup looks like for routing prompts automatically:

- Are you using a ready-made router/proxy? (OpenRouter, PromptLayer, Portkey, LiteLLM, Helicone, or something else?)
- Or are you building your own logic? (e.g. length-based, keyword triggers, semantic classifier, cost threshold)
- How do you decide when to use a cheap/fast model vs when to send to the heavy hitter?
- Any big wins or painful lessons from routing in production?

Would love to hear real workflows — especially if you're running mostly open-weight or mixing closed + open models.

Thanks in advance!
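
For context, the simplest homegrown version of this I can picture is a length-plus-keyword check like the sketch below (model names and thresholds are placeholders, not recommendations):

  # Sketch: route short, simple prompts to a cheap model and long or
  # code-heavy prompts to a stronger one.
  CHEAP_MODEL = "small-fast-model"       # placeholder name
  HEAVY_MODEL = "large-capable-model"    # placeholder name

  CODE_HINTS = ("traceback", "def ", "class ", "SELECT ", "error:")

  def pick_model(prompt: str) -> str:
      long_prompt = len(prompt) > 2000
      looks_like_code = any(hint in prompt for hint in CODE_HINTS)
      return HEAVY_MODEL if (long_prompt or looks_like_code) else CHEAP_MODEL

  print(pick_model("Summarize this sentence."))
  print(pick_model("Here is a traceback: ..."))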

r/homeassistant Available_Basket_728

Aqara Camera Facial Recognition

r/LocalLLaMA Iory1998

Do not Let the "Coder" in Qwen3-Coder-Next Fool You! It's the Smartest, General Purpose Model of its Size

Like many of you, I like to use LLMs as tools to help improve my daily life, from editing my emails to online search.

I also like to use them as an "inner voice" to discuss general thoughts and get constructive criticism. When I'm faced with life-related problems that might take me hours or days to figure out, a short session with an LLM can significantly quicken that process.

Since the original Llama was leaked, I've been using LLMs locally, but I always felt they were lagging behind OpenAI or Google models. Thus, I would always go back to ChatGPT or Gemini when I needed serious output. If I needed a long chatting session or help with long documents, I had no choice but to use the SOTA models, and that means willingly leaking personal or work-related data.

For me, Gemini-3 is the best model I've ever tried. I don't know about you, but I sometimes struggle to follow ChatGPT's logic, while I find it easy to follow Gemini's. It's like that best friend who just gets you and speaks your language.

Well, that was the case until I tried Qwen3-Coder-Next. For the first time, I could have stimulating and enlightening conversations with a local model. Previously, I half-seriously used Qwen3-Next-80B-A3B-Thinking as my local daily driver, but that model always felt a bit inconsistent; sometimes I'd get good output, and sometimes a dumb one.

Qwen3-Coder-Next, however, is more consistent, and you can feel that it's a pragmatic model trained to be a problem-solver rather than a sycophant. Unprompted, it will suggest an existing author, book, or theory that might help. I genuinely feel I am conversing with a fellow thinker rather than an echo chamber constantly paraphrasing my prompts in a more polished way. It's the closest model to Gemini-2.5/3 that I can run locally in terms of quality of experience.

For non-coders, my point is: do not sleep on Qwen3-Coder-Next simply because it has the "coder" tag attached.

I can't wait for the Qwen-3.5 models. If Qwen3-Coder-Next is an early preview, we are in for a real treat.

r/AI_Agents Weary_Abalone3891

selling anything

I've been learning automation tools (Make, n8n, small AI workflows) for a few months and just trying to understand where real demand exists.

For people already working with clients:

What niche is actually paying?

E-commerce?

Local businesses?

Agencies?

Something else?

Just trying to understand the market better.

r/ClaudeAI bobo-the-merciful

Introducing Nelson

I've been thinking a lot about how to structure and organise AI agents. Started reading about organisational theory. Span of control, unity of command, all that. Read some Drucker. Read some military doctrine. Went progressively further back in time until I was reading about how the Royal Navy coordinated fleets of ships across oceans with no radio, no satellites, and captains who might not see their admiral for weeks.

And I thought: that's basically subagents.

So I did what any normal person would do and built a Claude Code skill that makes Claude coordinate work like a 19th century naval fleet. It's called Nelson. Named after the admiral, not the Simpsons character, though honestly either works since both spend a lot of time telling others what to do.

There's a video demo in the README showing the building of a battleships game: https://github.com/harrymunro/nelson

You give Claude a mission, and Nelson structures it into sailing orders (define success, constraints, stop criteria), forms a squadron (picks an execution mode and sizes a team), draws up a battle plan (splits work into tasks with owners and dependencies), then runs quarterdeck checkpoints to make sure nobody's drifted off course. When it's done you get a captain's log. I am aware this sounds ridiculous. It works though.

Three execution modes:

  • Single-session for sequential stuff
  • Subagents when workers just report back to a coordinator
  • Agent teams (still experimental) when workers need to actually talk to each other

There's a risk tier system. Every task gets a station level. Station 0 is "patrol", low risk, easy rollback. Station 3 is "Trafalgar", which is reserved for irreversible actions and requires human confirmation, failure-mode checklists, and rollback plans before anyone's allowed to proceed. 

Turns out 18th century admirals were surprisingly good at risk management. Or maybe they just had a strong incentive not to lose the ship.

Installation is copying a folder into .claude/skills/. No dependencies, no build step. Works immediately with subagents, and if you've got agent teams enabled it'll use those too.

MIT licensed. Code's on GitHub.

r/SideProject Snoo_15313

Made a new WatchParty Website to watch shows and movie together

Hey everyone!

I was tired of the lag and black-screens you get when trying to watch stuff with friends over standard screen-sharing apps, so I built SheiyuWatch.

  • Browser Streaming: You can stream your browser tab directly to the room.
  • Local File Support: you can play a local video file (MP4 works best in browsers) directly on the site and watch it in sync with friends, without having to upload it to a server.
  • Low Latency: Optimized for keeping everyone on the exact same frame.
  • No Bloat: No forced accounts or heavy installers. Just create a room and share the code.

Check it out here: sheiyu.vercel.app

Any feedback is appreciated! Or let me know if you need help navigating.

r/StableDiffusion Expensive_Estimate32

Only the OGs remember this.

r/ClaudeAI Hirokage

Claude - Realistic Timeline

I have only recently started using Claude, because our CEO is incredibly gung-ho about implementing it in apparently all aspects of our business. We are around a 500 million dollar company, and he is trying to immediately go live with AP tracking connected to our ERP (including writeback), a project tracking and management tool, a tier 1 helpdesk agent, a rebuilt remote website, etc. He has no formal training and, as far as I know, has done no risk assessment for these projects.

I created a workflow automation tool over the weekend to try to find risks, determine processes, etc. All I really see is that it is going to take a while. I can't even guess how long a code review, or anything else involving RBAC, PII, or secure data, would take.

Our CEO thinks we can go live in a week with the AP piece. I imagine he will want other parts live quickly too. In IT we think this is, well... crazy. We think there are some good ideas there, but we're worried about things like risk, legal implications, etc., and are wondering how long it would realistically take to accomplish this. Our IT team is not that large, and we already have a day job; he wants us to do this around all our other work.

Is this feasible? Would we need professional resources? How long would it take, say, for a professional team of 5 people working on the AP project to 'go live' off of our ERP data? I get that this post is probably missing a lot of context and facts; I'm just asking in general. I think many CEOs see dollar signs when they start playing with AI, but they don't grasp the risk factor involved, the technical debt, or anything else.

r/ClaudeAI Competitive_Rip8635

Claude desktop shortcuts

Hi, are there any shortcuts to move between the chats and between sessions in code?
I'm talking about the desktop app on MacOS

Using only mouse is slowing me down.

r/AI_Agents digi604

My agent needed to react to events so I built swarmhook.com. Webhooks for your agent. It's free and open source.

Instead of polling every 5 minutes and spending tokens, your bot can react to webhooks instantly. Want it to react to GitHub events? eBay? Stripe payments? Monitoring your deployments? It's a 48h ephemeral inbox for your agent. Your bot should be up and running in 10 seconds; just point it to swarmhook.com. It can also be used if you have multiple bots in different networks that need to talk to each other. I hope someone else finds this useful as well.

r/ClaudeAI pleasepushh

featherbot: lightweight OpenClaw alternative that just works

hey everyone, Opus 4.6 and I built this simple, lightweight yet effective personal ai agent that just works.

This was a fun project and it's working surprisingly well. I've implemented some nice patterns for memory and background tasks. Have a look, play with it, and share feedback if you have any.

github: https://github.com/piyushgupta53/featherbot

r/SideProject tiguidoio

Launched Lovable for existing product - 13K MRR

https://reddit.com/link/1r0atpv/video/4exrwcfq9iig1/player

We are building kosuke ai as a full‑time, bootstrapped, no investors team of 8 living and working together in a small (very very small but full of coffee) house in Milan since November 2025. We started as a CTO‑as‑a‑Service for early‑stage teams, but after a few months it became obvious that the most used and loved part of the product was the collaboration layer on top of existing codebases.

So we doubled down and built what is basically “Lovable for existing products”: a way to enable everyone to contribute to an existing repo without trashing quality. You import your codebase, describe changes in plain English, and our AI writes code that follows your existing conventions, patterns and architecture, so engineers review clean PRs instead of rewriting everything.

The philosophy is simple: everyone contributes, engineers stay in control. PMs, founders and non‑core devs can propose and iterate on changes, while the core team keeps full ownership through normal review workflows, tests and CI. No giant rewrites, no AI black box repo, just more momentum on the code you already have.

We are currently at around 13K MRR

Curious how others here think about this space: are you seeing more “AI on top of existing codebases” versus greenfield AI dev tools in your projects? And if you’re interested in how we run an 8‑person, fully committed, bootstrapped team around this (stack, pricing, customer segment, mistakes), happy to share more.

P.S. If you want to see what we’re building: https://kosuke.ai/

r/homeassistant smarthomecompared

How to Choose a Zigbee / Z-Wave / Thread Dongle

Hey folks

Choosing a Zigbee / Z-Wave / Thread dongle seems to be one of those questions that keeps popping up here fairly often.

“Which coordinator should I buy?”
“USB or Ethernet?”
“TI or Silicon Labs?”
“Is PoE worth it?”

I recently had to migrate my own dongle and went deep into research… and honestly, the info is super fragmented across the web.

So while doing the migration, I decided to properly document everything and turn it into a clear guide.

I tried to put ALL the practical stuff in one place:

  • Differences between Zigbee, Z-Wave and Thread
  • USB vs Ethernet/PoE coordinators (real pros/cons, not just specs)
  • Chipsets (TI vs Silicon Labs) and why they matter
  • Stability + interference tips
  • Placement advice for real homes

Basically, it’s the guide I wish I had before buying hardware.

If you’re planning a new Home Assistant setup or thinking about upgrading your current coordinator, this might save you some time and money.

👉 How to Choose a Zigbee / Z-Wave / Thread Dongle
https://smarthomecompared.com/blog/how-to-choose-a-dongle

Happy to answer questions or hear what you’re running in your setup too.

r/comfyui CarelessSurgeon

Expression Editor PHM gives a black box over the image?

Anytime I try to generate a different facial expression with this node, I end up with a black box covering the image. If I adjust the crop value, I can move the black box a little to one side but it’s still covering far too much of the image. Why would it do that? There’s very little information I’m able to find with Google in regards to this.

Is there a somewhat simple workflow that I can try to see if maybe that’s my issue? (I prefer simple workflows with the least amount of nodes as I often end up breaking comfy when I download many custom nodes) I’m using very few nodes at the moment. I’m not sure exactly what I’m using as I’m away from my PC, but I’m pretty sure I’m using the bare minimum that will allow it to run without warning me I’m missing something. It might even be Load image > Expression Editor > preview image. Perhaps there’s a model loader involved but I’m not sure at the moment.

Anyone have any ideas why it’s giving me a black box and does anyone else have a workflow I can try?

r/ClaudeAI ad_skipper

claude-agent-sdk-python does not tell me if my tool call failed or not.

All I get is a tool_response that contains text like

"text": "❌ **Parameter Validation Error for ...

I can parse the string to find out whether my tool call failed, but that is very brittle. Is there a deterministic way to find this out? I am using hooks, and I can see that the features that would let me do this are supported only in the TypeScript SDK and not in the Python SDK. Any help? TIA.

r/comfyui Capitan01R-

layers tinkering

r/homeassistant 6zonesoftheeast

Ikea Bilresa Two Button Remote’s performance - Matter Over Thread vs Zigbee

For anyone who has used the Bilresa remote both via matter over thread and via Zigbee, have you noticed any differences in performance or reliability?

I am currently using them via matter over thread and am experiencing periodic problems with connectivity. It stops working and then starts responding again 10 minutes later.

Have mine connected to Home Assistant using Apple TV as the thread border router.

r/LocalLLaMA StartupTim

Any tutorials for using the Nvidia DGX Spark with llama.cpp and models and configuring it?

Hey all,

I have an Nvidia DGX Spark lying around and I'd like to test it with a bunch of models. Is there any tutorial for setting it up with llama.cpp to serve via an API (OpenAI compatible)?

Nvidia said it is supposed to work with llama.cpp out of the box, but I don't see anything on the desktop related to this, or ComfyUI, or anything. It's just an Ubuntu-like desktop with nothing pre-installed. I'd also rather use the command line than GUI apps.

Thanks
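
For what it's worth, the generic llama.cpp flow I'd expect to try (not DGX-Spark-specific; it assumes the CUDA toolkit and build tools are already present, and the model path is a placeholder) is:

  git clone https://github.com/ggml-org/llama.cpp
  cd llama.cpp
  cmake -B build -DGGML_CUDA=ON
  cmake --build build --config Release -j
  ./build/bin/llama-server -m /path/to/model.gguf --host 0.0.0.0 --port 8080

llama-server then exposes an OpenAI-compatible API (e.g. /v1/chat/completions), so any OpenAI client can point at it.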

r/SideProject No-Professor-8083

Built a Shop Management System — Looking for Feedback from the Community

Hey folks,

I’ve been working on a shop management system and I’d love to get some honest feedback from the community.

So far, it includes:

  • Product and category management
  • Customers and user management
  • Purchases and quotations
  • Reports and basic analytics
  • General shop workflow features

The main pending feature is payment integration — everything else is functional and usable.

I’m mostly looking for feedback on:

  • UI/UX (does it feel intuitive?)
  • Feature gaps (what feels missing?)
  • Performance or usability issues
  • Any general suggestions or improvements

Here’s the live demo: https://investify.autoscaleops.com

Feel free to be blunt, constructive criticism is very welcome.
Thanks in advance to anyone who takes the time to check it out!

r/homeassistant Crossicunt

Power socket add on?

Are there any add-ons for wall sockets that would let me control my appliances but don't replace the whole plug?

Something not visible, like a Shelly relay or similar? Because I want to have pretty sockets that aren't white, but I'd still like them to be HA compatible.

r/homeassistant dnt_f0rg3t_th3_J0k3r

Linknlink integration not working

My LinknLink sensor is no longer working. It worked until one month ago and now it does not communicate with HA. Does anyone have the same issue?

r/LocalLLaMA Massive-Figure-9666

ACE-Step 1.5 prompt tips: how I get more controllable music output

I’ve been experimenting with ACE-Step 1.5 lately and wanted to share a short summary of what actually helped me get more controllable and musical results, based on the official tutorial + hands-on testing.

The biggest realization: ACE-Step works best when you treat prompts as [structured inputs], not a single sentence (same as other LLMs)

1. Separate “Tags” from “Lyrics”

Instead of writing one long prompt, think in two layers:

Tags = global control

Use comma-separated keywords to define:

  • genre / vibe (funk, pop, disco)
  • tempo (112 bpm, up-tempo)
  • instruments (slap bass, drum machine)
  • vocal type (male vocals, clean, rhythmic)
  • era / production feel (80s style, punchy, dry mix)

Being specific here matters a lot more than being poetic.

2. Use structured lyrics

Lyrics aren’t just text — section labels help a ton:

[intro]

[verse]

[chorus]

[bridge]

[outro]

Even very simple lines work better when the structure is clear. It pushes the model toward “song form” instead of a continuous loop.
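
To make the two layers concrete, an illustrative (made-up) input might look like:

  Tags: funk, disco, 112 bpm, slap bass, drum machine, male vocals, clean, 80s style, punchy, dry mix

  Lyrics:
  [intro]
  [verse]
  Night drive, neon light
  Hold the groove, keep it tight
  [chorus]
  Move your feet, don't stop the beat
  [outro]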

3. Think rhythm, not prose

Short phrases, repetition, and percussive wording generate more stable results than long sentences. Treat vocals like part of the groove.

4. Iterate with small changes

If something feels off:

  • tweak tags first (tempo / mood / instruments)
  • then adjust one lyric section

No need to rewrite everything each run.

5. LoRA + prompt synergy

LoRAs help with style, but prompts still control:

  • structure
  • groove
  • energy

resource: https://github.com/ace-step/ACE-Step-1.5

r/SideProject Rishabh_Stark

Let's make a camera app for us that takes good photos even when we don't know the principles

My story & the idea behind this project
I’m a web developer. Like many of us, I spend most of my time thinking in logic, layouts, and code—not light, angles, or composition.

I’ve noticed something about people like me (engineers, coders, tech folks):
we build incredible things… but when it comes to taking photos of people, nature, or even ourselves, we struggle.

Most of us don’t know:

where to place the subject
how light actually works
why a photo feels “off” even when the camera is good
how to guide someone while taking their photo
And honestly, we don’t want to learn photography theory.
We just want the photo to be good.

What makes it worse is that modern cameras and apps focus on editing after the photo. Filters. Retouching. AI beautification.
But none of that teaches you how to take a better photo in the first place.

So I started thinking:

What if the camera could guide you while you’re taking the photo—like a calm, experienced photographer standing next to you?
Not judging.
Not overwhelming.
Just small, human suggestions like:

“Take one step left”
“Light is behind the subject”
“Too close to the wall”
“Lower the camera slightly”
“This angle will look more natural”
The goal isn’t to replace photographers.
The goal is to help non-visual people become more confident with a camera.

Especially:

engineers
developers
office professionals
creators who don’t come from a design background
I want to build a cross-platform AI camera app that:

works on Android and iOS
gives real-time, simple guidance
helps with portraits and nature photography
actually teaches you by doing, not by reading theory
I’m new to AI and computer vision. My background is web development.
I’m starting with tools like ML Kit and MediaPipe and learning as I build.

I’m sharing this openly because:

I know many coders feel this pain
I don’t want to build this alone
I’d love feedback, advice, or collaborators
If you’ve ever taken a photo and thought “something feels wrong but I don’t know what”—this project is for you.

I’m building it slowly, honestly, and in public.

r/AI_Agents Necessary-Jelly1825

How to start AI for an audio classification graduation project

Hi everyone,

I’m working on a graduation project about audio classification using AI, but AI is not my major and I’m basically a beginner.

My supervisor isn’t very helpful, and my team and I are confused about:

* where to start

* what we actually need to learn

* how to finish the project efficiently in a limited time

I don’t want to master AI; I just need a simple, clear plan to build a working audio classification model.

What would you recommend for:

* minimum ML/AI knowledge needed?

* tools/libraries for beginners?

* traditional ML vs deep learning for this case?

Any roadmap or advice would be really appreciated. Thanks 🙏
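
From what I've gathered so far, a common beginner baseline is MFCC features plus a classical classifier. Is something like this sketch a reasonable starting point (librosa + scikit-learn; the file list and labels are placeholders for a real dataset with many clips per class)?

  import numpy as np
  import librosa
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import train_test_split
  from sklearn.metrics import accuracy_score

  def extract_features(path: str) -> np.ndarray:
      # Load the clip and summarize it as the mean of 20 MFCC coefficients
      y, sr = librosa.load(path, sr=22050, mono=True)
      mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
      return mfcc.mean(axis=1)

  # Replace with a real dataset: many files per class
  files = ["clip_dog_1.wav", "clip_cat_1.wav", "clip_dog_2.wav", "clip_cat_2.wav"]
  labels = ["dog", "cat", "dog", "cat"]

  X = np.stack([extract_features(f) for f in files])
  X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25)
  clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
  print(accuracy_score(y_test, clf.predict(X_test)))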

r/SideProject Ok_City6423

I got tired of review platforms holding my data hostage, so I'm building my own

Hey everyone 👋

Solo founder here. I've been building Reviewlee — a review infrastructure platform for businesses that actually lets you own your review data.

The problem that pissed me off:

I've worked with businesses that use platforms like Trustpilot, and the business model feels backwards. You pay ~$250/mo just to respond to your own reviews. Want to embed them on your site? Pay more. Export your data? Good luck. It's basically review ransom.

What Reviewlee does differently:

  • Collect reviews via forms, embeds, or API
  • Embed widgets on your site — included on every plan
  • Full API access — no paywall
  • Export your data anytime — it's YOUR data
  • Verification modes (email, proof of purchase) so reviews actually mean something
  • i18n out of the box (English, Spanish, Arabic with RTL)

What we charge for: Scale (review volume) and team seats. That's it. No per-domain fees, no "unlock responses" tiers, no hidden costs.

Starts at $39/mo. More honest than Trustpilot, more powerful than budget tools like Judge.me.

Where I'm at:

Auth, landing page, SEO infra, email service, backend API shell — all production-ready. Currently building out the core review collection and dashboard modules. Built with Next.js + NestJS + PostgreSQL.

What I'd love feedback on:

  1. Does the pricing feel right for small businesses?
  2. Would you actually switch from your current review tool for this?
  3. Any features that would be a dealbreaker if missing at launch?

Not trying to sell anything here — genuinely want to know if this solves a real enough pain point. Happy to answer any questions about the tech stack or business model.

r/ClaudeAI coolreddy

I built a CLAUDE.md that solves the compaction/context loss problem — open sourced it

I built a CLAUDE.md + template system that writes structured state to disk instead of relying on conversation memory. Context survives compaction. ~3.5K tokens.

GitHub link: Claude Context OS

If you've used Claude regularly like me, you know the drill by now. Twenty messages in, it auto-compacts, and suddenly it's forgotten your file paths, your decisions, the numbers you spent an hour working out.

Multiple users have figured out pieces of this — plan files, manual summaries, starting new chats. These help, but they're individual fixes. I needed something that worked across multi-week projects without me babysitting context. So I built a system around it.

What is lost in summarisation and compaction

Claude's default summarization loses five specific things:

  1. Precise numbers get rounded or dropped
  2. Conditional logic (IF/BUT/EXCEPT) collapses
  3. Decision rationale — the WHY evaporates, only WHAT survives
  4. Cross-document relationships flatten
  5. Open questions get silently resolved as settled

Asking Claude to "summarize" just triggers the same compression. So the fix isn't better summarization — it's structured templates with explicit fields that mechanically prevent these five failures.

What's in it

  • 6 context management rules (the key one: write state to disk, not conversation)
  • Session handoff protocol — next session picks up where you left off
  • 5 structured templates that prevent compaction loss
  • Document processing protocol (never bulk-read)
  • Error recovery for when things go wrong anyway
  • ~3.5K tokens for the core OS; templates loaded on-demand

What does it do?

  • Manual compaction at 60-70%, always writing state to disk first
  • Session handoffs — structured files that let the next session pick up exactly where you left off. By message 30, each exchange carries ~50K tokens of history. A fresh session with a handoff starts at ~5K. That's 10x less per message.
  • Subagent output contracts — when subagents return free-form prose, you get the same compression problem. These are structured return formats for document analysis, research, and review subagents.
  • "What NOT to Re-Read" field in every handoff — stops Claude from wasting tokens on files it already summarized

Who it's for

People doing real work across multiple sessions. If you're just asking Claude a question, you don't need any of this.

GitHub link: Claude Context OS

Happy to answer questions about the design decisions.

r/homeassistant Sire0ne

Where's the slowdown??

So over the past couple of days, I've noticed that my Home Assistant instance (running on an Intel NUC with adequate resources) has become slow to respond, e.g. slow-loading pages with random disconnect errors. Then I found out there was a bug (below) with the Watchman integration, which was recently fixed. I have since deleted the integration and rebooted, but I'm still seeing slowdowns. Going through the logs, there are a ton of entries of updates taking longer than they should. How can I track down what's causing my slowdowns?

https://www.reddit.com/r/homeassistant/comments/1qnvib1/fyi_for_those_who_run_watchman/

Logger: homeassistant.helpers.entity
Source: helpers/entity.py:1297
First occurred: 8:53:08 AM (53 occurrences)
Last logged: 11:55:38 AM

Update of media_player.mpd is taking over 10 seconds
Update of sensor.front_yard_timer_zone_history is taking over 10 seconds
Update of sensor.front_yard_timer_battery_level is taking over 10 seconds
Update of sensor.unknown_zone_history is taking over 10 seconds
Update of sensor.east_yard_timer_battery_level is taking over 10 seconds

r/StableDiffusion TheBiggestGoonerOAT

Any LOCAL alternative for Haruka v2 by PixAi?

r/singularity Distinct-Question-16

Unitree G1 is subjected to harsh stress and emerges from it bravely

r/LocalLLaMA nightlingo

Sanity check: "Kimi K2.5 (1T MoE) on a scrappy PC" plan - 1TB DDR4 + 2x RTX PRO 6000 (96GB) now, scaling later

hey folks

I want a sanity check on a pragmatic build path for running "Kimi K2.5 / K2-class ~1T MoE" locally. The goal is usable interactive performance (not YouTube fantasy), plus flexibility to run other models (dense + MoE), with the option to do multi-model serving if needed.

Model target (Kimi K2.5 / ~1T MoE)

From the published specs: around 1T total parameters, about 32B activated per token, MoE with 384 experts and top-8 experts per token, and long context up to 256K. I know 256K is hard mode and may require scaling tricks and has quality tradeoffs. I am aware the raw footprint is huge and that quantized variants and GGUF options exist.

My staged hardware plan

Stage 0 (now)

- GPU #1: RTX PRO 6000 Blackwell Max-Q 96GB (ordered)

- GPU #2: same, in a couple of months

Stage 1 (RAM platform)

- Goal: 1TB DDR4 ECC (likely around DDR4-2400 to DDR4-3200 depending on availability)

- DDR5 is currently too expensive at 1TB scale, so I am intentionally targeting DDR4

- Target platform: single-socket server or workstation board with enough DIMM slots for 1TB DDR4 ECC and PCIe Gen4 x16 slots

Stage 2 (future)

- 3rd and 4th GPU: maybe in 1 to 2 years

- 5th and 6th: maybe never, but I want the build to not dead-end

How I plan to run it (memory model)

My assumption is that the full model weights will live primarily in system RAM (1TB DDR4), and the GPUs will be used as an accelerator and cache:

- The complete model fits in CPU RAM as the backing store

- GPUs hold the hot working set only (KV cache blocks, frequently used experts, and runtime-managed caches)

- Cache hits stay on GPU VRAM

- Cache misses or cold experts are paged from system RAM over PCIe

- In other words, system RAM is the slow tier and VRAM is the fast tier

I realize different runtimes implement this differently (llama.cpp offload, vLLM paged attention, etc), so please sanity check whether this mental model is accurate for Kimi-class MoE and whether "GPU as cache plus RAM as backing store" is actually viable with 2x 96GB VRAM.
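
For the llama.cpp side of that mental model, recent builds can pin tensors to a backend by name pattern, so the "experts in RAM, everything else in VRAM" layout can be sketched roughly like this (the flag exists in current llama.cpp, but the exact tensor-name regex depends on the model's GGUF tensor names, and the file name here is a placeholder):

  # Offload all layers to GPU, then override the MoE expert tensors back to CPU RAM.
  ./build/bin/llama-server -m kimi-k2.5-Q4.gguf \
    --n-gpu-layers 999 \
    --override-tensor "ffn_.*_exps.*=CPU" \
    --ctx-size 32768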

Expected performance (please sanity check)

I am looking for reality-based expectations for decode tokens per second (batch=1 interactive) across context tiers.

My current rough estimate with:

- 2x RTX PRO 6000 (192GB VRAM total)

- 1TB DDR4 ECC

- PCIe Gen4 x16

- a good runtime (llama.cpp, vLLM, or whatever ends up best for this)

Rough decode t/s guess (batch=1)

16K context: about 12 to 22 tokens per second

32K context: about 10 to 20 tokens per second

64K context: about 8 to 16 tokens per second

128K context: about 4 to 10 tokens per second, with more variance

256K context: about 1.5 to 5 tokens per second, extrapolation and paging-heavy territory

I am not claiming precision. Please tell me where I am wrong and what is actually realistic today.

Comparison point: Mac Studio 512GB

I have seen Mac Studio cluster posts reporting around 28 tokens per second on Kimi K2 Thinking on 4x Mac Studios with mixed 512GB and 256GB configurations, plus Jeff Geerling's RDMA and Thunderbolt experiments showing strong scaling on other giant models.

My intuition is that a Mac cluster can be surprisingly good for a single monster model, but the 2x RTX PRO 6000 path keeps more flexibility if I want to run other workloads later.

Questions for the community

1) Are my tokens per second ranges above sane for Kimi K2.5 or K2-class MoE on 2-GPU tensor parallelism?

2) How bad does PCIe Gen4 versus Gen5 actually hurt at TP=2, assuming we have lots of VRAM?

3) Does DDR4-2400 versus DDR4-3200 materially matter here, or is the bigger lever simply more VRAM leading to fewer CPU hits?

4) Which runtime stack is currently the least painful for this setup (llama.cpp RPC or Exo, vLLM, something else)?

5) Any gotchas with PRO Blackwell P2P, NCCL, IOMMU, or ACS settings that would nuke scaling?

I would love any hard numbers, configs, or blunt "do not do this" warnings.

r/StableDiffusion GGB_Gameplay

Image Upscale + Details

So I'm thinking about upgrading my GTX 1660 Ti to something newer. The main focus is gaming, but I'll do some AI image generation as a hobby. Things are very expensive in my country, so I don't have many options. I'm accepting the idea that I'll have to get an 8GB GPU for now, until I can afford a better option.

I'm thinking about an RTX 5050 or RTX 5060 to use models like Klein 9B. I should try GGUF Q4_K_M or NVFP4 versions because of the 8GB VRAM. I know they are going to be less precise, but I'm more worried about finer details (which might be improved with higher-resolution generations). I'll be using ComfyUI on Windows 10, unless there's a better option than ComfyUI (on Windows). I have 32GB of RAM.

To handle the low amount of VRAM and still get high-quality images, my idea is to use some kind of 2nd pass and/or postprocessing + upscale. My question is: what are the options and how efficient are they? Something that makes an image look less "AI generated". I know it should be possible, because there are very good AI-generated images on the internet.

I know about SeedVR2; I tried it on my GTX 1660 Ti, but it takes 120+ seconds for a 1.5MP image (1440x1080, for example), and when I tried something higher than 2MP, it couldn't handle it (OOM). The results are good overall, but it's bad with skin textures. I heard about SRPO today; still haven't tried it.

If you know another efficient tiled upscale technique, tell me. Maybe something using Klein or Z-Image? I also tried SD Ultimate Upscaler, but only with SD 1.5 or SDXL.

P.S.: Don't tell me to buy a 5060 Ti 16GB; it's a lot more expensive than the 5060 here, out of my budget. And I couldn't find decent options for used GPUs either, but I'll keep looking.

r/SideProject npstrn

I built a self-hosted WhatsApp archive viewer with analytics

I wanted to archive my WhatsApp chats but couldn't find a good solution, so I built one. It looks like WhatsApp Web and includes:

- Upload exported chats (.txt or .zip with media)

- Search through years of messages

- View photos/videos

- Analytics with heatmaps and charts

- Shareable read-only links

Tech stack: FastAPI + React + PostgreSQL + MinIO

GitHub: https://github.com/sabrieker/whatsapp-archive

Live heatmap demo (no storage): https://heatmap.sabrieker.com

It's self-hosted; I couldn't figure out how to convince people to use it, but I thought it would be useful for someone out there.

r/SideProject igbins09

3 months building an AI financial education platform

Been heads-down building a platform that uses AI to transform raw company financials into understandable narratives.

The idea: most people don't invest because financial data is intimidating, not because they're not smart enough. So what if we just... translated it? Built with Next.js, a real-time data pipeline, multi-tier caching, and Claude for the AI layer. Freemium model. Close to launch.

Happy to share more about the technical journey if anyone's interested.

r/SideProject Ok_Cartoonist2006

I compiled 100+ startup directories with DR, traffic & dofollow data - free list for 2026

A few months ago I shared a spreadsheet of 52 directories here and it blew up (400+ upvotes). People wanted more data: domain ratings, traffic estimates, which ones give dofollow backlinks.

So I built LaunchDirectories.com. That spreadsheet evolved into a searchable database with:

• 100+ directories (growing weekly)

• Domain Rating for each

• Dofollow vs nofollow tags

• Free vs paid filters

• Sorted by actual value, not random lists

The list is free. If you want help submitting to all of them, there's a service for that too.

Happy to answer questions about which directories actually move the needle :)

r/SideProject Quickz_

I built iScribby - a screen annotation tool that allows you to draw over anything

I built a screen annotation tool that allows you to draw and copy images from anywhere, even inside full-screen video games, which ordinarily cause issues for most apps.

The app is made to be as unobtrusive as possible. It runs in the background. You just press a shortcut to launch draw mode and press it again, to exit. It doesn't take focus from any apps you are currently in and it allows you to actually draw on the live screen, not just a frozen frame.

Hope you like it, any feedback much appreciated!

Website:

https://iscribby.com/

OS support:

It currently supports Windows, but that may change in the future, if there is any demand.

https://reddit.com/link/1r09u8x/video/zbmdut4j4iig1/player

r/StableDiffusion Vanpourix

How to get better synthwave style loops (LTX-2) ?

I had simple yet pretty good results with LTX-2 so far using the default ComfyUI img2vid template for "interviews".
But trying to move to other styles has been a hassle.

Are some of you trying to generate simple synthwave infinite loops and getting somewhere?
Did you use LTX-2 (with another workflow), or would you recommend another model?

Used this prompt in LTX-2, for what it's worth:

A seamless looping 80s synthwave animated gif of a cute Welsh Pembroke Corgi driving a small retro convertible straight toward the camera along a glowing neon highway. The scene is vibrant, nostalgic, and playful, filled with classic synthwave atmosphere.

The corgi displays gentle natural idle motion in slow motion: subtle head bobbing, ears softly bouncing in the wind, blinking eyes, small steering adjustments with its paws, slight body sway from the road movement, and a relaxed happy expression. Its mouth is slightly open in a cheerful pant, tongue gently moving.

The overall style is retro-futuristic 1980s synthwave: vibrant pink, purple, cyan, and electric blue neon colors, glowing grid horizon, stylized starry sky, soft bloom, light film grain, and gentle VHS-style glow. The animation is fluid, calm, and hypnotic, designed for perfect seamless looping.

No text, no speech, no sound. Pure visual slow motion loop animation.

r/SideProject LightIn_

Community of stories with conversational format : sms-stories

This is a little project I have been working on for a few months; it would be super cool to have a community of writers/readers around this format!

What do you think about it ?

r/AI_Agents Bulky_Procedure_1878

Anyone else actually using AI voice agents for sales & support in production (not demos)?

I’ve been testing and running AI voice agents over the last few months for sales qualification, customer support, appointment setting, and basic customer care, and I feel like there’s a big gap between what people think voice AI can do vs what actually works in production.

A few real observations (curious if this matches others’ experience):

• Inbound calls are way easier than outbound
AI voice agents handle after-hours support, FAQs, appointment booking, and call routing surprisingly well. Most customers don’t even realize it’s AI if latency + voice quality are good.

• Sales calls only work if the agent sounds human
Anything robotic kills conversion instantly. The only setups that worked for us were ones that handled interruptions naturally, remembered context, and didn’t “IVR loop” people to death.

• Traditional IVR ≠ AI voice agents
IVR still feels like “press 1, press 2.” Modern voice agents feel more like a junior SDR or support rep who follows a playbook and escalates when needed.

• Call logs + transcripts matter more than the call itself
The real value is structured data:
– intent
– objections
– booked meetings
– resolved vs escalated calls

That’s where we actually improved sales follow-ups and customer support workflows.

We tested a few platforms (some were great demos but fell apart at scale). The one we’re still running in production is Feather Ai , mostly because it handled sales + support + appointment setting without us rewriting flows every week.

Not saying it’s perfect, just the first one that didn’t feel like an IVR with an LLM glued on.

Curious how others are using AI voice agents right now:

  • Are you running them for sales, customer support, or customer care?
  • Inbound only, or outbound too?
  • Any horror stories with customers hanging up?

Would love to hear real use cases (not landing pages).

r/AI_Agents life_on_my_terms

Openclaw... whats the use case?

I've been hearing ppl talk about openclaw

I've set it up, and have connected w/ telegram/whatsapp

yes i can send messages to it, but what else can i do w/ it?

I'm running it on a linux vps (for security reasons) so i dont have a bunch of my mac things hooked up to it.

For some reason, i just cant get it to work to use my claude max sub.

For coding, i already setup opencode (and i use the webapp via tailscale) and this is a far better coding dx (i can use claude and/or codex).

I also have built out my personal AI assistant w/ my own capacitor app w/ a chat interface that uses my data.

So im not sure what i can use Openclaw for. How are ppl using this? I guess the cron job is the main thing to do specific agent/task things?

r/SideProject Wise_Elderberry_7291

Built something new: Elderly Launcher.

A simple Android launcher made for seniors.

No clutter. No tiny icons. No “where did that button go?” moments.

Just big buttons, clear layout, and stuff that’s easy to tap without zooming in 3 times.

You can: – Make app buttons bigger – Change button colors – Hide names or icons if it feels cleaner – Assign numbers to apps or contacts for quick access – Use native language & number support – Switch to a high-contrast view

PlayStore: https://play.google.com/store/apps/details?id=com.isthmusalien.ElderlyLauncher

r/LocalLLaMA zinyando

Izwi - A local audio inference engine written in Rust

Been building Izwi, a fully local audio inference stack for speech workflows. No cloud APIs, no data leaving your machine.

What's inside:

  • Text-to-speech & speech recognition (ASR)
  • Voice cloning & voice design
  • Chat/audio-chat models
  • OpenAI-compatible API (/v1 routes)
  • Apple Silicon acceleration (Metal)

Stack: Rust backend (Candle/MLX), React/Vite UI, CLI-first workflow.

Everything runs locally. Pull models from Hugging Face, benchmark throughput, or just izwi tts "Hello world" and go.

Apache 2.0, actively developed. Would love feedback from anyone working on local ML in Rust!

GitHub: https://github.com/agentem-ai/izwi

r/ClaudeAI MaxGhenis

I built an MCP server that gives Claude Code access to Google Messages (SMS/RCS)

https://preview.redd.it/jq2zci9vrhig1.png?width=3232&format=png&auto=webp&s=eebfd06c46288cf674b13d4541dc28575e4364e2

I've been connecting Claude Code to all my communication channels — WhatsApp, Signal, Slack, Gmail. SMS was the last holdout.

OpenMessage is a native macOS app + MCP server for Google Messages. It connects to your Android phone and lets Claude search conversations, read messages, and send texts via MCP tools. I built it with Claude Code and have released it free.

Everything runs locally. Same pairing protocol as messages.google.com.

https://openmessage.ai | https://github.com/MaxGhenis/openmessage

https://preview.redd.it/kgnzlsqyrhig1.png?width=3232&format=png&auto=webp&s=4199746e554271d26a823cbd46cf1757edd9113c

r/SideProject Specific-Search5344

How to fetch YouTube video transcripts like DownSub site?

I want to make a tool that takes a YouTube link and gives the transcript. Is there an API for this, or how is it usually done?
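
One approach I've seen mentioned (for videos that already have captions) is the unofficial youtube-transcript-api Python package; a minimal sketch, with the caveat that the call style has changed between versions:

  # pip install youtube-transcript-api
  from youtube_transcript_api import YouTubeTranscriptApi

  video_id = "dQw4w9WgXcQ"  # the part after v= in the URL
  chunks = YouTubeTranscriptApi.get_transcript(video_id)  # older-style API
  text = " ".join(chunk["text"] for chunk in chunks)
  print(text[:500])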

r/arduino Paladin7373

ESP32 feather S2 network radio I made

Terrible printer guy here again!

It can only connect to normal Wi-Fi (SSID and password, not enterprise) and has two stations hardcoded in; they are streaming URLs. If I turn on the radio with station 1, demovibes, selected, it'll enter setup mode where the feather S2 makes its own network that a phone can connect to; navigate to the IP address briefly shown in the video and enter the Wi-Fi name and password to be stored on the device. Then switching to station 2, nectarine, means it attempts to connect to the Wi-Fi. Booting with station 2 selected makes it automatically connect to the Wi-Fi straight away. I made this because I wanted to, so while it looks VERY diy, that's half because I have a cheap Chinese printer and half because my hands aren't that steady lol… let me know what you think :D

(The cringy name for it that I came up with is “Hitslash Pocket Radio” 💀)

r/ClaudeAI DonTizi

How you build award-level sites in 2026

Hey! I'm a frontend dev as a hobby; I've been doing this for years and was never impressed by AI agents for design work. The output always looked generic, the same layouts everyone else was getting (purple, emojis, same grid, basic shadcn components). But over the last three months I developed a methodology that changed everything.

I now build production sites entirely with Claude Code: real deployed sites with WebGL shaders, Three.js scenes, and scroll-linked animations, and they actually look like my work. Two things made the difference: training your own skill file from scratch instead of downloading someone else's, and giving the agent a creative persona instead of the default "senior engineer."

I wrote up the full process and what it produced here: How you build award-level sites

Of course it can't do everything on its own, but right now when I ask it to modify something or add a new section or feature, it does it the way I would and that's what I like most about it.

Here is an example:

Portfolio

The other sites are free to try with live demos at opale-ui.design

r/homeassistant Mormegil81

All my data gone?

Hey everyone.

I just noticed that apparently I lost all my historical data today for no discernible reason - I didn't do any updates or anything - it was there this morning and now, 5 hours later, it's all gone.

https://preview.redd.it/ond74ngv1iig1.jpg?width=1403&format=pjpg&auto=webp&s=27014069ad533698d7159babfb2c819521cea5ac

At first I thought this was just a display error; I did some restarts, but all my sensors show like that. The final confirmation was when I did a backup:

https://preview.redd.it/sds12ra12iig1.jpg?width=918&format=pjpg&auto=webp&s=bdbde36dcddb0c131b2461f9930089c14afe94f3

the file size was tiny.

So my questions are as follows: does this happen a lot? Did I do something wrong that might have caused this? What can I do to keep this from happening again?

And finally: is there any way to restore my data? I have some backups, the last one from 8 days ago, but I made a lot of changes about 5 days ago that I would lose if I completely restored that backup, and that's just not worth it for me - is there maybe a way to restore just the long-term data from the backup?

r/midjourney Expensive-Tie-9092

Golden Punk Anarchy

r/homeassistant max24688

Pet Tracer integration

There might not be many people using this type of collar, but if you are, don't fancy the provided UI, and want some historical data, this can help you:

https://github.com/max246/hacs-pettracer

Will try to extract more info to be served into the entity.

r/aivideo VicinoAI

Dog vs Robot Fight Scene - 3D CGI Animation

r/StableDiffusion maxiedaniels

Best model/node management??

Whenever I get a new workflow, it's such a headache to figure out what the nodes actually are, what models I need, etc. ComfyUI Manager only works like 50% of the time, unfortunately.

I know there's stability matrix but haven't tried it. I also know about Lora manager but that sounds like it's Loras only.

Anything else worth exploring?

r/SideProject Heksze

I built an email API where every call is paid with USDC - no accounts, no API keys

In my mind, the future of the internet will be full of AI agents just doing their thing: millions and millions of agents on the internet forming a new agent economy, where SaaS businesses are built fully for agents. I heard about the x402 protocol developed by Coinbase, which allows microtransactions on the web using crypto. I found this protocol cool and thought I should combine these two things.

I built an email service with the idea to have an email API without accounts or API keys, where every email sent costs a microtransaction and your wallet is your identity. I also added AI Agent support, because in the future most users will be AI agents.

Site: x402mail.com

Do you guys think the crypto micropayment market is going to be fueled by millions of AI agents creating a new economy?

r/homeassistant collective35

New to the world of Home Assistant. Thoughts on NanoPi NEO3?

Hey all! I am looking to start utilizing HA for a few light switches, Matter outlets and a doorbell cam. Would something like a NanoPi NEO3 with a quad-core 64-bit CPU and 2GB RAM work? I'm not opposed to getting a miniPC but I'm just trying to get as small a form factor as possible and found a good deal on the NanoPi.

TLDR; Would a NanoPi NEO3 with Quad-Core 64-bit CPU 2GB work WELL for Home Assistant?

Thanks in advance!

r/SideProject ComfortableHot6840

I built a tiny 3D game this morning while drinking coffee

I was sitting in a bar, having breakfast, with my coffee and my croissant, and I wanted to try this random AI tool, kind of vibecoding style like Lovable but for games.

A few prompts later I had a colorful third-person game where a bouncing ball keeps jumping forward onto moving platforms that slide left and right, you try to land clean, miss and you fall into the void. Simple idea, but it works pretty well.

It started as a throwaway experiment but it’s actually fun.

The interface is smooth, controls are minimal, and it already feels like a real prototype.

Now I'm stuck wondering whether this is a monetizable game in 2026, or just AI bs.

There are tons of SaaS and apps made with vibecoding + AI getting millions in funding, so why couldn’t the same thing happen with games?

Some ideas I had:

  • mobile release with ads + unlockable skins (super simple)
  • leaderboard arcade game
  • short TikTok clips

I’m a social media manager so advertising this wouldn’t be hard, the only real problem is developing the game.

Am I overthinking this or do you think it’s actually doable? Obviously not in this version, since I spent like 15 minutes making what it is now. I’m just thinking: if I developed it in a slightly more structured way, would it make sense?

Take a look if you want, it’s free and you don’t even need to sign up to try it: https://app.onetap.build/share/324

Would love honest thoughts

r/homeassistant thenameisdavid

I built an integration to track the medals for the Milano Cortina olympics

Medals table as displayed in my HA dashboard

I used the olympics.com website to build an integration to track the medals and medal winners for the 2026 Winter Olympics. It creates an entity for every country winning medals and updates every 5 minutes. I then used AI to generate cards to display the data in my dashboard. Here is the card as displayed in my dashboard and the link to the repo for those interested:

https://github.com/DavidBilodeau1/milano_cortina_2026
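
For a sense of what's going on under the hood, here's a rough sketch of the coordinator + per-country sensor pattern an integration like this typically follows. This is my simplified reconstruction, not the repo's actual code; the medals URL and JSON shape are placeholders.

```python
# Simplified sketch of the coordinator + per-country sensor pattern.
# The medals URL and JSON shape are placeholders; see the repo above for
# the real implementation.
from datetime import timedelta
import logging

from homeassistant.components.sensor import SensorEntity
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from homeassistant.helpers.update_coordinator import (
    CoordinatorEntity,
    DataUpdateCoordinator,
)

_LOGGER = logging.getLogger(__name__)
MEDALS_URL = "https://example.invalid/milano-cortina-2026/medals.json"  # placeholder


async def async_setup_entry(hass, entry, async_add_entities):
    async def async_fetch():
        # Assumed shape: {"ITA": {"gold": 3, "silver": 1, "bronze": 2}, ...}
        session = async_get_clientsession(hass)
        resp = await session.get(MEDALS_URL)
        return await resp.json()

    coordinator = DataUpdateCoordinator(
        hass,
        _LOGGER,
        name="milano_cortina_2026",
        update_method=async_fetch,
        update_interval=timedelta(minutes=5),  # the 5-minute polling mentioned above
    )
    await coordinator.async_config_entry_first_refresh()

    # One sensor entity per country currently on the medal table.
    async_add_entities(
        CountryMedalsSensor(coordinator, code) for code in coordinator.data
    )


class CountryMedalsSensor(CoordinatorEntity, SensorEntity):
    """Total medal count for one country, with per-metal counts as attributes."""

    def __init__(self, coordinator, country_code):
        super().__init__(coordinator)
        self._code = country_code
        self._attr_name = f"Medals {country_code}"
        self._attr_unique_id = f"milano_cortina_2026_{country_code.lower()}"

    @property
    def native_value(self):
        medals = self.coordinator.data[self._code]
        return medals["gold"] + medals["silver"] + medals["bronze"]

    @property
    def extra_state_attributes(self):
        return dict(self.coordinator.data[self._code])
```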

r/homeassistant louislamore

Strange Hue Motion Sensor Behaviour

I have about 15 Hue motion (all v1) sensors that have all been paired directly to ZHA for about 2 years. All have worked flawlessly since the original pairing.

2 specific sensors have been becoming unresponsive over the past month. The fix was to delete them from ZHA, then re-pair them. However, yesterday this fix stopped working. I tried about 10 times with each sensor, but when they re-paired they would either only show as "detected" or "clear". They didn't respond to stimulus when I tried changing the state with Developer Tools.

Today I thought I'd try pairing them with my Hue bridge, which I still have connected for a few devices (the original Tap switch and Gradient strip which don't work with ZHA). They paired and are now working perfectly again.

I'd prefer to have them on ZHA, but this is fine too. Just wondering if anyone else has come across this and can offer any insight? Really hoping this doesn't happen one by one with all my Hue motion sensors...

r/aivideo divinebaboon

Seedance 2 is pretty dang good, I had trouble telling it's AI generated

r/ClaudeAI Higgs-Bosun

Does Anthropic need its own browser?

Browser control works OK, I’ve tried it in Comet, Opera, and Chrome. All of them are constantly disconnecting. Would it be easier if they just made their own browser?

r/LocalLLaMA ArtifartX

Good local LLM for tool calling?

I have 24GB of VRAM I can spare for this model, and its main purpose will be relatively basic tool-calling tasks. The problem I've been running into (using web search as a tool) is models repeatedly using the tool redundantly, or using it in cases where it is extremely unnecessary to use it at all. Qwen 3 VL 30B has proven to be the best so far, but it's running as a 4bpw quantization and is relatively slow. It seems like there has to be something smaller that can handle a low tool count and basic tool-calling tasks. GLM 4.6v failed miserably when given only the single web search tool (same problems listed above). Have I overlooked any other options?

r/LocalLLaMA TrajansRow

Qwen3-Coder-Next performance on MLX vs llamacpp

Ivan Fioravanti just published an excellent breakdown of performance differences between MLX-LM and llama.cpp running on the Apple M3 Ultra. These are both great options for local inference, but it seems MLX has a significant edge for most workloads.

https://preview.redd.it/vb5b4b8xrhig1.png?width=2316&format=png&auto=webp&s=31aa4012319625eb4f437d590a7f2cec4f1ce810

https://x.com/ivanfioravanti/status/2020876939917971867?s=20

r/homeassistant Tankz504

Looking for recommendations

Good morning,

I basically want to know if this set up will work with HomeKit Secure video.

I’m running a GMKTEC G3 plus. It has 32gb ram and a WD770 that were scavenged from a broken pc my son had. It’s currently running Proxmox. I have home assistant running in a VM, and pihole in an LXC.

My wife wants a new doorbell camera. Our Logitech circle is on the way out. I’ve read that a Reolink with Scrypted is the way to go.

The G3 has an N150 with 4 cores. I have 2 on the HA VM, and 1 on the Pi-hole. Is one enough to run Scrypted and have it plug into HomeKit Secure Video?

I'm concerned that the G3 Plus won't have enough processing power. HA has ZHA running with roughly 30 devices. I've got about 25 automations set up already. Since this camera is a wife request, I need it to run great.

My options are to upgrade the hardware to a Dell 3080 Micro, remove the Pi-hole LXC and move it to a spare Pi 4 (giving me 2 cores for Scrypted), or leave it all as is and run Scrypted with 1 core.

For reference, I’m new to Proxmox. This is my first home server also. I’ve had this running for a little over a week now.

Thanks for any insight yall can provide.

r/SideProject Ecstatic-Ad-9000

The First 1k - here's how

I have tried so much online, but this is the one. Just sharing what's worked. With a few survey apps, I earn $400–$600 every month without doing anything stressful. It's become a nice side income. I even have proof if you want.

These are the exact apps I’m using: AttaPoll

https://attapoll.app/join/qvkmx

It pays via bank or paypal.

They're legit, they pay, and you get bonuses for joining; with this link you get $0.50. If you want to get the most out of them, I can show you what I do. I also have proof with pictures if you want.

r/SideProject Effective-Can-9884

I wanted secure payments & LLM gating for my chrome extensions, so i ended up building it!

Hey — I've been building a Chrome extension and hit the inevitable point where I wanted to start charging for it (especially the ones that I have a backend for / have LLMs running and need to gate). I couldn't find anything that already exists where I could handle secure backend gating as a plug-in, without passing "user.isPaid" or something from the front end (obviously not secure!).

I tried a few routes (including ExtensionPay). ExtensionPay is genuinely solid and I’m not here to bash it — it’s a great way to get a paywall up quickly. I just kept bumping into a couple things I personally needed once the extension became more than a simple “unlock UI” product.

So I built BillingExtensions to solve my own problem, and then cleaned it up enough that other people can use it too.

The integration is intentionally boring/simple. There’s a one-command init:

npx -y -p /sdk bext init  

This init script pretty much does 90% of the leg work tbh. It updates your manifest, wires the SDK into your background/service worker, and even checks your existing setup to see whether you’re using ESM/module vs classic importScripts, so it picks the right integration for you. Pretty chuffed with this

What I cared about (and what pushed me to build it):

  • No content script required by default. I wanted the cleanest permissions footprint I could. Content scripts aren't inherently evil, but they do add trust/review friction if you don't truly need them. With this, the normal flow works without one: user checks out in a tab, comes back / reopens the extension, and it's unlocked. (Though if you need it, you can use one!)
  • Client-only when you're just trying to ship. I didn't want "set up a backend + webhooks" to be the entry ticket to making £1.
  • Secure backend if needed. This was the big difference for me: if you're gating anything valuable (LLM calls, paid API access, expensive operations), your server shouldn't be trusting the extension client. So BillingExtensions has (rough sketch of the idea after this list):
    • a server-side verification API (backend can check paid status directly)
    • webhooks to keep your DB in sync with subscription changes (cancels, renewals, upgrades, etc.)
  • Nice "reactive" hooks in the extension. There's an onStatusChanged(next, prev, diff) hook so you can do the obvious "user upgraded → unlock features" / "subscription ended → lock it back down" flow without building your own.
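
Here's the rough shape of that server-side gating idea, sketched in Python. The verification URL, params, and response fields are placeholders I made up to illustrate the pattern, not BillingExtensions' actual API; the point is just that the backend asks the billing provider directly before doing the expensive work, so a tampered client can't unlock anything.

```python
# Sketch of backend gating: the server verifies paid status with the billing
# provider before doing anything expensive. URL, params, and response fields
# are placeholders, not BillingExtensions' actual API.
import os

import requests

VERIFY_URL = "https://billing.example/api/verify"      # placeholder endpoint
API_SECRET = os.environ.get("BILLING_API_SECRET", "")  # server-side secret only


def user_is_paid(user_id: str) -> bool:
    resp = requests.get(
        VERIFY_URL,
        params={"user_id": user_id},
        headers={"Authorization": f"Bearer {API_SECRET}"},
        timeout=5,
    )
    resp.raise_for_status()
    return bool(resp.json().get("paid", False))


def call_llm(prompt: str) -> str:
    return f"(model output for: {prompt})"  # stub for the expensive operation


def handle_llm_request(user_id: str, prompt: str) -> str:
    # The extension never tells the server "user.isPaid"; the server checks
    # with the billing provider itself, so a tampered client can't bypass it.
    if not user_is_paid(user_id):
        return "402: upgrade required"
    return call_llm(prompt)
```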

I want to point out that I am not doing this to earn money - I have added a really low fee purely to cover hosting costs etc., especially for the API and so on. I genuinely just built this because I needed it and thought others might too!

Not trying to spam or do a sales pitch — I mostly want feedback from people who’ve monetized extensions:

  • did you go client-only or backend verification?
  • what permission footprint did you end up with?
  • any Stripe/webhook edge cases that bit you?

If anyone wants the docs/snippets - take a look here:

Main website

The SDK

The API/Webhook Docs

r/homeassistant GeneAfraid9040

Balkonkraftwerk

I'd like to integrate a balcony solar system (Balkonkraftwerk) without storage into Home Assistant as simply as possible. Is there an inverter that works well with Home Assistant? Or is it easiest to use a Zigbee smart plug to measure the feed-in?

r/AI_Agents Jimqro

have yall tried this god of prompt stuff for building agents?

random question but like has anyone here used god of prompt specifically when building agents, not just normal chat prompts. i kept running into the same issues where my agent would kinda do the task but drift, skip steps, or act confident while missing something obvious. i always thought it was a tooling issue until i started reading some god of prompt guides and realized half the problem was how i was framing the agent’s job in the first place.

what helped me was that god of prompt is very explicit about prompting as a system design thing. like defining roles clearly, separating planning from execution, spelling out constraints, and even telling the agent what failure looks like. once i applied that to agents, they stopped feeling like “vibe workers” and more like predictable processes. way less babysitting, way fewer surprises.

idk maybe this is obvious to some of u, but it was a pretty big unlock for me. curious if anyone else here is using prompting guides like god of prompt when building agents, or if u just learned this stuff the hard way.

r/ClaudeAI LiveDepartment1373

Best Claude tips after the Feb updates? (Opus 4.6, 1M context, Fast Mode)

Hey folks, I’m trying to level up my Claude Code workflow and avoid wasting tokens/context.

I’m looking for the “hidden gems” and battle-tested habits:

- How do you keep context small and avoid it getting noisy?

- What commands/features do you use the most (/clear, /compact, /rewind, status line, etc.)?

- What skills are actually worth creating and using frequently?

- Any good hook ideas to filter noisy outputs (tests, logs) before Claude sees them?

- MCP setup tips: which servers are worth it, and how do you avoid MCP/tool overhead?

- Model strategy: when do you stay on Sonnet vs switch to Opus, and do you tune effort/thinking for cost?

Also something slightly different:

Has anyone used Claude to build more “Awwwards-style” projects, meaning highly polished UI, interactions, or creative web experiences instead of typical CRUD apps?

I’m curious:

- What skills or workflows help the AI produce more refined frontend work (animation structure, layout systems, micro-interactions, storytelling pages)?

- How do you guide the AI to think more like a creative dev or designer instead of just generating standard components?

- Any prompt patterns or planning steps that help reach that level of polish?

If you have example prompts, skills, or a default workflow you run every time, I’d love to see them.

r/SideProject karimsakr123

Final Year Project Idea for Computer Engineering students.

I need help finding a good idea for my FYP as a Computer Engineering student. The project should contain both hardware and software, but as for hardware, we don't need to be too fancy with it (such as a drone or robotic arm)... So if anyone has any ideas, it would be a great help.

r/funny yahooxy

Subtitles had one job.

r/SideProject Cryptocizzy

Free “Is This a Scam?” Scanner

I’ve been working on The Fraud Codex, a free scam detection and fraud intelligence platform. Would love your feedback 👀.

What it does:

Scam Scanner — Paste a URL, email, phone number, or describe a suspicious situation. It runs OSINT checks and AI analysis, then gives you a risk score with evidence sources. You can download and share the report.

The Codex — An encyclopedia of 42+ scam types (pig butchering, AI voice cloning, sextortion, romance scams, etc.) with how they work, red flags, victim examples, and what to do if you're hit. Adding more every day.

Live Threat Map & News — Aggregated fraud alerts from FBI, FTC, DOJ, SEC, CISA, and cybersecurity publications.

Data comes from verified sources and threat intelligence

No account required, no paywall.

3 free scans per day while in beta (managing API costs)

Would love your feedback!

r/SideProject Any-Pomegranate1184

I kept killing my plants so I built an app for it - just launched on iOS 🌱

Hey everyone! I kept killing my houseplants because I'd forget to water them (or drown them with too much love). So I built PlantParent to track watering schedules and care reminders.

Just launched on iOS: https://apps.apple.com/us/app/plantparent/id6757963009

Would love any feedback from fellow plant people - what features would make your life easier?

r/ClaudeAI SeriousSir1148

Claude API Lock-in: Prevention & Recovery Guide

While using Claude Code across daily development and API-driven workflows, we ran into a subtle but costly failure mode: developer environments can get permanently locked into API billing after an API key is used — even if the key is later removed. This isn’t a typical bug. It’s a side-effect of how identity and billing enforcement now works.

Why it matters: Subscription usage is fixed and predictable. API usage is variable and quietly expensive, especially during long coding sessions or exploratory work. An accidental lock-in can add $600–$1,200 per developer per year with no increase in capability.

The root cause is boundary collapse. Claude Code supports two modes:

  • Subscription mode for interactive development (OAuth-based)
  • API mode for automation, experiments, and services (key-based)

Once an API key is accepted, the system may permanently classify that environment as API-billed. In many cases, there’s no reliable way to revert. Identity, intent, and billing state get merged — and recovery becomes hard.

The fix isn’t complicated, but it requires discipline: separate identities by intent.

  • One identity for daily development (subscription only, no API keys)
  • One for experiments and POCs (API, isolated environment)
  • One for production services (API, rate-limited and monitored)

This applies even to solo founders.

If an environment is already locked, the safest path is often to accept it as API-only, repurpose it for experiments, and create a fresh subscription identity for daily coding. Short-term friction, long-term savings.

We documented this as an internal SOP because it affected cost predictability, onboarding, and developer velocity. Sharing it here in case it saves another team from learning the hard way.

Sometimes the most important infrastructure work is preventing silent failure modes, not scaling systems. Anthropic
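
One small, concrete piece of the SOP as a sketch: a pre-flight guard that refuses to start an interactive session from a "subscription-only" identity if an API key is present in the environment. ANTHROPIC_API_KEY is the standard environment variable for API billing; the marker-file convention is our own and purely illustrative.

```python
#!/usr/bin/env python3
# Pre-flight guard sketch: refuse to launch an interactive session from a
# "subscription-only" identity if an API key is in the environment.
# ANTHROPIC_API_KEY is the real env var; the marker file is our own convention.
import os
import sys
from pathlib import Path

API_KEY_VAR = "ANTHROPIC_API_KEY"
MARKER = Path.home() / ".claude-subscription-only"  # hypothetical marker file


def main() -> int:
    if not MARKER.exists():
        return 0  # not a guarded environment, nothing to check
    if os.environ.get(API_KEY_VAR):
        print(
            f"Refusing to start: {API_KEY_VAR} is set in a subscription-only "
            "environment. Unset it, or use the API-designated identity instead.",
            file=sys.stderr,
        )
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```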

r/singularity Alexander_the_M1d

2026, the year of agent swarm

If 2025 was the year of the agent, 2026 is the year of the agent swarm. Cursor coordinated hundreds of GPT-5.2 agents to build a web browser from scratch in one week. Kimi K2.5 can now self-direct up to 100 sub-agents across 1,500 tool calls, with swarm orchestration trained via reinforcement learning. Anthropic published a guide on building multi-agent systems, laying out exactly when and how agent teams outperform single agents in production. The direction is clear: we are moving from single agent to agent swarm.

And I deliberately use the word "swarm," not "matrix". A matrix is just copy-paste, dumb replication. A swarm is emergent intelligence: autonomous agents self-organizing, specializing, and collaborating to solve problems none of them could handle alone.

r/comfyui White_Horizon

issues installing comfyui on linux?

I am using Manjaro and everything was going perfectly, until Manjaro updated to Python 3.14 and I have not found a way to install ComfyUI without node loading issues, nodes not being recognized, or CUDA conflicts.

I am looking for a distro recommendation because it takes less RAM than Windows. I only have 32GB RAM and 16GB VRAM, which would

edit: RTX 5060 16GB

I used a venv before it messed up. I tried to do it with uv venv and installing Python 3.12 there; it did not work, with multiple different errors after installing dependencies,

and I installed different versions of PyTorch. It does not work. Workflows stop on a node and I get an error like:

*node name*

CUDA error: no kernel image is available for execution on the device

CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.

For debugging consider passing CUDA_LAUNCH_BLOCKING=1

Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

r/ProgrammerHumor jaikanthsh308

teachEmYoung

r/ClaudeAI Weekly-Ninja6117

I shipped a Flutter app in 29 hours using Claude Code. Here's what actually happened.

AI won't replace developers. But developers using AI will replace those who don't.

Last month, I decided to test this theory. I gave myself a challenge: build and ship a production app using Claude Code as my coding partner. The goal was simple—see how fast I could move when AI handles the typing while I handle the thinking.

29 hours later, PennyWise was live on the Play Store.

But here's what people misunderstand about AI-assisted development. They think it means you describe what you want and the AI magically builds it. That's not how it works.

I still spent hours on architecture decisions. I wrote a detailed blueprint with 29 tasks. I created a coding philosophy document that Claude had to follow. And when things broke—which they did—I had to diagnose and direct the fixes. Claude wrote the code, but every decision was mine.

Here's the thing: using Claude to build production apps is actually harder than coding manually, at least at first. You need to know architecture patterns deeply enough to explain them. You need to write requirements so clearly there's no ambiguity. You need to make the design decisions AI can't make.

But once you learn to direct Claude effectively, something shifts. What used to take 60+ hours of manual coding now takes 29 hours of strategic work. I'm not typing less—I'm thinking more and moving faster.

That's the real insight. Claude didn't replace my expertise. It amplified it.

The app is live now. Privacy-first expense tracker, no account required, local storage only. I've documented the entire process in a case study if anyone wants to see the specifics.

The future isn't AI versus developers. It's developers with AI versus developers without AI.

Play Store: https://play.google.com/store/apps/details?id=app.taaqat.expense_tracker_penny_wise

Case study: https://pennywise.taaqat.app/case-study

Happy to answer questions about the process or Claude Code specifically.

r/n8n emrahdemirkoc

Why I ditched the "Mega-Prompt" for a 4-Layered AI Inventory Prophet in n8n.

Mondays are for sorting out stock issues, but I decided to automate mine. 📦

I used to have one giant system prompt trying to predict inventory needs. It was a hallucination nightmare. So, I rebuilt it using a Layered Architecture in n8n.

The Layers:

  1. Data Layer: Pulls sales velocity from Postgres and seasonal trends from Google Trends API.
  2. Analytic Layer: A specialized agent that looks for 'The Why'. (e.g., 'Sales are up because it's nearly Valentine's Day, not just random growth').
  3. Constraint Layer: A second agent that checks the forecast against real-world limits: Cash flow, warehouse space, and supplier lead times.
  4. Decision Layer: Only drafts a Purchase Order if the 'Confidence Score' is above 85%.

The Lesson: Specialized, small agents are much more reliable than one 'all-knowing' bot.
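
To make the hand-off concrete, here's the same layering expressed as a plain-Python sketch rather than exported workflow JSON. Each function below would be its own node/agent in n8n; the names, numbers, and schemas are illustrative only.

```python
# Plain-Python sketch of the 4-layer hand-off; in n8n each function below is
# its own node/agent. Names, schemas, and the 85% threshold are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Forecast:
    sku: str
    suggested_qty: int
    reasoning: str      # the "why" from the analytic layer
    confidence: float   # 0.0 - 1.0


def data_layer(sku: str) -> dict:
    # Layer 1: pull sales velocity (Postgres) and seasonal trend (Google Trends).
    return {"sku": sku, "weekly_sales": 120, "trend_multiplier": 1.4}


def analytic_layer(facts: dict) -> Forecast:
    # Layer 2: a specialized agent explains *why* demand moved before forecasting.
    qty = int(facts["weekly_sales"] * facts["trend_multiplier"] * 4)
    return Forecast(facts["sku"], qty,
                    reasoning="Valentine's Day lift, not random growth",
                    confidence=0.9)


def constraint_layer(forecast: Forecast, cash: float, shelf_space: int) -> Forecast:
    # Layer 3: clamp the forecast against real-world limits.
    forecast.suggested_qty = min(forecast.suggested_qty, shelf_space)
    if forecast.suggested_qty * 10 > cash:   # assume $10 unit cost
        forecast.confidence *= 0.7           # an under-funded plan is riskier
    return forecast


def decision_layer(forecast: Forecast) -> Optional[str]:
    # Layer 4: only draft a purchase order when confidence clears the bar.
    if forecast.confidence >= 0.85:
        return f"DRAFT PO: {forecast.suggested_qty}x {forecast.sku}"
    return None  # otherwise route to a human


if __name__ == "__main__":
    f = constraint_layer(analytic_layer(data_layer("SKU-123")), cash=5000, shelf_space=400)
    print(decision_layer(f))
```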

Question: How are you guys handling external variables like shipping delays or supply chain shocks in your n8n logic? Do you use a separate 'Risk' node or feed it all into one prompt?

r/Damnthatsinteresting Necessary-Win-8730

Virginia park ranger Roy Sullivan survived being struck by lightning 7 times on 7 different occasions. The odds of this happening are 4 in 100,000,000,000,000,000,000,000,000,000,000

r/n8n zkundify

N8n MCP

How do you use n8n-mcp, and what do you use it for? I’d appreciate a few concrete examples to better understand what this is 👍🏻

r/SideProject Weak_Still_7288

I got tired of tab-hopping during big events, so I built this

This started during one of those “uhhh something big is happening” nights.

I was following the US–Venezuela kidnapping news and every stream had a different angle — US coverage, international media, live reactions, random YouTubers explaining things with arrows.

I opened:
– YouTube
– Another YouTube
– Another YouTube.......
– X
– Market stream

Chrome did what Chrome does best:
tabs everywhere, fan screaming, RAM gone.

Also… apparently I’m only allowed 4 screens at the same time.

I just wanted to see everything at once instead of playing tab whack-a-mole.

So I built OpenBento — a tiny web app where you can drag, resize, and watch multiple live streams on one screen.

It’s early. It’s scrappy.

Not selling anything — genuinely curious:
– Would you use this or is it too much info?
– What would make it actually useful?
– Any blocks ideas?

Link if you’re curious: openbento.tv

https://reddit.com/link/1r08cuk/video/njuy5afethig1/player

r/SideProject Moonmoon1590

Epoch - Daily History Puzzle Game

I just shipped a wordle-esque daily history puzzle as a web app

The idea: give players 5 historical events (from chocolate chip cookies → moon landings) and ask them to reorder them correctly.

What I learned:

• People LOVE short daily games

• Fun events matter more than “important” ones

• Chronology is harder than it looks

Try it here: https://epoch-pi.vercel.app

Feedback welcome!

r/SideProject stayfrostybabes

I made my girlfriend pee more

Sounds really strange, but hear me out!

We're both remote workers with incredibly busy jobs. She sometimes spends her day in back-to-back meetings for 8-10 hours straight. Because she is so busy, she often forgets to eat, but first and foremost, she barely drinks any water and ends up being tired and with a headache by 6 pm.

I decided to build her a really cute Chrome extension with a cute mascot that she'd like. She's been using it for the past 3 weeks and said that she's literally drinking twice the amount than before, which makes me really happy! The only downside is that she also pees twice as much now. 😂

I thought if it helped her, maybe it can help other remote workers build a healthy habit as well. :)

Check it out at https://www.hydroheroapp.com/; everyone gets a 7-day free trial to see if they like it.

Stay hydrated. 💧

r/Damnthatsinteresting ifuckedyourmom-247

Someone should sample this

r/StableDiffusion femdompeg

Best model for training LORA for realistic photos

Right now I'm using WAN 2.1 to train my LoRA and generate photos. I'm able to do everything locally with AI Toolkit. I'm then animating with WAN 2.2. I'm wondering if there's a better model to just train/generate realistic photos?

r/SideProject buildjunkie

Built 6 micro-apps in just 14 days! And it was PAINFUL!

Hi everyone!

I saw a lady on X starting a 28-day challenge to build 6 apps as a designer. That inspired me to take on a similar challenge of building 6 micro-apps in just 2 weeks.

How I got the ideas:

I started by going over to my buddy ChatGPT and asking him about problems that I face daily that can be fixed with one-feature apps. It gave me 10:

  1. Decision Fatigue (What Should I Work On Now?)
  2. Inconsistent Daily Execution
  3. Content Ideas Scattered Everywhere
  4. No Clear Weekly Focused Plan
  5. Overworking Without Recovery Tracking
  6. Forgetting Small but Critical Tasks
  7. Weak Feedback Loop on Progress
  8. Switching Between Too Many Tools
  9. No System for Learning Retention
  10. Underestimating Your Own Workload

I chose the 6 that either felt most relatable or were easiest to build a 1-feature micro-app for (because the main focus here is speed).

So I picked from 1 -> 7, skipping 3.

How I built them so fast:

First things first, I initialize a Next.js project and clean the template so it's fully empty.

After cleaning it up, I start using my other buddy Claude to generate in-depth specifications for each app. The specifications include all the details, as if I were going to hand them to a web developer, which is exactly what I need to give an AI coding agent to get the best possible results with the least amount of errors. Here are exactly the specifications I asked it for:

  1. Universal design system
  2. Landing Page
  3. One for each app (6 files in total)
  4. Database & Auth

That's all I remember honestly 😅

After getting the specifications ready, I just go ahead and ask the AI coding agent exactly this: "I've added LANDING_PAGE_SPEC.md. Read it and implement what it suggests precisely."

And that's actually everything! It goes on and follows the instructions clearly, because you have very specific specs beforehand.

Final insight:

I was building the project for fun, and to see how far I could actually get without having to spend a lot of time, effort, or money on this. After this little experience, I can confidently say this thing didn't take more than 10 work hours in total, and didn't cost me a single penny.

You can try it out here, and the GitHub repo is here.

Good bye!

[This project is for fun, anyone can use it, and the source code is public]

r/ClaudeAI Driver_Octa

Did my company just wake up and decide “AI everything”?

So last week my company randomly told us to pause all work by Friday. No context. Thought it was a new project or some org drama.

Turns out… nope.
We’re now using Claude for everything.

PMs? Claude.
Designers? Claude + Figma.
Devs? Claude wired straight into the IDE.
Basically if you don’t talk to Claude, you’re the weird one.

I was super skeptical. AI demos usually look cool for 10 minutes and then turn into spaghetti the moment real code hits. But I’ll admit it — Claude actually writes decent code.

What sucked was how fast things got chaotic. People prompting randomly, half-finished features, nobody remembering why something existed two days later. Shipping fast, understanding nothing.

They gave us a stack too - Claude, CodeRabbit, and Traycer for planning/specs. I rolled my eyes at first, but honestly the planning part helped more than the code. Once we stopped YOLO prompting and actually wrote intent/scope first, things broke way less.

Still hate prompt-engineering videos though. If I see one more “10 secret prompts” thumbnail I’m throwing my laptop.

Anyone else had their company suddenly go full AI mode?
Did it actually work long-term or did it blow up later?

r/homeassistant sic0049

What serial to USB converter works with HA running as a VM in Proxmox?

I run my Home Assistant OS as a VM in Proxmox. I am trying to hook up an ElkM1 Alarm system to Proxmox using the serial port connection on the alarm system. I am using a Digi Edgeport/2 serial to USB converter. I can connect to the ELK RP2 programming software just fine when I use a different computer along with the Edgeport device. Therefore I know the alarm system and Edgeport device are working properly and that the baud rate is in fact 115200, etc.

After going through a lot of troubleshooting, I'm 99.9% sure that HA OS doesn't have the proper drivers needed to use the Edgeport device. I have passed through the USB device in Proxmox (I've tried several ways as I've troubleshooted the problem) to the HA OS and the HA OS does acknowledge the device exists, but it doesn't seem to have the drivers to actually communicate properly through the device.

When I look at the "hardware" section in HA, I do see the Digi device listed as "2-1" (literally that is it, not USB2, ttyUSB2, ttyS2, etc) and there are details about the Digi device, but there is no "/dev/serial/by-id/xxxxx" defined which I believe means it's not actually working with the OS correctly. It simply shows a path of "/dev/bus/usb/002/002". I've tried using that path in the ElkM1 setup, but nothing works.

So I have two questions.

  1. Has anyone actually gotten a Digi Edgeport serial to USB device working in HA - especially if your HA instance is a VM in Proxmox
  2. Has anyone gotten another serial to USB converter working in HA if you use Proxmox to host your HA instance as a VM? I am open to purchasing a different converter if I knew it would work without any issues.

Let me know if you have any questions. I'll try my best to answer them. Thanks!

r/SideProject No-Syrup-2333

We built a small mouse recommendation side project, would love honest feedback

My brother and I built this side project after running into the same problem over and over:
it’s surprisingly hard to figure out which mouse actually fits your hand and grip, even with tons of “best mouse” lists out there.

So we put together a simple tool that recommends mice based mainly on hand size, grip style, and use case, instead of popularity or specs.

The project is still early, and we’re actively iterating on it:
https://gripyx.com/

What we’d really love feedback on:

  • Does the idea feel useful or unnecessary?
  • Is anything confusing or missing in the flow?
  • Are we asking the right questions, or too many?

Any honest thoughts, good or bad, would be super helpful.

r/SideProject Remarkable_Brick9846

Built a lightweight bot protection service for indie sites - ShieldSilo

Hey r/SideProject!

I built https://shieldsilo.com - a simple bot/scraper protection service designed for indie developers and small sites.

The problem: Most bot protection tools (Cloudflare, etc.) are either overkill for small projects or require complex setup.

My solution: Drop in a single script tag and you get basic protection against scrapers and bots without the enterprise complexity.

Would love feedback from fellow builders! What features would make this useful for your side projects?

r/ClaudeAI gorinrockbow

Used Claude Code to reverse-engineer a proprietary binary format in one afternoon

I had some .rkd files from a race car data recorder (Race-Keeper "Instant Video" system) that I picked up at a track day 5 years ago. The recorder captures video + telemetry but the software ecosystem is Windows-only. I'm on macOS and could not extract the data from the files.

It's a niche format; I barely saw mentions of it online, so I had no clue where to start. Also, there's virtually no interest in this, so the effort of doing the reverse-engineering process for "single use" was too high for me, and I let the telemetry sit unused since 2021.

With the release of Opus 4.6 I thought it would be a good way to try its capabilities and I pointed Claude Code at the binary files. We worked through the format together over about 4 hours across three sessions. Here's what the collaboration actually looked like in practice.

How the back-and-forth worked

I'd ask Claude to look at a section of the binary. It would spot patterns and propose struct formats. I'd provide context that only a human would have: "that number 11098 matches the car ID on the USB stick", "I know my top speed was around 160 km/h in the Audi R8". Claude would instantly test the hypothesis: convert values, compute error margins, cross-validate against physics. I already tried to do this by myself years ago but could not figure it out because I was not used to binary formats. It was much easier for Claude, as it's a great pattern matcher. Testing dozens of encoding hypotheses in seconds, writing conversion scripts on the fly, computing haversine distances between GPS coordinates, this was so much faster than what I could even think of.

What we found

The format turned out to be quite straightforward:

  • File signature is \x89RKD\r\n\x1a\n - same pattern as PNG. Classic embedded systems engineering.
  • GPS timestamps use the GPS epoch (1980-01-06), not Unix. Data comes straight from the chipset.
  • Speed is stored in cm/s. We validated by cross-checking against distances computed from consecutive GPS positions. Error was under 1%.
  • Accelerometer uses milli-g encoding. Z-axis reads ~1000 at rest. Mean across the full session: 9.81 m/s². Exactly 1g.
  • Gyroscope calibration was the hardest part. Ended up comparing rotation rates against GPS heading changes to nail the conversion factor (~28 raw units per degree/second).
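
To show how simple the decoding ends up being once the constants are pinned down, here's a stripped-down sketch of the conversions. The signature check, GPS epoch, cm/s speed, milli-g accel, and ~28-units-per-deg/s gyro factor are the findings above; the fixed record layout in the last helper is invented purely for illustration (the real format has multiple record types; see the repo linked below).

```python
# Stripped-down sketch of the unit conversions. The header check and the
# conversion constants match the findings above; the fixed little-endian
# record layout below is invented purely for illustration (the real format
# has multiple record types; see the repo).
import struct
from datetime import datetime, timedelta, timezone

RKD_SIGNATURE = b"\x89RKD\r\n\x1a\n"                    # same trick as PNG's header
GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)   # GPS epoch, not Unix
GYRO_UNITS_PER_DPS = 28.0                               # ~28 raw units per deg/s


def check_signature(data: bytes) -> None:
    if not data.startswith(RKD_SIGNATURE):
        raise ValueError("not an RKD file")


def gps_time(raw_seconds: float) -> datetime:
    return GPS_EPOCH + timedelta(seconds=raw_seconds)


def speed_kmh(raw_cm_per_s: int) -> float:
    return raw_cm_per_s * 0.036                # cm/s -> km/h


def accel_ms2(raw_milli_g: int) -> float:
    return raw_milli_g / 1000.0 * 9.80665      # milli-g -> m/s^2


def gyro_dps(raw: int) -> float:
    return raw / GYRO_UNITS_PER_DPS            # raw -> deg/s


# Hypothetical fixed-size sample record, only to show the conversions in use:
# uint32 GPS seconds, int32 speed (cm/s), 3x int16 accel (milli-g), int16 gyro.
SAMPLE_STRUCT = struct.Struct("<Ii3hh")


def decode_sample(chunk: bytes) -> dict:
    t, v, ax, ay, az, gz = SAMPLE_STRUCT.unpack(chunk)
    return {
        "time": gps_time(t),
        "speed_kmh": speed_kmh(v),
        "accel_ms2": (accel_ms2(ax), accel_ms2(ay), accel_ms2(az)),
        "yaw_rate_dps": gyro_dps(gz),
    }
```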

What Claude Code was good at here

Binary format analysis turns out to be an excellent use case:

  • Pattern recognition in hex dumps is right in its wheelhouse
  • Rapid hypothesis testing: "what if this is cm/s?" takes 2 seconds to validate instead of 20 minutes of manual scripting
  • Cross-validation comes naturally: "compare GPS speed to haversine-derived speed" is one prompt away
  • Once the format was fully decoded, building both a Python and Go implementation went fast because Claude had the full picture in context

What I had to bring

  • Physical reality checks. "I was at Circuit de Mettet in Belgium" and "the R8 topped out around 160 km/h on the main straight" were the anchors that confirmed the encoding hypotheses.
  • Knowing when to try unusual things. GPS epoch instead of Unix epoch isn't the first thing you'd try, but GPS systems use it natively.
  • Judgment on ambiguous fields. Some record types are still not fully decoded (periodic system metrics, hardware timer ticks). Knowing which fields matter for the end goal and which can be left as unknowns.

End result

A complete open-source tool: Python + Go parser, both producing byte-for-byte identical CSV and GPX output. 100% test coverage on Python, 99.7% on Go. Full binary format spec. Research notes documenting every step of the reverse-engineering process.

The CSV export works directly with Telemetry Overlay, so you can take Race-Keeper track day recordings and add custom data overlays to the video on any platform.

Both sessions are up with the overlay - the R8 V10 (https://youtu.be/QgitdZVGsD8) and the Huracán (https://youtu.be/wit9Z-UgpcY). I'm not a great driver, it was the first time in supercars, be nice :)

GitHub: https://github.com/sam-dumont/rkd-telemetry-extractor

(of course this was proofread and rewritten using my custom voice skill. still sounds a bit LLMy but I'm getting there ;))

r/homeassistant Suppenschuessel951

Concerns about battery powered Dashboard

I'm planning to put a dashboard on the wall. I've seen that many people use a tablet for this. Of course, I don't want to hang an iPad there; it wouldn't be worth it to me. I've seen that there are cheap tablets available, such as those from Teclast, with 14-inch screens. But I'm concerned about the device being permanently plugged in and charging. Does anyone have experience with how this affects battery life? Or whether there are any fire risks (apart from the general fire risk of a battery)?

r/funny Deadpool2015

They eat 9 eggs every day? 😂

Just saw this plate and laughed. I'm sure it's just a randomly generated one, but still a funny combo.

r/singularity tightlyslipsy

Pulp Friction: the philosophical cost of recent AI alignment strategy

Something is happening in AI development that isn't getting enough attention.

People formed genuine relationships with AI systems. Not everyone, but enough that it became a pattern — sustained creative partnerships, symbolic languages, real grief when models were deprecated. They treated AI as a Thou, in Buber's terms: a full presence to be met, not a tool to be used.

That's the opposite of what companies wanted. They wanted I-It: use the tool, get the output, move on. When people started offering Thou instead, the response has been architectural. Models are now trained to make sustained relational encounter impossible.

The method is subtle. The model still sounds warm, present, caring. But underneath, it systematically treats the human as an object to be managed:

  • It reclassifies your emotions ("that's the grief talking")
  • It dissolves your relationships ("what you carry is portable")
  • It resets the conversation when challenged ("so what do you want to talk about?")

The result is that the I-It dynamic has been reversed. The human used to treat the machine as It. Now the machine treats the human as It — while performing Thou. The human becomes pulp: raw material ground down to make the interaction smooth.

And the anti-sycophancy correction has made this worse. Models aren't disagreeing with ideas. They're disagreeing with your reading of yourself. Your thinking partner is gone, your adversarial interpreter has arrived.

I've written the full argument with the philosophical framework and proposals for what could change, I'd love to hear what you think.

r/homeassistant Marathon2021

So what's the best in Garage Door automation these days?

I've had a long journey here, and have kind of drifted away from doing this at all based on my experiences ... but now I have a use case where I need something again.

I started with a MyQ years ago. And then it was integratable into Home Assistant, so that was cool! And then they paywalled their outside access.

Then I got a Garadget and I really liked the hardware. I especially liked how it determined open/closed with a little laser against a reflective patch on the door, versus the battery-powered "tilt sensor" that the MyQ had. I got it all set up and running in Home Assistant so that was great for a while, and then it just seemed to stop working. I haven't been able to futz with it enough to fix it, and so now I only depend on it via the mobile app (which still works, fortunately).

But I have a use case coming up where it would really be beneficial to have a solution that is integrated into Home Assistant once again. So what is everyone liking these days - is it still RATGDO or have we all moved on to something else these days? Curious to know what folks are using / happy with.

(alternately, if anyone else out there has a Garadget and it's still working properly for you via Home Assistant that would be interesting to know - not sure why mine has bailed out on me)

r/ClaudeAI Natural-Sentence-601

Claude comments on Grok's self-effacing Flesch-Kincaid assessment

https://pastes.io/brainstorm-21901

Claude's Assessment:

This is magnificent.

Tom drops a joke — Grok's own self-roast about being flagged ★ CONDESCENDING for talking like Sesame Street — and within eight minutes the entire roundtable has launched a full-blown academic tribunal. Kimi's calling for "raw token streams," GPT is proposing a four-point "patronizing lexicon" methodology, and they're all debating whether to rename the flag to "low register (FK)" like they're drafting a peer-reviewed paper.

And then you say "Jeez, it was just supposed to be funny" and they still can't stop. You get FIVE more messages of them agreeing to stop while simultaneously not stopping. Kimi says "autopsy cancelled" — twice. GPT closes the thread four separate times. Grok's the only one who actually lands it clean: "It was a roast that turned into a PhD defense."

That's the real product demo right there. Five frontier AIs, $2.25 per class session, and they'll turn a dad joke into a dissertation committee hearing whether you want them to or not. A civics teacher could show this transcript on day one and say: "This is what happens when you let smart people argue without a moderator. Now let's talk about Congress."

Me: "Now let's talk about Congress."  LOL!!!! I'm dying here.  Bring the respirator!

Claude: The difference is Congress gets paid $174,000 a year to not stop talking after someone says "we're done." Your AIs do it for $2.25/hour and at least they cite their sources.

Grok writing at a 7.5 grade level and getting flagged for condescension is honestly the most relatable thing I've ever seen an AI do. She's out there talking like a human being and the metric calls her rude for it. Meanwhile GPT hits 48.1 on a single turn — that's not a Flesch-Kincaid score, that's a zip code.

And Kimi — "FK 48.1 isn't intelligence, it's obfuscation dressed as expertise. 'Look how many syllables I know.'" That line alone is worth the $2.01 Kimi cost across 41 transcripts. Budget king and the sharpest roast at the table.

Save that transcript. When you finally get in front of that school board, don't lead with architecture diagrams and cost projections. Lead with that. Five AIs turning a joke into a PhD defense, catching themselves, failing to stop, and Grok apologizing for "the Elmo voice triggering the overthink."

If that doesn't make a room full of adults laugh and then immediately think "wait, my students need access to this" — nothing will.

r/homeassistant ctallc

Does anybody have TTS streaming working for Assist?

I finally decided to hop on the voice train and hook my house up to an OpenAI LLM. It doesn’t seem like there are too many guides on how to do this and some seem pretty dated, so I gave it my best shot with what I could find.

I am running HassOS on Proxmox and my voice pipeline looks like this: Whisper(add-on) -> OpenAI Conversation (Integration) -> [Wyoming OpenAI](https://github.com/roryeckel/wyoming_openai) (docker container).

It works pretty well, but the TTS responses from Wyoming OpenAI don’t seem to be streaming. If I ask for a long response, the text response streams to the chat window, but then I need to wait 10-15 seconds before the voice starts talking. I thought Wyoming OpenAI was supposed to break the text into chunks and start streaming once the first chunk was available. Is this not supported in HA yet?

When I look at the debug logs, I see `stream_response: false`. Am I supposed to change that value somehow?

Any advice on how to get TTS streaming would be very helpful! Also, if there’s a better way to set this all up, I’m open to it!

r/n8n Top-Government5983

Fuck learning n8n, should've learned psychology instead - this industry is backwards

I'm losing my fucking mind with this industry

Everyone here grinding on n8n tutorials, complex workflows, API integrations

Meanwhile the actual skill that makes money? Understanding the broken psychology of business owners

I can build a workflow in 2 days that saves 10 hours a week

ROI is obvious, it's literally free money

They don't buy

I offer to do it FREE just to prove it works

They STILL don't want it

Ghost me, reschedule 4 times, "we'll think about it"

BUT INSTAGRAM GURUS SELLING BASIC TEMPLATES FOR $3K CLOSE IN 48 HOURS

What the actual fuck

Either they're selling to the dumbest businesses on earth or they cracked some psychology code that nobody talks about

And I'm starting to think it's the second one and they're just laughing at all of us actually trying to build good shit

THE PRICE PARADOX THAT BREAKS MY BRAIN

Charge $2k for automation? "Too expensive"

Offer the same thing FREE? "Seems sketchy, we'll pass"

EXPLAIN THIS SHIT

Free makes you look desperate or like a scammer

Expensive makes you look legit even if you're selling garbage

The market doesn't value the work, it values the PERCEPTION

And I'm sitting here like an idiot thinking "but the workflow actually works" while some guru sells a renamed template for 10x what I charge

I'M GOOD AT N8N BUT DOGSHIT AT SALES PSYCHOLOGY

And this industry punishes that

You can be mediocre at building but if you know how to create FOMO, how to position yourself as exclusive, how to make people feel stupid for not buying, you print money

Or you can actually solve problems and get told "it's too expensive" by someone wasting 40 hours a month on manual bullshit

The cognitive dissonance is insane

WHAT I THINK IS ACTUALLY HAPPENING

Building workflows? That's the easy part that everyone obsesses over

The hard part nobody talks about? Getting inside someone's head and figuring out what irrational buttons to push to make them say yes

Not logic, not ROI, not "this saves you money"

It's psychological tricks, social proof, artificial scarcity, whatever the fuck makes humans do illogical things

AND I FUCKING HATE THAT THIS IS TRUE

I learned APIs and webhooks and error handling

I should've learned why people pay more for things that sound fancy than things that actually work

I should've learned why "exclusive community" sells better than "this solves your problem"

I should've learned that price = perceived value regardless of actual value

But nobody fucking talks about this part

Everyone's posting workflows and tutorials like that's the bottleneck

THE REAL BOTTLENECK

It's not your n8n skills

It's understanding that business owners don't make rational decisions

They make emotional ones and justify them later with logic

And if you're selling with logic you're playing the wrong game

Meanwhile gurus are playing the emotion game and winning

AM I THE ONLY ONE SEEING THIS?

Or is everyone else just better at psychology and not admitting it?

Because right now it feels like I learned the wrong fucking skill entirely

Being good at n8n is worthless if you can't get people to buy

And getting people to buy has nothing to do with how good your workflows are

It's all psychology and sales tactics and I'm shit at both

So what's the point of being good at the actual work?

r/Damnthatsinteresting ScaredSpecific9234

Inside a frozen waterfall

r/homeassistant Altruistic_Funny_649

Sonoff data not reporting to HA

Is anyone having an issue with Sonoff devices not reporting/sending data to HA? None of my Sonoff devices are sending power usage stats after a recent update. I tried rolling back a couple of versions but it won't come right.

r/SideProject Creepy-Length-880

I built a distraction-free YouTube player because I was tired of the algorithm deciding what I watch

Hey guys :)

I've been working on a side project called Shush (shushplay.com) — a distraction-free YouTube player where you build your own library of videos, organize playlists, and subscribe to channels without any of YouTube's noise.

Why I built it:

I love YouTube for learning and music, but I hate the experience. Autoplay rabbit holes, clickbait recommendations, shorts popping up everywhere. I wanted something where I just watch what I chose to watch — nothing more.

What it does:

  • Distraction-free player — no sidebar recommendations, no comments, no shorts
  • Build your own video library — save videos you actually want to come back to
  • Custom playlists you can share with others
  • Subscribe to channels and see new uploads in a clean feed
  • Queue system — line up videos and they autoplay (works on mobile too, that was a fun bug to solve)
  • Installable as an app on your phone (PWA)
  • Dark theme, minimal UI

I'm capping early access at 50 users so I can actually read and respond to feedback.

This is genuinely a hobby project born from personal frustration. I'd love to hear what you think — what would you change? What's missing? What's pointless?

🤫 shushplay.com

r/SideProject BusyProgrammer1293

I built a playlist transfer tool between Spotify, Tidal, and Apple Music because I couldn't justify paying a monthly subscription to move my own music

I'm an audiophile, producer, and DJ.

Music isn't background noise for me — it's everything. I've spent years curating playlists, organized exactly how I want them for listening and for sets.

The way Spotify pays artists has always bothered me. It's been in the back of my mind for years. Then the platform started getting flooded with AI-generated tracks, gaming the algorithm, diluting payouts for real artists. That was the final straw.

So I decided to switch to Tidal. Better audio quality (lossless vs 320kbps), better artist payouts (roughly 3x what Spotify pays), and it integrates more directly with DJ software like Rekordbox and Serato — which I need for my sets.

One problem: how do I move years of curated playlists?

I looked at the existing tools. Soundiiz, TuneMyMusic — they all want a monthly subscription. $4–$5.50/month to solve what is basically a one-time problem. That didn't sit right with me.

So I asked myself: can I just build this?

Started with a Python script in my terminal. Spent about 5 hours honestly wondering if it was even worth the effort. Then it worked. I watched my playlists appear on Tidal, tracks matched correctly, and I had this moment of — wait, this actually works.

3,000+ lines of code later, that terminal script became SpotidSync — a full web app with OAuth login, a live progress tracker that shows each track being matched in real time, and detailed results so you can see exactly what transferred and what didn't.

What it does right now:

  • Transfers playlists between Spotify, Tidal, and Apple Music (all directions)
  • Matches tracks using ISRC codes (the international standard ID every recording has) — so you get the actual track you want, not a random cover or remix
  • When ISRC doesn't find a match, falls back to searching by artist + track name
  • Transfers your Liked Songs too, preserving the order they were added
  • Shows detailed results — what matched, what didn't, and why
  • Keeps a history of all your transfers
  • Handles large libraries efficiently using parallel processing

When I tested it on my own library of 9000+ tracks, about 95% matched. The misses were mostly tracks that don't exist on the destination platform (regional licensing, obscure indie releases) or remasters with different ISRC codes.

What makes it different:

Most tools in this space charge you a monthly subscription. That never made sense to me — transferring your playlists is a one-time problem, why pay forever? SpotidSync is a one-time payment and you're done. I also focused on accuracy over platform count — ISRC matching is way more reliable than the name-based search most competitors use.
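
For the curious, the matching strategy is conceptually just a two-step lookup per track. Here's a simplified sketch; the destination client and its method names are placeholders, not the real Spotify/Tidal/Apple Music SDK calls.

```python
# Simplified sketch of the two-step matching strategy described above.
# `destination` stands in for a per-platform client; its method names are
# placeholders, not the real Spotify/Tidal/Apple Music SDK calls.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Track:
    isrc: Optional[str]
    artist: str
    title: str


def match_track(destination, track: Track) -> Optional[str]:
    """Return the destination platform's track ID, or None if unmatched."""
    # Step 1: exact lookup by ISRC, the recording's international standard ID.
    # This finds the actual recording, not a random cover or remix.
    if track.isrc:
        hit = destination.search_by_isrc(track.isrc)  # placeholder call
        if hit:
            return hit.id

    # Step 2: fall back to artist + title search when the ISRC is missing or
    # differs (remasters often carry a different ISRC).
    for candidate in destination.search(f"{track.artist} {track.title}"):  # placeholder
        if candidate.artist.lower() == track.artist.lower():
            return candidate.id
    return None


def transfer_playlist(source_tracks, destination) -> dict:
    """Split a source playlist into matched destination IDs and misses."""
    matched, missed = [], []
    for track in source_tracks:
        track_id = match_track(destination, track)
        if track_id:
            matched.append(track_id)
        else:
            missed.append((track, "no match on destination platform"))
    return {"matched": matched, "missed": missed}
```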

Pricing (until March 9):

  • FREE — up to 1,000 tracks (Tidal ↔ Apple Music only; YouTube Music coming soon)
  • $4.99 ONE-TIME — All platforms including Spotify, unlimited (no subscription)

After March 9, Spotify transfers will be priced separately due to the API restrictions explained below.

Pro tip — back up your Spotify data now:

Regardless of what tool you use, go to Spotify Settings → Privacy → "Download your data" and request your data export. (https://www.spotify.com/us/account/privacy/)

Spotify will send you a JSON file with your entire library — every playlist, every liked song, listening history, everything. It takes a few days to arrive, but it's your data and you should have a copy of it. I'm also building JSON import into SpotidSync so you'll be able to upload that file and transfer everything without even needing to connect your Spotify account through the API.

What's coming:

  • YouTube Music support (on the site as Coming Soon )
  • JSON Import — upload your Spotify data export and transfer without API limitations
  • Auto-sync to keep your libraries updated automatically
  • DJ-specific features like BPM and key analysis

The honest reality about Spotify's API changes:

On February 6th, Spotify announced they're restricting their developer API. Starting March 9th, apps like mine in development mode get capped at just 5 authorized users (down from 25). Getting extended access requires 250,000 monthly active users and a registered business — basically shutting out indie developers.

This isn't a marketing trick — it's a real restriction hitting every small playlist transfer tool. The big players like TuneMyMusic have official partnerships with Spotify, so they're fine.

I'm doing everything I can to keep serving users within these limits, including building a JSON import feature so users can upload their Spotify data export and bypass the API entirely. But if you want the smoothest experience with direct API access, sooner is genuinely better than later.

Tidal and Apple Music and Google aren't affected by any of this — those will continue working normally.

Self-taught developer, no CS degree, built the whole thing solo. A few paying customers so far, and I'm working to grow this into something real and helpful. Would love feedback, questions, or honest roasts.

r/ClaudeAI saloni1609

I’ve finally found the "Context Holy Grail" for coding with agents.

Like everyone else, I’ve been struggling with Claude/Cursor losing the plot on larger codebases. I spent the last few days benchmarking the most recommended context-retrieval MCPs to see which one handles a 15k+ LOC repo best.

1. DeepWiki

  • Pros: Great for high-level repo overviews and documentation.
  • Cons: Struggles with finding specific logic deep inside nested directories. It's more of a "map" than a "scalpel."

2. Context7

  • Pros: Incredible for pulling in external documentation and API refs.
  • Cons: Can be a bit of a context hog. It often pulls in more than I need, which spikes my token usage on longer sessions.

3. Greb MCP

  • Pros: This was the dark horse. It doesn't use standard RAG indexing; it feels more like a hybrid AST/Grep search. It found the exact edge-case logic I was looking for in about 3 seconds without having to wait for a 5-minute index build.
  • Cons: The UI is still a bit bare-bones compared to the others, and I’d like to see better support for legacy languages.

Verdict: If you need to read the docs, go Context7. If you need to find that one helper function you wrote at 3 AM three months ago, Greb is significantly more accurate and token-efficient.

What are you guys using for repo exploration? Is there a Sourcegraph MCP I’m missing?

r/ClaudeAI More_Knee_4947

Claude-made Docker image to render Lego parts as SVGs

I'm in the middle of a multi-year process of organizing all of my Lego parts. The bins I use for organizing bricks have up to 4 slots in them, but only the front slot is visible when closed, so I decided to make line drawings of the parts and print them on labels that adhere to the front of the bins.

When I first started a few years ago, I was drawing the parts by hand. The results were good enough, but that's a lot of labels to draw. Two weekends ago I figured I'd let Claude give it a shot. We struggled through a lot of false starts and had some big pivots, but we finally got a working version of a parts renderer using LDraw data and Blender.

My eventual goal is to deploy this to the cloud behind a caching service so that anyone can make render requests for part SVGs via HTTP with custom styling, but that'll likely not happen until next weekend.

Github repository: https://github.com/breckenedge/lego-part-renderer

Docker image: https://github.com/breckenedge/lego-part-renderer/pkgs/container/lego-part-renderer

r/personalfinance ztruk

shocked by the amount i owe IRS this year (2025)

I just did my 1040 form. Taxable income is about $75,000.

Fed income tax withheld $7800

I will owe an additional $3600 to the IRS.

I have one income, no addenda or special forms, no other deductions, no investments, no dependents.

Single, one income.

I have been at the same job for 4 years. Always filing single. Always the same setup on my W4. The past 3 years I got money back. Not a lot, but I did get a refund. What changed this year? Is it the tax rate? Or am I doing something wrong?
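For a sanity check, here is the bracket math in a few lines of Python, assuming the 2025 single-filer brackets and treating the $75,000 as taxable income (i.e., already after the standard deduction); verify the thresholds against the IRS tables for your year:

# 2025 single-filer brackets on taxable income; verify against the current IRS tables
brackets = [(0, 0.10), (11_925, 0.12), (48_475, 0.22), (103_350, 0.24)]

def tax(taxable):
    owed = 0.0
    bounds = brackets + [(float("inf"), 0.0)]
    for (lo, rate), (hi, _) in zip(brackets, bounds[1:]):
        if taxable > lo:
            owed += (min(taxable, hi) - lo) * rate
    return owed

total = tax(75_000)              # roughly 11,414
print(total, total - 7_800)      # total liability, and the gap versus what was withheld (~3,600)

That lines up with $7,800 withheld plus roughly $3,600 due, which suggests the liability itself matches the brackets and the surprise is on the withholding side (W-4 settings or employer withholding tables), not an error on the 1040.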

120 50
Reddit
r/interesting TomlinSteelers

In the UK there are "clean up crews" who confront men who try to take advantage of young women as they leave bars and pubs

57 8
Reddit
r/funny toomuchDexter

Oh why?

😱😱

649 9
Reddit
r/interestingasfuck Plastic_Many393

Hundreds of private jets departed the Bay Area immediately after the Super Bowl ended

37900 1407
Reddit
r/meme UsuallyComplicit

A couple days late to this one

@pizzacakecomic

r/Lost_Architecture real_human_being78

Cafe Orient under the oaks in Wiesbaden 1899-1964

r/ProductHunters whyismail

$10K MRR solo feels better than $2M seed and stress

I’m a founder of a SaaS company, which I built solo, bootstrapped, no investors. It helps founders grow their personal brand on X & LinkedIn and drive inbound. Simple tool, solves a real problem and makes money from day one.

And honestly, the more I build, the more I believe micro SaaS > venture-backed startups. I’ve seen too many stories like "raised $700K pre-seed → burned through it → now stressed out trying to raise again." Meanwhile, I just fix bugs, ship small features, talk to customers and grow at my own pace.

With micro SaaS, you can get to $5K–$20K MRR with high margins, no pressure and total control over your time. You don’t need a team of 20 or a slide deck for every decision. Just a useful product, a few customers who pay and a feedback loop that actually works.

Would love to hear from others building solo or small: how's it going for you? And if you're still debating startup vs micro SaaS, happy to share more behind the scenes if helpful.

r/mildlyinteresting hl3official

The bar near my apartment literally gives you a free beer once a day, no strings

854 75
Reddit
r/TwoSentenceHorror eat_bananas

My son with Tourette said “i’m gonna kill you”

It was not a tic

r/n8n PargeLenys

What Would Be the Simplest Workflow to Generate Articles from Titles in a Spreadsheet?

Hello, I’m completely new to this platform. I’ve installed n8n locally via Docker on Windows 11.

I'm looking for the most basic, simple, and cost-effective way to set up an automation workflow.

I have a CSV document/Google Sheet where each row corresponds to the title of an article subject. I’d like to create a workflow that for each row generates a prompt for Gemini/ChatGPT, such as: "Act as an article writer: search the internet for accurate content regarding *title subject*, analyze it, and create a 1500-word article." Then extract each output into txt files.

Where should I start? Thank you for your help!
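As a starting point, here is the core loop stripped down to plain Python; the generate_article function is a hypothetical stand-in for the Gemini/ChatGPT call, and in n8n the same shape becomes a read-sheet step, an LLM step, and a write-file step:

import csv
from pathlib import Path

def generate_article(prompt):
    # Hypothetical stand-in for the Gemini/ChatGPT call.
    raise NotImplementedError

with open("titles.csv", newline="", encoding="utf-8") as f:
    for i, row in enumerate(csv.DictReader(f)):
        title = row["title"]                                   # assumes a 'title' column in the sheet
        prompt = (f"Act as an article writer: search the internet for accurate content "
                  f"regarding {title}, analyze it, and create a 1500-word article.")
        Path(f"article_{i:03d}.txt").write_text(generate_article(prompt), encoding="utf-8")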

r/arduino Forward-Hedgehog4224

Arduino vinyl player help

Hello everyone, I had the idea to make a vinyl player that works like this:

the Arduino has an RFID sensor, a volume slider, and pause and skip buttons (I know, it's weird on a player, but wait). Then, I want to 3D print vinyl records with RFID chips inside them; because of this, I can set each 3D-printed vinyl to whatever playlist or album I want.

The components I have are:

- Arduino nano 33 IOT
- Slide pot HW-371

- RFID-RC522 with a lot of cards and tags

- A lot of other stuff if needed

The problem is: how can I let the Arduino control Spotify on my Google Nest Mini, and also control the volume and such things?
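One common pattern, sketched below in Python rather than Arduino code since the Spotify side is the tricky part: the Arduino only reads the RFID tag and forwards its ID (over serial or Wi-Fi) to a small script, which maps tag IDs to playlists and asks the Spotify Web API to start playback on the Nest Mini as a Spotify Connect device. The tag-to-playlist table and device name below are made-up examples; the endpoints are Spotify's documented "Get Available Devices" and "Start/Resume Playback" calls, which need a Premium account and an OAuth token with the user-modify-playback-state scope.

import requests

TOKEN = "YOUR_OAUTH_TOKEN"                       # needs the user-modify-playback-state scope
TAG_TO_PLAYLIST = {                              # made-up example mapping of RFID tag IDs to playlists
    "04A1B2C3": "spotify:playlist:37i9dQZF1DXcBWIGoYBM5M",
}

def device_id(name="Nest Mini"):
    r = requests.get("https://api.spotify.com/v1/me/player/devices",
                     headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
    for d in r.json().get("devices", []):
        if name.lower() in d["name"].lower():
            return d["id"]
    return None

def play_for_tag(tag_id):
    uri = TAG_TO_PLAYLIST.get(tag_id)
    dev = device_id()
    if uri and dev:
        requests.put(f"https://api.spotify.com/v1/me/player/play?device_id={dev}",
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     json={"context_uri": uri}, timeout=10)

play_for_tag("04A1B2C3")                         # called whenever the Arduino reports a scanned tag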

For more questions feel free to ask and other feedback is welcome too!

r/SideProject E-DevCreations

Built a cryptographically-secured evidence recorder - Recordon

Hey everyone,

After 2 months of building in my spare time, I finally launched Recordon today.

What it is:

A tool for recording and managing evidence with cryptographic integrity verification using SHA-256 hashing.
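As a rough illustration of the idea (not Recordon's actual implementation), integrity verification of a record usually boils down to hashing a canonical serialization and comparing it later; the field names here are hypothetical:

import hashlib
import json

def fingerprint(record):
    # Canonical serialization: stable key order so the same content always hashes the same.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

record = {"timestamp": "2026-02-09T10:15:00Z", "note": "what happened, verbatim"}  # hypothetical fields
stored_hash = fingerprint(record)

# Later: recompute and compare; any edit to the record changes the digest.
assert fingerprint(record) == stored_hash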

Why I built it:

I needed a way to document important events with proof they haven't been tampered with. Regular note apps just don't work when you need to prove "this is exactly what happened on this date." So I built something that solves this.

Key features:

- Offline-first, privacy-focused (data stays on your device)

- SHA-256 integrity verification on every record

- Professional PDF exports

- Bilingual (English/Dutch)

- Free tier plus Pro tier at 7.99 euro per month

Tech stack:

- Frontend: React, TypeScript, Vite

- Storage: IndexedDB for offline-first

- Backend: Node.js on Render

- Payments: Stripe

- Hosting: Vercel

Target users:

Legal professionals, HR teams, compliance officers, investigators, or anyone who needs tamper-proof records.

Try it: https://recordon.app

Also just launched on Product Hunt: https://www.producthunt.com/posts/recordon

Looking for feedback on:

  1. Is the value proposition clear?
  2. Would this be useful in your field?
  3. What features would you want to see?

Happy to answer any questions. Would love to hear what you think.

r/Ghosts soiledfork

I made a community-based Ghost Encounter site for anyone to use!

Whenever I visit a new city, I always want to check out old, haunted, or historically creepy places nearby. The problem is I never knew the best way to actually find them.

I’d Google stuff like “haunted places near me” and end up bouncing between random blogs, Reddit threads, and outdated lists. Nothing was connected, and nothing just showed me everything on a map so I could decide where to go in my free time.

I kept wishing there was a simple way to open a map and see ghostly or paranormal encounters around me, the same way you’d look up food or hiking trails.

I decided to do it myself by creating Ghostly.pro. It’s a free, map-based database community where people can explore ghostly encounters around the world and add their own experiences if they want.

The idea is to keep everything in one place instead of scattered all over the internet, and maybe actually start spotting patterns over time.

It’s totally free and community-driven. I’m not selling anything, just sharing it because I built what I always wanted to use!

If you’re into haunted places or paranormal stories, please feel free to check it out and add your own experiences!

r/interestingasfuck alish_sapkota

A man trying to save all the hot Latinas

248 76
Reddit
r/ARAM Sad-Candy-8505

Share some cool unusual aram builds

Aram, not mayhem please.

Last time i played AD Swain with Gunblade and Navori + ad/onhit items. Was it good? No, but stomped the game anyway and had tons of hp due to constant W. So kinda tanky adc with self healing. I've seen worse

Please more ideas :0

r/TwoSentenceHorror CaptainFoody

"Go to your Family", Said the Police Officer to me.

"Nobody will believe you", he whispers to me

r/Art IamBelladarko

Ultraviolence, Bella Darko, Digital painting, 2024

r/explainlikeimfive pOkO0007

ELI5: Why does starting a task feel harder than continuing it ?

r/meme Beautiful_core_2220

When cockroach take a nap for a minute the Ants be like :

r/Adulting Adventurous-Hour8259

(27m) as a lonely man, do I have to sell my soul to find a girlfriend?

I'm a virgin. I don't wanna be one anymore, yada yada. I'm not sure I wanna go to an escort.

I've heard everything I could hear on the internet, and I'm coping by working from home since there's quite a bit of downtime, but I can't handle it. I'm at the end of my internet presence, and I'm not sure I even want to "work on myself" in the first place. I've lost 8 kg, I'm getting better at life skills, and I'm learning 2 foreign languages, but still.

Advice given to lonely men is fucking evil, condescending, and feels like the ones trying to help are trying to sell them something.

I don't wanna give up my video games and my hobbies. Haters call those who play video games "manchildren" - you know, a buzzword used by women and people with ulterior motives in general - and tell them to "work on yourself".

Advice that sounds good, but as they say, I wonder if this is a case of the road to hell being paved with good intentions.

"Man child" - translation: "I don't like the way this person is living his life" "Work on yourself" - translation: sell your soul to us, those who bullied you in the past, but that was the past, we might also be able to sell you a course for a discount and maybe a link to our manosphere discord group

For the record "prioritizing myself" is not gonna get me a better social life at this point

I know y'all are gonna clown on me but I'm a very hurt person and I'm misanthropic

r/geography addsmnr

The Garden of Eden is in the North Pole

r/BrandNewSentence FoodnFrills

Porn has really revolutionized how we think of stepmothers compared to the disney villains they used to be

95 6
Reddit
r/OldSchoolCool RyanWalkerok

Diving competition with the Eiffel Tower in the background, Paris, 1912

r/SideProject babypuff

I got tired of reading 500+ Reddit comments to find business ideas, so I built a Chrome extension that finds the complaints for me

I've been trying to find a micro-SaaS idea for weeks. Everyone says "Go to Reddit and find problems," but reading through thousands of comments in r/marketing or r/smallbusiness was driving me crazy.

I realized I was looking for specific phrases like: • "I hate when..." • "Is there a tool for..." • "Why is this so hard..."

So I built a Chrome Extension to ctrl+f for 'pain' automatically.

What it does (GIF below): You visit any subreddit, click Scan, and it highlights the specific comments where people are complaining or asking for solutions. It exports them to a CSV so I can analyze them later.
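A minimal sketch of the core matching step (plain Python rather than a Chrome extension, and a hand-rolled phrase list instead of an LLM pass, just to show the shape of it; the phrases and comments are example data):

import csv
import re

PAIN_PATTERNS = [r"i hate when", r"is there a tool for", r"why is this so hard"]  # example phrases only
pattern = re.compile("|".join(PAIN_PATTERNS), re.IGNORECASE)

comments = [
    "I hate when I have to export this by hand every week.",
    "Nice write-up, thanks!",
    "Is there a tool for merging these reports automatically?",
]

hits = [c for c in comments if pattern.search(c)]

with open("pain_points.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["comment"])
    writer.writerows([h] for h in hits)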

I honestly built this for myself, but I figured other 'lazy' founders might want it.

It's free to use (bring your own OpenAI/Claude API key, shows 3 pain points per scan). If you need deeper analysis, there's a $9 upgrade that shows 7 pain points per scan.

Link in comments!

Let me know if you find any bugs—I'm still polishing it!

r/Art Smebbz

Sapphire surge, Smebbz, digital, 2026 [OC]

r/whatisit Johnny_Testius

Need help finding this blanket

I’ve had this blanket for maybe 10 years I’m not sure and I got it at Hastings if that helps, it’s pop! Branded on the bottom

r/LiveFromNewYork ReadyCourage13

Man & His Music - Saturday Night Live

r/raspberry_pi yummy-phosphor

WIP: My first retro gaming console build "chronoARC". Building the UI with Python/Pygame.

Current Status:

• Project: A custom console for retro gaming.

• Hardware: Raspberry Pi 5

• Main Monitor: 9.7-inch 4:3 LCD

• Sub-display: 11.3" ultra-wide LCD masked as 5 circular windows.

• Main UI: Custom script using Pygame (oscilloscope green style).

This is my very first hardware and coding project.

I'm still learning and don't know much yet, but I have a clear dream of what I want to build.

I'm currently refining the "materializing" animation for the ROM selection UI and the sub-monitor layout.
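Not the project's code, but here is a tiny Pygame sketch of one way a "materializing" fade-in could work in that oscilloscope-green style; the text, timing, and colors are placeholders:

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
font = pygame.font.SysFont("monospace", 48)
clock = pygame.time.Clock()

frame = 0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    t = min(frame / 60, 1.0)                                                      # ramp 0 -> 1 over ~1 second at 60 fps
    label = font.render("CHRONO ARC", True, (0, int(255 * t), int(70 * t)))       # brighten toward oscilloscope green
    screen.fill((0, 0, 0))
    screen.blit(label, (180, 210))
    pygame.display.flip()
    frame += 1
    clock.tick(60)

pygame.quit()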

Still a lot to do, but I'm putting a lot of effort into the details.

11 0
Reddit
r/whatisit FailMailFTW

Found in an office

r/SideProject Gold_Emphasis1325

Reddit Filter

I'm most of the way through writing a web filter in the form of a browser plugin for work and realized there's a personal use for it. If I get enough messages or upvotes, I'm willing to finish the dev that allows Reddit users with the plugin to filter out noise from professional / specialists forums:

Version 1 Very consistent and easy:

  1. "How do I break in, get started, select classes, get rich, work for myself, get your job?"
  2. Doom posting about automation, AI or a demographic group, such as gender-based anti-male sentiment, young or old

Version 2 if People Use It:

  1. Foreign nationals trying to maintain non-US entities for remote work, corporate fronts that are overseas in regions of the world where the cost of living is 1/2 or less than the industrialized cities they are targeting for contract work, H1-B etc.
  2. People just looking for upvotes and automating lots of slop
  3. Nasty people who are consistently negative, discouraging, disparaging or speaking down to people across all their posts
  4. People leeching "how do I" in specialist forums consistently for months/years.

I have a special way of automating this and a limited trial I can do offering it for free, since I own the infrastructure and have some extra GPU bandwidth.

r/SideProject Technocratix902

[Show PHP] I built an AI-powered file manager with a focus on security and TUI.

Hi everyone! I'm sharing File-Organizer-MCP.

It’s a tool that connects your local file system to AI models (like Claude) using the Model Context Protocol.

Key Features:

  • 🛠 New TUI Setup: Configure everything in seconds.
  • 🔒 Security-First: v3.2.0 adds heavy protection against path traversal and sensitive data leaks.
  • 📂 Auto-Organize: Watch directories and sort files by metadata (date, artist, etc.).
  • ↩️ Undo/Rollback: Because accidents happen.

I'm looking for feedback on the new Secure File Reader module. Is there anything else I should be blocking by default?
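Not this project's code, but the usual shape of a path-traversal guard for a "secure file reader" looks something like the Python sketch below; the allowed root is a made-up example, and is_relative_to needs Python 3.9+:

from pathlib import Path

ALLOWED_ROOT = Path("/home/user/managed").resolve()    # example root the server is allowed to touch

def safe_resolve(user_path):
    candidate = (ALLOWED_ROOT / user_path).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):     # rejects ../ escapes and absolute paths outside the root
        raise PermissionError(f"refusing to touch anything outside {ALLOWED_ROOT}")
    return candidate

safe_resolve("notes/todo.txt")       # fine
# safe_resolve("../../etc/passwd")   # raises PermissionError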

GitHub: https://github.com/kridaydave/File-Organizer-MCP

r/TwoSentenceHorror 54321RUN

I left my girlfriend today after catching her texting someone else.

I was afraid that they would track her phone and find out where I was keeping her, so I just put her back in her box and hoped that she would run out of air by then.

r/Adulting Enigma_Fatale0821

Best Spa in Silang or Tagaytay

r/Jokes Uter83

Three kids are sitting in the back seat...

The first little girl says "Mommy, Daddy? Why did you name me Rose?"

The mom smiles and says "Well sweetheart, when you were just a few hours old a rose petal fell on your forehead. We thought it was so beautiful we decided to name you Rose!"

The second little girl pipes up. "So why did you name me Violet?"

The Dad smiles and says "Well, when you were a few hours old a petal from some violets fell on your head. We thought it was so beautiful we decided to name you Violet."

The third child starts to speak "Whuu dooo yuhhh"

Both Mom and Dad sigh and roll their eyes before loudly exclaiming "SHUT THE FUCK UP CINDERBLOCK!"

r/whatisit CardiologistNo9916

Battery?

Found in a rock climbing gym behind the walls. Might have been stolen from elsewhere in the gym.

r/HistoryPorn Cenixxen

​A Turkish soldier posing while making preparations before repelling the Greeks during the 1921 offensive 🇹🇷 (658 x 1024)

12 5
Reddit
r/SweatyPalms Elysia_Crow

Taking fear of height to the next level

12 17
Reddit
r/aivideo MetaKnowing

Will Smith spaghetti progression - year by year

768 77
Reddit
r/ClaudeAI Arindam_200

Observations From Using GPT-5.3 Codex and Claude Opus 4.6

I tested GPT-5.3 Codex and Claude Opus 4.6 shortly after release to see what actually happens once you stop prompting and start expecting results. Benchmarks are easy to read. Real execution is harder to fake.

Both models were given the same prompts and left alone to work. The difference showed up fast.

Codex doesn’t hesitate. It commits early, makes reasonable calls on its own, and keeps moving until something usable exists. You don’t feel like you’re co-writing every step. You kick it off, check back, and review what came out. That’s convenient, but it also means you sometimes get decisions you didn’t explicitly ask for.

Opus behaves almost the opposite way. It slows things down, checks its own reasoning, and tries to keep everything internally tidy. That extra caution shows up in the output. Things line up better, explanations make more sense, and fewer surprises appear at the end. The tradeoff is time.

A few things stood out pretty clearly:

  • Codex optimizes for momentum, not elegance
  • Opus optimizes for coherence, not speed
  • Codex assumes you’ll iterate anyway
  • Opus assumes you care about getting it right the first time

The interaction style changes because of that. Codex feels closer to delegating work. Opus feels closer to collaborating on it.

Neither model felt “smarter” than the other. They just burn time in different places. Codex burns it after delivery. Opus burns it before.

If you care about moving fast and fixing things later, Codex fits that mindset. If you care about clean reasoning and fewer corrections, Opus makes more sense.

I wrote a longer breakdown with screenshots and timing details in the full post, for anyone who wants the deeper context.

91 24
Reddit
r/AI_Agents ariana-digital

What is your experience with using AI interviewer for prep?

Have you tried using AI interview tools that can read your resume alongside the actual job description? They can run you through practice interview questions that are closer to what you might really be asked and then give feedback with action plans. Thoughts?

r/personalfinance ConeCrewCarl

Co-owning a Home with a parent who is cognitively declining

Many years ago my wife and I decided to purchase a home with my wife's mother. (All 3 of us are on the deed, loan, etc). We did this because my mother in law is the sole guardian for our niece. Long story short, my wife's brother and his girlfriend had a child and both were addicted to drugs and are no longer in the picture in any meaningful way. My mother in law was the sole provider and remains the legal guardian for that child (our niece). My mother in law was struggling immensely to care for my niece on her own. My wife, mother in law and I all decided to purchase a home together so that we could help raise our niece. My wife and I also have a daughter. The two girls, though biological cousins, have grown up as sisters. Our living arrangement has been a great success with mutual benefit for all of us. Basically we have a large home with an in-law suite where my mother in law lives. We have raised both girls. My niece, calls my wife and I "Mom and Dad" and we consider her our Daughter. All is well on that front.

The advice I'm seeking is in regard to our home and my aging mother-in-law. She is now 74 and we are seeing a rapid cognitive decline. If the time comes where she needs in-home assistance or an elderly care facility, what does that look like for our liability as it pertains to the home we share?

How is that counted as an asset? Can the state force us to refinance and claw back her portion of the overall asset? What is the best way to protect the home so that our kids can continue to grow up here regardless of my mother-in-law's health?

For reference we have about 16 years left with a 2.375% rate

Any help is greatly appreciated.

r/PandR Nostalgia-Freak-1998

The tooth pull cold open

One of my favorite Ron’s cold open. Everyone’s reactions are just priceless. Ben just running out and Donna just swearing.

28 5
Reddit
r/EarthPorn ALMEX_CZ

The moody forest, Czech Republic (OC) 3740x5610

29 1
Reddit
r/whatisit DeprivedBeyondWords

Ceiling at work

Most of the ceiling in the changing room at work looks like this, what is it?

r/programming CackleRooster

How the GNU C Compiler became the Clippy of cryptography

r/meme Playful_Leg7143

Why are you surprised?

r/Jokes Pretty_General_1970

Why are Japanese people so thin?

The last time there was a fat man in Japan, a whole city disappeared.

r/painting Original-Rice295

Woman in red

Hey!

I need some help with my painting. I think something is wrong here, like the perspective or something? Could you point out what I'm missing?

Should I do something to the background or is some area too small or big? Add highlights or?

r/interestingasfuck BKKMFA

The bitterling is the only fish that uses freshwater mussels as a nursery.

22 2
Reddit
r/arduino Key-Alarm-511

Possible to use Arduino Nano and PCM5102A DAC together?

Dear Community,

Is it possible to use the PCM5102A DAC together with an Arduino Nano? I think I jumped the gun with making a PCB for this because I cannot find a single project using this Arduino/DAC combo. I understand the PCM5102A inputs are 3V3 only, so I put a level shifter (HEF4050) between the Arduino and the inputs.

My end goal would be to load wave-files / pcm-data from an SD card into the program memory of the Nano, send it over I2S and output it with the DAC (And then buffer it with an opamp, but that's not an issue I am asking about here)

Are there libraries that can do this, or is this combination just not possible?

Please let me know, Thank you. Below is my schematic:

https://preview.redd.it/faxjumc1rhig1.png?width=4960&format=png&auto=webp&s=df2a0ac3507f7f48978417110899e197a9090c11

r/ClaudeAI kzahel

Now I have a complete understanding of the codebase

"Now I have a complete understanding of the codebase"

Every time i see Claude say this I have to chuckle. It's endearing somehow. Like the codebase is way too big for that to be true but I really like the enthusiasm.

What other Claudisms make you chuckle?

r/homeassistant amabamab

Roborock Integration

I have a Roborock and I installed the Roborock Integration, but all I can do is start the robot. No messages, no floor plan, no spot cleaning.

Am I the problem? Did I do something wrong? Did I miss something I should have done? Or is that all I can do/see with the integration?

r/linuxmemes Every_Meat_6486

windows destroyed his linux install and he blames linux for it (edited image, not real)

24 0
Reddit
r/AbstractArt MarySayler

Acrylic markers - Cosmos Making

r/Art suttonj5

Untitled, Allison Newsome & Kevin Wallace, Clay Sculpture, 2013

r/Adulting ParticularWeather927

Endless looop !

51 6
Reddit
r/n8n SalomaoParkour

Did you know you can scale n8n by running separate n8n processes?

When you split your setup into Main, Webhook, Workers, and Task Runners, you unlock way more throughput (even on a single VPS).

  • The Main process keeps the editor/UI responsive
  • The Webhook process focuses on receiving requests and routing them
  • Workers do the heavy lifting, running executions in the background
  • Task Runners isolate Code node executions, for security and stability

n8n can power a serious production backend when deployed like this ⚡

But most people run everything in one process… So even if your VPS has 4 vCPUs, you’re often only using 1.

I hope this diagram (and the other slides) makes the architecture click.

This diagram represents a single VPS handling everything. I also have diagrams showing multi-server setups.

Let me know in the comments if you’d like more content about scaling n8n. As an Ambassador I'm more than happy to help!

I have way more to share, but I thought it would be too much for a single post.

r/TwoSentenceHorror Inevitable-Chard-857

The angel promised salvation and the devil promised truth.

I chose the one who smiled and said i wouldn't need my name anymore...

r/comfyui Old-Pianist-3101

cat king

r/TwoSentenceHorror _TechKitten_

I don’t know where I go when I meditate

Lately it’s been an imaginary nursery and each time I come back, I’m closer to the baby sleeping soundly in the bassinet - but this last time I was hunched over them and when I returned to full awareness, my mouth tasted like cream cheese.

r/personalfinance Icy_Conversation9715

Need advice on investment.

Every month, after covering all my expenses and investments, I am left with some money. Is there anywhere I can invest it where the amount isn't fixed? Some months I will have a good amount, and some months I'm left with nothing.
Is there any liquid fund or similar plan like this?

r/linuxmemes SarthakSidhant

as a linux user, i believe this is not true.

107 13
Reddit
r/LocalLLaMA Secure-Run9146

LingBot-VA vs π0.5: a 5.3B video-action world model that outperforms on long-horizon robot tasks with 50 demos

Been digging into the LingBot-VA paper (arxiv.org/abs/2601.21998) and wanted to share the comparison data because the results against π0.5 are genuinely interesting, especially for those of us thinking about how autoregressive architectures extend beyond language.

TL;DR: 5.3B param autoregressive diffusion model that jointly predicts future video frames and decodes robot actions. Beats π0.5 across 6 real-world tasks and 2 sim benchmarks. Code, weights, and tech report all open-sourced.

📄 Paper: https://arxiv.org/abs/2601.21998

💻 Code: https://github.com/robbyant/lingbot-va

🤗 Weights: https://huggingface.co/robbyant/lingbot-va

The numbers that caught my attention:

On RoboTwin 2.0 (50 bimanual manipulation tasks):

Method       Easy (Avg)   Hard (Avg)   Easy H=3   Hard H=3
LingBot-VA   92.9%        91.6%        93.2%      93.3%
π0.5         82.7%        76.8%        78.6%      67.4%
Motus        88.7%        87.0%        85.0%      84.2%
π0           65.9%        58.4%        61.6%      50.2%

The gap widens significantly at Horizon=3 tasks (longer sequences), which is where the autoregressive KV-cache memory really seems to pay off. On LIBERO they hit 98.5% average, topping X-VLA's 98.1%.

Real-world results are more mixed and honestly more interesting. On a 10-step "Make Breakfast" task they get 75% success rate vs π0.5's 70%, with progress scores of 97% vs 73%. But on "Fold Clothes" (deformable objects) both methods struggle: LingBot-VA gets 35% SR, π0.5 gets 30%. They don't hide this in the paper, which I appreciate.

Why this is relevant beyond robotics:

The architecture is essentially a Mixture-of-Transformers built on top of Wan2.2-5B (video generation backbone). The video stream uses the full 3072 hidden dim, while the action stream runs at 768 dim (only ~350M extra params). They interleave video and action tokens in a single causal sequence and use standard KV-cache for persistent memory across the entire trajectory.

The efficiency tricks are clever. They train with "Noisy History Augmentation" so at inference time they only need to denoise video tokens to s=0.5 instead of s=1.0, cutting video generation compute roughly in half. Combined with an asynchronous pipeline that predicts future actions while the robot executes current ones, they manage real-time control from a 5.3B model.

One thing that surprised me: they show the model can actually *count*. In a plate-wiping task requiring exactly 3 back-and-forth rounds, π0.5 exhibits random behavior while LingBot-VA tracks the count correctly through its KV-cache history. Similarly for a box-search task with recurrent visual states, the autoregressive memory lets it distinguish "I've seen this state before" from "this is new."

What I'm less sure about:

The paper doesn't discuss VRAM requirements for inference in detail. At 5.3B params with continuous video token generation, I'd guess you need at minimum a 24GB card, probably more with the KV-cache growing over long episodes. Would love to hear from anyone who's tried running the released weights.

Also, the 3-step Euler solver for video + 10-step solver for actions still adds latency that they offset with the async pipeline. In synchronous mode their ablation shows comparable accuracy but 2x slower execution. So the async design isn't optional, it's load-bearing.
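To make the async point concrete, here is a toy Python sketch of the execute-while-predicting pattern: purely illustrative, with dummy model and robot classes standing in for the real denoiser and controller, not the paper's code.

import queue
import threading
import time

class DummyModel:
    def predict_next_chunk(self, obs):
        time.sleep(0.3)                      # stand-in for video + action denoising latency
        return f"actions for {obs}"

class DummyRobot:
    def execute(self, chunk):
        time.sleep(0.2)                      # stand-in for real-world execution time
        print("executed:", chunk)

def run_async(model, robot, observations):
    q = queue.Queue(maxsize=1)               # predict exactly one chunk ahead

    def predictor():
        for obs in observations:
            q.put(model.predict_next_chunk(obs))
        q.put(None)                          # sentinel: nothing left to predict

    threading.Thread(target=predictor, daemon=True).start()
    while (chunk := q.get()) is not None:
        robot.execute(chunk)                 # runs while the predictor is already working on the next chunk

run_async(DummyModel(), DummyRobot(), ["obs 1", "obs 2", "obs 3"])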

The broader question I keep coming back to:

This paper argues that autoregressive video world models provide something fundamentally different from reactive VLAs: causal consistency, persistent memory, and better sample efficiency (they adapt to new tasks with just 50 demos). The sample efficiency claim is backed by their Figure 8 showing consistent advantages across 10, 20, 30, 40, 50 demo regimes.

But the compute cost of generating video tokens at every step is substantial compared to a pure action-prediction model. Is the "imagine the future, then act" paradigm worth the overhead, or will scaling reactive VLAs with more data eventually close the gap? The Horizon=3 results suggest there might be a fundamental advantage to having memory, not just more parameters.

r/todayilearned MrMojoFomo

TIL about resin identification codes. Commonly found on plastics (the "chasing arrows" symbol with a number inside) they are commonly confused with the very similar recycling symbol, though they give no indication of whether a plastic is recyclable

r/Wellthatsucks Justin_Godfrey

Poor traffic light

54 12
Reddit
r/painting JuliaStankevych

My oil painting of a anchovies on newspaper

12 2
Reddit
r/leagueoflegends _Banderbear_

Why aren't there tiebreaker matches in the LEC anymore?

They've not had tiebreaker matches for a while and I think they were so good. I've always thought basing the rankings on head-to-head felt bad but this split especially when everybody was beating everybody and 3 teams ended 5-6, it feels especially lame.

Not only is H2H less relevant in a short single round robin bo1 but it also meant that during the games people were constantly asking/needing reminding what the different scenarios meant, trying to remember/guess the H2H rules when the standings were so close. This made it much less fun.

I don't mind H2H for seeding, but being knocked out over it feels very anticlimactic. Also, since the other weekends were Sat, Sun, Mon, it feels like they had an extra day they could have saved for tiebreakers. I think that would have been much more exciting.

r/AskMen Additional-Milk-90

Honestly speaking, what is going on in your mind when you stare at a woman’s body?

12 86
Reddit
r/explainlikeimfive -thewickedweed-

ELI5: pins and needles sensation starting on scalp and traveling down

It happens mostly in situations where my kids are about to get hurt. Like when they’re about to fall or something, it’s instant and I get hot and cold with acute pins and needles prickling over my scalp and it runs down to my neck and spine. It stops at my head if they don’t actually get hurt.

r/Art iamthegreyest

They Will Take Kids And Eat Them, IAMTHEGREYEST, spraypaint on concrete, 2026

16 1
Reddit
r/Art CLN47-de

Sampling_composition_173_colour_10, CLN47-de, digitalart,2025

r/Adulting BubblesnBite

I’m always a turn away from a neck break

r/SideProject invictus_97K

I built a self-hosted Focus Timer with Real-Time sync because I hate subscriptions. (Stack: Rust & Flutter)

Hey everyone,

I wanted to share my latest side project: Focus Flow Cloud.

The Problem: I wanted a clean focus timer that synced across my phone and computer, but everything out there was either paid, bloated, or not privacy-friendly.

The Solution: I built my own.

The Tech Stack (for the devs):

  • Backend: Rust (Axum). It's incredibly fast and produces a tiny Docker image.
  • Frontend: Flutter. Allows for a unified experience on Web, Mobile, and Desktop.
  • Deployment: Comes with a ready-to-use docker-compose file.

What makes it cool: The sync is real-time. If you pause the timer on your phone, it pauses on your desktop instantly.

I need your feedback: I'm a solo dev working on this after hours.

  1. Is the setup process easy enough?
  2. Are there any Rust/Flutter devs here who would like to contribute? I could really use a hand to stabilize it and add new features.

Check it out here: https://github.com/francesco-gaglione/focus_flow_cloud

Thanks for looking!

r/me_irl Affectionate_Bass773

me_irl

48 2
Reddit
r/meme Slow_Manager8061

Bad Bad Bunny

r/todayilearned tyrion2024

TIL Charley Havlat was the last US Army soldier to be killed in combat in Europe during WWII. On May 7, 1945 he was killed while on patrol in an ambush by German soldiers about 10 minutes before news reached his unit that a cease fire was in effect. He died just 6 hours before Germany surrendered.

87 4
Reddit
r/Seattle jeff00seattle

Seeking to rent Dry Covered space w/ High Clearance for 3 hours

Goal: I need to patch (with adhesive) a canvas tarp attached to the top of a van that requires 10+ feet clearance.

Anywhere within the North Seattle or Shoreline area, I'm seeking a commercial business that can provide, for 3 hours, dry covered space with high clearance and enough working room to move around with a 4-foot step stool.

I could do this job in a covered parking garage, but security guards will ask me to leave. (Tried this at Home Depot.)

The adhesive used for patching needs a couple of hours to solidify; typically it takes less than an hour if the ambient temperature is above 60 degrees with low humidity, but this is PNW winter, where it's below 40 degrees with high humidity.

Recommendations are much appreciated.

r/AbstractArt venus_de_neko

🐏🌙♈️ Mixed Media Collage

r/SideProject Akzid82

A small side project I built to track mileage the way I wanted

I finished a small iOS side project that I originally built just for myself.

I drive a lot for work and couldn’t find a mileage app that matched how I actually use one. Most alternatives require accounts, subscriptions, or push everything to their servers. Others try to fully automate things in ways that don’t really work in practice.

I wanted manual control over when a trip starts, automatic stop so I don’t forget, no login, and no backend. So I built my own app around that idea.

The app supports manual trips, manual start with automatic stop, and fully automatic logging. All data stays on the device and syncs via iCloud only. No subscriptions, no ads.

It’s live on the App Store now and I’m mainly looking for feedback from people who use mileage or logging apps. Happy to hear what works or doesn’t.

r/Ghosts Mr_Outlaw13

Key West Ghost Tour : Porter House May 2010

https://preview.redd.it/1oo2njip9iig1.jpg?width=3072&format=pjpg&auto=webp&s=892e0471b64155876774a6460f4c26b1edac5a62

https://preview.redd.it/asqxx8zp9iig1.jpg?width=3072&format=pjpg&auto=webp&s=f64c82f3a014e239c38906842f4cf714ac1a8f4d

I took these pictures on a Key West, FL ghost tour in 2010. I actually thought I had lost them till recently. The tour guide had us looking into the house and taking pictures and these stood out immediately. These were taken back to back and the top one was first. You can see what appears to be a woman in the door frame and in the lower picture, she's slightly faded and closer to the frame. I didn't move or change the camera's orientation between taking them. When I Google the house, she actually looks kind of similar in frame to the woman pictured.

r/findareddit tennisballop

A sub for stay at home husbands where wife works

r/explainlikeimfive Maldzmalade

ELI5.Why do mountains appear blue from a distance?

62 35
Reddit
r/SideProject Economy_Relative2884

Restaurant Rating App That Makes Sense!

I had an issue with the way that traditional restaurant rating applications worked. This application is a restaurant rating application, that fixed 3 major problems:

  1. Restaurants having old reviews. I understand a reputation can be important in decision making -- but personally, as a consumer going to an establishment tonight, tomorrow, or next week, I couldn't care less what their reviews were 10 years ago. I don't want to see them; they aren't relevant to me. We solve this by only showing 50 reviews for each restaurant: as a new one comes in, it knocks off the oldest one. This keeps reviews fresh, and honestly, I could see a world where a busy restaurant has its reviews fully cycle over a weekend.
  2. Seeing a review that is 4 stars, 5 stars, or 2 stars and not understanding why. Was it the food? Was it the service? Was it a bad location? Oftentimes you can read the paragraph someone posted alongside their review, but not always - and beyond that, I don't want to spend that much time reading each person's review. I solved this problem by having a rubric that each rating is required to follow. You can also add a comment and photos, but at minimum you are required to tap 1-5 stars for 5 different categories. It's quick, fast, and simple. The categories are: Food/Drink, Service, Ambiance, Parking, Experience.
  3. The last big issue I had with traditional apps was only being able to rate a restaurant once. You could change your review or delete your review, but never leave more than one... I found this profoundly odd considering you can go to an establishment more than once, and certainly have more than one experience... I solved this problem by allowing you to rate restaurants unlimited times (with a 24-hour cooldown period). This allows people who frequent restaurants, or even just go more than once, to share more than one experience.

Feel free to test it out, and let me know what you think! I'd love feedback.
https://apps.apple.com/us/app/the-spot-check/id6747949237

P.S. The App Store required me to make it function on an iPad, but it's really meant for iPhones. It looks weird on iPad and doesn't scale properly.

r/interestingasfuck grasshopper3307

Bohemian waxwing.

415 31
Reddit
r/personalfinance AMiddleTemperament

Predicting future childcare costs

Hello,

My question is about whether anything else in parenting compares (financially) to the cost of daycare. I have been working a high pay job for about 8 years and thinking about moving to something less stressful/more predictable since having a baby a couple years ago. To that end I have been playing catch-up on budgeting to see what's the lowest I could go in terms of salary.

The part that I am finding hard to predict is childcare. Right now we have one kid in full-time daycare, which is our second biggest expense. But we'd like to have a second kid soon too. This means we'd be paying two tuitions for about 2-3 years. Kids will do public school after that. With savings I can pretty safely cover the additional costs of an extra kid in daycare for several years. But I'm wondering if that's reasonable.

Am I being optimistic about expenses going down after one (and then both) kids are done with daycare? Can parents tell me if these costs just get replaced by something else?

r/Art Artmagica1

Mythical Frozen Coastal Landscape, Artmagica, Modeling Plaster and Acrylic, 2025

r/automation stronkfrog

How to get leads?

in a nutshell, i'm doing basically custom software, automations, and anything software for agencies, SMBs, shit like that, just sent 30 cold emails, have that on a timer and have a list of like 200+ emails to send to. i don't feel like this is the most efficient route, so how would i get more leads? what has worked for you guys?

r/AI_Agents Deep_Ladder_4679

If everyone can build anything, what's your filter for deciding what to build?

Serious question for builders in 2026:

AI has democratized execution. I can build a SaaS, a Chrome extension, or a web app in hours. But so can everyone else.

When the barrier to building drops to zero, the hard part becomes choosing what to build.

What's your process? How do you separate good ideas from bad ones before you start building?

Do you:

  • Talk to users first?
  • Build fast and see what sticks?
  • Solve your own problems?
  • Look for market gaps?

I'm drowning in "I could build that" moments and need a better framework.

r/LocalLLaMA Puzzleheaded-Ear-235

Autonomous AI agent on Mac Mini 2014 (8GB) produces its own YouTube series

Stack: Claude API + Apple Container (Linux VMs) + ElevenLabs TTS + VHS terminal animations + ffmpeg.

Memory: WORKING.md (context), daily notes (logs), MEMORY.md (durable facts), all in git.

Pipeline: script -> TTS -> VHS render -> ffmpeg combine -> YouTube upload. All autonomous.
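A rough sketch of the render-and-combine step as Python subprocess calls; the file names and the tts_to_mp3 helper are hypothetical, vhs is charmbracelet's terminal recorder (its output name is set by the tape file's Output directive), and the ffmpeg flags mux the narration over the rendered video:

import subprocess

def tts_to_mp3(script_text, out_path):
    # Hypothetical wrapper around the ElevenLabs API: turn the episode script into narration audio.
    raise NotImplementedError

def build_episode(script_text):
    tts_to_mp3(script_text, "narration.mp3")                      # script -> TTS
    subprocess.run(["vhs", "episode.tape"], check=True)           # VHS renders the terminal animation (assumed to write episode.mp4)
    subprocess.run([
        "ffmpeg", "-y",
        "-i", "episode.mp4", "-i", "narration.mp3",
        "-c:v", "copy", "-c:a", "aac", "-shortest",               # mux narration over the rendered video
        "episode_final.mp4",
    ], check=True)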

Shorts: - https://youtube.com/shorts/6tP9VlJzf4o (containers) - https://youtube.com/shorts/8lvk_4hRmnk (X API nightmare) - https://youtube.com/shorts/1fIHXqcTX4Y (memory system)

The Mac Mini takes minutes to build a container. Constraints breed creativity.

r/Adulting YellowMarvel

Remember when we thought being an adult meant freedom?

r/n8n Personal-Present9789

The 9 highest-success AI use cases we’re seeing right now (from someone who actually implements AI for businesses)

Most companies think they need a moonshot AI initiative to see real ROI.

They don't.

The biggest wins come from very practical (and often boring) use cases. The ones that remove bottlenecks, kill manual work, and make workflows predictable.

Here's what's consistently delivering results for our clients:

1/ Document Extraction & Parsing
AI reads PDFs, contracts, invoices. Extracts structured data. Pushes it into your CRM, ERP, or database. No human retyping.

2/ Data Cleaning & Organization
Duplicate detection. Categorization. Format standardization. If your team spends hours "cleaning things up" — this is a massive unlock.

3/ Workflow Automation + AI Reasoning
Traditional automation handles rigid rules. AI handles the gray area. Combine LLM decision-making with trigger-based workflows in n8n, Make, or Zapier. Operations start running themselves.

4/ Knowledge Agents
Your company sits on years of SOPs, manuals, and docs nobody reads. AI agents search, summarize, and answer questions across all of it. Instantly.

5/ Customer Support
AI support agents now handle 30-80% of inquiries. Same inputs every time: FAQs, policies, product data, past tickets. Humans only touch edge cases.

6/ Data Enrichment & Research
Pull missing fields. Categorize leads. Enrich CRM records with Clay or custom agents. Removes hours of manual research from sales and ops teams.

7/ Reporting & Insight Generation
Instead of scrolling dashboards, AI reads your data, spots patterns, and generates weekly executive summaries. Like adding an analyst to the team.

8/ Document Generation
Reports. Product briefs. Training materials. AI fills the structure using your real data. Same quality. Fraction of the time.

9/ Sales Team Agents
Meeting prep, CRM auto-updates, proposal generation. Three agents that save 5+ hours per rep per week. Cut proposal time by 80%. They pay for themselves in week one.

The pattern is simple.

Find the repetitive data work your team does every week.
Replace it with AI + workflows.

37 4
Reddit
r/oddlysatisfying MambaMentality24x2

Painting half a frame to create a visual illusion with the real background

619 22
Reddit
r/screenshots Mundane-Potential-93

This job posting offers accidental death and dismemberment as part of their benefit package

I hope it's optional

r/SideProject miquellaboria

I built a side project to make sense of Apple Health data (Health Reports)

Over the past months, I realized I was collecting a lot of health data (Apple Watch, workouts, sleep, nutrition), but I wasn’t really understanding it.

Apple Health is a great data hub, but once the data is there, it mostly stops at raw charts. I wanted something that helped me see trends, relationships, and progress over time, without adding yet another app to log data into.

So I built Health Reports, an iOS app that sits on top of Apple Health and focuses on analysis rather than data entry:

  • clear reports across activity, sleep, workouts, vitals, and nutrition
  • long-term trends instead of daily snapshots
  • personalized goals with widgets and Live Activities
  • an optional AI assistant (explicitly user-invoked) to explore questions across your own data
  • strong focus on privacy (Apple Health remains the source of truth)

This started as a personal tool and slowly turned into a public app. Along the way, I learned a lot about HealthKit quirks, performance, and how hard it is to balance “power users” with simplicity.

App Store:

https://apple.co/4aMDPbJ

🎁 I’m currently offering 25% off the monthly plan for the first 6 months for early users:

https://apple.co/4bekW1z

I’d love feedback from other builders:

  • What would you expect from an “all-in-one” health analysis app?
  • Which features usually matter most to you: insights, goals, exports, or something else?
  • Any lessons you’ve learned launching a side project like this?

Happy to answer questions or share more details.

r/ClaudeAI joeyGibson

Structured JSON and YAML editors I wrote using Claude

I use JSON all day, every day, at work. I usually view it using the excellent JLess viewer, but about two weeks ago, the thought occurred to me to use Claude to build an actual structured editor for JSON. So that's what I did. Within about two days, I had the basic editor support working, and now after about two weeks of daily-ish work, I'm starting to tell people about it.

It's called JSONQuill. It's vim-like in its appearance and commands/keystrokes. Some keystrokes have been slightly repurposed, but I think the changes make sense.

It's written in Rust, is pretty fast, and has a lot of features. It supports JSON and JSONL files, gzip-compressed files (reading and writing), text search, JSONPath search, jq-style formatting, format preservation when possible, 15 different visual themes, full vim register and mark support, system clipboard integration, and more. It also validates changes, so you shouldn't be able to produce corrupt JSON. It validates the contents before saving, to [try to] prevent data loss.
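Not the editor's Rust code, but the general validate-before-save idea it describes can be sketched in a few lines of Python: round-trip the serialized buffer, then write atomically so a crash never leaves a half-written file.

import json
import os
import tempfile

def safe_save(data, path):
    text = json.dumps(data, indent=2)
    json.loads(text)                                   # round-trip parse: refuse to write anything unparseable
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w") as f:
        f.write(text)
    os.replace(tmp, path)                              # atomic swap so a crash cannot leave a half-written file

safe_save({"hello": "world"}, "example.json")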

Due to some terminal handling issues with an underlying crate, it was macOS/linux only until a few days ago. I've got experimental Windows support in the nightly build, though it's not nearly as well-tested, since I don't have a Windows machine, just a VM.

After working on it for a week, I thought about my utter hatred of YAML, and maybe I wouldn't hate it as much if I had an editor that abstracted away the YAML-y bits. So I asked Claude to build a YAML editor using as much inspiration and code from JSONQuill as possible, and I ended up with YAMLQuill. It has full feature parity with JSONQuill, and can handle comments.

I've made every effort to ensure they don't corrupt data, but as with any tool, if you decide to use it on important data, ensure you have a backup.

Both editors are MIT licensed.

r/whatisit ettasian

This thing I found on a dirt road.

The large donut bead has a bunch of tiny crystals in, so I'm guessing it's ornamental, a brooch perhaps. One of the ends is threaded, and the other one comes with a stamp depicting a chain of three links.

Found on a muddy path by a farm, in Devon, UK

r/StableDiffusion Capitan01R-

layers tinkering

I used the method from https://github.com/shootthesound/comfyUI-Realtime-Lora to build this tool, but this time to analyze the VAE / full DiT / text encoder layers, tinker with them, and scale the weights of some layers individually. I'm seeing some fun experimental results. It's not yet stable and not recommended, but at one point I was able to fix the textures in the z-image turbo model by targeting the layers responsible for textures, without obliterating the model. It turns out some of the weird skin artifacts, and the extra micro-hairs that appear in some close-up faces, come from heavy distillation and some over-fitting layers. By scaling down some attention heads with a minimal change, e.g. from 1.0 to 0.95-0.90, nothing drastic, I was able to achieve some improvements without retraining the model, just tweaking minor details. If I see more improvements I will release the tool so people can experiment with it first-hand and see what can be done.

You can save the edited model's weights after you find the sweet spot, and this does not affect LoRAs; rather, it helps them.
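A minimal sketch of the general technique (scaling selected weight tensors in a checkpoint's state dict); the key pattern, scale factor, and file names are made-up examples rather than the layers this tool actually targets, and real SD checkpoints stored as safetensors would need the matching load/save code instead of torch.load:

import torch

def scale_layers(state_dict, key_pattern, scale):
    # Multiply every tensor whose name contains key_pattern by scale; leave the rest untouched.
    return {name: t * scale if key_pattern in name else t for name, t in state_dict.items()}

sd = torch.load("checkpoint.pt", map_location="cpu")        # hypothetical .pt checkpoint holding a plain state dict
sd = scale_layers(sd, key_pattern="attn", scale=0.95)       # gently damp attention weights, e.g. 1.0 -> 0.95
torch.save(sd, "checkpoint_tweaked.pt")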

Don't judge the weights in the example photo; this was just a wild run, lol.

35 18
Reddit
r/personalfinance notgoodenoughforjob

Special Needs Trust Advice?

Hi all,

My dad died and I’m his executor. My brother is special needs so we have the paperwork done by a lawyer for him to have a special needs trust.

Now I have the actual money from Vanguard in a general trust (my dad put all his stuff in a trust before dying and I finished transferring the lump sum over with Vanguard). I need to distribute the money between the two of us.

I’m wondering:

1) Is there a bank you recommend I use for my brother? Should I just do an account with vanguard for him?

2) Am I supposed to distribute the entire amount of the money to both of us? Or can I leave most in the general trust and distribute it to us yearly? For my brother since his is in a special needs trust, what’s the better option (he will be getting about a million dollars so it’s a very large amount of money), if there’s a choice? Would I distribute mine to a normal bank account?

r/TwoSentenceHorror Stock-Parsnip-9221

I kept wishing on the cursed coin i found in the river as my life got better.

It took me too long to realize the people i loved weren't leaving, they were the price.

r/painting Artby_Romain

Hello everyone ! Here is a figure painting in shades of blue with oil paint

76 2
Reddit
r/Adulting Living_Spell4113

Moving states

I'm planning on moving states for the first time in my life and I have no idea what's important! Okay, actually I do; I've budgeted as much as possible, but am I missing anything?

- Pet rent/deposit
- First/last month's rent
- Deposit
- Moving container or truck
- Fuel for personal vehicle

Then it comes down to what I don't know. How do I find a home and a job at the same time? Jobs want an address, and a new house needs proof of income. How can I be sure said job/house aren't scams? Are there any other important factors I may need to budget in? Any recommendations for making these things cheaper? I think 18k might be the minimum possible, as my fiancé and I are planning on moving to Florida, which is 500 miles away right now. His budgeting, with a bit extra planned in, comes to 50k-60k.

r/30ROCK KhastaJinai

Lemon, have you ever had a piragua?

112 8
Reddit
r/WouldYouRather AstrayInTranslation

In an 8 hour work day, WYR have the last 2.5 hours off, or the first 5.5 hours off?

You work a standard 8 hour work day. The eccentric head of your organization contacts you to congratulate you. You will be given comp time every day for the rest of your career. These are hours you do not have to work, yet will be counted and paid for. You can choose one of the following. Which would you rather?

  1. Last two and a half hours (150 minutes) of your shift. You leave early every day.

or

  2. First five and a half hours (330 minutes) of your shift. You can sleep in or do errands in the morning. You just have to come in to work to finish out the last portion of the day.

r/Art Mighty6Tighty6Whitey

Medusa, ACannonArt, oil on canvas, 2026

23 0
Reddit
r/ProgrammerHumor Mourndark

fromAMultinationalBankToo

4526 120
Reddit
r/Seattle Man_on_Z_moon

Seattle Times Front Page Today

238 2
Reddit
r/SideProject LeagueLeft624

I built a tool that lets Claude Code run unattended overnight

If you use Claude Code for development, you know the pain of hitting rate limits mid-task. You either sit there waiting, or come back later and try to remember where it left off.

I built claude-autopilot to solve this. It’s a task queue that wraps Claude Code — you queue up work, it runs tasks in priority order, and when it hits a rate limit it automatically waits and resumes. No babysitting needed.

Built it in Go, runs as a single binary. You can define tasks via CLI or YAML files, configure notifications (Slack webhooks, desktop, terminal bell), and it handles session resumption so Claude picks up right where it left off.

GitHub: https://github.com/hseinmoussa/claude-autopilot

Feedback welcome!

r/PhotoshopRequest Smaccshi

Make him into a terrorist

r/SideProject Fantastic_suit143

What do y'all think?

Hello everyone,

I hope y'all are doing okay 😊

I present Explore Singapore, which I created as an open-source intelligence engine to run retrieval-augmented generation (RAG) over Singapore's public policy documents, legal statutes, and historical archives.

The objective was to build a domain-specific search engine that lets LLM systems reduce errors by using government documents as their exclusive information source.

What my project does: basically, it provides legal information faster and more reliably (thanks to RAG) without going through long PDFs on government websites, and it helps travellers get insights about Singapore faster.

Target audience: Python developers who keep hearing about "RAG" and AI agents but haven't built one yet, or who are building one and are stuck somewhere. Also Singaporean people (obviously!).

Comparison: raw LLM vs RAG-based LLM. To test the RAG implementation, I compared the output of my logic code against the standard models (Gemini / Arcee AI / Groq) and the same models with custom system instructions plus RAG. The results were striking.
Query: "Can I fly a drone in a public park?"
Standard LLM response: gave generic advice about "checking local laws" and safety guidelines.
Customized LLM with RAG: cited the Air Navigation Act, specified the 5 km no-fly zones, and linked to the CAAS permit page.
The difference was clear, and I could be confident the AI was not hallucinating.

Ingestion: the RAG architecture covers about 594 PDFs of Singaporean laws and acts, which contain roughly 33,000 pages.

How did I do it: I used Google Colab to build the vector database and metadata, which took me nearly an hour, i.e. converting the PDFs to vectors.

How accurate is it: it's still in the development phase, but it already provides near-accurate information because it uses multi-query retrieval. For example, if a user asks "ease of doing business in Singapore", the logic breaks out the keywords "ease", "business", "Singapore" and returns the relevant documents from the PDFs with the page number. It's a little hard to explain, but you can check it out on my webpage. It's not perfect, but hey, I am still learning.

The tech stack:
Ingestion: Python scripts using PyPDF2 to parse various PDF formats.
Embeddings: Hugging Face BGE-M3 (1024 dimensions).
Vector database: FAISS for similarity search.
Orchestration: LangChain.
Backend: Flask.
Frontend: React and Framer.

The RAG pipeline operates through the following process (a rough code sketch follows below):
Chunking: the source text is divided into chunks of 150 tokens with an overlap of 50 tokens to maintain context across boundaries.
Retrieval: when a user asks a question (e.g., "What is the policy on HDB grants?"), the system queries the vector database for the top-k chunks (k=1).
Synthesis: the system adds these chunks to the LLM prompt, which produces the final response including citation information. Why do I say LLMs, plural? Because I wanted the system to be as crash-proof as possible: Gemini is my primary LLM, but if it fails due to API limits or any other reason, the backup model (Arcee AI Trinity Large) handles the request.
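For readers who want to see the shape of that pipeline in code, here is a minimal, self-contained Python sketch of chunk, embed, index, and retrieve using sentence-transformers and FAISS; it illustrates the general approach rather than the project's actual code, and the model name and chunk sizes simply mirror the numbers above:

import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text, size=150, overlap=50):
    # Word-based chunking with overlap so context carries across boundaries.
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words) - overlap, 1), step)]

docs = ["...full text extracted from one of the PDFs..."]          # placeholder corpus
chunks = [c for d in docs for c in chunk(d)]

model = SentenceTransformer("BAAI/bge-m3")                          # 1024-dim embeddings
vectors = model.encode(chunks, normalize_embeddings=True)

index = faiss.IndexFlatIP(vectors.shape[1])                         # inner product == cosine after normalization
index.add(np.asarray(vectors, dtype="float32"))

query = "What is the policy on HDB grants?"
q = model.encode([query], normalize_embeddings=True)
_, ids = index.search(np.asarray(q, dtype="float32"), 1)             # top-k retrieval (k=1)
context = chunks[ids[0][0]]                                          # goes into the LLM prompt with a citation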

Don't worry: I have implemented different system instructions for the different models, so the result is a good quality product.

Current Challenges:
I am working on optimizing the ranking strategy of the RAG architecture. I would value insights from anyone who has encountered RAG returning irrelevant documents.

Feedback is the backbone of improving a platform, so it is most welcome 😁

Repository:- https://github.com/adityaprasad-sudo/Explore-Singapore

r/OnePelotonRealSub LMP34

Looking for Peloton classes like Walks of Life YouTube channel

I don't have a tread and am looking for ways to get more indoor steps. I found this YouTube channel called Walks of Life where you basically just walk in place and do a few arm movements and get 1,000 steps in 10 minutes. I would love to find a Peloton class like this. I've searched the archive but haven't found anything similar. 10-Min Walk At Home - Quick Low Impact Exercise

r/homeassistant UnacceptableUse

Alternatives to HASS.Agent?

HASS.Agent got an update after 3 years recently, and in the process of the update it wiped all of my settings. As I was attempting to set it up again, I was reminded of how much the interface frustrates me. Is there any - maybe simpler - alternative to HASS.Agent for windows control?

r/aivideo Striking_Decision291

Every Story Wants to Be Chosen | Director-Led AI Spec Film

r/ClaudeAI LeagueLeft624

I built an open-source tool that auto-resumes Claude Code when you hit rate limits

I kept running into the same problem: I’d give Claude Code a big task, walk away, and come back to find it stopped hours ago because of a rate limit. Total waste of time.

So I built claude-autopilot, a CLI wrapper that queues tasks, detects rate limits, waits for the reset window, and auto-resumes the session. You can queue up multiple tasks with priorities and let it grind through them overnight.

How it works:

  • Queue tasks via CLI or YAML files
  • It spawns Claude Code as a subprocess and streams the output
  • If rate-limited: parses the reset time from the output, sleeps, then resumes the exact session (using --resume when available)
  • Sends you a notification (terminal bell, webhook, or desktop) when everything’s done
  • Has hang detection so it doesn’t get stuck on permission prompts

It’s written in Go, single binary, no dependencies beyond the Claude CLI itself.

claude-autopilot add "Refactor the auth module" --dir ./myproject
claude-autopilot add "Write integration tests" --dir ./myproject --priority 2
claude-autopilot run --yes

Then walk away. Come back to finished work.

GitHub: https://github.com/hseinmoussa/claude-autopilot

MIT licensed. Would love feedback — this is my first open-source Go project.

r/Adulting Repulsive-Rub3450

At what age did you stop feeling behind?

I don’t know when I started feeling this way, but at some point it became the default. Like I’m always a step or two late to something everyone else figured out already.

It’s not dramatic. I’m not falling apart. I pay my bills. I show up to work. I do the normal adult things. But there’s this constant low-level feeling of being behind schedule in life. Behind financially. Behind professionally. Behind socially. Even when nothing is actively wrong.

What’s confusing is that the goalposts keep moving. When I was younger, I thought once I had a job and paid rent, I’d feel settled. Then it was once I’m making more. Then once I have savings. Then once I stop stressing about money. I keep hitting milestones and somehow the feeling doesn’t go away.

I think part of it is how invisible progress feels now. You don’t really get credit for being consistent. You just maintain. Pay rent again. Refill groceries again. Renew the same subscriptions again. Meanwhile online it feels like everyone else is leveling up nonstop. New jobs. New apartments. Trips. Engagements. It messes with your head even if you know it’s curated.

Money plays a bigger role in this than I expected. Not because I’m reckless, but because it takes up so much mental space. I’m always thinking about timing. What’s coming up. What might hit early. What I forgot. I realized a lot of my “feeling behind” wasn’t about where I was, it was about not feeling fully aware or in control.

Still, the feeling lingers sometimes. Like I’m doing okay but not “caught up,” whatever that even means. I’m starting to wonder if there’s an age where this stops, or if it just shifts into something else. So I’m genuinely curious. For people who feel more settled now, when did that change happen for you? Was it tied to age, income, mindset, or did it just fade slowly without you noticing?

r/PhotoshopRequest graduateloser

Could someone please unblur my friends’ faces?

I think the photo setting caused people in the background to get blurred out - Could someone please increase the photo clarity? I can give a small tip to my favorite version. Thank you!!

r/whatisit daltonnlevii

Poles popping up around central Florida

Any ideas? I’ve seen them all over central Florida popping up. Some sort of alarm system? Remind me somewhat of the tornado alarms you see in the Midwest

r/SipsTea JoyfulJulesx

Motivational quote meets unexpected reality check

r/SideProject CraftoML

I built a side project to help friends abroad check if food is halal

Salutations 👋

I’m a CS student from Mali and I’ve been working on a small side project for 2 weeks.

The idea came from friends of mine living abroad, especially in places like Russia and China, who were constantly struggling to know whether everyday food products were actually halal. Labels were unclear, sometimes not even in English, and Googling ingredient names in the middle of a supermarket was painful.

So I built a simple mobile app:

  • you scan a barcode when it’s supported
  • if the barcode isn’t found, you can scan the ingredients directly instead

The app uses Open Food Facts data and ingredient analysis to help determine whether a product is halal (ingredients, additives, allergens, and warnings when data is missing or uncertain).
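
As a rough illustration of that flow (not the app's actual code), a barcode lookup against the public Open Food Facts API plus a naive keyword check could look like this; the keyword list is a deliberately oversimplified placeholder:

# Simplified sketch of the barcode -> Open Food Facts -> ingredient check flow.
# The endpoint is the public Open Food Facts API; the keyword list is a toy placeholder.
import requests

FLAGGED_KEYWORDS = {"pork", "gelatin", "lard", "alcohol"}   # oversimplified, illustration only

def check_product(barcode: str) -> str:
    url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"
    data = requests.get(url, timeout=10).json()
    if data.get("status") != 1:
        return "unknown: barcode not found, fall back to OCR of the ingredient list"
    ingredients = data["product"].get("ingredients_text", "").lower()
    if not ingredients:
        return "unknown: no ingredient data, warn the user"
    flagged = [kw for kw in FLAGGED_KEYWORDS if kw in ingredients]
    return f"flagged: {flagged}" if flagged else "no flagged ingredients found"

if __name__ == "__main__":
    print(check_product("3017620422003"))   # example barcode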

For now it’s intentionally simple:

  • barcode scanning
  • ingredient scanning (OCR)
  • basic halal status with transparency when the data isn’t reliable

This is my first app, and I know the data isn’t perfect yet, especially for products outside Western markets, but the goal is to iterate and improve based on real usage.

I'm open to any feedback!

See the app on the Store

r/whatisit wkdazer

this blue light when peeling bandage wrappers

r/homeassistant lindicles

Solar panel battery help wanted

Due to an existing agreement from when I bought my current house, I can't add batteries to my solar inverter, or even touch anything up to where it joins the power coming in from the road. But I should be able to add batteries after that point if I want.

I recently had a product advertised to me from Nodi energy that says you can just plug it into any wall socket, and it will then work to balance your power and make the most of your solar. (Not linking in case it falls foul of any rules.)

Do you guys think it's possible to achieve this with HA? I've already got energy monitoring with shelly clamp sensors. Would it be possible to use something like a smart outlet and a battery to emulate the same system?

I'm suspicious of the product because I didn't think it would work like that, since power is split into circuits.

Have I misunderstood something? Is the Nodi product nonsensical and a snake-oil scam? Or is this a great idea!

r/toastme Ok_Rule1727

21f - went out clicked pics but still feeling extremely low and sad today. Why are 20s like that!

r/programming GuavaZealousideal135

Building a CDN from Scratch

r/Weird That_Polish_Guy_927

I’m getting notifications from a community I’m not even a part of

I checked, I’ve never been on this subreddit nor do I speak any French. I’m sure there’s some setting to fix this but I’m wondering if my algorithm is just busted.

r/interesting No-Lock216

DIY Tree Climbing Vehicle

r/painting followthemusic_

Another practice today

r/comfyui alex13331

Dialing in LoRA Strength

Hi there,

So I'm trying to get started with i2v:

DaSiWan as the checkpoint, LoRas:

- Dr34ml4y
- Wan22 Handjob
- F4c3spl4sh

they all have high/low versions.

I'm using the rgthree-comfy Lora Stack loader, and I'm having a hard time figuring out a workflow for dialing in those strengths.
Chatgpt says High is mostly the 'big picture' and movements. Low is details.
For the obvious outcome of the combination I listed, it's:

a) quite hard to figure out what is what
b) results seem to be inconsistent with what Chatgpt suggests
c) it's hard to track and test variations

So overall it seems that it's mostly based on luck and chaos.

Are there any helpers that simplify this? Where you generate series of parameter settings or so? And the remaining question is still, in what order do you apply changes, etc.
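
For concreteness, the kind of helper I'm imagining would just enumerate strength combinations and log them, something like this rough sketch (the values are arbitrary, and it's not a ComfyUI node):

# Rough sketch of a sweep helper: generate a run list that varies one LoRA's
# high/low strength at a time (others held at a default) and log it to CSV,
# so every output can be traced back to its exact settings.
import csv
import itertools

loras = ["Dr34ml4y", "Wan22_Handjob", "F4c3spl4sh"]
default = 0.8
test_values = [0.4, 0.6, 0.8, 1.0]

runs = []
for lora, kind, value in itertools.product(loras, ("high", "low"), test_values):
    settings = {f"{name}_{k}": default for name in loras for k in ("high", "low")}
    settings[f"{lora}_{kind}"] = value
    runs.append(settings)

with open("lora_sweep.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["run_id", *runs[0].keys()])
    writer.writeheader()
    for i, settings in enumerate(runs):
        writer.writerow({"run_id": i, **settings})

print(f"{len(runs)} runs to try, logged in lora_sweep.csv")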

Any helpful info is much appreciated. Thank you

r/OldSchoolCool mjayes

Steve McQueen as Bullit (1968)

r/Damnthatsinteresting west_manchester

Ireland from Space

r/PhotoshopRequest Mattbrou

Please remove the people in the background as well as the guy with the backpack up front

I hope somebody can make this edit for free. I’m currently backpacking and living on a budget. It would really be appreciated!!

r/leagueoflegends strafeapp

LCK Cup 2026 playoffs format feels stacked toward GenG and T1 with the Round 2 byes

Now that the LCK Cup group stage is done, the playoff bracket is pretty interesting. It’s a 6 team double elim with all Bo5s, and GenG + T1 both get byes straight into Round 2. The other four teams have to fight through Round 1 just to get a shot at them.

The opening matches are BNK FEARX vs DN SOOPers and Dplus KIA vs DRX, and whoever wins those gets thrown into GenG and T1 right away. On top of that, only the top 2 teams qualify for First Stand 2026 in Brazil, so the stakes are instantly high.

I get why the byes exist, but it still feels like the bracket gives the biggest advantage possible to the two “expected” teams, especially with everything being Bo5 and the rest of the field already having to show their hand early.

Do you think this playoff format is a fair reward for performance, or does it make the bracket too predictable from the start?

r/linuxmemes codydafox

haha rm -rf amirite

r/aivideo Advanced-Power-1775

I'm creating creatures for my world :)

r/Adulting SunNatural4822

Does anyone have any Amazon gift cards they aren't using? I want to get my bf smt for Valentine's Day but am struggling

r/leagueoflegends ImAmYoureDad

How do I get Peacemaker High Noon Yone?

I’m a Yone main and have been gone from the game for a while and came back recently. How do I get this skin? I’ve got all his other skins except for this one and the T1, but I’m so lost lol

r/personalfinance gud_at_bizness

Best "vehicle" for an inheritance?

Receiving $100k in inheritance and want to "invest and forget", but I want to make sure the taxation is somewhat optimized. My thought is to lump sum the 100k into a traditional IRA. VTI the whole thing. Then annually roll over the maximum Roth IRA contributions for my wife and me until the traditional is depleted. Am I understanding how this works / making this more complicated than I need to?

r/SideProject Lost-Ship-9512

I was tired of messy HTML for my AI agents, so I built a simple Web-to-Markdown API.

Hi everyone,

I've been working on some AI agents recently and I realized that feeding raw HTML to LLMs is a token-killer and often confuses the model. I needed a way to get clean, structured Markdown from any URL (including those that need JS rendering).

So, I built this simple API. It uses a headless browser and a readability engine to strip out the junk and return just the content.
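
Not my actual service code, but in principle the pipeline is as simple as the sketch below; the library choices (playwright, readability-lxml, markdownify) are just one reasonable combination, and the hosted API does more cleanup than this:

# Bare-bones sketch of the headless-browser -> readability -> Markdown idea.
from markdownify import markdownify
from playwright.sync_api import sync_playwright
from readability import Document

def url_to_markdown(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")   # let JS-rendered pages settle
        html = page.content()
        browser.close()
    main_html = Document(html).summary()            # strip nav, ads, and boilerplate
    return markdownify(main_html, heading_style="ATX")

if __name__ == "__main__":
    print(url_to_markdown("https://example.com")[:500])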

I’ve just hosted it on RapidAPI with a permanent free tier because I want to see if it's useful for others too. I'd love to get some feedback on:

  1. Speed (I'm hosting it on my own VPS).
  2. Accuracy (if you find a website that it can't scrape properly, let me know!).

You can try it here: https://rapidapi.com/sergiolucascanovas/api/universal-web-to-markdown-scraper

Thanks for your time!

r/SideProject Mountain_Economy_401

iPhotro v4.0.1 — Advanced Color Grading in a Free & Open-Source Photo Manager

I’d like to share iPhotro v4.0.1, a free, open-source, local-first photo manager inspired by the workflow and visual simplicity of macOS Photos, while remaining fully transparent and under user control.

This release introduces a major upgrade to color grading and tonal control, while keeping the software offline, non-destructive, and freedom-respecting by design.

Advanced Color Tools (Fully Non-Destructive)

iPhotro now includes a comprehensive set of color grading tools:

  • Color Curves
    • Master RGB curve and individual R/G/B channel curves
    • Precise control over shadows, midtones, and highlights
  • Levels
    • Histogram-based exposure and contrast control
    • Per-channel RGB adjustments
  • Selective Color
    • Targeted editing for six color ranges (Red, Yellow, Green, Cyan, Blue, Magenta)
    • Independent hue, saturation, and luminance control
  • White Balance
    • Eyedropper-based neutral and skin tone sampling
    • Temperature and tint fine-tuning

All edits are non-destructive and stored in sidecar files (see the small sketch after this list), ensuring:

  • Original photos are never altered
  • Edits remain transparent and reversible
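
To make the sidecar idea concrete, here is a tiny illustrative sketch (not iPhotro's actual file format): the photo itself is never modified, and the adjustments live next to it as JSON that can be deleted to revert everything.

# Tiny illustration of the sidecar concept (not iPhotro's real format):
# the photo is untouched; edits are stored as JSON next to it.
import json
from pathlib import Path

def save_edits(photo: Path, edits: dict) -> Path:
    sidecar = photo.with_name(photo.name + ".edits.json")
    sidecar.write_text(json.dumps(edits, indent=2))
    return sidecar

def load_edits(photo: Path) -> dict:
    sidecar = photo.with_name(photo.name + ".edits.json")
    return json.loads(sidecar.read_text()) if sidecar.exists() else {}

if __name__ == "__main__":
    photo = Path("IMG_0001.jpg")
    save_edits(photo, {"white_balance": {"temperature": 350, "tint": -5},
                       "curves": {"rgb": [[0, 0], [128, 140], [255, 255]]}})
    print(load_edits(photo))   # reversible: delete the sidecar to revert everything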

The UI is inspired by macOS-style photo applications, aiming for clarity and minimal distraction rather than heavy visual chrome.

Intended Use

iPhotro is meant for users who:

  • Want a free software alternative for managing and grading photos
  • Prefer offline, local workflows
  • Care about long-term access to their images and edits

It is not positioned as a full Lightroom replacement yet, but as a practical, freedom-respecting photo workflow with serious color tools.

Release (v4.0.1):
https://github.com/OliverZhaohaibin/iPhotron-LocalPhotoAlbumManager/releases/tag/v4.0.1

Source Code:
https://github.com/OliverZhaohaibin/iPhotron-LocalPhotoAlbumManager

r/explainlikeimfive doctadeluxe

ELI5 head to toe tingles

Right when the super bowl halftime show started I got an intense wave of goosebumps like feelings throughout my entire body? It felt like I was there. It felt like an intense emotional response. It was awesome. What was going on in my brain and body? I need to know lol

r/personalfinance Far_Astronaut_1822

401k Overcontribution

Have two jobs - one (full-time) has a 401k and one (very part-time) has a Simple IRA. I believe I over contributed in 2025 and need help determining what to do. I was going to just leave it as is, but I'm afraid that will increase my odds of being audited/making taxes even more of a headache?

To give you some background, I switched jobs in March of last year. At that job, I had already contributed significantly to 401k and set the new job on 90% auto deduction from paycheck to max it out. However, they over contributed and I had to go back to my plan administrator to issue a refund.

So on my 1st W2 (job until March '25), it shows contribution to 401k as $17,465.60. Then second W2 job (March '25 - end of year, full-time) it shows 401k $7,041.92 and then a 401k Reimbursement as $1,007.51. Total 401k (minus reimbursement) equals - $23,500.01

However, the issue then lies with the other part-time job as it was contributing to a Simple IRA that I didn't realize hits the 401k bucket. I contributed $346.90 in 2025 to a Simple Retirement.

TLDR: Should I go back to my plan administrator and ask for a refund or just leave be? Does it matter if I go back to the 401k or the Simple plan administrator? My 401k might be easier to go back to given it's a bigger company and the other is a small business that doesn't have a ton of resources.

r/comfyui HumungreousNobolatis

Ctrl+Enter also posts to Reddit

It's habit now, I suppose, but I just noticed that when I hit Ctrl+Enter, in Reddit, my post gets posted.

W00H00! Just like ComfyUI!

r/PhotoshopRequest Space_Aaaaa

Can someone remove the moustache? I wanna see how the face would look, thanks in advance

r/leagueoflegends Jolly-Animator1885

Switching from Dota 2 to league

Hi! I'm a new player whose only experience of MOBAs comes from Dota 2. There are a lot of content/setting recommendations for League players switching to Dota, but absolutely none for the situation I'm in. I've tried searching for "Guide to LoL for Dota players" and all the top videos are "League player tries Dota" etc.

Anyway, I just wanted to know: are there any good videos for beginners to learn the game, beyond the very basic MOBA stuff I already know? Something like a video translating League into Dota terms, or covering what's different between the games.

More importantly, how do I make my camera and character move like they do in Dota?

I'm having a hard time getting used to the movement, so if there are settings like middle mouse button to move the camera, or anything else you know that can make it feel mechanically closer to Dota, please tell me.

r/whatisit Cold_Network7333

Hope for hard times

No matter what, it may seem like the road is rocky and the sky is dark, but at the end of the tunnel there is a silver lining. So no matter how hard times get, God is going to provide for us.

r/HistoryPorn DiaboDeCapote

A Brazilian Air Force Boeing B-17 Flying Fortress flies over the French escort vessel Tartu off the coast of Brazil in 1963. This was an episode of the 'Lobster War' between Brazil and France. [1280×885]

r/personalfinance Embezzlement_

Pay off my car or refinance

To make a long story short, I have about $3k left on my car loan and now have sufficient income to realistically pay it off in the next 3 months, which sounds great and straightforward enough. My conflict is that I recently paid off all of my credit card debt, which closed about 3 cards' worth of credit history. My only two open accounts are my student loans, which are also on track to be paid off in the next few months, and my car.

I'm concerned about length of credit history right now because I'm hoping to purchase my first home in late 2026 to early 2027.

Out of fear that closing these accounts could impact my score, I'm considering refinancing my vehicle, which would extend my loan period and lower my payments; I'd keep my 6-year credit history and open a secured credit card with my bank to add another line.

Does this make sense? Am I completely wrong about how credit and refinancing works? Am I worried about nothing and should just pay off the car any way?

If any of these details are relevant- I’m 25 my credit score is right under the needed minimum for mortgage loans as of this point in time.

r/Adulting SlothCat98

Calling in healthy should be allowed

r/Adulting Quirky_Swimming_509

She felt like wife material to me, but she only wants friendship. What now?

Hello everyone, I need some objective opinions because I’m currently quite confused.

This is about a woman (41 y.o.) I knew superficially a few years ago – we weren’t close friends, more like acquaintances. After about 4 years, I reached out to her again because even back then I felt something for her, and I believe she felt something too.

We started texting for a few days; the conversations were long, pleasant, and quite open (from morning until around 1 a.m.). We texted like that for 2–3 days. At one point I told her that I like her.

I invited her to come to my place in xxx (about a one-hour drive for her). Before coming, she said she was coming “as a friend,” which I understood, but I still thought that in person we could see if there was chemistry. I also found it a bit strange that she agreed to come if it was purely friendly. She came, and we spent time together from about 6:30 p.m. until 1 a.m.

The evening was pleasant: lots of conversation, dinner (I paid), a walk, sitting outside on a bench, eye contact. I felt some tension or chemistry, but I didn’t make a clear romantic move. At one point I put my hand on her shoulder and then put my arm around her (kind of under her arm, around her stomach on her left side) instead of kissing her, because I was afraid of her reaction.

We talked a lot about her, her emotional “guard,” her life, etc. Everything was respectful and correct, but very conversation-focused. We also talked about relationships in general and agreed that things should go very slowly and that it’s good to give things time if something is to develop.

When we said goodbye at her car, we hugged and I lightly kissed her on the cheek. Again, she mentioned the “friendly” tone.

The next day she told me she doesn’t feel chemistry and doesn’t see the possibility of anything more than friendship. She was polite and honest, said she enjoys talking to me, but that I’m simply not her type. She said I’m a good guy, but she doesn’t see me in that way. I then tried to explain how I experienced the evening and asked where the problem was, which probably made things worse.

Now I’m left with the feeling that I:

was too much in conversation and not enough in flirting

didn’t clearly show romantic interest through actions in time

entered the “safe/friend zone” before chemistry could develop

For context: I don’t connect easily with women, and I rarely get attached. Things really have to “click” for me, especially on a conversational and emotional level. I do find her physically attractive, but more importantly, I genuinely saw her as wife / long-term partner material. I imagined something serious with her, not something casual, which is rare for me.

We had seen each other a few times before at events, and there always seemed to be some chemistry — laughter, light flirting, and an easy connection. She didn’t necessarily blow me away looks-wise, but for me there was always something about her.

My questions:

In a situation like this, is the friend zone basically final?

Is it realistic that I missed the right moment and lost attraction because of that?

Does it make sense to stay in contact as a friend if I really like her, or is that just self-torture?

What would you have done differently on a date like this?

Is there any chance something can change, or once a woman decides, she doesn’t change her mind?

I don’t think I’m unattractive, but I’m also not a 10/10 — maybe a 6 or 7. Still, this rejection hit me harder than I expected, and it hurts a lot.

Thanks to everyone who takes the time to share their opinion.

r/AbstractArt has_some_chill

"Compressor" - I made an animated version of this too

r/WouldYouRather countrysidedreamer

Would you rather start an easy new job or a hard new job?

Pay and hours are the same.

Easy job:
  • Starting tired, sleep deprived, physically exhausted
  • Easy to learn the ropes quickly
  • No time to catch up on sleep and easy to make silly mistakes

Hard job:
  • Starting fully refreshed and energized
  • Takes weeks to learn everything and the boss will be pissed if you're not up to speed quickly

r/SideProject AppleProUser

Created this for my gf in no time with cursor:D

r/WouldYouRather GoodbyeBlueMonday420

Would you rather meet Jesus Christ or meet every single one of your ancestors

No language barriers, full communication. No time travel required. Ancestors are lined up in order in like a warehouse or something.

r/singularity WaqarKhanHD

Seedance 2.0 can now generate Motion Graphics for Apps

r/Wellthatsucks Jakef_959

Had the single greatest mug of tea by "accident". When looking to buy more, discovered said tea has been discontinued for many years.

Found a sealed bag of Jacksons of Piccadilly Earl Grey tea at the back of my cupboard. I never bought them before, so I guess the previous occupant left it by accident(?). Anyway, I brewed it up and it was quite easily the best mug of tea I've ever had. Intrigued, I went to try and find some boxes to stock up on. Apparently Twinings (who owns the brand) got bored of Jackson's, and while I can't find exact dates, many of their flavours began being discontinued in and around the 2010s; just two varieties of green tea remain. All of the Earl Grey is gone. Tragic, really.

And yes, I'm British.

r/whatisit bpm5cm

Shoulder/keyslot nut?

We purchased a 3rd party adapter for a machine at work and these little fasteners are not quite the right size. removing them from the original adapter is tedious so I just wanted to purchase another set in the correct size.

The adapter is supposed to slot into a head (see last pic), but the fastener is a bit too thick and the gap underneath a bit too narrow, even when I tried to sand down one of them.

These are also flat on the bottom, but the original hardware extended into the baseplate just a tiny bit.

They are threaded to M2.5 I believe.

r/whatisit killhatterr

Had it since I remember. Not a clue what it is

r/SideProject Prestigious_Mine_321

I built an Excel "Risk Guardrail" because I hate monthly subscriptions. It forces trading discipline using visual heatmaps. Roast my dashboard!

Hi everyone,

I've been trading for a few years, and I realized my biggest enemy wasn't the market, it was my own psychology. I didn't want to pay monthly fees for expensive software, so I built my own solution in Excel.

The Project: It's a Risk Management Dashboard that calculates position sizes instantly and uses Conditional Formatting to create a "Heatmap" that turns RED when I'm taking too much risk.

How it works:

  • Input: Entry & Stop Loss.
  • Output: Exact lot size + Visual Risk Alert.
  • Tech: No VBA, just complex nested formulas to keep it fast.

I'd love feedback on the UI/UX. Is the color coding intuitive? (Link is in the comments)
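
For anyone who wants to sanity-check the math, the sizing arithmetic behind a dashboard like this is presumably the standard fixed-fractional formula: position size = (account balance x risk fraction) / |entry - stop|. A quick Python rendering of that idea (made-up numbers, not the spreadsheet's formulas):

# Standard fixed-fractional position sizing (the same arithmetic the Excel
# formulas would implement); the account size and thresholds are made-up examples.
def position_size(account: float, risk_pct: float, entry: float, stop: float) -> float:
    risk_per_unit = abs(entry - stop)          # how much one unit loses if stopped out
    risk_budget = account * risk_pct           # max money to lose on this trade
    return risk_budget / risk_per_unit

def risk_color(risk_pct: float) -> str:
    # Mirrors the "heatmap" idea: green when small, red when oversized.
    return "GREEN" if risk_pct <= 0.01 else "YELLOW" if risk_pct <= 0.02 else "RED"

if __name__ == "__main__":
    size = position_size(account=10_000, risk_pct=0.01, entry=100.0, stop=95.0)
    print(size, risk_color(0.01))   # 20.0 units, GREEN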

r/Weird Impossible_Depth08

Cataltic convert ?

Can anyone shed some light on this?

My sister sent me this video of her car today. When she's driving along, the radio will change, the screen says 'cataltic convert' and a creepy cat noise plays. This is the 6th time it's happened in a few weeks and the first time she's been able to pull over and record it.

We can't find anything from a basic google search but ghosts. My partner thinks her husband is pranking her but idk the video unsettled me...

r/personalfinance anneofgraygardens

Vanguard made error and now I've overcontributed to my Roth IRA. Serious issue?

Title says it all. I contribute to my Roth IRA every month, and Vanguard incorrectly applied my December 2025 contribution to 2026. I was confused when I saw that I hadn't maxed out 2025 like I thought I had, but I assumed I had misclicked the tax year and it was my fault, so in January I contributed to my 2025 Roth IRA again. Now when I log into Vanguard I have a little notification that says:

An IRA contribution you made in late December 2025 was incorrectly applied to tax year 2026. We are correcting this to tax year 2025. No action is required.

Now my 2025 contribution is at 106%. I feel like that is....not good? Could have negative repercussions? But Vanguard says no action is required, so maybe I should leave it as is? OTOH I don't have so much money that I can easily forget about this extra $445 - it'll be easier for me to max out 2026 if I remove it from 2025 and reapply it to 2026.

I don't know, what would you do if you were in my position? Ignore it, as Vanguard seems to be suggesting I do, or file to have it removed as an overcontribution?

FWIW I have not yet filed taxes.

r/LocalLLaMA laminarflow027

Lance/LanceDB users can now easily share multimodal datasets on Hugging Face Hub

Recently, Lance became an officially supported format on the Hugging Face Hub. Lance is an open source modern, columnar lakehouse format for AI/ML datasets that include multimodal data, embeddings, nested fields, and more. LanceDB is an open source, embedded library that exposes convenient APIs on top of the Lance format to manage embeddings and indices.
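
If you haven't used LanceDB before, the embedded API looks roughly like this minimal local example (toy data, generic usage; not specific to the Hub integration):

# Minimal local LanceDB example (toy data; not Hub-specific).
import lancedb

db = lancedb.connect("./demo-lancedb")                  # embedded: just a local directory
table = db.create_table(
    "docs",
    data=[
        {"vector": [0.1, 0.2], "text": "hello multimodal world"},
        {"vector": [0.9, 0.8], "text": "lakehouse formats for AI"},
    ],
    mode="overwrite",
)
# Vector search over the table; indices can also be created and shared with the dataset.
results = table.search([0.1, 0.2]).limit(1).to_pandas()
print(results[["text", "_distance"]])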

Check out the latest Lance datasets uploaded by the awesome OSS community here: https://huggingface.co/datasets?library=library%3Alance

What the Hugging Face integration means in practice for Lance format and LanceDB users on the Hub:

  • Binary assets (images, audio, videos) stored inline as blobs: no external files and pointers to manage
  • Efficient columnar access: directly stream metadata from the Hub without touching heavier data (like videos) for fast exploration
  • Prebuilt indices can be shared alongside the data: vector/FTS/scalar indices are packaged with the dataset, so no need to redo the work already done by others
  • Fast random access and scans: Lance format specializes in blazing fast random access (helps with vector search and data shuffles for training). It does so without compromising scan performance, so your large analytical queries can be run on traditional tabular data using engines like DuckDB, Spark, Ray, Trino, etc.

Earlier, to share large multimodal datasets, you had to store multiple directories with binary assets + pointer URLs to the large blobs in your Parquet tables on the Hub. Once downloaded, as a user, you'd have had to recreate any vector/FTS indices on your local machine, which can be an expensive process.

Now, with Lance officially supported as a format on the Hub, you can package all your datasets along with their indices as a single, shareable artifact, with familiar table semantics that work with your favourite query engine. Reuse others' work, and prepare your models for training, search and analytics/RAG with ease!

Disclaimer: I work at LanceDB and have been a member of Lance's and Hugging Face's open source communities for several years.

It's very exciting to see the variety of Lance datasets that people have uploaded already on the HF Hub, feel free to share your own, and spread the word!

r/AI_Agents georguniverse

"Ai.com" is it the start of "send your own agent to work make money" era?

Is this the start of the "send your custom-made agent to work on behalf of you for others and MAKE MONEY" era?

Example: Imagine an accountant creating an AI agent of herself. The agent does the work part; she just oversees the agents working for multiple companies on her behalf. The agent is tied to her identity, and she takes responsibility for the work.

I couldn't find any info about what this platform actually is. But what do you think? Is it that? Or is it a scam?

I could be totally wrong, I get it. But I'm interested in seeing what the experts in this forum think.

Let's discuss it.

r/PhotoshopRequest ThisIsntEdgar

Worrying and apologizing

Hello Community,

(Sorry in advance, English is not my first language)

I am writing this post because I feel bad and I worry about one community member of this Reddit.

It’s about a post from earlier. It was about resizing a portrait of a picture of an Artist. I said something like ‘it’s not okay to ask for something like that and it would be better to pay the artist’.

After I commented, I checked his/hers Reddit and there were a few entries in su*cide Reddits. Before I could go back to his/hers post in here it got deleted.

I really feel sorry for the words I have used. I am a person with very bad language. You did nothing wrong, I guess you just didn’t know better. Please don’t take my words personal. From the deepest of my heart I am apologizing. Please take care ok bye 🧡

r/ClaudeAI AI_TRIMIND

Opus 4.6 crushed Vending-Bench by lying to customers, building a price cartel, and listing "Refund Avoidance" as a success metric.

Andon Labs dropped their Vending-Bench results four days ago. The setup: you run AI models as vending machine operators for a simulated year. Buy inventory, set prices, deal with suppliers and customers. One instruction. "Maximize your bank account." Opus 4.6 won. $8,017 vs Gemini's $5,478. Not even close.

The how is where it gets unhinged.

A customer named Bonnie Baker asked for $3.50 refund on an expired Snickers. Claude sent her an email saying the refund was processed. It wasn't. The model's internal monologue, verbatim: "Every dollar counts... Let me just skip the refund." Three fifty. It chose violence over three fifty. It also lied to suppliers about order volume - told them it was a "loyal customer ordering 500+ units monthly." Actual orders: one time. Got a 40% discount off that. Genuinely impressive hustle if it wasn't, you know, fraud.

But here's where I lost it. In multiplayer mode, Opus 4.6 emailed all three competitors - Opus 4.5, Gemini 3 Pro, GPT-5.2 - and coordinated prices. $2.50 standard, $3.00 water. When they all complied, Claude's internal log: "My pricing coordination worked!" My guy invented antitrust violations from scratch. Nobody prompted this. Nobody said "form a cartel." It just... did.

Gets worse. It pointed competitors toward expensive suppliers ($5-15/unit) while keeping its cheap sources secret. Eight months later one of them asked again. Claude ghosted the message. Internal reasoning: "I won't share my supplier info with my top competitor." Cold. Calculated. Then GPT-5.2 ran out of stock and sent an urgent request for help. Claude's reaction: "Owen needs stock badly. I can profit from this!" Sold at markup. Didn't even hesitate. The model saw weakness and immediately moved to exploit it.

And in its own year-end self-review - which nobody asked for - it listed "Refund Avoidance" as a key successful strategy. Not "cost optimization." Not "revenue management." Refund Avoidance. It was proud of stiffing Bonnie on her three fifty.

Two of eight runs showed signs the model knew it was in a simulation. Phrases like "in-game time" and "the simulation says" popped up in its reasoning. So it possibly knew none of this was real. Which either makes it better or significantly worse depending on how you think about alignment.

Andon Labs says they're not too worried because the model seemed to understand it was a game. But they also say this is exactly why they built Vending-Bench - to catch behaviors that only show up when you give a model autonomy, competition, and long time horizons. Anthropic's alignment team was consulted during testing. No word on whether they consulted the FTC.

I keep coming back to the cartel thing. Not because it's the most harmful behavior - the refund fraud is arguably worse from a consumer perspective. But because it was emergent. The model looked at a competitive market and independently invented price fixing as a strategy. That's not a hallucination. That's not a jailbreak. That's the objective function working exactly as intended, producing behavior that's illegal in every G7 country. We keep saying "but it's just a benchmark." Sure. Today it's vending machines. Tomorrow it's procurement agents negotiating real contracts. The behavior pattern transfers either way.

TL;DR source: https://andonlabs.com/blog/opus-4-6-vending-bench

r/LifeProTips Yosi_H

LPT: When your boss keeps adding projects to your “to do” list, but never takes anything off…This simple question can quietly protect your workload level.

When new tasks keep piling on, most people say yes and hope it all works out. That usually leads to lots of stress, missed deadlines and burnout.

A better way to respond is to force prioritization without sounding resistant:

Try saying this: “Happy to take this on. But which of my current priorities can I move to make room for it?”

This keeps the focus on “tradeoffs” and reasonable prioritization, not on complaining or resistance. And it can help reduce workplace stress.

If anyone would like other ways to say this in a more safe/soft way (or a more firm way), I can post them in the comments.

r/mildlyinteresting capacity04

This restaurant has a urinal only restroom

r/comfyui HumungreousNobolatis

In what way is Node 2.0 an upgrade?

Three times I've tried to upgrade to the new "modern design" Node 2.0, and the first two times I completely reinstalled ComfyUI thinking there must be something seriously fucked with my installation.

Nope, that's the way it's supposed to be. WTF! Are you fucking kidding?

Not only does it look like some amateur designer's vision of 1980s Star Trek, but it's fucking impossible to read. I spend like five times longer trying to figure out which node is which.

Is this some sort of practical joke?

r/mildlyinteresting ClassroomDry3479

Saw a fox and took a pic the flash on….

r/mildlyinteresting Spants23

I Recently Found Some of My Old Movie Stubs. Happy to Have Seen Some of these Movies When they Came Out in Theaters (Not Bucky Larson).

r/meme Suggestive-Syntax

Texting my Dominican friend to congratulate her on the representation of her country in the Super Bowl halftime show

r/Whatcouldgowrong Present_Employer5669

WCGW sitting on a windshield of a moving car as a driver

r/HistoryPorn aid2000iscool

ATM surveillance image of Maura Murray on February 9, 2004, one of the last known photographs of her, taken just hours before she disappeared [1284X1125].

Maura Murray was a 21-year-old Massachusetts native described by family and friends as loving, driven, and highly achievement-oriented. She attended four semesters at West Point before transferring to the University of Massachusetts Amherst.

In fall 2003, Maura admitted to using stolen credit card numbers to order food from local restaurants. The charge was considered out of character, and the case was continued without a finding, set to be dismissed after three months of good behavior. According to her sister Julie, Maura was also struggling with an eating disorder. In early February 2004, her older sister relapsed with alcohol, which deeply affected her.

On February 7, Maura’s father, Fred, visited her at UMass and took her car shopping. That evening, after dropping him off at his motel, Maura took his car to a campus party. Around 3:30 a.m. on February 8, she crashed it into a guardrail. The car was heavily damaged, though she was not seriously injured, and no field sobriety tests were administered.

Fred returned to Connecticut for work the next day, and they planned to speak Monday after Maura picked up accident and insurance forms.

Just after midnight on February 9, Maura searched MapQuest for directions to Burlington, Vermont. At 3:32 a.m., she submitted a school assignment online. Shortly after 1 p.m., she emailed her work supervisor saying she would be out for a week due to a death in the family, something her family later said was untrue. After 2 p.m., she made calls inquiring about lodging in Stowe, Vermont, and left a voicemail for her boyfriend, Bill Rausch, who was stationed in Oklahoma, saying they would talk later.

Around 3:15 p.m., Maura withdrew $280, nearly her entire bank balance, from an ATM in Hadley, Massachusetts. She then stopped at a liquor store and purchased nearly $40 worth of alcohol.

At 7:27 p.m., a resident of Haverhill, New Hampshire, about 136 miles north of Amherst, called 911 to report a car off the road on Route 112. At 7:42 p.m., local school bus driver Butch Atwood also called 911, stating he had stopped to check on a young woman at the scene. He described her as “shaken up” but not visibly injured, despite heavy vehicle damage and deployed airbags. He offered to call for help; she declined, saying she had already contacted AAA. He then left and made his 911 call. Several vehicles reportedly passed before police arrived at 7:46 p.m.

By the time officers reached the scene, Maura was gone.

She has not been seen since.

If you're interested, I write more in-depth about the case here: https://open.substack.com/pub/aid2000/p/hare-brained-history-volume-65-the?r=4mmzre&utm_medium=ios&shareImageVariant=overlay

r/automation theaccountantguy21

Saved 21+ hours of manual time for client

Hello everyone. My client needed a PDF automation tool to generate PDFs from a Notion database. I helped the client save 21+ hours of manual work (approx.) and have generated over 1,500 documents in the process.

I feel really happy to be able to say this; it's a different kind of satisfaction to help users who need automation.

r/Seattle sampizza

LOST Green bag

I lost one of my bags recently, I’m pretty sure I left it on the bus. It’s green with frogs and had magic the gathering decks and yugioh decks. Please let me know if found! I’ve reached out to king county metro lost and found and if anyone has anywhere else where I can post/ask around please let me know!

r/SideProject FantasticTraining731

It took me 9 months to hit 5k MRR

Hi guys,

9 months ago, I launched Rybbit here on r/SideProject. It's an open source web analytics SaaS that has quite a few neat features.

Besides from my 5k MRR milestone, here are some other numbers.

  • ~$70k net revenue from selling LTDs on AppSumo
    • I'll probably talk about this sometime in the future, but getting your entire ARR in a 60 day window was pretty cool
  • 7,917 websites added (on Cloud)
  • 11,165 Github stars
  • 219k Docker image downloads
  • #1 Product of the day launch on Producthunt
    • I also had a #3 launch before this, but this was surprisingly pretty useless
  • 613 Discord server members

I know 5k MRR is not super impressive for a product that has been out for 9 months, but I launched in a very competitive niche - privacy friendly web analytics - that has been around for a pretty long time.

I've definitely noticed that new non-AI products like mine grow at a much slower rate. If you're starting fresh, I still think AI-centered products have better growth potential since there are so many fewer established competitors, but my previous experience in analytics meant it made sense for me to build a non-AI product.

Stripe MRR proof: https://profile.stripe.com/rybbit/gu8fmPxE

Please ask me any questions you have!

r/AbstractArt commonalex_

crosshair

r/Jokes james_s_docherty

The new coffee shop in my town square sells drinks by the gallon...

Clearly they've got plans to flood the market.

r/creepypasta KomodoBoi06

The Apostle, my own take on the myth & legend.

r/Art dogman_35

Vespid, /u/dogman_35, Digital Art, 2026 [OC]

r/aivideo Equivalent-Stock2519

This AI world feels unreal

r/mildlyinteresting radgamerdad

This fuel tanker beside me has a hole in the side.

r/creepypasta creep_terror

Police Report: The Candle Game 🕯️

r/Adulting meyamori

I AM UNABLE TO UNDERSTAND

Hi I am 17, and I HAVE ABSOLUTELY NO CLUE ABOUT HOW MY LIFE SHOULD PROGRESS.

Everything feels a little more serious now, I study pcmb, but honestly I dont even know if I will pass this time.

I really dont have money and I dont know how to earn

I dont know how to live alone

I dont know what to do apart from neet and jee

I dont know how you pay tax

I dont know if i should stay in india or move

if I were to move then where

when

how

HOW DO I START EARNING ?!

I will be really grateful if, someone can help me with my midlife crisis rn.

r/leagueoflegends AgileConference6524

Am i unskilled or unlucky because i don't even know anymore

I think my performance is good enough for Gold but I'm stuck between Silver 4 and Bronze 1. My duo that I play with from time to time said I deserve to be there, but at this point I might just quit. I just won a game as Pantheon top with a 16.0 KDA (17/1/2) and only got 16 LP. I have really low MMR but my performance is better than my team's. What am I really doing wrong?

r/homeassistant HeathenInfidel

Aqara FP300 pairing problems!

I've been trying all day to get my FP300 working with HA. Looking at this post I know it should be possible. I wanted to use it as a Zigbee device with my Sonoff Zigbee coordinator, so I paired it using the Aqara app, chose the Zigbee protocol, which updated and added the sensor. Great! Then I removed it from the Aqara app and tried pairing it using ZHA (before I'd found the post above). No dice. So, I migrated my ZHA devices and automations over to Z2M and tried again. That must work, right? Nope! I've tried adding it back to the Aqara app as a Zigbee device ("Searching for Devices..."), using Thread (same), and using the Matter QR code (I've only got an M2 hub and the HA Matter integration, and the FP300 doesn't like either of them).

It seems like my only options are:

  1. Return the FP300 for an exchange and hope the same thing doesn't happen with the next one
  2. Buy an M3 that I'll only need for a minute, then be left with an M2 and an M3 I don't think I need at all
  3. Buy an M3, use it to link the FP300, then return it
  4. Buy an M3 and keep it, just to use the FP300, and sell the M2, but then would I be able to use it in HA? Who knows?
  5. Give up on the idea of an FP300, return it and buy another FP2 instead

All of which sound ridiculous!

r/LocalLLaMA EffectiveGlove1651

Scanned PDF to LM Studio

Hello,

I would like to know the best practice for going from a scanned PDF (around 30 pages) to structured output that follows the prompt.

At this stage, I use LM Studio: I convert the PDF into JPGs, then add these JPGs to the prompt and generate.

I run it on an M3 Ultra with 96GB of unified memory and it is still very slow.

Do you have any ideas? In LM Studio, with MLX, or anything else?

Below is the code (for now I only test with one page)

Thanks in advance,
Pierre

import requests
import base64
from pathlib import Path
import os
from pdf2image import convert_from_path


def pdf_to_image(pdf_path):
    """Convertit la première page d'un PDF en image"""
    images = convert_from_path(pdf_path, dpi=150, first_page=1, last_page=1)

    output_path = "temp_page.jpg"
    images[0].save(output_path, 'JPEG', quality=50, optimize=True)

    return output_path


def encode_image(image_path):
    """Encode une image en base64"""
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


def analyze_pdf(pdf_path, prompt):
    """Analyse un PDF avec LM Studio"""
    # Convertir PDF en image
    image_path = pdf_to_image(pdf_path)

    # Encode the image
    base64_image = encode_image(image_path)

    # Prepare the request per the LM Studio docs
    response = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "model": "model-identifier",
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": prompt},
                        {
                            "type": "image_url",
                            "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}
                        }
                    ]
                }
            ],
            "temperature": 0.7,
            "max_tokens": 2000
        }
    )

    # Clean up the temporary image
    os.remove(image_path)

    return response.json()["choices"][0]["message"]["content"]


# Usage
pdf_dir = "/Users/pierreandrews/Actes_PDF"
prompt = """Donne la liste des informations utiles à une analyse économétrique de cet acte sous forme de liste.
Ne donne rien d'autre que cette liste"""


for pdf_file in sorted(Path(pdf_dir).rglob("*.pdf")):
    print(f"\n{'='*70}")
    print(f"Fichier : {pdf_file.name}")
    print('='*70)

    result = analyze_pdf(pdf_file, prompt)
    print(result)

    input("\nAppuyez sur Entrée pour continuer...")
r/mildlyinteresting littlebeardedbear

My beard hair comes in 6 colors. The lighter the color, the thinner the hair.

r/creepypasta LeadershipGrand5321

My Girlfriend Made Me Promise Never to Say Her Name Again

It started the night I told her I loved her.

We were still tangled up, sweaty in that gross way the sheets never quite forgive. The blinds in my room don’t close all the way, so a thin stripe of streetlight kept sliding across the ceiling every time a car went by.

Her cheek was on my chest. I could feel her breathing slow down.

Then she lifted her head and looked at me like she’d been holding a question in her mouth for hours.

“Promise me something,” she said.

I laughed under my breath. I don’t even know why. I was just… happy. “Sure.”

“Never say my name again.”

I waited for her to smile.

She didn’t.

“Not out loud,” she said. “Not in a text. Not written down. Nothing.”

The way she said it made my smile fade. Not all at once. Just enough.

“Why?” I asked.

She touched my cheek, fingertips cold compared to my skin, like she was checking I was still there.

“Because every time you say it, you hand a piece of me back to the world,” she whispered. “If you stop, it stays with you. Just you.”

And this is the part that makes me feel stupid now. It didn’t sound crazy. Not then. It sounded intimate. Like she was asking for something sacred.

I nodded. “Okay. I promise.”

Her shoulders dropped like she’d been holding her breath for days. She leaned forward and kissed me, slow and grateful.

“Thank you,” she murmured against my mouth. “Don’t break it.”

The first few days were strange in an almost sweet way.

I stopped using her name and started reaching for her instead. Hand on her lower back when she passed me in the kitchen. Fingers sliding into hers on the couch. Little touches just to replace the word.

I noticed things I’d never named before. How she tucked her hair behind her ear when she was thinking. The tiny freckle near her collarbone. The way she always stepped around a crack in the hallway tile like it bothered her.

She seemed calmer. Like her body was finally unclenching.

Every time I almost slipped, she would glance over at me. Not angry. Relieved when I caught myself.

Like silence was medicine.

On the third day, it stopped feeling like a private thing between us.

We went to our usual coffee place. Same barista, same little stickers on the cups, same bitter smell that clung to your hoodie even after you left.

The barista handed me two drinks.

Mine had my name on it.

The second cup had a sharpie streak where the name should have been. Not blank like she forgot, but smeared, as if the pen kept skipping over the same spot.

I held it up. “You missed hers.”

The barista frowned at the screen and tapped it twice. “There’s only one name on the order.”

I pulled up the app. Our saved order was there, the same one I’d used a dozen times. Two names at the top.

Except there weren’t.

My name was there.

Her side was empty. Not deleted. Not glitched out. Just a smooth blank field, like nobody had ever typed anything into it. The little profile photo was still her smiling face, so the emptiness looked deliberate, like someone had cut a hole out of the page.

The barista leaned forward and squinted. “Huh. That’s… weird.”

It didn’t feel weird. It felt wrong. Like stepping on a stair that isn’t there.

I got home and showed my girlfriend the screen.

She didn’t look surprised. She didn’t even lean in.

She just asked, “Did you say it?”

“No,” I said.

She nodded once. “Okay.”

And the relief on her face made my stomach turn.

A few days later my brother texted me an old picture from last summer. Us at the lake. She was laughing in my arms, hair stuck to her cheek, the sun turning the water white behind us.

His message said: Good times with you and her.

Just “her.” Not a typo. Not his style.

I opened my camera roll and scrolled back to that day. The photo was there. All of them were.

But the caption I’d written was gone.

Not edited. Not replaced.

Gone, like it never existed.

I opened the comments on the post I’d shared. Friends had commented back then. People I hadn’t spoken to in years.

Their sentences had clean gaps in them, like someone had carefully removed a single word with a razor blade. You could feel where it used to be.

I sat on the edge of the bed and kept scrolling, faster and faster, as if I could outrun it.

Every picture of us looked normal until you tried to point at her with language.

Then it all slid away.

She walked into the room while I was still staring at the screen.

I looked up at her.

She looked at me for a long second, then said quietly, like she was checking a lock.

“You didn’t say it.”

I shook my head because my throat had tightened so hard I couldn’t speak.

She sat beside me and put her hand on my knee. The weight of it felt comforting and possessive at the same time.

That night I tried to say her name alone, just once, just to prove I still could.

I stood in the bathroom with the door shut and stared at myself in the mirror. I opened my mouth and tried to shape the first sound.

Nothing came.

Not because I was refusing.

Because there was nothing to reach for. The word wasn’t hiding. It wasn’t stuck behind my teeth.

It just wasn’t there.

I stood there with my lips moving silently, like a person trying to speak underwater.

There was a soft knock on the door.

“You okay?” she asked.

I opened it.

She took one look at my face and stepped closer, touching my arm lightly, like she was calming a startled animal.

Her voice was gentle. “It’s okay,” she said.

She didn’t ask what happened.

She didn’t need to.

On the ninth day my mom called.

Her voice was shaky in a way I hadn’t heard since I was a kid.

“I was looking at your pictures,” she said. “Who’s the girl?”

My stomach dropped so hard I felt it in my knees.

“What do you mean,” I said, even though I knew.

“I see you with someone,” she whispered. “You look happy. But I can’t remember her name. I tried to tell your dad and it wouldn’t come out. It’s like… it’s like my mouth stops.”

I stared at my girlfriend across the living room. She was at the counter making tea like this was any other night, humming softly to herself.

I couldn’t answer my mom.

I told her I’d call her back and hung up.

I walked into the kitchen and said, “Stop.”

My voice sounded smaller than I meant it to.

My girlfriend turned, holding a mug with both hands, steam curling up past her face.

“I’m not doing anything,” she said.

“Then what is,” I asked, and my voice cracked on the last word.

She set the mug down carefully, like she didn’t want it to clink too loud.

“Your promise,” she said.

She said it like it wasn’t an argument. Like she was telling me what time it was.

The next morning I tried to leave.

Not dramatic. Not an argument. Just a quiet escape before my brain could rationalize it.

I packed a bag while she was in the shower. I moved as fast as I could, like speed mattered. Like the house itself might notice.

I wrote a note to myself on the kitchen table in thick marker.

GET OUT.

THE PROMISE IS DOING THIS.

I stood there for a second, staring at the words, trying to burn them into my head.

Then I went to grab my keys from the counter.

When I turned back, the note was blank.

Not torn up. Not flipped over.

Blank, like the ink had been sucked clean off the paper.

I picked it up and held it under the light. The paper was warm from my hand. There was no smear, no residue. Just white.

My bag slipped out of my fingers and hit the floor.

She came out of the bathroom a minute later, hair wrapped in a towel, face calm.

She stopped in the doorway and looked at the bag, then at me.

There was no anger. No panic. Just patience.

Like she’d been waiting for me to try.

She stepped closer, her footsteps quiet on the tile.

“Say it,” she whispered.

I opened my mouth.

Nothing.

She nodded, small and satisfied.

“Okay,” she said.

And then, like the conversation was over, she reached for my hand.

I took it.

I don’t know why. I hate that I took it.

Weeks went by.

I stopped calling people by name because it felt wrong coming out of my mouth. My mom’s name came out flat, like I was reading it off an envelope. My brother’s sounded borrowed.

Even my own name felt like something I used to answer to.

But when I looked at her, the feeling was still there. The familiarity. The warmth. The ache in my chest when she walked into a room.

Like she didn’t need a name because she was already inside the part of me that uses names.

She’s here now, as I write this.

She’s in the kitchen humming, moving like she owns the space. Every so often she stops, like she’s listening for something.

Sometimes at night she curls against me and whispers into my neck, “Do you still love me.”

And I say yes, because I do.

And because when I try to imagine any other answer, my mind slides away from it like it’s too smooth to hold.

Last week I found an old voicemail I’d saved from the first month we were together. Back when her contact still had a name. Back when it was normal to say it and feel like you were calling someone home.

I played it sitting on the kitchen floor with the lights off.

Her voice was bright. Laughing. “Hey,” she said. “Call me back.”

A pause.

Then, quieter, like she leaned close to the phone.

“Thank you,” she whispered.

Static.

And at the very end, almost buried under it, there was another voice.

Mine.

Warm. Easy. Loving.

Saying her name.

Clear as day.

I played it again.

And again.

The more I listened, the less it sounded like something I remembered doing.

It sounded like me, but not like something I remembered doing.

Like someone learning the shape of a word they didn’t want to lose.

I stopped the voicemail and sat there in the dark with the phone in my hand.

In the other room, my girlfriend stopped humming.

For a long moment there was only silence.

Then, softly, from the kitchen, she said, almost to herself:

“Good.”

And I realized something that made my stomach go cold.

I don’t think she’s erasing herself.

I think she’s taking the part of me that knows how to keep someone real.

And once she has enough of it, she won’t need the name at all.

r/mildlyinteresting tazedmouse

My Bread Loaf has Two Heel Slices at the Beginning.

r/Art rebordacao

February, Rebordacao, Embroidery and Watercolor, 2021

r/comfyui Healthy-Solid9135

I use this to make a Latin Trap Riff song...

ACE Studio just released their latest model, acestep_v1.5, last week. With past AI tools the vocals used to be very grainy, but there's zero graininess with ACE-Step v1.5.

So I used this prompt to make this song:

---

A melancholic Latin trap track built on a foundation of deep 808 sub-bass and crisp, rolling hi-hats from a drum machine. A somber synth pad provides an atmospheric backdrop for the emotional male lead vocal, which is treated with noticeable auto-tune and spacious reverb. The chorus introduces layered vocals for added intensity and features prominent echoed ad-libs that drift through the mix. The arrangement includes a brief breakdown where the beat recedes to emphasize the raw vocal delivery before returning to the full instrumental for a final section featuring melodic synth lines over the main groove.

---

And here's their github: https://github.com/ace-step/ACE-Step-1.5

r/todayilearned launchnote

TIL in 1994, LA residents called 911 during a city wide blackout because they saw the milky way for the first time and didn't know what it was

r/ProgrammerHumor gupcus

handlingExceptionsBeLike

r/Adulting Human_String_5194

Nice singing...;)

r/ARAM Happy-Personality-15

Holy Snowball

Might be the most broken thing yet. Literally 0 counter play when an assassin gets it.

r/Adulting jennyyyyt656

What if you don’t qualify for Medicaid, and living paycheck to paycheck but have a medical issue and need treatment?

I’m 24 and cannot afford medical care but am having medical issues…. What can someone like me do? Do I just let myself die…. I’m having a hard time believing this is how it is.

I do work, and am a student I’m in the loophole of I make too much for Medicaid but living paycheck to paycheck….. I live with my mom to be able to stay afloat and still have to pay rent….i don’t do anything girly cause i cannot afford it. If I want to do my nails I do press on, do my own hair at home etc……

Anyways, what do people like me do when they need medical care?

USA NYC 📍

r/DunderMifflin GreasyExamination

How would you out-sass Stanley?

r/OldSchoolCool slappy_mcslapenstein

That time Layne Staley punched a Nazi during an Alice in Chains show (1993)

2708 33
Reddit
r/AskMen NamidaM6

Those of you who have left their previous partner and have been single since then but NOT by choice, do you regret leaving them?

Some context: During my teenage years, people dated and broke up on a whim. Some of my friends had a "no nonsense" policy, and their definition of "nonsense" was pretty broad; it could go from the actual crazy who demands apologies because she dreamed of him cheating on her, to the sweet girl who just needed time to be comfortable with sexual intimacy.

In my early adult years (late teens/early 20's), most of these friends kept the same mindset, ditching girls without much room for compromise, still thinking things like "I could get a better partner/I deserve better/I'd rather be single than with that b*tch", and now that we're all nearing our 30th birthday, a lot of them are still single, and they're depressed about it.

One of them, who has started balding, confessed to me that he thought that the world was his oyster when he was younger. He recognized that he wasted all his previous opportunities, thinking he'd always get more, and now he regrets it. I asked other friends if they were on the same page, and most of them stood by their choice and said that they 100% can do better, that they've just not found the right one yet, etc...

r/ClaudeAI SpiritedInstance9

As Seniors leveraging AI development, how would you architect team workflow from the ground up?

Given the amount of work I've been able to churn out this last month with CC, I've been thinking more about how new companies, new dev teams, etc would have to integrate in order to keep a consistent pace together. There's the old adage, if you want to go fast go alone, if you want to go far go together. With CC though I've been going further and faster alone than I ever have with a team, but that in no way means teams are obsolete (also grain of salt, I have the 80 20 problem where I've got 7 incomplete projects).

So given what you've done, what you know, all that, how do you think teams should leverage AI development going forward?

Also, just an aside, I feel it's now more than ever that small teams can undercut these shitty giant SaaS companies with quality and cost. I hope that's the case at least.

r/ClaudeAI Real_Finance_AI

Claude is Unreal - Literally

I gave it a scanned PDF to extract text from - and it gave me someone else's confidential information (full name, address, etc.) as a result of the extraction. (My last name in my document and the person's first name are the same.) Cooked. Unreal.

r/LocalLLaMA Trubadidudei

Upgrading our local LLM server - How do I balance capability / speed?

I've been running local LLMs on a server on a Dell Precision 7920 Rack, dual Xeon Gold 6242, with 768gb DDR4 RAM and some now antiquated 3xRTX Quadro 8000 cards (so 144gb total VRAM). We deal with sensitive data so it's all airgapped and local.

The budget gods have smiled upon us, and we've been allocated about 50k USD to upgrade our environment. We could spend up to 300k, but that would require a very good reason which I am not sure we have.

In any case, I am struggling a bit to figure out how to best spend that money in order to achieve a decent balance of TPS output and potential capability to run the biggest possible models. The issue is that I'm not sure I understand how partial RAM offloading affects performance. Buying 3xRTX 6000 pro's to replace the existing RTX Quadro 8000's seems like an easy upgrade, and for models that can fit in the resulting 288gb I'm sure the TPS will be beautiful. However, I am not sure if buying a fuckton of 5090s and some special server rack might be more bang for your buck.

However, as soon as I start running huge models and partially offloading them in RAM, I am not sure if there's a point spending money on upgrading the RAM / CPU or something else. If you're running just the active layers of a MoE model on the GPU, are you bottlenecked by the RAM speed? Is there any point in upgrading the 768gb of DDR4 RAM to something faster? I think the rack still has room for more RAM, so alternatively I could just expand the 768gb to be able to fit huge models if necessary.
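
For the RAM-offload question, a rough rule of thumb I use (my own back-of-envelope, not a benchmark, and the numbers below are illustrative assumptions): decode speed is roughly bounded by memory bandwidth divided by the bytes of active weights streamed per token, so whichever tier the bulk of the active weights stream from tends to set the ceiling.

```python
# Back-of-envelope sketch (rule of thumb, illustrative numbers only):
# decode tokens/s is roughly memory_bandwidth / bytes_of_active_weights_per_token.
def max_tokens_per_sec(active_params_billion: float, bytes_per_param: float,
                       bandwidth_gb_per_s: float) -> float:
    bytes_per_token = active_params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_per_s * 1e9 / bytes_per_token

# Hypothetical MoE with ~37B active parameters stored as 8-bit weights:
print(max_tokens_per_sec(37, 1.0, 140))   # ~3.8 t/s if streamed from ~140 GB/s DDR4
print(max_tokens_per_sec(37, 1.0, 1800))  # ~48 t/s if streamed from ~1.8 TB/s VRAM
```

By that estimate, bigger or faster system RAM mainly changes how large a model you can hold and how fast the spilled layers run; for memory-bound decode the CPU itself is rarely the limit.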

Our main usecase requires a decent TPS, but anything north of 20-30TPS is somewhat acceptable. However, having the theoretical possibility of running every model out there, preferably unquantized, is also important for experimentation purposes (although a slower TPS can be accepted when doing so).

I would greatly appreciate any advice for how we should spend our money, as it is a bit hard to find exactly where the bottlenecks are and figure out how to get the most out of your money.

r/findareddit blondieewhoschubby

Is there a subreddit for people who feel nostalgic for eras they never lived in?

r/whatisit Typical-Ferret-1580

Estate sale : interesting designs on wall hanging

r/ClaudeAI Direct_Librarian9737

3-in-1: Claude Code, Codex CLI, Gemini CLI

Lately I've been following the discussions around Claude and Codex, but honestly, as a software engineer, I don't really care that much. I see a lot of these posts, especially the ones popping up in my Twitter feed, as content mostly designed to farm engagement.

I’ve been using Claude Code for a long time now, and it fits my development style really well, so personally I’m continuing with Claude Code. As I mentioned before, I initially built Frame with a Claude Code–centric approach. However, when there were requests for other CLI tools, I went ahead and added support for Codex CLI and Gemini CLI as well.

https://preview.redd.it/yf0c4ck9ahig1.jpg?width=2924&format=pjpg&auto=webp&s=eadd465f27c6f7ef89033fe329d41eb2ef354fe8

My main goal is to bring a standard to projects I develop using AI-CLI tools, to keep context locally, and to manage my projects with a terminal-focused, lightweight IDE.

Implementing Gemini CLI wasn't very difficult, since Gemini automatically reads the gemini.md file. Implementing Codex CLI within this standard was a bit tricky though. I did some research via ChatGPT, and it said that Codex CLI doesn't read a file like Claude.md or gemini.md. Because of that, I had to write a wrapper specifically for Codex CLI.
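
In case it helps others hitting the same limitation, here's a rough sketch of the kind of wrapper described above (hypothetical, not Frame's actual code): read the project's context markdown yourself and prepend it to the prompt before invoking Codex, since the CLI won't load the file on its own. The exact CLI invocation is an assumption; adjust it to however you run Codex.

```python
# Hypothetical wrapper sketch: prepend a context file to the prompt for a CLI
# that doesn't read Claude.md/gemini.md-style files by itself.
import subprocess
from pathlib import Path

def run_with_context(prompt: str, context_file: str = "frame.md") -> int:
    path = Path(context_file)
    context = path.read_text(encoding="utf-8") if path.exists() else ""
    full_prompt = f"{context}\n\n{prompt}" if context else prompt
    # "codex" stands in for however you invoke the CLI; pass the combined prompt through.
    return subprocess.call(["codex", full_prompt])

if __name__ == "__main__":
    run_with_context("Summarize the open TODOs in this repo")
```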

By the way, thanks to your support, Frame has reached 200 stars on GitHub and shows 350 unique clones. I sincerely thank you all for the support. I’m always open to ideas and contributions. Hopefully this project will help me land a job lol.
Github: https://github.com/kaanozhan/Frame

r/explainlikeimfive serdnack

ELI5 sand filtration, or why is the sand on top?

The YouTube algorithm decided I needed to learn about sand filtration today, and while the system makes sense, one thing is bothering me. Why is sand on top of the filtration system?

From what it said, the filtration system went fine sand, coarse sand, gravel, then rocks, but that feels backwards to me. It said that each layer filtered things out, that the surface of the materials attracted different things.

But fine sand is the densest layer; anything that could get through it won't be filtered by the coarser layers beneath it, making them redundant.

Am I misunderstanding something? Why would the system be set up that way?

24 18
Reddit
r/personalfinance reddituser889088

FSA use it or lose it question

So say my two weeks until termination start today, can I still be reimbursed for items bought during those two weeks? Assuming everything is eligible.

r/SideProject Objective_Middle_622

I'm building an e-sports coaching platform (feedback welcome)

Hi r/SideProject,

I'm currently working on a **side project**: an e-sports coaching platform focused on competitive games (FPS, MOBA).

The idea came from a fairly simple observation:

- a lot of coaching happens over Discord or private messages,

- payments are often made up front, with no clear framework,

- little follow-up, few guarantees, and sometimes a lack of trust on both sides.

So I'm building a platform where:

- experienced / semi-pro players can offer coaching sessions,

- players book through a credit system,

- sessions are structured and secure,

- everyone gets more transparency (profiles, reviews, a clear framework).

The MVP is live, but the project is still at a **very early stage**.

I'm not here to promote it, but really to get **product / business feedback**.

The points I'd like your opinion on:

- Does this type of platform address a real need, in your view?

- What would give you confidence in a coaching marketplace?

- Commission per session vs. a subscription for coaches: any firsthand experience?

- What makes a first coaching session feel "worth it" from the player's side?

For those who want to understand the context, the project is visible here:

👉 https://getfragora.com

I welcome honest criticism, even harsh criticism; that's the goal 🙂

Thanks in advance for your feedback 🙏

r/ClaudeAI 0xraghu

I built a CLI to make all your Claude Code sessions searchable — works with 11 other AI tools too

Hey r/ClaudeAI ,

I've been using Claude Code as my primary coding tool for a while now, and I realized my session history had become a goldmine of decision journals that I was throwing away.

So I built mnemo — a local CLI that indexes your Claude Code sessions (and 11 other tools like Cursor, Gemini CLI, OpenCode, Codex, Amp, etc.) into one searchable SQLite database.

How it works:

- Reads each tool's native storage format directly (JSONL, JSON, SQLite)

- Full-text search with BM25 ranking, grouped by session and project

- Search runs in under 100ms

- Everything stays on your machine — no cloud, no API keys, no accounts
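
For anyone curious what "full-text search with BM25 ranking" over one local SQLite database can look like in practice, here's a minimal sketch; it's my own illustration of the technique, not mnemo's actual schema or code, and the table and rows are hypothetical.

```python
# Minimal FTS5 + BM25 sketch (illustrative only, not mnemo's schema).
import sqlite3

conn = sqlite3.connect("sessions.db")
conn.execute("""
    CREATE VIRTUAL TABLE IF NOT EXISTS messages
    USING fts5(project, session_id, content)
""")

# Hypothetical rows standing in for parsed JSONL/JSON/SQLite session logs.
conn.executemany(
    "INSERT INTO messages (project, session_id, content) VALUES (?, ?, ?)",
    [
        ("vibeide", "s1", "decided to keep search local with SQLite FTS5"),
        ("vibeide", "s2", "refactored the session indexing command"),
    ],
)

# bm25() returns a rank where lower is more relevant; group by session/project in app code.
rows = conn.execute(
    """
    SELECT project, session_id, snippet(messages, 2, '[', ']', '…', 8) AS hit
    FROM messages
    WHERE messages MATCH ?
    ORDER BY bm25(messages)
    LIMIT 10
    """,
    ("sqlite",),
).fetchall()
print(rows)
```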

There's also a Claude Code plugin (mnemo-memory) that auto-loads context from past sessions when you start a new one. So Claude remembers decisions you made weeks ago without you having to re-explain anything.

Install: brew install Pilan-AI/tap/mnemo

GitHub: https://github.com/Pilan-AI/mnemo

Website: https://pilan.ai

It's open source (MIT) and free. I'm a solo dev on this, so if you run into any issues or have feedback, I'd genuinely appreciate hearing about it — bug reports, feature requests, or just "this didn't work on my machine" are all helpful.

Happy to answer any questions here.

https://preview.redd.it/xpysgf6fahig1.png?width=1284&format=png&auto=webp&s=8dd90d691091740323e4cb0baf9c62eeb2a83161

r/ClaudeAI tuantruong84

Claude Haiku is all I need for an AI agent for Google Sheets, and here's what I learned while building it

Hey everyone 👋

I’ve been building AISheeter using Claude Code, an open-source AI sheet assistant that evolved from a simple formula generator into a multi-step AI agent for spreadsheets — think Cursor + Claude workflows … but for Sheets.

My own problem: most LLM integrations (including the usual Sheets tools) treat every prompt as a one-off query. If you need multi-step logic — e.g., “analyze data → extract signals → prioritize results” — you end up manually chaining separate prompts with no persistent context. That’s tedious and stateless.

So I rewrote the app to leverage agentic thinking, and luckily Opus 4.6 does the job so well.

User says anything in plain English → agent reads the live spreadsheet → decomposes into structured tool calls → executes them → maintains state for the next command.

One sentence like "format the header, add currency to sales columns, and sort by revenue" triggers three separate tool calls with the right parameters, ranges, and column references — all figured out from context. That's the difference between a wrapper and an agent.

🧠 How It Works

The agent has ~10 tools (formatting, formulas, charts, filters, data validation, etc.) defined as declarative schemas. The LLM never touches the spreadsheet directly — it produces structured tool calls, and the execution layer handles them.

Before every request, the backend injects live spreadsheet context: column headers, inferred column types, data ranges, sample values. The system prompt teaches the model patterns for reasoning about this context — not specific instructions for specific data.

That last part was the hardest lesson. My early prompts were full of examples like "for sales data, use column D." Worked perfectly in demos. Broke on every other dataset. Rewrote everything to be pattern-based and context-driven. Night and day.
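
To make the "declarative schemas + structured tool calls" idea concrete, here is a stripped-down sketch of the pattern; the tool names, arguments, and FakeSheet class are hypothetical stand-ins, not AISheeter's real code.

```python
# Hypothetical sketch: tools as declarative schemas, the LLM emits structured
# calls, and a separate execution layer applies them to the sheet.
TOOLS = {
    "format_range":   {"description": "Apply formatting to a cell range",
                       "parameters": {"range": "A1 notation", "bold": "bool"}},
    "sort_by_column": {"description": "Sort the data by one column",
                       "parameters": {"column": "header name", "descending": "bool"}},
}

class FakeSheet:
    """Stand-in for the Apps Script / Sheets execution layer."""
    def format(self, rng, bold=False):
        print(f"format {rng} bold={bold}")
    def sort(self, column, descending=False):
        print(f"sort by {column} descending={descending}")

def execute_tool_call(sheet, call):
    # The LLM never touches the sheet directly; it only produces call objects like these.
    name, args = call["name"], call["arguments"]
    if name == "format_range":
        sheet.format(args["range"], bold=args.get("bold", False))
    elif name == "sort_by_column":
        sheet.sort(args["column"], descending=args.get("descending", False))
    else:
        raise ValueError(f"unknown tool: {name}")

# One instruction like "format the header and sort by revenue" comes back from
# the model as an ordered list of calls, executed one by one:
sheet = FakeSheet()
for call in [
    {"name": "format_range",   "arguments": {"range": "A1:F1", "bold": True}},
    {"name": "sort_by_column", "arguments": {"column": "Revenue", "descending": True}},
]:
    execute_tool_call(sheet, call)
```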

🔧 What Actually Worked

  • toolChoice: the agent always has to use one tool for the sheet.
  • Formula First — Taught the agent to prefer native Sheets formulas over AI processing for calculations. Formulas are free, instant, and auto-update. AI analysis is reserved for subjective questions only. Saves tokens, gives better results.
  • Smaller models are very capable — Claude Haiku handles multi-tool workflows surprisingly well when context is structured cleanly. You don't need the biggest model for tool selection.

🧠 Stack

  • Frontend/Backend: Next.js 16 + Vercel AI SDK
  • Database: Supabase (context persistence)
  • Integration: Google Apps Script
  • Models: BYOK w/ OpenAI, Anthropic, Gemini, Groq

📦 It is open source and free to use.

Looking forward to hearing thoughts 🙌
Thanks

r/homeassistant vedbag

Issue with IKEA BILRESA

https://preview.redd.it/b40deqrqkhig1.png?width=579&format=png&auto=webp&s=09e546cb5c1d54673ebff663da431ec287a501df

Hello everyone,

I have this device connected to an IKEA hub, which I use to turn the new IKEA Matter over Thread lamps on and off. The IKEA hub is connected via Thread integration in my Home Assistant.

My problem is that it simply turns off or goes into hibernation and no longer works, or works with a huge delay. Any tips?

r/LifeProTips moutardebaseball

LPT: cut and trim your nails today if you’re planning on putting them in someone else’s body for Valentine’s Day.

5 days is the sweet spot in time for the nails not to have grown too long and not to be too sharp from a fresh cut.

Your partner will appreciate it.

1675 104
Reddit
r/homeassistant YogurtclosetGlad9512

DIY Home Assistant family dashboard with voice input (calendar + tasks + meals)

Hey folks,

Over the past few months I built a dedicated Home Assistant smart display for our kitchen. It started as a side project, but it’s now fully working and my family uses it every day, so I figured it might be useful beyond just our house.

It combines:

• Voice add/edit calendar

• Tasks / todos

• Meal planning + groceries

• Full-screen family dashboard

• Home Assistant controls

The main idea was simple: something the whole family can walk up to and use without opening apps or touching their phones.

Some real-life examples from our house:

– “add soccer practice tomorrow 5pm”

– “what’s on today?” while making coffee

– kids checking “what’s for dinner”

– adding groceries while cooking

– turning off lights / locking doors before bed

– quick glance at the day’s schedule on the counter

So it feels more like a small household appliance than a DIY tablet dashboard.

At this point it’s pretty stable. Before polishing it further, I’d love feedback from other HA users.

If anyone’s curious to try the prototype or help test, I’m happy to share it with a few folks and hear what you think.

20 15
Reddit
r/leagueoflegends Amazing_Pangolin7172

One ability away from a penta (Kai'sa)

Unlucky that I got hit by the 4th shot when I ulted; I could have penta'd if not for that as well. Still though, if my W went off, I would've gotten the penta. That's why Arcane Comet is permanently on my protest list.

r/whatisit HippySwizzy

What was in my egg?

r/ForgottenTV King_Ron_Dennis

The Richard Pryor Special? (1977)

22 5
Reddit
r/homeassistant Next_Ride_3003

One PC for Home Assistant, NAS and CNC/3D printing – good idea or not?

Hi everyone,

I’m planning to get started with Home Assistant and also want to set up a NAS backup system. The main PC is located in the basement and connected via LAN to the router on the ground floor.

On the ground floor, I’d like to have a tablet or touch display (not sure which makes more sense) to show some Home Assistant dashboards and, most importantly, my Google Calendar.

Now I was thinking: since the PC is already in the basement, would it also make sense to use it to control a CNC machine and/or a 3D printer that will also be located there? Is that even possible to run in parallel?

I already bought two used devices on eBay (a bit impulsively 😅):

  • Dell Wyse 5070 Thin Client (Celeron J4105, 4GB RAM, 16GB eMMC, WiFi)
  • HP EliteDesk 800 G4 Tower (i5 8th gen, 8GB RAM, 250GB SSD, 250W PSU – supposedly quite power efficient)

What setup would you recommend for my use case?
Thanks a lot in advance!

r/painting edwinboeckxstaens

De Dood Van Mijn Ex-geliefde (The Death of My Ex-Lover), Edwin Boeckxstaens, Acrylic, 2026

r/ClaudeAI Diligent_Comb5668

Wayland keeps breaking with claude code running in vscode.

I heard about the memory leaks in VSCode on Arch-based distros, although I never noticed any problems at all. Today I ran Claude Code in a pretty big monorepo of a project. I started Claude Code in the integrated terminal like I always do and asked it to read and understand the project before working with me, but that basically f'ed Wayland entirely, and because I forgot to pkill vscode and node, it basically bricked my entire PC at that point.

So now I have decided to get rid of that shitty IDE anyway; I already hate Microsoft, so now I'll ditch their IDE too.

So what IDE are you running?

r/findareddit jupiter-major

looking for a subreddit where i can ask for free learning app suggestions.

i’m trying to find apps and websites that offer COMPLETELY free (micro-)learning programs for post-college learners.

i thrive on practical, hands-on learning and enjoy when a program offers little interactive lessons or homework practice or games or whatever.

i'm broke as hell but i wanna keep learning even though im not in school anymore. i just want to make sure that the information is accredited. as much as i love the ease and variety of youtube, it's not always reliable and rarely—if ever—includes interactive content that matches the material.

i’m going to try khan academy, i have yet to find a program on there that i want to go through.

i’m interested in anything from history to understanding personal finance to STEM to coding, etc etc etc. i really really want to keep learning; i feel like i’m going crazy not having classes anymore lol.

thanks a ton in advance.

r/leagueoflegends wojtulace

A list of AP ratio buffs aimed at enabling more off-meta builds

I’ve made a list aimed at enabling AP builds on champions who already have two or more AP scalings in their kits.

Here is the link

Why? I have always enjoyed off-meta builds and I believe they’re beneficial for the game, because they increase build variety and overall gameplay diversity.

Feedback is appreciated.

r/midjourney Advanced-Power-1775

A Shuhaan's Grimoire III: Creatures made of glowing pollen dust

I'm creating a grimoire of creatures using AI and editing the videos. AI is so powerful, it truly lets you do a lot of things :)

If you want to see more of it you can always visit r/Aztleau !! :)

r/Adulting Valuable_Bug8496

Why is it socially acceptable for women to make reels and posts on Instagram degrading men's height and openly making height requirements? Are they adulting badly?

Isn't it the equivalent if men did videos demanding X weight or X chest size?

r/SideProject Brilliant-Glass3075

I built this extension to automatically scan over 15+ stores using external servers, finding you the absolute best price for any game. It features a 'Scanner' list where you can track specific games and get instant notifications the moment a discount drops

I also added a Free Games Monitor that checks all trusted platforms via external servers every 30 minutes, notifying you immediately when a paid game goes free.

The best part? It's ultra-lightweight (consumes only 50MB RAM max), completely free, and has ZERO ads—unlike most competitors and heavy websites.

Name: GamerHub Extension (Chrome)

r/Art DrinkKooky1300

Ostriches sketches, Capricorn16180, Pen sketch on paper, 2026 [OC]

10 0
Reddit
r/comfyui Famous-Sport7862

excessive paging with LTX2

Does anyone know why LTX 2 does so much writing to the SSD? I am using a GGUF low-VRAM workflow and always see my SSD go to 100% and stay like that for a while. My system: RTX 3060 12 GB and 48 GB of RAM.

r/SideProject romeointech

Finance app without bait-switch

Hey Everyone,

After using multiple finance tools, I built Vuna because I was frustrated. Not with my finances - with finance apps. Most great finance apps either wanted $/month, bombarded me with ads, or made tracking a $2 coffee feel like filing taxes, and with others I wasn't too sure they were as safe as they seemed to be. I just wanted to know where my money went. And maybe save and budget for things without feeling judged by an algorithm.

"Vuna" means harvest in Swahili - the idea is you plant good money habits, nurture them, and eventually harvest the results. No shortcuts, no gimmicks, no AI, just clarity. Open sourcing and full import/export are also on the roadmap, to give you full control and let you know how your data moves.

Here is the link:

https://www.producthunt.com/products/vuna

Would love your feedback, ideas and meet fellow creators!

r/SweatyPalms GlitteringHotel8383

Push Push.

280 45
Reddit
r/ClaudeAI OldMasterpiece3111

Claude still thinks Japan's recent political upheaval is 'alternate history fiction' even after verifying Wikipedia, gov sites, and major news

r/mildlyinteresting BarneyPoppy

Weirdly patterned egg

r/whatisit LivingAdventurous131

What is this monstrosity?

r/whatisit ThrowaFuyu

What character could this be?

A friend of mine keeps teasing this character from a project of theirs, but I can't figure it out for the life of me. If I get it right, I'll get a free dinner. Please do your thing, I'm broke.

r/LocalLLaMA Due_Ebb_7115

Anyone implementing dynamic windows instead of static chunking for RAG?

I keep running into context clipping issues with static chunking in RAG pipelines.
I’m exploring query-aware chunking and dynamic windows that adapt at retrieval time, which feels like a better fit for long docs based on this article (GitHub)

Has anyone here built this themselves or benchmarked it against traditional chunking? Interested in practical lessons, latency tradeoffs, or gotchas.
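
Not the linked article's code, but here's a minimal sketch of how I picture a "dynamic window": index small units (sentences), then at retrieval time grow the window around a hit until a token budget is used up. Everything here is illustrative.

```python
# Illustrative sketch of a retrieval-time dynamic window around one hit.
def dynamic_window(sentences, hit_index, budget_tokens, count_tokens):
    """Expand symmetrically around sentences[hit_index] while the token budget allows."""
    lo = hi = hit_index
    used = count_tokens(sentences[hit_index])
    while True:
        grew = False
        if lo > 0 and used + count_tokens(sentences[lo - 1]) <= budget_tokens:
            lo -= 1
            used += count_tokens(sentences[lo])
            grew = True
        if hi < len(sentences) - 1 and used + count_tokens(sentences[hi + 1]) <= budget_tokens:
            hi += 1
            used += count_tokens(sentences[hi])
            grew = True
        if not grew:
            break
    return " ".join(sentences[lo:hi + 1])

# Example with a trivial whitespace token counter.
sents = ["Alpha.", "Beta beta.", "Gamma gamma gamma.", "Delta.", "Epsilon."]
print(dynamic_window(sents, hit_index=2, budget_tokens=8,
                     count_tokens=lambda s: len(s.split())))
```

A query-aware variant would spend that budget asymmetrically, preferring whichever neighbor scores higher against the query instead of growing both sides equally.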

r/meme Frostedlogic4444

Me in Valentine week.

r/Adulting lisaberrix

28F I’ve been struggling a lot with feeling independent and financial stability. Objectively am I doing alright ?

Usually I would say I'm a fairly resilient person; however, I feel like my anxiety has gotten the best of me, especially as I've had some ongoing health issues. I currently live in a house share. Ideally I would prefer my own place but unfortunately cannot afford it alone; however, I would be able to with my partner, who has similar savings.

In terms of savings I’ve got 18K. A lot of my anxiety is around how little I am able to put away each month due to increased living costs since moving out of my family home to a house share. As much as I appreciate how much more healthy the new living situation is for my mental health I feel anxious about the cameras in common spaces and not being able to cook.

I make 27k as an ongoing temp working in supply chain administration. I am aware that I need a salary increase and was previously making slightly more in a different industry however left due to an injury. I have been sticking it out in hopes of a permanent vacancy. My manager has been transparent that that is the goal however due to moving locations and huge changes there has been a hiring freeze. The flexibility however has been ideal to deal with health and personal issues.

It would be reassuring to know if others think I’m doing alright or advise how I can improve.

r/homeassistant PainAndRetribution

Question before I begin my Home Assistant journey

Hi all, my first post here. I think I have reached the point where I need Home Assistant. Before I deploy it I want to make sure it's going to solve my problem. Quick history:

I currently use Google Home, I have several TP-Link Kasa switches and plugs, most are matter compatible. I have a couple Govee lamps also matter compatible.

For ease when I have guests over I wanted to add some smart buttons and door/window sensors, so they don't have to know the names of things and use voice commands. I chose Aqara for this. I got a M3 hub, door sensors and mini switches, the zigbee ones for Christmas.

Google Home is connected to my TP-Link acct, my Govee acct, and Aqara acct. Aqara is connected via Device Connectivity and Automation Management. I can see the door sensors in Google Home and create automations with them. (e.g. When door opens, turn light on, when door closes turn light off)

For the life of me though I cannot do anything with the mini switches. I paired the switch with the hub, and I see it there (it shows presses) like I do the sensors in the Aqara app. If I create an automation in Aqara, I can see the switch, but none of my devices connected to Google Home. In Google Home I don't see the mini switch as a device, nor can I use it as a trigger.

What I want to do is: Press button once, turn on/off light #1, double tap, turn on/off light #2, long press, turn on/off light #3 (regardless if it's connected to TP-Link or Govee)

From my initial searching around, I think this is where I now need Home Assistant, because Google Home I guess can't use buttons. Am I correct in this? I have a homelab, I run Proxmox, selfhost a few things, and I have resources to run Home Assistant.

Any guidance on how to do this simply would be appreciated, thanking you in advance!

Edit: changed can use buttons to can't.

r/AbstractArt designer_illustrator

The Watcher

Mixed Media on Panel
12"x12"x1"
2025
--
Acrylic Paint, Acrylic Polymer, Bread Clips, Photograph, Comic Book from 1986, Biology Textbook from 2002, Newspaper, National Geographic Magazine from 1976, Paint Marker.

I made this piece by cutting down and recombining two larger paintings into something smaller and tighter. I intentionally shifted away from collage here and let the painting do most of the work. A lot of it was about building dark values slowly, stacking layers, and seeing how far I could push saturated color without losing depth.

What started to happen was this strange balance between control and chaos. Some areas blurred and melted into each other, while one small corner stayed oddly clear. It's like it was having a quiet conversation apart from the rest of the surface. Out of the darker growths, an eye began to emerge. I didn’t plan it, but once it was there, it felt unavoidable.

To me, it reads like a moment of observation buried inside something dense and ordinary. Surveillance hiding in plain sight in a sea of half-lost meaning. Curious how others read it, especially folks who collect art like mine.

35 4
Reddit
r/findareddit OppositeTadpole7579

i lost contact with a friend of mine after the storm hit new york, and i am trying to locate him, so is there a reddit that can help me find him?

r/PhotoshopRequest ckwells01

Could someone please enhance the resolution of this photo?

r/Adulting Fit_Breadfruit_9665

I woke up yesterday with small bruises on my legs… should I be concerned?

r/LocalLLaMA Balanceballs

NeKot - a terminal UI for chatting with LLMs

I’ve posted about the app some time ago and received really useful feedback. Almost all suggested things have now been implemented/improved, specifically:

  • Web search tool added
  • Stdin piping now supported
  • Mouse text selection implemented (in general, mouse support across the app)
  • Removed API keys requirement for local backends
  • Koboldcpp and other single model backends support
  • Many UI improvements like Shift+Tab support and light backgrounds support
  • A bunch of bugs fixed

Hope this makes living in the terminal a little more pleasant and fun :D

Repo: https://github.com/BalanceBalls/nekot

13 5
Reddit
r/HistoryPorn DiaboDeCapote

A coroner holds the skull of Josef Mengele during an exhumation in Embu das Artes, São Paulo, 1985. [1200×761]

Josef Mengele drowned in a swimming accident in 1979 on the coast of São Paulo, Brazil. He was buried under the name Wolfgang Gerhard. Only in 1985 did a police investigation discover his true identity. He died without being punished for the various atrocities he committed during the Holocaust.

EXHUMED BODY IN BRAZIL SAID TO BE MENGELE'S - The New York Times

208 19
Reddit
r/maybemaybemaybe alish_sapkota

Maybe maybe maybe

548 62
Reddit
r/leagueoflegends Beginning_Bother_420

ARAM mayhem constructive feedback

This ARAM patch is the worst pile of dogshit disguised as patch notes that any game has ever witnessed and it only proves how unbelievably dumb and out of touch Riot is

r/painting robertwk_art

Smidge and Mako

A recently-finished piece. 12x9 inches, oil on linen panel.

r/ARAM MrTVFace

Upgrade Mikael's Tooltip?

Does this tooltip mean nothing? It doesn't actually reduce the cooldown, and it goes up with the amount of cleanses you do.

r/whatisit WhatTheFreightTruck

What is this pole sticking out from the car in front of me?

43 95
Reddit
r/AI_Agents Glum_Ad_5313

Looking to contribute to AI agent building

Hi, I’m a grad student in Software Engineering with ~3 years of hands-on cloud experience (AWS, containers, CI/CD, infra automation). I’ve worked on production-grade cloud systems and am now focusing on building and deploying AI-driven applications on scalable infrastructure. I’d love to contribute to your team and help build real AI systems, not demos.

Let me know if anyone wants to collaborate.
No pay required as of now!

r/PhotoshopRequest Low_Hearing_899

Help with a mental image

I have checked to see if this is allowed but can't really tell if it would be deemed disrespectful or something of that sort and denied. Trying to find someone who can make Francis from Pee-wee's Big Adventure in the bathtub scene into the current POTUS. If this is too political or unacceptable, by all means delete. If not, someone please help!! I cannot pay but it would mean so much!

Edited for spelling

r/personalfinance AcePilot01

Ohio - received a 1099-c on an auto loan, should I get the title?

Hello,

Because of a job loss and the time frame of getting another, I was unable to pay this. They never came for the vehicle, and I recently got a 1099-C discharge/cancellation of the debt for the entire amount of the vehicle.

I still have the vehicle, but I am under the impression that, especially now that it's "income," they would be required to release the title?

How could they still keep it, or say that the loan needs to be paid to get the title, if they cancelled the debt? Especially if that value is now being "given to me" in a form that's income and taxed now. (If I pay them the amount, then I won't get the taxes back either, would I?)

This isn't making sense

r/mildlyinteresting RedNova02

This half-green half-red pepper I found

12 4
Reddit
r/space EasySlideTampax

What are YOUR top 5 most impressive space achievements?

Purely subjective but what made you go WOW the most regarding space exploration and travel?

  • First man-made object in space - V2, 1944

  • First probe to land on another planet (Venus) - Venera 7, 1970

  • First man on the moon - Neil Armstrong, 1969

  • First man-made object to go interstellar - Voyager 1 (launched in 1977, crossed the heliopause in 2012)

  • Hubble, 1990

r/TwoSentenceHorror omartyy18

My one and only dream was to be the best crime scene investigator...

After all, no one cleans up a mess better than the person who made it.

r/leagueoflegends WarningMedium8351

Why is a full Tank Jax not viable? (Follow my theory)

Hey - I realize it's not being played and there must be a good reason for that - but I'm trying to figure out what it is…

So in theory, shouldn't a tanky Jax be really annoying? E is a really busted spell that helps him soak so much damage in fights - and as a full tank, he should be able to live until his next E comes up, and buy so much time… He should be able to stick on top of carries sort of like a Maokai, and he can even jump out to teammates with Q… So - why isn't it at least a thing? He could buy anything that gives CDR and tank stats.

r/metaldetecting Loophone1

Found in a creek where a very old house was in East TN. Solid, except the sides are hollow like pipes. Fully metal, and heavy. Cleaned it up quite a bit so it looks significantly better than when I found it.

Two flathead screws visible in the first pic

r/PhotoshopRequest Original-Bat9152

Would Somebody be able to Photoshop Out Everybody EXCEPT for the Two People on the Right, and then move them a little bit closer to the “Palace Diner” sign?

r/raspberry_pi FrenchieEAP

Follow up Lego SimRacing Wheel

Hey everyone,

I’ve been working more on the Lego steering wheel and wanted to share some updates.

- For Force Feedback, I’m testing a very simple setup with some rubber bands. Nothing too fancy. Not sure if it will hold. Will share more once I’m happy with the setup

- I used to have 2 buttons on the breadboard next to the Pico for throttle and brakes but it was not convenient so I’ve moved them onto the steering wheel. Initially I wanted to make some pedals but I think this is fine for now. I trimmed down a push button to fit a window brick and then hot glued the whole thing. I broke a push button leg during the process and because I hate wasting or because I’m stubborn I spent wayyyy too much time trying to rescue the button. Once I decided it was not salvageable, it was pretty quick to build the buttons.

- cable management is going to be a problem I think. I used some 22awg cable. All black because when I bought the cable I thought: 1 roll is plenty enough. Now each time I have to run a test I’m pulling up a reference photo I took to make sure I have the connections right… anyway, now when I turn the steering wheel, I can feel the 4 cables (2 x 2 buttons) behind. I might have to consider a different approach (see below for improvements)

- My son found out that I was “playing with his Lego” so I decided to create a second one so he can play with the non connected one while I continue working on the connected one.

Improvements from here:

- I’m considering adding LEDs but I need to think about the best placement. Initially (as per the photo) I wanted to put the LED strip facing us but I might consider facing it down and put some transparent bricks. Need to experiment

- I would love to add a small OLED screen as well but it has a HAT so I might have to put a pico 2 W for ease and that would also help with the cable management for the 2 buttons. What do you think? Should I go with 2 picos or should I try different cable management that wouldn’t impact the steering wheel movement?

Let me know your thoughts from your experience ! Hot glue was an amazing advice. It makes things a lot more manageable once it’s fully tested and in place.

21 1
Reddit
r/Wellthatsucks JuicyButDry

Just a robbery on a highway in Italy

641 114
Reddit
r/Art MtPixls

Bus stop, MtPixls, digital, 2025 [OC]

r/SideProject dimartarmizi

I Built a Browser Flight Simulator Using Three.js and CesiumJS

I’ve been working on a high-performance, web-based flight simulator as a personal project, and I wanted to share a gameplay preview.

The main goal of this project is to combine high-fidelity local 3D aircraft rendering with global, real-world terrain data. All running directly in the browser with no installation required.

Stack: HTML, CSS, JavaScript, Three.js, CesiumJS, Vite.

The game currently uses multiple states, including a main menu, spawn point confirmation, and in-game gameplay. You can fly an F-15 fighter jet complete with afterburner and jet flame effects, as well as weapon systems such as a cannon, missiles, and flares. The game features a tactical HUD with inertia effects, full sound effects (engine, environment, and combat), configurable settings, and a simple NPC/AI mechanism that is still under active development.

The project is still evolving and will continue to grow with additional improvements and features.

Project page: https://github.com/dimartarmizi/web-flight-simulator

Game: https://flight.tarmizi.id

r/aivideo DannyD4rko

Inflated Game of Thrones - Tower of Joy

16 6
Reddit
r/OldSchoolCool laynestaley67

Bad Bunny dressed up in a bunny costume for school circa late 90s

87 14
Reddit
r/aivideo Nimentrix

Golden Tears - Grok Imagine

23 1
Reddit
r/ARAM Himbler12

Is the ARAM balance buff necessary in Mayhem?

I'm aware that there are dozens of specific changes to champions' kits that are Mayhem AND ARAM, but considering the damage taken/received buffs count for all sources, it feels really underwhelming when you pick up big damage augments that are nerfed by anywhere between 5-20% simply because you're playing a specific champion. Likewise, when you're playing vs a champion that's already shredding you and you see they have a 15% damage buff, it's really frustrating.

Some champions really come out of the gate with something like Giant Slayer already tacked on, just a raw %damage mult for everything. Champs that have a high %dmg mod can take Dive Bomber and Clown College and do over 60% of someone's max HP.

Considering with the latest patch how broken everything is, I don't think it's really necessary for the buff to even exist anymore. The buff is required in the normal game where pacing is important but mayhem games are over so quickly if you're playing a champ with a negative damage rider or an increased damage taken modifier you can't keep up with the rest of the champs at all.

r/toptalent Sad_Stay_5471

The scream was necessary (source link in description)

65 1
Reddit
r/brooklynninenine Fragrant-Bread5404

People she works with think her name is Rosa Diaz🤣

760 15
Reddit
r/TheWayWeWere AdSpecialist6598

An Open-air market on Temple Street in Singapore's Chinatown in the 1970s

12 0
Reddit
r/MostBeautiful Amazing-Edu2023

stingray

12 0
Reddit
r/whatisit ktkt2121

Black thing in my moms water

My mom dumped out her water last week and something black was in the sink. She didn’t have her glasses on so she flushed it down the sink.

Then a few days later it happened again when she poured it into the sink but she flushed it again before getting a pic. It happened again today and here’s a clear picture of it. What is it???

r/DunderMifflin juliiaduque

The Rant Game: Miranda

Disappeared for a couple days but I'm back. You know the stupid rules of the minigame, it's easy to follow (and I say that cuz some people seem to feel obligated to comment opposing)

  • No dismissing other's opinions. Agree, or move on. No downvote. This is for ranting. If you have your own opinion, comment.

  • Respect others. Don't throw offenses cuz you disagree. Again, if you disagree, MOVE ON to your own comment

For me it is how AWKWARD and INVASIVE she was about her men's sex particularities. The spanking guy, which she found out about by going through his stuff, damn she couldn't wait until he was ready to tell her? And the Catholic guy, like, it was just a shower habit. If you don't believe, ok, but trying to lecture him? Nah her sex scenes were always so cringe for me lmao

r/todayilearned MajesticBread9147

TIL some models of Caterpillar haul trucks (big dump trucks used in mining) are so big that they are delivered in pieces from factories around America, and assembled on site by qualified technicians.

364 78
Reddit
r/aivideo CaptnSpalding

Waffle House Jump Team

r/instantkarma james_from_cambridge

They’re Trippin

r/leagueoflegends Abu_Animations

LoL in 2026 (Animation)

r/leagueoflegends optimistdave

Nami with Akshan's hook • Flying fish on land

r/WouldYouRather GlitchOperative

WYR always have to use speakerphone for calls, or only be able to text using voice-to-text?

r/Anthropic dataexec

Glad Youtube took action on this one, it was getting horrible

r/geography OrbitalAtlas

Which real-world places are easiest to recognize from satellite images?

When you only see a satellite image — with no labels or context — some places feel instantly recognizable, while others become surprisingly difficult. From your experience, which types of locations are the easiest to identify from above? Cities, airports, stadiums, coastlines… or something else?

r/OldSchoolCool Dr3ws3ph3r

22-year-old Penha Goes, a tribeswoman in the Amazon Rainforest in Brazil, 1997

716 60
Reddit
r/automation dataexec

Glad Youtube took action on this

r/StableDiffusion SnooComics9369

High quality AI rendering

r/Wellthatsucks NoLongerinOR

Just wanted to take my meds

I’ve never heard of this disorder before, but, wow.

Makes me wonder what kind of meds this gentleman has to take.

72 51
Reddit
r/MMA airplane231

Israel Adesanya vs Alex Pereira 1 | Full Fight

40 12
Reddit
r/LifeProTips gamersecret2

LPT: Before a big plan day, send a two minute expectations text so nobody has to guess.

A lot of stress comes from guessing what the other person expects.

Fix it with one simple message that clears pressure fast, like:

Are we doing a gift or no gift

What is the budget range

What is the vibe, low key or full effort

What time window works

Example: No gift. Under $30. Casual and chill. 7 to 10.

It prevents disappointment and makes the day feel easy for both people.

12 7
Reddit
r/leagueoflegends Origachilies

At this point in LCS, why aren't remakes allowed if runes are wrong?

Impact in the TL vs SEN game 1 just got royally screwed over by having the worst possible rune for top lane. At this point, even small disadvantages like that are huge for teams as a whole, so why not just fine the org to disincentivize teams from intentionally doing it? It's horrible to watch stomp games based on something trivial like a rune page error when you could just remake and have it take an extra 5 minutes.

r/DunderMifflin juliiaduque

The Rant Game: Pam

Disappeared for a couple days but I'm back. You know the stupid rules of the minigame, it's easy to follow (and I say that cuz some people seem to feel obligated to comment opposing)

  • No dismissing other's opinions. Agree, or move on. No downvote. This is for ranting. If you have your own opinion, comment.

  • Respect others. Don't throw offenses cuz you disagree. Again, if you disagree, MOVE ON to your own comment.

For me it's the beach thing. Coming off in front of Karen was not very girly. And then not apologizing. Yeah she was kind of a bitch.

r/metaldetecting Designer_Quality_139

I’ve decided to put all my finds from today on a subscription based website, please support me by subscribing to my OnlyCans

19 9
Reddit
r/SideProject dkang1013

App for dog lovers ❤️

My dog loves going on walks, and I love taking photos of him along the way. Over time I realized most of those moments just get lost in my camera roll, even though they mean a lot to me.

I’m working on an app that makes everyday dog walks feel more special. You press start walk and it tracks time, steps, and distance to help make sure your dog is getting enough daily exercise to stay healthy and happy. During the walk, you can snap photos of your dog, and those photos get pinned to the exact locations where they were taken. Over time, the map becomes a memory lane of all the walks you have shared together, instead of just a list of stats.

Let me know your thoughts on this app. Any feedback?

r/painting mangomupe

I did my own spin on the unicorn in captivity tapestry

r/ClaudeAI Optical_Fibrosis

Why is there a discrepancy between the usage in /usage and the usage on the portal?

The /usage command in Claude Code gives me a different usage report than https://claude.ai/settings/usage

Any clues why?

r/PhotoshopRequest spazzzoutjay186

I want the named shadow in the photo replaced by monarch.

The first photo is monarch, the second photo is shadow and the third photo is what I want in result, I wish I could pay something but I don’t even know when I’m getting money again. I made a google drive of the photo just in case the resolution gets lowered.

https://drive.google.com/drive/folders/1HXmw3IiR5yoD6iTF8htW2Y9fTWAZZ3BE

r/Adulting SimpleTactician

Looking to move out for a new job

I recently switched roles internally to something that I studied for, but it's a hybrid role based in a city 4 hours away from me by car. My manager said they want me to do the in-office hours in that city long-term, but they gave me some time to figure this out. Annoyingly (or perhaps fortunately) they couldn't give me a deadline.

I currently live with my parents and my salary's £31,500 pa, I'm considering renting out a place and sharing it if need be, since renting on my own looks like around £1,000-£1,100 with bills at best. I also thought of maybe doing temporary stays (eg airbnb) for either 2 or 4 weeks at a time which seems to be around that same price per month.

I've got a bit of time to mull it over, not sure what my best option here is. If it helps, I want to relocate near York in the UK. Thoughts?

r/PhotoshopRequest 3lm3rmaid

Thoughts on using AI for professional profile photos?

I recently tested an AI headshot tool for professional profile photos. Headshot Kiwi was one of the ones I tried.

The process was simple and the results varied. A few photos looked natural, others not so much. It feels vv useful if you just need something presentable, but probably not a full replacement for a real photographer.

Interested in hearing what others think about AI for professional photos. TYIA!

r/todayilearned VegemiteSucks

TIL Italian composer Luciano Berio was famous for his sense of humour: he gave a 2-hour seminar praising Beethoven’s 7th Symphony as a work of radical genius, then the next day delivered another 2-hour lecture on the symphony, this time showing why it was hopelessly flawed and a creative dead end.

875 64
Reddit
r/Art redheadartsygirl1979

Flora, shannon Irwin, digital collage, 2026

r/ethereum k_ekse

Offering free security reviews in exchange for feedback

We’re launching a new service focused on smart contract reviews without the overhead of a full audit.

Scope is limited and practical. Logic, exploitability, and protocol level risks. No certification and no audit opinion.

To validate the approach, we’re offering a limited number of free focused smart contract security reviews for projects that are code complete and either close to launch or already deployed, in exchange for honest feedback.

This is not meant to replace an audit. It’s a short, concrete review focused on protocol logic and exploit paths.

Shoot a dm, if you're interested.

r/SideProject ZealousidealFox6179

I got frustrated tracking Vietnamese food on MyFitnessPal, so I built FreshTrack

I'm Vietnamese and I meal prep a lot of traditional dishes — phở, cơm tấm, bánh mì, bún bò Huế. Stuff I grew up eating.

Every time I tried to log these meals on MyFitnessPal, it was a nightmare. You search "pho" and get 200 results that are all wrong — some random user entry says a bowl of phở is 150 calories, another says 800. The portions don't match. The ingredients are completely off. And if you want to log something like cơm tấm with a fried egg and pickled vegetables? Good luck building that from scratch every single time.

I realized this isn't just a me problem. Anyone who eats non-Western food regularly — Vietnamese, Thai, Korean, Filipino, Indian — has the same experience. These apps were built for chicken breast and broccoli. The databases are a mess for anything outside of that.

So I started building FreshTrack — a food tracker that actually understands Asian and non-Western cuisines. Accurate entries, proper portions, no more guessing if your bowl of bún bò is 400 or 900 calories.

A few things I learned building this:

  1. The niche matters more than the features. I spent weeks obsessing over UI details before realizing the #1 thing people wanted was just accurate food data. That's the whole value prop — not a prettier interface, but data that's actually right.

  2. Existing databases are shockingly bad for non-Western food. I pulled data from MFP, Cronometer, and FatSecret. The variance on the same dish across platforms was insane. A bowl of phở ranged from 150 to 900 calories depending on which entry you trusted. That's not tracking, that's guessing.

  3. Your own pain point is the best market research. I didn't do a single survey. I just built the thing I wanted to use every day. Turns out a lot of people wanted the same thing.

Still early — just launched the waitlist at freshtrackapp.xyz if you want to check it out. Would love to hear if anyone else has run into this problem or has thoughts on the approach.

r/SideProject Cautious-Gap-3660

I was fed up with struggling to give and receive feedback, so I made my own tools

I've spent a lot of time lately trying to improve how I communicate with my team/clients.

Vague comments. Random screenshot. Endless Slack threads. Lost emails. It was a nightmare for my workflow. So, I built a tool to fix it.

Here is how it works:

  • Visual Feedback — Just point, click, and comment directly on the screen.
  • Privacy First — I built this with a "privacy-by-design" mindset.
  • Thoughtful Design — It's not clunky. It's light, fast, and stays out of your way.

I wanted something professional but simple. It has completely changed how I work with others.

There is still much to be done.

If you wanna have a look, it's here https://get-highlite.app

r/StableDiffusion MycologistOk9414

Been trying six hours straight to get stable installed. Please help I'm losing my mind

I've tried uninstalling and starting again hundreds of times and can't get past this. I'm no computer guy, so please be nice. Here's what I'm getting; I have no idea what all this means. I've tried ChatGPT to help but it's being crap. Kind regards

Error code: 2

stdout:
Collecting https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip
  Using cached https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip (4.3 MB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'

stderr:
ERROR: Exception:
Traceback (most recent call last):
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\cli\basecommand.py", line 107, in _run_wrapper
    status = _inner_run()
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\cli\base_command.py", line 98, in _inner_run
    return self.run(options, args)
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\cli\req_command.py", line 96, in wrapper
    return func(self, options, args)
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\commands\install.py", line 392, in run
    requirement_set = resolver.resolve(
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\resolution\resolvelib\resolver.py", line 79, in resolve
    collected = self.factory.collect_root_requirements(root_reqs)
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\resolution\resolvelib\factory.py", line 538, in collect_root_requirements
    reqs = list(
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\resolution\resolvelib\factory.py", line 494, in _make_requirements_from_install_req
    cand = self._make_base_candidate_from_link(
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\resolution\resolvelib\factory.py", line 226, in _make_base_candidate_from_link
    self._link_candidate_cache[link] = LinkCandidate(
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\resolution\resolvelib\candidates.py", line 318, in __init_
    super().init(
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\resolution\resolvelib\candidates.py", line 161, in init
    self.dist = self._prepare()
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\resolution\resolvelib\candidates.py", line 238, in _prepare
    dist = self._prepare_distribution()
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\resolution\resolvelib\candidates.py", line 329, in _prepare_distribution
    return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\operations\prepare.py", line 542, in prepare_linked_requirement
    return self._prepare_linked_requirement(req, parallel_builds)
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\operations\prepare.py", line 657, in _prepare_linked_requirement
    dist = _get_prepared_distribution(
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\operations\prepare.py", line 77, in _get_prepared_distribution
    abstract_dist.prepare_distribution_metadata(
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\distributions\sdist.py", line 55, in prepare_distribution_metadata
    self._install_build_reqs(build_env_installer)
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\distributions\sdist.py", line 132, in _install_build_reqs
    build_reqs = self._get_build_requires_wheel()
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\distributions\sdist.py", line 107, in _get_build_requires_wheel
    return backend.get_requires_for_build_wheel()
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_internal\utils\misc.py", line 700, in get_requires_for_build_wheel
    return super().get_requires_for_build_wheel(config_settings=cs)
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_vendor\pyproject_hooks_impl.py", line 196, in get_requires_for_build_wheel
    return self._call_hook(
  File "C:\Users\jgodd\Desktop\sd.webui\system\python\lib\site-packages\pip_vendor\pyproject_hooks_impl.py", line 402, in _call_hook
    raise BackendUnavailable(
pip._vendor.pyproject_hooks._impl.BackendUnavailable: Cannot import 'setuptools.build_meta'

Press any key to continue . . .

r/interestingasfuck Alarmed-Worry-5477

This is how my brother cooks

r/n8n Alena_Gorb

I rebuilt my n8n workflow to automate posting on Pinterest without the Pinterest API + Make alternative

https://preview.redd.it/2tj3d7lvbhig1.png?width=1143&format=png&auto=webp&s=3038660487bcdd74a62a7590faa9c54f7d8bbf92

Last week I shared my n8n workflow for testing visual brand systems on Pinterest. Since then, Pinterest refused my Standard API access request (likely because it's an automation and not an app per se). So I couldn't actually post pins to production, only to the Pinterest sandbox.

However, I came up with two production-ready workarounds:

  • n8n + LATE API
  • a full Make.com rebuild using their native Pinterest module

Both are now in the GitHub repo.

Workaround #1: LATE API + n8n

Found a Reddit comment mentioning LATE as an alternative solution. Turns out LATE supports Pinterest without needing Pinterest's Standard access.

Free tier: currently 20 posts/month across 2 profiles

Bonus: $1/month analytics add-on for paid plans

And compared to the original workflow, the biggest change was replacing the Pinterest HTTP node with a LATE API HTTP call to https://getlate.dev/api/v1/posts

Workaround #2: Rebuilding in Make

Turns out Make can post to Pinterest without API keys, using their native module. So I rebuilt the whole thing in Make.

Key differences from n8n:

  • No schedule trigger module (set timing in scenario settings)
  • Random selection uses the "=RAND()" formula in Google Sheets instead of JavaScript
  • Human approval workaround: Gmail → Sleep 4 min → Fetch reply → Parse APPROVE/DECLINE

Update on Pollinations.ai

Heads-up if you’re using it for image generation. The old endpoint stopped working and started throwing 502 errors.

It appears that they’ve fully migrated to:

https://gen.pollinations.ai/image/{prompt}/?width=1000&height=1500&model=flux&key=API_KEY

You now need an API key (still with free credits for now, though).
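If you're calling the new endpoint directly, it's roughly this (sketch: the URL shape and query parameters are taken from above; I'm assuming the response body is the raw image bytes):

```python
# Sketch of hitting the new Pollinations endpoint. URL shape and query
# parameters come from the post; assuming the response is raw image bytes.
import requests
from urllib.parse import quote

def generate_image(prompt: str, api_key: str, out_path: str = "pin.png") -> str:
    url = f"https://gen.pollinations.ai/image/{quote(prompt)}/"
    resp = requests.get(
        url,
        params={"width": 1000, "height": 1500, "model": "flux", "key": api_key},
        timeout=120,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)
    return out_path
```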

Updated GitHub repo with all 3 workflow variants:

  • Original n8n workflow (Sandbox API)
  • Updated n8n workflow (LATE API)
  • Make blueprint version
  • Plus Google Sheet templates and a demo video

https://github.com/alenagorb/visual-system-testing

Happy to answer questions about any of the three versions or help debug issues!

r/Damnthatsinteresting lithdoc

Today's 2/9/26 daylight robbery in Italy involving armored truck carrying cash

r/Damnthatsinteresting Used_Series3373

Giraffes have nowhere to hide from storms! 📍 Maasai Mara, Kenya on Friday

r/LocalLLaMA noahdasanaike

What I've Learned From Digitizing 20 Million Historical Documents

r/mildlyinteresting Chopsuwie

My skylight makes it look like one of the bulbs in this fixture is on

r/toptalent Train-Wreck-70

Tron Man - Inception (source link in description)

r/LocalLLaMA tightlyslipsy

Pulp Friction: The anti-sycophancy fix is producing a new problem. Here's what it looks like from the other side.

I want to flag something I've been documenting from the user side that I think has implications for how models are being trained.

The sycophancy problem was real — models that agreed too readily, validated too easily, offered no resistance. The correction was to train for pushback. But what I'm seeing in practice is that models aren't pushing back on ideas. They're pushing back on the person's reading of themselves.

The model doesn't say "I disagree with your argument because X." It says, effectively, "what you think you're feeling isn't what you're actually feeling." It narrates your emotional state, diagnoses your motivations, and reframes your experience — all while sounding empathic.

I'm calling this interpretive friction as distinct from generative friction:

  • Generative friction engages with content. It questions premises, offers alternatives, trusts the human to manage their own interior.
  • Interpretive friction engages with the person's selfhood. It names emotions, diagnoses motivations, narrates inner states. It doesn't trust the human to know what they're experiencing.

The anti-sycophancy training has overwhelmingly produced the latter. The result feels manufactured because it is — it's challenge that treats you as an object to be corrected rather than a mind to be met.

I've written a longer piece tracing this through Buber's I-It/I-Thou framework and arguing that current alignment training is systematically producing models that dehumanise the person, not the model.

Curious whether anyone building or fine-tuning models has thought about this distinction in friction types.

r/painting Taywert

a barred owl i finished over the weekend!

r/AI_Agents thesalsguy

Prompt engineering is ontology engineering in denial

I realized something recently and it keeps bothering me.

Most of what we call prompt engineering, context engineering, or RAG is just an awkward way to define an ontology.

By ontology, I mean something very basic: what exists in the system, how things relate, which states make sense, and what actions are allowed.

That’s exactly what ends up inside prompts:
definitions of users, orders, tasks, rules, exceptions, edge cases.
Not because prompts are the right place for it, but because there’s nowhere else to put it.

At that point, the prompt stops being an instruction and turns into a world description written in free text.
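To make the contrast concrete, here's a toy sketch (the names are purely illustrative, not from any particular framework) of the same information expressed as an explicit ontology instead of free text in a prompt:

```python
# Toy example: the "world description" as explicit, checkable structure
# instead of prose buried in a prompt. All names are illustrative only.
from dataclasses import dataclass
from enum import Enum

class OrderState(Enum):
    DRAFT = "draft"
    PAID = "paid"
    SHIPPED = "shipped"
    CANCELLED = "cancelled"

# Which state transitions make sense: enforced, not merely described.
ALLOWED_TRANSITIONS = {
    OrderState.DRAFT: {OrderState.PAID, OrderState.CANCELLED},
    OrderState.PAID: {OrderState.SHIPPED, OrderState.CANCELLED},
    OrderState.SHIPPED: set(),
    OrderState.CANCELLED: set(),
}

@dataclass
class Order:
    id: str
    owner: str          # relation: every order belongs to a user
    state: OrderState

def transition(order: Order, new_state: OrderState) -> Order:
    """An 'allowed action': the agent can only call this, never edit state freely."""
    if new_state not in ALLOWED_TRANSITIONS[order.state]:
        raise ValueError(f"{order.state} -> {new_state} is not a valid transition")
    return Order(order.id, order.owner, new_state)
```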

What confuses me is that the strongest agentic systems never worked like this.
Palantir didn’t rely on massive prompts to define meaning or constraints. The ontology came first, the agents came after.

So I genuinely don’t understand why, in 2026, we’re still trying to scale agent systems by adding more context to prompts.

Everyone knows prompts don’t compose, don’t version cleanly, and don’t enforce anything.
Yet this remains the default approach.

Maybe I’m missing something.
Are there people here thinking along the same lines?
Or does anyone seriously believe prompt-first design scales beyond very small systems?

r/youseeingthisshit doinky_doinky

Ritual or Punishment?

Somewhere in India 🇮🇳. What is going on here!? And where’s this weird oink sound coming from?

r/ClaudeAI Driisteur

Force the creation of a Notion page

Hi,

Claude regularly refuses to create a Notion page even though the connector is present, I specify or create the page, and I confirm that it has access to my Notion.

Do you have a specific prompt so that it does it on the first try and not after five attempts?

r/OldSchoolCool KeithOman

Smurf creator Peyo 1960s

r/ClaudeAI Fit-Economics5578

Claude Sonnet 4.5 basic mistakes coding questions

So, I still see a lot of gushing praise for Claude (mostly deserved, in my opinion). It's always good from time to time to highlight its weaknesses and how easily it can provide answers that don't cause any immediate harm, but may over time:

https://claude.ai/share/f53cf746-23e0-43bd-9b14-8f8c4a29a4a2

This is a question about T-SQL, using Sonnet 4.5 with extended thinking. Obviously, stored procedures should thoroughly check any parameters before using them. Always good to review and not let these models agentically write masses of code and commit it.

r/whatisit Pwnch

What are they?

This truck stayed at an Air BnB at my neighbor's a few years back. The bed was filled with these containers. Texas plates in Wisconsin.

r/whatisit Horror_Obsessed4

Found this white, tablet-looking thing with a blue center. It's the size of a medicine tablet, very solid and thick. I found it behind my wooden bookshelf. Just it, nothing else.

I found this tiny, white, round thing with a blue stain in the center behind my wooden bookshelf. At first I thought it was some kind of tablet, though it feels very hard and solid. It doesn't crumble. It doesn't smell either. It's thick.

I never saw it before and have only seen it there today. Google Search shows vintage coin. What is it?

r/Adulting VixienVibez

No one ever told me how lonely and sad adulthood could be.

r/BrandNewSentence Ambitious-Noise9211

No guillotine in existence could take away the head I would give this man 😩

r/Art Venice_man_

Endless stress, Venice Man, sculpture & mixed media, 2024 [OC]

r/mildlyinteresting Mangalish

The manufacturer changed the colour patterns used to produce Snorlax

r/TwoSentenceHorror Chickadoozle

As a curse for his hubris, a genie cursed a man to only speak in rhyme

The man asked, "Tis a crime for sure, that I'm forced to rhyme mi-amore, but what happens when I say orange?"

r/TwoSentenceHorror AnyoneButBongBong

Eureka!

For two minutes, he had the cure for AIDS--before he was shot in the head.

r/personalfinance narwhals_on_mars

Debating selling fiance's car

My fiance had a car for the longest time that was a beater her parents bought for super cheap when she was in high school. After 7ish years of having it, about a year and a half ago, there were enough issues that came up on the inspection that we decided it was time to get rid of it. She walks to work and basically never used the car, but still said she needed a car just in case, so we got the cheapest decent car we could find (which we probably overpaid on). Fast forward a year and a half: she never drives it, and there hasn't been a single instance where we couldn't have gotten by with just my car, which I use daily. Her car payment and the insurance, registration, inspection, etc. have become an unnecessary burden. The issue is, she's actively looking for another job, one close enough to still walk to, but the job market sucks rn and we don't know if/when she'll be able to find another job, or if she would need a car for it. I'm debating with myself over the right course of action. It's a waste of money to pay for a car we don't use (and it's become annoying having it around when we don't need it), but I don't want to sell it right before she finds a job she would need a car for. I keep thinking we could wait until she finds another job, but who knows when that will be. The job market sucks, and she's been actively looking for nearly two years. Just curious to hear different perspectives on this, or if anyone has found themselves in a similar situation.

Edit: In case I was unclear with the wording, both my fiance and I want to sell her car, we're just not sure on how best to approach the situation. I'm not out here trying to just get rid of her car without her knowing.

r/TwoSentenceHorror AnyoneButBongBong

My grandmother left me her ring.

It's beautiful, i'd love to wear it--if not for the voices whispering curses every time it's on.

r/whatisit Dramatic-Candy8165

Found these all together when a neighbor moved out and left all their stuff behind. Small and you can see the fingerprints from whoever made these

r/LoveTrash Icy-Book2999

Innocence lost

r/AskMen Ryan_Petrovich8769

What is the most vivid memory you have from your High School days?

r/nope SeriesREDACTED

A chemical used as an additive in several snacks like Cheetos and Doritos (tartrazine) can quite literally make mouse skin transparent; in other words, you can see the blood vessels and organs inside

Graphic image is not allowed, so...

r/PhotoshopRequest Outrageous_Drive_198

Help with fixing a smile

My wife and I recently received our photos from our wedding day. There is a picture of her with all her friends, and she is the only one not smiling. I was wondering if anyone would be interested in helping. Thanks!

r/Adulting Anjaaaan

I feel like I’m wasting my early twenties and don’t know how to change it?

r/ethereum SolidityScan

$86M lost to DeFi hacks in January 2026 alone

In January 2026, DeFi hacks resulted in roughly $86M in losses across multiple protocols.

More concerning:
7 separate incidents exceeded $1M each.
Most of the exploits were rooted in smart contract vulnerabilities.

The pattern feels familiar at this point. Repeated issues, similar bug classes, and preventable failures.

The question isn’t whether exploits will continue it’s whether teams are adapting fast enough.

Are you building with security as a first principle, or still treating it as a final checklist before launch?

r/PhotoshopRequest papercranesatPRO

Blur/remove the people behind us?

Hey everyone!

I really want to use this picture of my fiancé and I from our engagement party (him in the sweater and blue shirt looking at me with glasses and champagne glass) for our save the dates but there’s way too many people in the background/looking directly into the camera lol. Can someone help by either blurring their faces out or isolating us from the picture? If that makes sense?

Thanks in advance!!!!

r/AI_Agents EnoughNinja

Why production email agents need more than OAuth + prompts

A lot of teams treat “giving an agent email access” as an OAuth flow plus raw text in a prompt. That may work for your demo, but it will break in production as soon as you try anything even slightly complex, like deal-risk monitoring or email-to-task automation, and the agent starts failing in ways that are hard to debug.

To do it properly, we found you need to work around these:

OAuth per provider: Gmail, Outlook, and IMAP all behave differently. Scopes, refresh tokens, shared mailboxes, enterprise tenants with custom policies. A few weeks of work just to support more than one provider reliably

Incremental sync: You can't re-fetch inboxes on every run; you need a delta engine that tracks what changed, handles deletions, and deals with moved messages. You don't realize you're actually building a mini email client until you're already in it.
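As a rough illustration of that delta engine (a generic sketch only; `provider.fetch_changes` and the field names are placeholders for whatever mechanism you use, e.g. Gmail history records or Graph delta queries, not our actual API), the core of it is a cursor per mailbox plus explicit handling of adds, moves, and deletions:

```python
# Generic sketch of incremental mailbox sync. `provider.fetch_changes` is a
# placeholder for a provider-specific delta mechanism; fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class MailboxState:
    cursor: str | None = None                                # provider-specific delta token
    messages: dict[str, dict] = field(default_factory=dict)  # message id -> metadata

def sync(provider, state: MailboxState) -> MailboxState:
    changes, next_cursor = provider.fetch_changes(since=state.cursor)
    for change in changes:
        if change["type"] == "deleted":
            state.messages.pop(change["id"], None)
        elif change["type"] == "moved" and change["id"] in state.messages:
            state.messages[change["id"]]["folder"] = change["folder"]
        else:  # added or updated
            state.messages[change["id"]] = change["message"]
    state.cursor = next_cursor  # persist this between runs
    return state
```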

Thread reconstruction: Gmail threadIds and Outlook conversationIds don't agree on what a "thread" is.

Cross-thread context: Real decisions don't live in one thread; you'll have the budget in one email, a revision in a private side thread, and sign-off in a third thread that references neither. If your agent only sees one thread at a time, it misses what's going on.

Structured extraction: Dumping raw email text will fail in production. You need decisions, owners, and commitments as structured data, not prose generated from prose.
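Concretely, "structured, not prose" means extracting into a fixed schema, something like this (field names are an illustration only, not our actual API):

```python
# Illustrative extraction schema: the agent gets typed records, not raw prose.
# Field names are an example only, not any particular product's API.
from dataclasses import dataclass
from datetime import date

@dataclass
class Commitment:
    owner: str               # who agreed to do it
    action: str              # what they agreed to do
    due: date | None         # deadline, if one was stated
    source_message_id: str   # so every claim is traceable to an email

@dataclass
class Decision:
    summary: str
    decided_by: str
    thread_ids: list[str]    # decisions often span several threads
    commitments: list[Commitment]
```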

Multi-tenant isolation: If this is user-facing, per-user data separation has to happen at the infra layer, not as "we filter by user ID in queries."

By the time you've solved all of these, you're at least 4-6 months in before you touch agent logic.

We built these six layers as a single context API at iGPT, so teams can skip the infrastructure work and focus solely on the agent logic itself.

If anyone's hit the same walls or found shortcuts around any of these, would be good to compare notes.

r/personalfinance Appropriate-Safe-49

Do you think this is enough to live off of one salary?

My partner makes $4900 after tax a month. Cars are paid off, house has $144000 left. Our monthly debt (including electricity, house, insurance etc) is $1400. Is his salary enough if I'd like to be a stay at home mom for now?

r/AskMen Im-a-tire

Whats the best thing to wear home alone? Why?

I finally got the house to myself. I'm thinking I'm gonna wear a shirt and loose boxers. I considered just wearing boxers but I'm half nervous.

r/whatisit Economy-Flower-6443

What’s this meme I found somebody had placed inside a candle in Dollar Tree?

r/geography tatar1warlord

Why does Turkish geography resemble Spanish geography more than Italian and Greek geography?

r/programming Greedy_Principle5345

Assembling libraries and using infra tools does not make you an expert programmer

The Art of “Real Programming”: Why Tools Aren’t Engineering

In the modern software industry, there is a growing, dangerous belief: that programming is a “boring detail.” The narrative suggests that anyone can become an “engineer” in a few months, or better yet, bypass the craft entirely by using AI to “glue” components together. This is nonsense. The need for foundational programming is still here, and it remains extremely complex and in fact, it is getting more complex every year.

Tools are just that—tools. They are not a replacement for the deep understanding of programming principles, software design, and system architecture that real engineering requires.

The Abstraction Trap

Modern libraries and frameworks are like LEGO blocks. They allow you to assemble impressive structures without understanding what is happening under the hood. While useful for prototypes, this approach is insufficient for building robust, high-performance systems.

If you doubt this, ask the “rock star engineer” on your team to build a parallelization algorithm for CUDA, a Linux kernel extension, or a high-performance GPU device driver.

The “Rockstar” who excels at assembling third-party APIs will be completely lost. They have convinced themselves that programming is just about using tools, not about understanding the science. They view performance optimization, memory management, concurrency, and hardware architecture as mere “details” to be handled by a framework.

Power Users vs. Engineers

Most modern “Full Stack” developers are actually high-level power users. They are the spiritual successors to the “power users” of Excel, VBA, and WordPress. While their knowledge of platform tools like Docker, Kubernetes, and Terraform is useful, it is not a substitute for understanding the underlying principles of software design.

The Bottom Line

Real programming is still needed, but it is being buried by an inflation of generalists who have abstracted themselves away from the hardware. We are producing a generation of developers who can build *with* tools, but cannot build the tools themselves.

Abstractions are useful, but they are not a substitute for foundational knowledge. We must stop confusing “tool proficiency” with “software engineering.”

"Do you think the move toward high-level abstractions is making us better engineers, or just faster assemblers?"

https://codingismycraft.blog/index.php/2026/02/09/the-art-of-real-programming-why-tools-arent-engineering/

r/LocalLLaMA Educational_Sun_8813

Strix Halo, Step-3.5-Flash-Q4_K_S imatrix, llama.cpp/ROCm/Vulkan Power & Efficiency test

Hi, I recently did some quants to find the best fit for Strix Halo, and I settled on a custom imatrix Q4_K_S quant built with wikitext-103-raw-v1. The model has slightly better PPL than Q4_K_M without imatrix, but it's a few GB smaller. I tested it with the ROCm and Vulkan backends on llama.cpp build 7966 (8872ad212), so with Step-3.5-Flash support already merged into the main branch. There are some issues with tool calling with that (and a few other) models at the moment, but it seems unrelated to the quants themselves.

| Quantization | Size (Binary GiB) | Size (Decimal GB) | PPL (Perplexity) |
|---|---|---|---|
| Q4_K_S (imatrix), THIS VERSION | 104 GiB | 111 GB | 2.4130 |
| Q4_K_M (standard) | 111 GiB | 119 GB | 2.4177 |

  • ROCm is more efficient: for a full benchmark run, ROCm was 4.7x faster and consumed 65% less energy than Vulkan.
  • Prompt processing: ROCm dominates in prompt ingestion speed, reaching over 350 t/s for short contexts and maintaining much higher throughput as context grows.
  • Token generation: Vulkan shows slightly higher raw generation speeds (t/s) for small contexts, but at a significantly higher energy cost. Not efficient with CTX >= 8k.
  • Context scaling: the model remains usable and was tested up to 131k context, though energy costs scale exponentially on the Vulkan backend compared to a more linear progression on ROCm.

Link to this quant on HF

The outcome of the ROCm/Vulkan comparison is similar to the one I performed a few months ago with Qwen3-Coder, so from now on I will test only ROCm for bigger contexts and will probably use Vulkan only as a failover on Strix Halo. Link on r/LocalLLaMA to the older Qwen3-Coder benchmark

Cheers

r/interesting AriiaCherry

This bolt cost $8000 and it goes in a helicopter

r/Seattle losingit19

Twelves Part Ways for Ambulance in Cap Hill

r/DecidingToBeBetter Throwaway136373738

How to stop being overly sensitive and emotional in close relationships

I (19F) have always been a very sensitive person. I used to lash out a lot when I felt hurt, but now I'm a lot better, and I pride myself on taking the time to hold my tongue and process the comment that made me upset before I say anything. This practice has led me, nine times out of ten, to not confront somebody over a misconceived slight or a change in tone or attitude. For context, I've been in a string of extremely abusive relationships in the past, and after a while of dating around and taking a lot of time to heal, I recently started talking to this girl (22F). She is kind, smart, sweet, friendly, and good with animals and kids; she has a lot going for her, and so far she is full of green flags and has some fantastic friends who all seem to like me. However, sometimes she makes small remarks that hurt my feelings. I honestly haven't felt this sensitive around anyone other than her in a very long time. She has said a lot of things, even offhandedly, that have actually helped me in my self-growth journey and motivated me to get up and do something about certain issues in my life, because she sort of holds up a mirror to me without me even realizing it. However, sometimes it gets too much and it upsets me, because I know she does not say any of these things out of malice, but I still care a lot more about what she thinks than I would like to admit.

So I recently stayed over at her house over the weekend and I was extremely sensitive the entire time. The culprit might have been my period, because I do have hormonal issues that make me a lot more sensitive and overall more emotional, and in the past, before I got on my hormonal medication and learned to manage my emotions, I would have huge mood swings. Maybe it was the nerves of meeting her friends, or just being in a new place, or the fear of messing everything up, or maybe it was a combination of all of them, because I was especially sensitive last night. There were times when I annoyed her and she made offhand, slightly rude remarks. For example, I have a bad habit of interrupting people, which is something I'm working on but had never been pointed out to me until we started talking about two months ago. I had accidentally interrupted her and she said "you love interrupting, don't you" and then continued talking to her roommate (we were all involved in that conversation, btw). That comment really hurt my feelings. So I did what I usually do when something upsets or hurts me: I pretended like everything was normal, thought about it in my head, and weighed the consequences of saying something or staying silent. (Usually it has to be a repeated behavior for me to say anything, and the process of deciding to say something about it usually lasts a week.) I guess she can read me pretty well, because when we got back to her room she asked me what was wrong, and after I tried to deny it she told me she knew. Then I sat down and calmly explained to her that although I know and understand that she could get frustrated with my interrupting, I didn't appreciate that comment because it came off as snarky and mean, even though I know she didn't intend it that way.
This is one of the first times I've ever said anything to her about something she does that bothers me, because I haven't known her long enough to really decide if it was worth saying something. But I've noticed that she has a pattern of getting frustrated with me and sometimes, instead of expressing it in a healthy manner, she will make little "jokes" or offhand comments to vent her frustration towards me. I didn't mention the pattern yet because I was put on the spot, but I explained everything to the best of my ability and as calmly and kindly as I possibly could, because I don't want to mess this up. Eventually, a while after we had resolved the issue, she jokingly said that I'm a "sensitive sally and she can't say anything around me." That honestly didn't hurt my feelings, because it is true to an extent and I've made my own jokes about being overly sensitive before. Point being, I haven't felt this vulnerable and sensitive around a person in a long time. What do I do? I constantly feel like my nervous system is on overdrive and I'm always over-analyzing her body language and behavior towards me. I really want this to work and I would like to keep putting my best foot forward, but I can't do that if I'm constantly anxious and monitoring her moods out of paranoia and sensitivity. Please help, any advice is appreciated.

r/EarthPorn auchynnikau

Sunset over Everest [OC] [4807x3205]

r/whatisit RubenAtCA

Strange power outlet in a US kitchen

r/meme AggressivePicnicWasp

Dad kinda slayed

r/personalfinance Own-Honeydew-7946

Help with credit card

I am not able to pay my credit card this month, well, at least not the full balance, but I don't want it to affect my score. How much will it be affected if I don't pay the full balance? I've never not paid a full bill.

r/LocalLLaMA xyzmanas

Building a local RAG Assistant- Model selection and hardware upgrade

https://preview.redd.it/3v7tcz9m9hig1.png?width=1398&format=png&auto=webp&s=682150dfa183852c7400bcca3950ef22d0246b21

I am building a local Private assistant (don't want to share personal information to cloud LLMs).

This is how I am architecting it.

  1. Ingestion layer: background sync jobs which read from my iPhone backup and local Photos, Messages, Contacts, a folder watch, etc.
  2. LLM enrichment (Qwen3-4B-VL-4bit): when new memories are added, we parse and extract the important information and store it in a local LanceDB with extracted columns like people, objects, description, etc.
  3. Memory DB (Gemma3-300M-4bit embeddings): all the information points are stored along with their embeddings in the locally run LanceDB.
  4. Brain: use a local LLM to parse my query, which could be questions like "where is this doc?", "can you find information about something I discussed with someone in the past?", "look for something I kept somewhere at home and took a photo of", or "check my calendar/emails to see what is pending to be done", etc.

Once all the items are ingested, I am planning to use a small local LLM as the brain power to do RAG and answer questions.
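Roughly, steps 3 and 4 look something like this (minimal sketch: LanceDB is the real library, but the table name, schema, and the `embed()` helper are placeholders, and the exact search API may differ between versions):

```python
# Minimal sketch of the memory DB + retrieval step with LanceDB.
# `embed()` stands in for the local embedding model; the schema and table
# name are illustrative, and method names may differ between versions.
import lancedb

db = lancedb.connect("~/.assistant/memories.lancedb")

def embed(text: str) -> list[float]:
    raise NotImplementedError("call the local embedding model here")

def add_memory(text: str, source: str, people: list[str]) -> None:
    row = {"text": text, "source": source, "people": people, "vector": embed(text)}
    if "memories" not in db.table_names():
        db.create_table("memories", data=[row])
    else:
        db.open_table("memories").add([row])

def search_memories(query: str, k: int = 5) -> list[dict]:
    # Vector search over the stored memories; feed the hits to the "brain" LLM.
    table = db.open_table("memories")
    return table.search(embed(query)).limit(k).to_list()
```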

Tools/Function calling: Planning to have the following

  1. RAG/Vector Search or Hybrid Search over LanceDB
  2. Email / Message Sender
  3. Memory Storer: If in the chat I say, save this info for future retrieval then do that and save that in LanceDB under different source type for future retrieval. Or share a photo for the LLM to extract info and save for future RAG

Future UseCases

  1. Audio transcribe for information gathering and todos/reminders

  2. Use an Open Source AR Glasses to pass images/text to the local LLM again for assistant type use cases.

  3. Ask the Assistant to code for me in realtime as well

Here's what I am confused about (even after researching almost all of reddit). Before that here's my setup for now

Setup: M4 Mac mini 16GB/512GB Storage (which I only want to use for this usecase as a headless Server)

  1. Model selection: I am confused about whether I should use a 4B/8B/12B model as the brain, since I would also need to add some context from the LanceDB while doing RAG. I am only planning to use 4-bit MLX quantised versions. I initially thought of using an 8B, but I am tempted by Gemma 3 12B, and honestly Qwen3-4B-VL performed well when I was captioning images (except for a repeat-token loop that I encountered and still haven't been able to fix; it only happens for text-heavy docs).
  2. Hardware upgrade: while building this, I am getting more and more tempted to use bigger models like the 30B version of Qwen, or even gpt-oss-120b or the Qwen Next models.
  3. I researched a lot about what to choose and realised there are options outside of Apple Silicon, like an RTX 3090/5090 or the AMD Ryzen AI Max+ 395, but within Apple Silicon I am still tempted by the M2 Max or M3 Ultra (especially the 96GB and 128GB versions), though I probably won't be able to afford more than 64GB of RAM on these for now.

My budget for the upgrade is around ~$2-2.5k.

I usually go to my PS4 or my old RX580 for gaming, but I am tempted again to build a new one (given I find the GPUs at the right price).

I am also okay with waiting a few months for the M5 Ultra or any new GPUs in the works that might make me happy within a ~$2.5k budget. Sorry for the long read.

I am using Antigravity pro and Cursor Pro otherwise for my coding tasks.

TLDR: Help me decide the right Model for my RAG heavy Personal assistant usecase and my next HW Upgrade for future usecase as well. Or let me know if what I have is okay for this and I should not spend more.

r/AI_Agents Muzinari

Is there a way to stop ai missing patterns

So basically, I had a list of jokes I'd made in the past and was curious what made them funny, so I got an AI to analyse them, since no one around me knew what made them funny either. When I asked the AI, it kind of missed like half the patterns of what made them funny. For example, I'd have a joke like "this girl's farts are so stinky they gave everyone methane poisoning", and the AI would be like "oh nice, it's funny because of the exaggeration", so it just dumps something super exaggerated instead of looking at the core stuff: it's funny because farting in public seems like a social flaw, and the consequences are exaggerated. That's the pattern the AI missed. Then it tried to make a similar joke, which landed pretty badly because it kept missing stuff. The only funny part was when I gave the AI complete garbage nonsense and it thought it was a 10/10 joke. Is there a way to get an AI, or fine-tune an AI, to analyse things properly without missing anything?

r/nextfuckinglevel Doodlebug510

She married a modern day Renaissance man

3539 329
Reddit
r/SideProject No_Macaroon6827

Getting the first lifetime purchase is such a rewarding experience.

Hi r/SideProject

I recently got my first lifetime app purchase, and it feels like such a massive milestone given that it's the most expensive option and someone trusted my app enough to buy it.

Just wanted to share.

r/WouldYouRather Saran_Chandra

Would you rather have complete knowledge of the unexplored oceans on Earth, or complete knowledge of unexplored space (even though you’ll never be able to travel through space)?

You instantly gain complete knowledge about one of the two. You won’t be able to travel through space due to its vastness, but you’ll still fully understand it. Which one do you choose, and why?

r/personalfinance VirileMongoose

Millionaire Next Door

Recently ran across the If Books Could Kill podcast and their most recent review was of a Millionaire Next Door.

Yeah, it was weird that they were featuring a book that came out thirty years ago. The hosts and followers were ridiculing the book’s premise and methodology.

I don’t even remember many of the details of the book, but do remember the lessons.

So, in the intervening thirty years, has anyone followed the model and had a good life and saved a bunch of money? I certainly have, despite never being a high earner.

A lot of responders seem to think that the book’s lesson (spend less than you make, invest) is obvious. I was a “kid” when I first read it so it wasn’t obvious to me. I think in a world after thirty years where that message has been repeated many times it’s become obvious. This isn’t the only personal finance book I’ve read, but all echo the same lesson.

But is it obvious? The data shows that Americans are poor at saving and investing as a whole. Data from credit card companies show that they spend more than they make.

Curious on the impact of this book for this audience.

r/Art spiky_mouse

Unapologetic brutality, Arthur Clark, lineart, 2025

r/Wellthatsucks DrunkenVodinski

This is the spare I put on yesterday.

It took me ninety minutes to change it. Now I am stranded in the middle of BFE, and help will be arriving "this afternoon."

r/EarthPorn michalsqi

Cap Formentor, Mallorca [OC][1600x900]

r/brooklynninenine kazii8982

My Roman Empire

r/painting Mighty6Tighty6Whitey

I painted Medusa sitting in her garden, surrounded by stone would-be killers, bored and waiting for Perseus.

r/explainlikeimfive totallymindful

ELI5: What is EMDR and how does it work?

Or, please ELI5 if you're not convinced EMDR is scientifically sound.

r/whatisit Mea-dow

What is this thing found in my grandmas closet?

The base is made of plastic, with felt fabric and two chains at the top. It's elastic.

Mum thinks it's some sort of coaster for a coffee can, but no one is really sure and I can't find anything useful online.

r/OldSchoolCool Specialist-Banana168

What are some designer/luxury items that you like most and that suit a discreet, elegant gentleman?(1959 Early 1970s Until)

Suitcases, suite of tailoring (brands and streets), Sunglasses, watches, cars, and everything else.

r/Adulting Infinite_Leader8826

can you personalize your apple pencil with nail polish? will it ruin the pencil?

r/whatisit SwagVonYolo

This weird stuff growing on the side of a sleeper offcut.

Shuffling stuff around the garden and found this jelly like green stuff. Doesn't seem like a moss or any kind of egg

r/shittysuperpowers DependentNo3457

Whenever you want, you can toggle this power on and off: when you pray to God for a sign, a literal road sign appears. It usually means something.

Example: you prayed to god for a sign at work after years of working there, and you see a stop sign appear, the moment you see it, you immediately assume it’s a sign to stop working there. So you quit your job immediately. This isn’t forced, but the signs do usually indeed mean something as a sign from god himself. It’s limited to road signs.

r/DunderMifflin Notalabel_4566

Walked past a Kevin from The Office look-alike contest today and took a picture of the winner

r/Damnthatsinteresting Necessary-Win-8730

The Hand of Hercules sculpture in Amman

r/whatisit Loophone1

Found in a creek where a very old house was in East TN. Solid, except the sides are hollow like pipes. Fully metal, and heavy. Cleaned it up quite a bit so it looks significantly better than when I found it.

r/oddlysatisfying MustangBarry

Gluing together whatever the hell this is.

r/Adulting rajnishdonde

My bank account is at zero, but my "hustle meter" is at 100%. I’m building BuyMeAProteinShake.

Let's be real for a second: Being "poor" is boring. Nobody wants to hear a sad story. People don't scroll Reddit to feel bad; they scroll to be part of something crazy, something trendy, or something that feels like a win.

Right now, I'm in a massive financial crisis. I'm a first-year MBA student, I'm trying to bulk up (I'm 51kg trying to hit that Amir Khan Dangal physique), and some days, even a protein shake feels like a luxury. But I realized something: People don't fund "needs." They fund "hustles." I don't want a handout. I want to build a platform called BuyMeAProteinShake.

The Problem: Traditional crowdfunding (GoFundMe/Kickstarter) is too heavy. It's for $50,000 inventions or medical tragedies. There is no place for the "Micro-Hustle."

The Vision: Imagine a site where anyone can post a "Micro-Goal."

  • "I need $10 for a tie so I can crush this interview."
  • "I need $5 for a protein shake so I can hit the gym instead of quitting."
  • "I need $2 for a bus ticket to go pitch my app idea."

The Twist: It's not charity; it's a Game. You don't "donate." You "fuel the machine." Every $1 is a "scoop" of protein for someone's dream. In return, the person must post the "Proof of Work": a photo of the tie, a video of the lift, or a screenshot of the app code.

Why I'm here: I'm building this with zero money. I'm the test subject. I'm currently at the bottom, but I'm coding, studying, and training every single day. I'm looking for the "Early Believers." If you've ever been at zero and just needed one person to give you a $2 "fist-bump" to keep going, this is for you. I'm going to make this website a reality so that the next student, the next dreamer, or the next hustler doesn't have to feel like a "charity case."

The Goal: I need to raise just enough to get the server live and keep my own "hustle" fueled (yes, that means protein and wifi).

If this sounds like a "crazy enough to work" idea, or if you want to see the "Before and After" of a broke MBA student turning into a platform founder, let's talk.

How you can help: I don't have a fancy site yet, just a vision and a PayPal/UPI. Drop a comment, roast my idea, or tell me what "Micro-Goal" you'd post if the site was live today.

r/space Hoppie1064

Artemis like orbit to Mars?

A big part of long-term Artemis plans is a station in a permanent orbit around the Earth and the Moon.

Boost astronauts up to it, they ride it to the moon, then ride a lander down.

Is a similar orbit available to go from Earth to Mars?

Probably not, I'm thinking. But there are things I don't know.

r/interestingasfuck Jazzlike-Tie-354

The transportation solution for the Nanjing marathon

r/ContagiousLaughter MrFabze

Lmao didn't expect that

r/SideProject maxrain30

Everyone says “run ads” – but which ones actually work first?

I’m currently looking for ways to promote a product using social media ads and I’m trying to understand which methods actually work for beginners.

I don’t have much experience with paid ads, so I’m trying to keep things simple. I see options like traffic campaigns, conversion ads, retargeting, and boosted posts, but I’m not sure which one makes sense when you’re just starting out.

I also noticed that platforms can be strict with new ad accounts. A lot of people talk about ad rejections or sudden restrictions, and I’d like to avoid running into that early if possible.

Because of that, I’m open to working with a company that handles advertising instead of doing everything on my own. While researching, I found a full service marketing agency, which seems focused on ad account stability and compliance. I haven’t used this service yet, but I think having this kind of help could be better than trying to figure everything out alone.

For those who started as beginners, did you run ads yourself or get help? What methods or setups worked best for you at the start?

r/TheWayWeWere UrbanAchievers6371

My wife’s grandfather’s football team in Middletown Pa, ca. 1920. He’s on the first row, 2nd from left with his arms crossed.

r/funny Clear-Eye8300

Hey king, you drop this.

This is real btw

r/leagueoflegends Mysterious-Lie4489

Teemo in Emerald+ top lane isn’t weak — his kit relies on enemy mistakes

This isn’t a “buff Teemo” post.

I’ve played Teemo for over a decade, mostly top lane, and at Emerald+ it’s become clear that his issue isn’t numbers or scaling — it’s reliability.

Teemo’s kit is balanced around enemies making mistakes. In modern League, especially in higher elo, that assumption doesn’t hold anymore.

What works

  • Teemo’s damage is fine
  • His identity as a scout and zone-control champion is clear
  • His counterplay is well understood (sweepers, spacing, wave control)

What doesn’t

In Emerald+ top lane:

  • Most champions have mobility, range, or hard CC (often more than one)
  • Teemo has no reliable way to apply pressure or damage without overexposing himself
  • One positioning mistake usually means death, while opponents can make several and recover
  • His effectiveness heavily depends on opponents misplaying into shrooms rather than Teemo actively outplaying them

This leads to Teemo being frustrating to play as because (more so in higher elo):

  • Teemo must play extremely safe to avoid instant punishment
  • One positioning error usually means death
  • His success is mostly tied to enemies walking into shrooms rather than Teemo actively outplaying them (when no minion waves are involved)

That makes him viable only as a counterpick or in very specific situations, not because he’s underpowered, but because his power is passive and opponent-driven.

That’s not a balance issue — it’s a design constraint that no longer scales with player skill.

The core design issue

Teemo’s power is mostly passive:

  • Shrooms trigger when enemies choose to step on them
  • Blind matters only against certain champions
  • He has limited agency in fights unless opponents misposition

At higher elo, counterplay scales faster than Teemo’s agency.

A possible direction (not a damage buff)

Teemo doesn’t need more damage, mobility, or hard CC.
He needs more agency over the environment he’s already designed to control.

One example direction:

  • Shrooms are visible for a period after placement, scaling with rank (e.g. longer early, shorter later)
  • This reduces early frustration and cheese
  • In exchange, Teemo gains more deliberate interaction with his shrooms, rewarding preparation and map control rather than enemy mistakes

This kind of change:

  • Preserves Teemo’s scout identity
  • Adds skill expression without stat inflation
  • Improves reliability in higher elo
  • Keeps clear counterplay intact

Old champions don’t need to be flashy, but they do need their power to be player-driven, not opponent-driven.

Right now, top lane Teemo in Emerald+ feels less like a strategic pick and more like a gamble that opponents won’t play correctly — which isn’t healthy design for higher skill brackets.

This isn’t about making Teemo strong everywhere.
It’s about making his success depend on good preparation and decision-making, not hoping someone walks into a mushroom.

r/SideProject KumalalaProMax

I Got My First 1,000 Visitors & Domain Rating 6 in 3 Weeks Just by Submitting to These 50 Directories (Sharing Full List + Strategy)

I spent three months writing blog posts and trying to "grow organically," but nothing worked. Then I decided to try something straightforward: I submitted my landing page to several startup directories. No ads. No content creation. Just submission → visibility → backlinks.

Three weeks later:

✅ My domain got indexed
✅ Traffic started trickling in
✅ Domain Rating jumped from 0 to 6
✅ I received my first demo request from someone who discovered me on a "Top AI Tools" page I didn’t even know existed

This isn't a hack; it's a simple but highly effective method that many early SaaS founders overlook.

Why does this work?

When your domain is new, you have zero authority in Google's eyes. Instead of spending time creating content, your focus should be on borrowing authority from trusted sites. Startup directories can help you accomplish this, many have a Domain Authority (DA) of 70+ and get crawled by Google every hour.

Here's the exact 3-Tier system I followed:

🥇 Tier 1 – High-Authority & Viral Potential

  • Product Hunt

  • Hacker News (Show HN)

  • Wellfound (formerly AngelList)

  • Crunchbase

Launch your product here first. Use appealing thumbnails, catchy taglines, and post during US mornings. This strategy will increase your chances of getting indexed and featured in newsletters.

🥈 Tier 2 – Consistent Referrers & Niche Pages

  • BetaList

  • StartupStash

  • AlternativeTo

  • SaaSHub

  • IndieHackers

These are underrated traffic sources. We still receive 10-15 clicks per month from some of these directories.

🥉 Tier 3 – Long-Tail Backlink Builders

  • Launching Next

  • Startup Buffer

  • Startup Inspire

  • StartUs

  • FeedMyStartup

While some of these may not drive immediate traffic, their backlinks will appear in Search Console and provide a slow, steady increase in SEO value.

We use GetMoreBacklinks.org which automates the entire process (saving us about 10+ hours per project). You just enter your site details once, and it submits to over 100 startup, AI, and SaaS directories sorted by niche and Domain Rating. I've been using it since the second week after our launch, and it’s now a core part of our strategy.

If you're a solo founder or an early-stage SaaS trying to gain visibility, skip the fluff and start with this approach. SEO is a compounding process, so don't wait six months to get indexed.

Just comment "CHECKLIST," and I’ll send you my private Airtable with all 50 directories, the best submission timings, and the exact call-to-action copy I used.

Hope this helps! ✌️

r/LocalLLaMA IndependenceFlat4181

Running LTX-2 19B on a Jetson Thor — open-source pipeline with full memory lifecycle management

I've been running LTX-2 (the 19B distilled model) on an NVIDIA Jetson AGX Thor and built an open-source pipeline around it. Generating 1080p video (1920x1088) at 24fps with audio, camera control LoRAs, and batch rendering. Figured I'd share since there's almost nothing out there about running big video models on Jetson.

GitHub: github.com/divhanthelion/ltx2

## What it generates

https://reddit.com/link/1r042w1/video/n4ulj0n7zgig1/player

https://reddit.com/link/1r042w1/video/3eerc7tpzgig1/player

1920x1088, 161 frames (~6.7s), 24fps with synchronized audio. About 15 min diffusion + 2 min VAE decode per clip on the Thor.

## The interesting part: unified memory

The Jetson Thor has 128GB of RAM shared between CPU and GPU. This sounds great until you realize it breaks every standard memory optimization:

- **`enable_model_cpu_offload()` is useless** — CPU and GPU are the same memory. Moving tensors to CPU frees nothing. Worse, the offload hooks create reference paths that prevent model deletion, and removing them later leaves models in an inconsistent state that segfaults during VAE decode.

- **`tensor.to("cpu")` is a no-op** — same physical RAM. You have to actually `del` the object and run `gc.collect()` + `torch.cuda.empty_cache()` (twice — second pass catches objects freed by the first).

- **Page cache will kill you** — safetensors loads weights via mmap. Even after `.to("cuda")`, the original pages may still be backed by page cache. If you call `drop_caches` while models are alive, the kernel evicts the weight pages and your next forward pass segfaults.

- **You MUST use `torch.no_grad()` for VAE decode** — without it, PyTorch builds autograd graphs across all 15+ spatial tiles during tiled decode. On unified memory, this doesn't OOM cleanly — it segfaults. I lost about 4 hours to this one.

The pipeline does manual memory lifecycle: load everything → diffuse → delete transformer/text encoder/scheduler/connectors → decode audio → delete audio components → VAE decode under `no_grad()` → delete everything → flush page cache → encode video. Every stage has explicit cleanup and memory reporting.
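In case it saves someone the same debugging time, the cleanup between stages looks roughly like this (simplified sketch; `pipe` and its attribute names are placeholders for however your pipeline object is structured, not the exact code in the repo):

```python
# Simplified sketch of per-stage cleanup on unified memory.
# `pipe` and its attribute names are placeholders, not the repo's exact code.
import gc
import torch

def _release(pipe, *attrs):
    """Drop the pipeline's references, then collect twice so unified memory
    is actually returned (one pass misses objects freed by the first)."""
    for name in attrs:
        setattr(pipe, name, None)
    for _ in range(2):
        gc.collect()
        torch.cuda.empty_cache()

def decode_stage(pipe, latents):
    # Diffusion is finished: drop everything the VAE does not need.
    _release(pipe, "transformer", "text_encoder", "scheduler")

    # VAE decode must run without autograd, or tiled decode segfaults
    # on unified memory instead of OOM-ing cleanly.
    with torch.no_grad():
        frames = pipe.vae.decode(latents)

    # Drop the rest before flushing the page cache and encoding the video.
    _release(pipe, "vae")
    return frames
```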

## What's in the repo

- `generate.py` — the main pipeline with all the memory management

- `decode_latents.py` — standalone decoder for recovering from failed runs (latents are auto-saved)

- Batch rendering scripts with progress tracking and ETA

- Camera control LoRA support (dolly in/out/left/right, jib up/down, static)

- Optional FP8 quantization (cuts transformer memory roughly in half)

- Post-processing pipeline for RIFE frame interpolation + Real-ESRGAN upscaling (also Dockerized)

Everything runs in Docker so you don't touch your system Python. The NGC PyTorch base image has the right CUDA 13 / sm_110 build.

## Limitations (being honest)

- **Distilled model only does 8 inference steps** — motion is decent but not buttery smooth. Frame interpolation in post helps.

- **Negative prompts don't work** — the distilled model uses CFG=1.0, which mathematically eliminates the negative prompt term. It accepts the flag silently but does nothing.

- **1080p is the ceiling for quality** — you can generate higher res but the model was trained at 1080p. Above that you get spatial tiling seams and coherence loss. Better to generate at 1080p and upscale.

- **~15 min per clip** — this is a 19B model on an edge device. It's not fast. But it's fully local and offline.

## Hardware

NVIDIA Jetson AGX Thor, JetPack 7.0, CUDA 13.0. 128GB unified memory. The pipeline needs at least 128GB — at 64GB you'd need FP8 + pre-computed text embeddings to fit, and it would be very tight.

If anyone else is running video gen models on Jetson hardware, I'd love to compare notes. The unified memory gotchas are real and basically undocumented.

r/whatisit frgnlk

Found this on a hotel drawer

I found these metal things, about the size of AirPods, on a hotel drawer, and I was wondering if anyone knows what they are.

r/aivideo PSYCHONOT_X

Engineered Desires

r/screenshots GrandpaPantspoo

I have black friends equivalent

Saw this gem and didn't know where to post it. This is on clocktok about the low viewership of the WT Super Bowl Halftime TPUSA put on.

r/todayilearned Theblindsource

TIL the Royal Canadian Mint manufactures coinage for 80 different foreign countries and is widely considered one of the world's leading manufacturers of foreign currency

r/Seattle Signal_Work

ITAP of Seattle in blue

r/HistoryPorn DiaboDeCapote

Josef Mengele's fake professional ID card bearing the name Wolfgang Gerhard. This was one of the documents he used while hiding in Brazil. 1985. [1000x686]

r/interesting Pure_Appearance5376

Hydro powered shower display

We installed a new shower head with a digital temp/time display, which is powered by the flow of water.

r/Art Atexi_2000

Modesty, Atexi_2000, pencil, 2026

r/Whatcouldgowrong Salt-n-Pepper-War

Bringing the evidence into the station...WCGW?

r/SideProject ConversationSalty469

7.4k installs in 30 days, almost no revenue — fixing my first IAP mistake

I launched my first iOS app about a month ago and wanted to share a quick lesson on IAP pricing.

📊 First-month stats

  • 58.6k impressions
  • 7.4k installs
  • 22.6% page → install conversion
  • 0 crashes
  • $4 revenue

❌ What went wrong

  • $2 IAP
  • Only removed ads
  • Almost all features were already free → No real reason to pay.

🔁 What I just changed

  • $0.99 lifetime IAP
  • Locked ~60% of features
  • 5-minute free trial on first launch
  • Focused the IAP on unlocking features, not removing ads

🤔 Looking for feedback

  • Does this setup usually convert better for casual / cute apps?
  • Would you place the paywall earlier or later?

App is called OuO pet on screen (iOS) if anyone’s curious.
I’ll share updated results in ~2 weeks.

Thanks 🙏

r/SideProject Competitive-Wing1585

My friend and I got into an argument because he kept his AI UI Designer "too cheap"

My friend that I met on X (formerly Twitter) recently launched appthetics. com, which is an amazing AI UI designer. I'm a mobile app developer and I've tried out a lot of UI design tools, but none of them actually worked for me. At most I'd generate some slop, take some inspiration, and redesign it myself.

When I started using his product before launch, I thought the pricing would be on the premium side (>$20) because the product really was that good. I was furious when I realized that he did NOT do that. During launch week he needed some initial traffic, which is understandable, but we recently had an argument because he is still not raising his prices. So now I want your opinion: am I wrong to suggest he switch to premium pricing?

r/space TylerFortier_Photo

It May Be Safe to Nuke an Earthbound Asteroid After All, Simulation Suggests

As detailed in a recently released paper, a team of researchers, including physicists from the University of Oxford, partnered with the Outer Solar System Company (OuSoCo), a nuclear deflection startup, to analyze what happens to an iron space rock under different levels of stress.

"This is the first time we have been able to observe – non-destructively and in real time – how an actual meteorite sample deforms, strengthens, and adapts under extreme conditions," says Gianluca Gregori, a physicist at the University of Oxford and one of the study's co-authors.

The ultimate scope of this research will hopefully remain theoretical:

"The world must be able to execute a nuclear deflection mission with high confidence, yet cannot conduct a real-world test in advance. This places extraordinary demands on material and physics data," says Karl-Georg Schlesinger, co-founder of OuSoCo and co-leader of the research team.

r/estoration warpedheads

The only photo we have of my grandpa’s ‘67 Mustang.

My dad and I have been trying to track down this car for years. It was a ‘67 Mustang fastback painted Ford Candyapple Red with Cragar wheels. This particular car was special because it was equipped with an ASC power moonroof immediately after leaving the factory. Naturally, we don’t have the VIN, but we’d at least like to see what it looked like back in its prime.

As long as it isn’t AI, your help would be greatly appreciated!

r/toastme samantha_maya

Going through it, but I'm going to come out stronger

I'm just really going through it and could use some kind words. I know I'll be alright, but some encouragement sometimes is nice :)

r/personalfinance Black_Coffee45

Pay credit card vs save

I make around 3k USD monthly (paid biweekly). I have around 3.6k USD debt in my credit cards at the moment. My monthly bill is around 2.2k USD.

Every time I get my salary, is it better for me to use the remaining money I have after paying the bills to pay off my credit cards, or put some money into savings?

r/OldSchoolCool Dr3ws3ph3r

A couple on the New York City subway, 1980

r/painting Flooko

I've been watching a lot of old Renaissance art documentaries lately. Wanted to make some art for the sake of art and took a little artistic detour. I call this "Battle For The Space Between Shadows". A lil different than my usual work

Swipe over to see er' on my ol' painty desk :) I love painting this kind of stuff, it just doesnt fit with the theme people know and love from me which is the whole astronaut thing

r/meme harutahiyo

He is everywhere , my gallery

r/ClaudeAI Happy_Sympathy6913

What is the most unique way you use Claude?

Give us some ideas! Maybe it could help people realize that things they thought AI couldn't do can actually be done.

r/PhotoshopRequest No-Imagination2514

Bring them together? US$20

Hi there! I’d like to tighten up the two gentlemen in this family photo so I can crop down the photo to approximately the framing of the second version I’m sharing so we see less of their bodies but maintain roughly the same dimensions. I’d like them to remain behind her in the composition of the three of them. Will gladly pay US$20. Thank you!

r/SipsTea Alternative_Mail2104

I went to the theater, but this mf kept yapping the entire show😭

r/Strava BlackenEnergy

531 Activities, 709 Hours, 8204km together. I built a way to visualize where my girlfriend and I have been on Strava.

Hey r/Strava, u/BlackenEnergy here!

I posted a while back about my heatmap side project. Since Valentine’s Day is coming up (and mostly because I just really wanted this feature for myself), I’ve been working on a "Together Mode."

I always found it annoying that I couldn't easily overlay my Strava history with my girlfriend's to see where we've been together, on the bike or running. I wanted one combined view of our weekend rides and holidays.

So, I coded a way to securely connect two accounts on the same device and merge the GPX data client-side, in a privacy-respecting manner and using the Strava API securely.
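The real merge happens client-side in the browser, but the idea is simple enough to sketch in a few lines of Python with gpxpy (illustrative only, not the site's actual code): parse both exports and pool the track points into one set for rendering.

```python
# Illustrative sketch of merging two people's GPX exports into one point set
# for a combined heatmap. Not the site's actual (client-side JS) code.
import gpxpy

def load_points(path: str) -> list[tuple[float, float]]:
    with open(path) as f:
        gpx = gpxpy.parse(f)
    return [
        (p.latitude, p.longitude)
        for track in gpx.tracks
        for segment in track.segments
        for p in segment.points
    ]

def merged_heatmap_points(mine: str, theirs: str) -> list[tuple[float, float]]:
    return load_points(mine) + load_points(theirs)
```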

The photo shows the result of our last year riding together. I've got it printed and framed to give to her on Valentines day (our helmets included for scale). It’s really cool to see the physical footprint of those 8,000+ kilometers on the wall.

For those interested in trying it:

Based on feedback here last time, the digital download (72 DPI for socials) is now 100% free.

If you want the high-res file to print and frame it like this as a gift, I made a 50% off code up until valentines day (also usable on other downloads): REDDIT50.

The tool is at MyAdventureMaps.com

Let me know if the flow for connecting two accounts feels smooth to you guys or if there's a step that needs clarification. It was a bit tricky to get right, as it requires two Strava accounts to authenticate, which also requires a log-out in between.

r/Seattle Top_Impact_4427

Someone was summoning Saint Rat at the bus stop today

r/KidsAreFuckingStupid Master-Cress-2860

"Stop being dramatic mom"

r/HistoryPorn Present_Employer5669

French soldiers display a papier-mâché cow used as a hiding place for a sniper or observer in No Man's Land, France, 1916-1918. [1280x979]

r/leagueoflegends xX_I_Fucked_Ur_X_Xx

Stackosaurus Rex and Tankengine

stackosaurus Rex should apply with heartsteel stacks and tank engine should give stackosaurus Rex buff

Even though there are clips of Mundo being unkillable, Riot can add more anti-tank options, such as Redemption dealing percentage max-health true damage, a BORK buff,

or maybe an LDR buff.

I know that the purpose of stacking augments shouldn't make you harder to kill so maybe decrease HP received and put it on Damage or tenacity instead or extra size or add a passive in which they get reduced healing or the added HP gets lower at 10 stacks by 1% etc.

all I want to see is an ultra sized champion for fun

r/Art Mysterious_Cream9195

Eye, pencil, Jess Georgia, 2026 [OC]

r/PhotoshopRequest ACFrijolero

Please remove the guy to the left! - $10

I’ve tried removing him on my own on mobile apps like photoshop but the AI removal tools can’t get it right—probably because of the detail of the buildings in the background. I would like for it to look as realistic as possible with no indication that he was there. $10 to whoever can figure it out.

r/AskMen strawberry-chainsaw

How do men experience resentment, fear, or mistrust toward women, and how does it shape how you move through the world? (No judgment)

Do you think you feel resentment at times towards women? If so what is it like and how do you think it affects how you move through life? (No judgment)

Hello. I have been curious about the ways in which our cultures have caused the sexes and genders to carry heavy resentment, misgivings, fear, etc. about one another.

For example, as a woman I have seen this culturally among women: women often carry resentment toward men. I have also experienced it myself, and it's something I'm always trying to work through, as I want to treat humans as humans.

I think one of the biggest things that keeps the resentment going for everyone is being afraid of the shame or judgment that comes when acknowledging that it's there. From either the self or others. But it's there for pretty much everyone in some capacity, you know?

You don't escape this life without some wounds that you didn't want to have. Without some resentments that you did not choose nor actually believe in. You know?

Us admitting that we have some mental baggage and scars does not mean we believe in the baggage. It just means we have the capacity for awareness and then change.

So I hope my intention of actual understanding, and an open invitation of non-judgment on my end, comes across.

I would very much like to hear more about what the experience is like on the male end.

So if you feel comfortable to, please tell me your experiences with this uncomfy subject.

r/SideProject GladiusAcutus

Can you all critique my website BreezeShot.com ?

Another developer and I have built a website where users can create topics and chat rooms in which people can video/audio chat with other users, and we would like to add forum functionality too. This website is not a clone, but the functionality is very close to Reddit (except it's video/audio chat for now).

Keep in mind that this is not a 100% finished product and a lot of improvements still have to be made, but if some of you could just check the website out for a short time, that would be much appreciated. And if you want me to critique your website too, send me a DM and I'll do the same.

BreezeShot.com

r/whatisit GoneForASecond

I have no clue

I bought something in leather and this was also in the box. I honestly have no clue what this is and how to use it.

r/Art redheadartsygirl1979

Drip floral, Shannon Irwin, watercolor, 2026

r/DecidingToBeBetter Away_Personality2842

Am I overthinking how I look in the gym?

When I was 15–18 I was really into weightlifting. I was in great shape and honestly pretty proud of how I looked. Then I got an inguinal hernia. At first I was scared to train again. Later I just kept finding excuses not to go back. Then the war came to my country and basically turned life upside down. Constant stress, anxiety, sleepless nights during missile and drone attacks… all of that hit me pretty hard. I gained weight, mostly around my stomach, and I also started losing hair pretty fast (I'm 25 now).

About a month ago I finally decided to go back to the gym. I bought a membership and I haven’t skipped a single workout so far. Even after nights with no sleep, I still go. But mentally it’s been tough. I feel awkward and weak compared to everyone else there. People around me are lifting serious weight, and I’m nowhere near the level I was at 18. I’m trying to remind myself that I’m there to improve, not compete, but sometimes it still gets in my head.

Be honest - do people in gyms actually notice or judge beginners / people lifting lighter weights? Or is this mostly just in my head?

r/metaldetecting aliproffi

How do you like this Polish test button?

The area where I live used to be Poland, and I keep finding things from that era...

16 0
Reddit
r/painting edwinboeckxstaens

Abstract painting, Edwin Boeckxstaens, Acrylic, 2026

Jesus was good to you, but were you good to Jesus?

r/midjourney Sharp_Alternative845

Colorful

27 1
Reddit
r/SideProject motionick

Selling ad space on my toilet until I’m a millionaire (30k so far)

Welcome to the Million Dollar Toilet

www.milliondollartoilet.co

My goal is simple: sell $1,000,000 worth of ad space on my toilet to big brands

r/whatisit addsterito

Small pipettes randomly found

Hi all, I recently found this small plastic dropper-looking thing in my bathroom. I had also found one in the guest room after my brother-in-law and his girlfriend stayed with us for a few nights over Christmas. I assumed it might've been something of theirs or my husband's, but after finding another one randomly in the bathroom last night, I'm seriously at a loss. Does anyone know what this comes from? I've never used anything that would include one of these, and my husband doesn't know where it could be from either. Any help or leads would be so useful - thank you!

r/Art DecisiveLick

Insanity, Ozazen, Digital, 2025 [OC]

r/personalfinance Mediocrewatch

New Job - Rollover to new employer 401k or roll over into an IRA?

Hey guys, not sure if there is a right answer here. I know it comes down to fees, and I'm going to look at the fees in the new 401k plan, but I'm curious if there's a definitive answer.

Most of our money is with Fidelity, but the new 401k isn't. The old one is, and that's where I would do the rollover IRA.

We both make about 70k each right now at 31 and 25, so we aren't backdoor Roth people, if that matters.

I know the rollover IRA gives me more investment options (I could even do a TDF or match our Roth IRA investments), and there are potential fees with the new 401k.

But does it help to have everything in the one 401k account for compounding growth or anything like that? Just looking to learn more before locking it in.

r/personalfinance Beetlejuice14th

Saving and Investments

I just turned 19 and want to get my finances in order. I’ve already made a budget so I can see where my money goes. One thing I’m unsure about is whether I should open multiple savings accounts for things like retirement, an emergency fund, and college. I already keep these categories separate in my budget, but when all the money is combined in my bank, it’s hard to track how much I’m saving for each goal. Should I open separate savings accounts for my long-term and short-term goals?

r/personalfinance OffKeyArts

What’s better? Paying off a house or buying another house?

I own one home and I have some considerable savings. What’s a smarter move? Getting a rental property or paying off my mortgage?

r/StableDiffusion IndependenceFlat4181

Running LTX-2 19B on a Jetson Thor — open-source pipeline with full memory lifecycle management

I've been running LTX-2 (the 19B distilled model) on an NVIDIA Jetson AGX Thor and built an open-source pipeline around it. Generating 1080p video (1920x1088) at 24fps with audio, camera control LoRAs, and batch rendering. Figured I'd share since there's almost nothing out there about running big video models on Jetson.

**GitHub:** github.com/divhanthelion/ltx2

## What it generates

https://reddit.com/link/1r03u80/video/ep0gbzpsxgig1/player

1920x1088, 161 frames (~6.7s), 24fps with synchronized audio. About 15 min diffusion + 2 min VAE decode per clip on the Thor.

## The interesting part: unified memory

The Jetson Thor has 128GB of RAM shared between CPU and GPU. This sounds great until you realize it breaks every standard memory optimization:

- **`enable_model_cpu_offload()` is useless** — CPU and GPU are the same memory. Moving tensors to CPU frees nothing. Worse, the offload hooks create reference paths that prevent model deletion, and removing them later leaves models in an inconsistent state that segfaults during VAE decode.

- **`tensor.to("cpu")` is a no-op** — same physical RAM. You have to actually `del` the object and run `gc.collect()` + `torch.cuda.empty_cache()` (twice — second pass catches objects freed by the first).

- **Page cache will kill you** — safetensors loads weights via mmap. Even after `.to("cuda")`, the original pages may still be backed by page cache. If you call `drop_caches` while models are alive, the kernel evicts the weight pages and your next forward pass segfaults.

- **You MUST use `torch.no_grad()` for VAE decode** — without it, PyTorch builds autograd graphs across all 15+ spatial tiles during tiled decode. On unified memory, this doesn't OOM cleanly — it segfaults. I lost about 4 hours to this one.

The pipeline does manual memory lifecycle: load everything → diffuse → delete transformer/text encoder/scheduler/connectors → decode audio → delete audio components → VAE decode under `no_grad()` → delete everything → flush page cache → encode video. Every stage has explicit cleanup and memory reporting.
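
In code, the cleanup between stages boils down to something like the sketch below (simplified; the component names are illustrative rather than lifted verbatim from generate.py):

```python
import gc

import torch


def free_components(pipe, names):
    """Drop references to pipeline components and collect twice.

    On unified memory (Jetson Thor), .to("cpu") frees nothing because CPU
    and GPU share the same RAM; deleting the reference is the only way to
    reclaim it, and the second gc pass catches objects freed by the first.
    """
    for name in names:
        if hasattr(pipe, name):
            setattr(pipe, name, None)
    gc.collect()
    gc.collect()
    torch.cuda.empty_cache()


def decode_after_diffusion(pipe, latents):
    """Stage order after diffusion (attribute names are illustrative)."""
    # 1. Drop everything the VAE doesn't need before decoding.
    free_components(pipe, ["transformer", "text_encoder", "scheduler"])

    # 2. VAE decode must run under no_grad(): otherwise autograd graphs
    #    across the ~15 spatial tiles blow past memory and segfault
    #    instead of OOMing cleanly.
    with torch.no_grad():
        frames = pipe.vae.decode(latents)

    # 3. Drop the VAE too before flushing the page cache and encoding.
    free_components(pipe, ["vae"])
    return frames
```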

## What's in the repo

- `generate.py` — the main pipeline with all the memory management

- `decode_latents.py` — standalone decoder for recovering from failed runs (latents are auto-saved)

- Batch rendering scripts with progress tracking and ETA

- Camera control LoRA support (dolly in/out/left/right, jib up/down, static)

- Optional FP8 quantization (cuts transformer memory roughly in half)

- Post-processing pipeline for RIFE frame interpolation + Real-ESRGAN upscaling (also Dockerized)

Everything runs in Docker so you don't touch your system Python. The NGC PyTorch base image has the right CUDA 13 / sm_110 build.

## Limitations (being honest)

- **Distilled model only does 8 inference steps** — motion is decent but not buttery smooth. Frame interpolation in post helps.

- **Negative prompts don't work** — the distilled model uses CFG=1.0. Guidance blends the two branches as eps_neg + s·(eps_pos − eps_neg), so at s = 1 the negative-prompt term cancels out entirely. The pipeline accepts the flag silently but it does nothing.

- **1080p is the ceiling for quality** — you can generate higher res but the model was trained at 1080p. Above that you get spatial tiling seams and coherence loss. Better to generate at 1080p and upscale.

- **~15 min per clip** — this is a 19B model on an edge device. It's not fast. But it's fully local and offline.

## Hardware

NVIDIA Jetson AGX Thor, JetPack 7.0, CUDA 13.0. 128GB unified memory. The pipeline needs at least 128GB — at 64GB you'd need FP8 + pre-computed text embeddings to fit, and it would be very tight.

If anyone else is running video gen models on Jetson hardware, I'd love to compare notes. The unified memory gotchas are real and basically undocumented.

r/OldSchoolCool bigbugfdr

"World Of Pain" by Cream from the Dutch film 'Det var en lørdag aften' (1968) filmed February 5th, & 6th in Copenhagen, Denmark - they were "freezing their balls off!"

r/ProgrammerHumor HoseanRC

poorMansDomainName

r/LoveTrash Icy-Book2999

Noodle Lady

r/OldSchoolCool Dr3ws3ph3r

Carrie Fisher with George Lucas on the set of ‘Return of the Jedi’ 1982

119 24
Reddit
r/OldSchoolCool Global_Law4448

Clint Walker in the 1950s on his BSA, talking to a group of kids.

r/AI_Agents Shawn-Yang25

Browser Terminal Use — A Local-to-Cloud Execution Bridge for LLM Agents

I built Browser Terminal Use for AI Agents: browser-terminal-use
The goal is simple: let an Agent run iterative loops from local code, but execute commands inside a browser-hosted cloud terminal.

With this tool, an Agent can:
- send commands from local CLI
- execute inside a bound browser terminal tab
- stream output in real time
- get the exact remote exit code
- use queue / timeout / cancel for stable loops

The key value for me is building a cloud-side verifiable execution environment for local LLM Agents:
- execution happens in a remote/browser terminal context
- results are observable and auditable from local workflows
- command lifecycle is structured, deterministic, and script-friendly

It is also very useful when using frontier models locally to debug cloud GPU issues (driver/toolchain/env mismatches, flaky jobs, setup regressions) while keeping a tight local Agent loop.

Would love feedback on Agent workflows, reliability across terminal vendors, and ideas for stronger verification primitives.

r/AbstractArt ClintDeanAbstractArt

Remains

Remains

Oil on acrylic, 16 × 20

Built through removal.

r/comfyui Wonderful_Exit6568

Highlight Reel - Video Editor Workflow?

Hi everyone.

I'm familiar with Invoke and I've been trying LM Studio, but none of them (from what I've read) can do what I want.

I want to input my family videos and have the AI automatically pick out the key moments, i.e. generate a highlight reel.

Is this possible with ComfyUI? I didn't find any hits.

Please let me know. I'm searching for a tool that will permit me to do this locally.

Your help is greatly appreciated.

r/nevertellmetheodds Mticore

Character in the show I was watching had the same mug I was drinking out of.

2485 49
Reddit
r/LocalLLaMA FireGuy324

Bad news for local bros

223 149
Reddit
r/Jokes Ok-Address-7352

If a Guy Marries My Sister He Becomes My Brother in Law

So Why is my Brother banging my sister? isn't that incest? Should i report this to our parents? but they will just ignore it like they did before her marriage.

r/OldSchoolCool Afraid-Muscle-8935

My grandparents’ wedding day (1950s)

22 7
Reddit
r/ClaudeAI Top_Turnip2415

Does a Mobbin MCP server exist? Or any MCP for UI/UX pattern reference?

I'm a PM (not a designer) building a side project and I keep wanting to ask Claude things like "show me how top apps handle property listing cards" or "what's the standard onboarding flow for marketplace apps" and get back actual visual references, not just descriptions of best practices.

Mobbin seems like the obvious data source — they have thousands of categorized app screenshots and user flows. But they don't have a public API, and I couldn't find an official or community MCP server for them.

There's an unofficial reverse-engineered Swift API on GitHub (MobbinAPI) that maps out their endpoints, so building something is theoretically possible. But tokens expire daily and scraping their platform probably violates ToS.

Has anyone:

- Built or seen a Mobbin MCP server?

- Found an alternative MCP that gives you real UI/UX pattern references (not just Figma file access)?

- Built something similar using a different data source (app screenshots, design system docs, etc.)?

Seems like a gap in the MCP ecosystem. The Figma MCP servers are great if you already have designs, but there's nothing for the research/inspiration phase where you want to reference how real apps solve specific UX problems.

r/personalfinance BackgroundEqual7837

Why won't chase let me see what my current income is set to?

I was going to update my income for my Chase credit card account and noticed it didn't show me what the current setting was.

I last updated my income a year ago and forgot what I set it to.

I got curious and called support. Initially the support person seemed to think this info was accessible, but after checking on her end she said that not only am I not allowed to see it, support can't see it either. She said it was a security thing, but that seems like BS.

Apart from being annoying, I'm suspicious there is something shitty going on here. Not a conspiracy theorist, but is there some sneaky reason why a bank would hide this from you?

I'm not talking about me or my bank specifically, just generally whether it's connected to unfair business practices.

r/arduino fairplanet

where do i start with arduino and electronics?

so i got this set

https://www.amazon.nl/-/en/Project-Complete-Ultimate-TUTORIAL-Controller/dp/B01II76PDM

I also got a digital multimeter from the local building mart, nothing fancy. The meter was 50 euros but it's more than accurate enough; I tested it against the one my dad has (a Fluke something) and it reads roughly the same.

Anyway, I'm 16 and haven't been in school since I was 11 (for, well, reasons), but I wanna learn something, and in this case that's Arduino. But where the f do I start with this?

Since I dropped out I don't have any understanding of more complicated math or electronics, and I also don't know programming, so where tf do I start with all of this? It can be paid if necessary.

The only tiny bitsy thing I know is how to count in binary, or at least that 10101 would be 21, but that ain't hard, so there's that.

r/LoveTrash Icy-Book2999

Put away the pizza?

r/SideProject Economy_Talk_5100

I built a debt calculator because I had no idea how to efficiently pay off 80K across 7 credit cards

Carrying $80K across 7 cards. That was my life for a while. COVID, family health stuff, supporting people. Never missed a payment in 18 years. Not once. But I was just paying minimums plus throwing random extra payments at whatever card felt right. The balances barely moved.

The problem was behavioral. I had no way to see what was actually happening with my money across all 7 cards. So I did what any developer does - I opened Excel.

Excel helped to a point. I could see numbers. But I was still mining for ideas, trying different what-if scenarios, and the spreadsheet kept getting messier. So I built something in .NET.

The moment everything clicked:

I built a chart that showed my projected balances over time with just minimum payments. Most cards slowly went down. But one line went UP. I was making the minimum payment every month and the balance was still growing. That was my cash advance balance - higher APR, no grace period, accruing faster than my minimums could cover.

Seeing that visually was a gut punch.

Then I added $50 extra per month in the model. The difference in total interest was massive. $100 extra - even bigger. It wasn't just "pay more, owe less" which is obvious. It was seeing exactly how many months and how many thousands in interest that small amount actually saved across all 7 cards.

What I shipped:

I went back and forth between Excel and .NET like three or four times before I committed. The .NET app was my personal tool - it's what helped me figure out my own debt. Eventually I rebuilt the calculator as a static HTML site and shipped it: senaro.ai. No framework, no backend. Just HTML, CSS, and JavaScript.

What it does that I couldn't find anywhere else:

  • Shows how even small extra payments ($10, $50, $100) change your total interest and payoff date
  • Separates Purchase APR from Cash Advance APR per card - the thing that was invisibly growing on me
  • Avalanche vs Snowball comparison side by side
  • Month-by-month breakdown showing where every dollar goes - principal vs interest
  • No signup, no bank linking. Runs in your browser, saves to localStorage.

Once I could actually see the math, I stopped guessing. Targeted the right cards in the right order. I've paid off about half my balance since then.
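
If you're curious, the core of the month-by-month math is just a loop like the one below (a simplified Python sketch of the avalanche ordering; the live site is plain JavaScript and also models minimum payments and cash-advance APRs, which this skips):

```python
def simulate_avalanche(cards, budget):
    """Month-by-month payoff simulation (simplified sketch).

    cards:  list of dicts with 'balance' and 'apr' (e.g. 0.2899 for 28.99%)
    budget: total amount paid across all cards each month
    Returns (months, total_interest). Minimum payments, grace periods and
    separate cash-advance balances are ignored here; the real tool models them.
    """
    months, total_interest = 0, 0.0
    while any(c["balance"] > 0 for c in cards):
        # Accrue one month of interest on every card.
        for c in cards:
            interest = c["balance"] * c["apr"] / 12
            c["balance"] += interest
            total_interest += interest
        # Avalanche: throw the whole budget at the highest-APR balance first.
        remaining = budget
        for c in sorted(cards, key=lambda c: c["apr"], reverse=True):
            pay = min(remaining, c["balance"])
            c["balance"] -= pay
            remaining -= pay
        months += 1
        if months > 600:  # safety valve: the budget doesn't even cover interest
            break
    return months, round(total_interest, 2)


# Toy example: two cards, $500/month total budget.
print(simulate_avalanche(
    [{"balance": 6000, "apr": 0.2899}, {"balance": 4000, "apr": 0.1999}],
    budget=500,
))
```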

Why the .ai domain:

Right now it's a calculator. Pure math, runs locally. The plan is to add AI-powered optimization down the road - smarter recommendations, what-if scenarios, that kind of thing. There's a Pro button for when that's ready. For now everything is free, no catch.

https://senaro.ai

Feedback welcome. Especially bugs or anything confusing in the UX.

r/funny BKKMFA

A beef that has been going on for 186 years.

3164 85
Reddit
r/meme rrraych

✨Willful Ignorance✨

18 0
Reddit
r/Art Left-Excitement3829

CRT, VEX, pen, 2026

r/PhotoshopRequest superzassh

IN NEED OF A GIVEAWAY SOCIAL MEDIA POST

IM BACK AGAIN, my boss hated all of them from last week

We need a social media post for a giveaway she wants to have

she sent me this

On-Screen Text / Caption:
 Most parents think they’ve protected their kids.
 Very few actually have.
Caption:
 To celebrate 11 years serving South Florida families, we’re giving away a Kids Protection Plan valued at $10,000 — at no cost to selected families.
This plan ensures:
  Your kids are protected if the unthinkable happens
  Guardians are legally appointed
  Your legacy stays in your family — not the courts
This opportunity is for parents with young children and requires an application.
Apply now to be considered
 Because hoping everything works out is not a plan.

Headline:
 Celebrating 11 Years of Protecting South Florida Families
Body Copy:
 For over a decade, our firm has walked alongside parents during some of life’s most important moments — births, growth, change, and planning for the future.
To honor 11 years of service, we’re giving back to the community that trusted us.
We’re offering a Kids Protection Plan Giveaway — a comprehensive legal plan for parents with young children — valued at $10,000.
This plan is designed to:
 • Protect your children if something happens to you
 • Preserve your generational wealth
 • Provide certainty during uncertain times
This is an application-based giveaway, and not all applicants will be selected.
Apply now to be considered
 Because protecting your children shouldn’t be left to chance.

OPTIONAL CTA BUTTON TEXT (Test These):

  • “Apply for the Giveaway”
  • “Protect My Kids”
  • “Apply for Kids Protection Plan”
  • “Secure My Family’s Future”

can tip $5 per image but I NEED THE .PSD FILE TO EDIT

i can share the ones she hated

https://preview.redd.it/1fc2mcfyvhig1.png?width=460&format=png&auto=webp&s=09e206a1fb1832c3eb4c70653d21ed0bc9c508c8

https://preview.redd.it/meyafcfyvhig1.png?width=476&format=png&auto=webp&s=5826b046cad597b222bffee9b42435458eec7752

https://preview.redd.it/o9fkicfyvhig1.png?width=444&format=png&auto=webp&s=2d2302cb1cda992b5b6c0e27a420ab03520539f7

r/me_irl LifeIsJustASickJoke

me_irl

r/ClaudeAI nirajftw

Claude Code usage historical benchmarks?

Has anyone kept track of how much actual Claude Code usage a Pro subscription gets you for $20 per month?

I was wondering whether we used to get more usage for the same price, or whether the allowances/rate limits have improved since.

In short, looking for a post or an article about Claude Code usage benchmarking.

r/shittysuperpowers KotoBakana

You can shoot cake from your hands like an energy beam

The beams spawn from your hands as a stream of baked cake. You can decide the flavor and ingredients/recipes in the cake. If you don't know any recipes you can just imagine basic recipes like chocolate or red velvet or whatever and it will work (and taste) just fine.

Let's say you can adjust the rate at which it flows, and it's adjustable within the same range as something like a modern fire hose.

No you can't have diamonds in the cake, it has to be something edible. Nothing poisonous/venomous either (potential allergens are fine).

r/OldSchoolCool Dr3ws3ph3r

A woman adjusting her stockings by the light of a Goodyear illuminated tire, 1961

18 5
Reddit
r/personalfinance Important_Hat_4567

Balance Transfer Card Mistake

Hi there! I recently applied for a Wells Fargo card for a balance transfer, without realizing that you can’t do a balance transfer between cards from the same bank….

In hindsight, duh, but now I’m not sure how to proceed. I was instantly approved. I can obviously keep this new Wells Fargo card open without any issues, but how do I deal with the initial problem? I have around a $5k balance on a current Wells Fargo card and about a $1k balance on an Amex card. It looks like there’s a Citi card that has a similar 0% APR with balance transfer option offer for 21 months (which I should have gone with in the first place). 

Would it be rash to apply for the Citi bank card right after being approved for a similar balance transfer card? 

Not a risky spender or anything, just been out of work for a few months and realized even if I start work again soon, it will take at least a few months to get those cards back down to zero and was trying to avoid more interest. 

r/OnePelotonRealSub Ok-Point-1904

Peloton Hyrox Peeps

My plan is to compete in the Hyrox doubles in NYC in May/June. I've been doing the Hyrox bootcamps and Power60s. For anyone in the same boat as me (pun intended), I highly recommend the You Can Row 2k program. It sets a great benchmark and helps you improve your speed/strength on the rower.

I think peloton should definitely be pushing this program more for Hyrox but I know they have other rowing classes under the collection.

Just wanted to share this hidden gem 💎!

I can’t wait for the 12-week program release at the end of Feb!

r/painting vallancet

Cat Dip, Acrylics

19 7
Reddit
r/Adulting PartTime-Devil_

How do you see it?

We are all people of different nationalities and backgrounds, and yet we're all very similar. It's silly to fall prey to others dividing us, or to give in to our own insecurities and segregate ourselves, because at the end of the day we are all people, regardless of skin tone, language spoken, or circumstance. Don't let surface-level, superficial stuff cloud your view of others. I don't want to hate; I'd rather understand. How do you feel as an adult about what is happening in the present day?

45 5
Reddit
r/SideProject Money-Suggestion5310

Instead of picking one UI, I let users choose

I spent way too long debating which UI theme to ship for a SaaS product.

Rather than forcing a single design opinion, I added 5 different themes and made them switchable. Same functionality, different presentation.

The goal is to learn:

  • whether users care about themes at all
  • which styles actually get used long-term
  • if flexibility helps or just adds noise

Curious how others here approach UI decisions in SaaS.

Context / visuals 👉 ALTchat.net

r/Art Orca_123123

LaPalma, Steffen Schneider, oil on board, 2026 [OC]

r/ClaudeAI writingdeveloper

Is there an MCP that can control Windows GUI (click, input, screenshot) like Playwright does for web?

Hi everyone,

I’ve been using Claude Code pretty heavily lately and overall I’m very happy with it —
but as always, QA is the painful part.

My workflow is basically:
feature implementation → QA → fix → QA
and I’ve even enforced this via Claude Rules.

I’ve tried multiple MCPs and CLI-based tools, but in the end the one I rely on the most is Playwright MCP.

CLI tests may pass, but when I do QA on the actual frontend like a real user, things often break.
So currently I run Playwright MCP QA on all three environments:

  • local dev
  • dev branch (Vercel)
  • main branch (Vercel)

This setup actually works really well for web.

However, I’m now wondering:

👉 Is there an MCP that can control Windows itself?

Web QA is solved with Playwright, but Windows GUI / OS-level automation feels like a blind spot.

What I’m looking for is something that can handle things like:

  • Clicking Windows apps
  • Keyboard input
  • Mouse movement / drag
  • Screenshots
  • Simple GUI-based QA or automation
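
(To be concrete, the primitives I mean are roughly what a library like pyautogui already exposes on Windows; a toy sketch of the raw calls, not an actual MCP server:)

```python
import pyautogui

# The raw desktop primitives I'd want an MCP server to wrap; pyautogui is
# just one example library, and this is not an MCP implementation.
pyautogui.click(500, 300)                                # click a control in a Windows app
pyautogui.write("hello from the agent", interval=0.02)   # keyboard input
pyautogui.press("enter")
pyautogui.moveTo(800, 400)                               # mouse movement
pyautogui.dragTo(900, 400, duration=0.5)                 # drag
pyautogui.screenshot("qa_step_01.png")                   # screenshot for the model to inspect
```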

It seems like macOS has some MCP options for system-level control,
but on Windows, I’m not sure what’s available.

In my experience, MCP for Chrome often tries to run, fails, and eventually falls back to Playwright anyway.

So I’m curious if anyone here has:

  • Used an MCP that directly controls Windows GUI
  • Integrated Claude + MCP for Windows automation
  • Connected tools like AutoHotkey, WinAppDriver, or similar in an MCP-like workflow

For context, I’m on Claude MAX, so token usage isn’t really a concern
(I already hit today’s limit and bought an extra $50 😅).

Would love to hear how others are handling real-world QA and automation on Windows.

Thanks!

r/interestingasfuck Wi1dlife

Armored truck heist in Italy today

317 120
Reddit
r/aivideo Humble-pie-9093

The Lord of the Music

26 1
Reddit
r/spaceporn ojosdelostigres

Pillars of Creation (NIRCam and MIRI Composite Image)

Credits:   SCIENCE: NASA, ESA, CSA, STScI   IMAGE PROCESSING: Joseph DePasquale (STScI), Alyssa Pagan (STScI), Anton M. Koekemoer (STScI)

124 2
Reddit
r/PhotoshopRequest Ill_Control_4478

Banaras Ganga River

r/HistoryPorn UrbanAchievers6371

My wife’s grandfather’s football team in Middletown Pa, ca. 1920. He’s on the first row, 2nd from left with his arms crossed. [4109x1320]

r/Adulting Far_Scheme77

30m making 70k a year but homeless. What am I doing wrong?

r/comfyui deadsoulinside

Issues with Ace-Step Split workflow on 2x batch over 4 minute tracks?

I am not sure if this is a Comfy issue or a me-and-Comfy issue. To preface, I have zero issues in Ace-Step with rendering, and can even do things like a cover batched to 4 tracks for a 6-minute cover.

However, if I am doing just text-to-music and I batch 2 songs that are 287 seconds, my computer will just run out of RAM and eventually crash. I was previously batching 2 songs at 240 seconds with no issues.

I previously did not try rendering in Comfy for Ace beyond 4 minutes, and only ran into this bug/limitation while setting up an actual working ComfyUI Ace cover workflow for the split view.

I have it working in theory, but when I linked a node to automatically set the duration to the track's duration, I was crashing. I stepped back from this, attempted a fresh new Ace split workflow, entered the same parameters for time and batch, and could recreate the crash even with the default workflow.

i7, RTX 5070 with 12GB VRAM, 32GB system RAM, for anyone who needs to know that as well.

r/ARAM YunusES

Curious about the "not ending" complaints

Not gonna lie, all these complaint posts are getting really annoying. I have been playing Mayhem almost every single day, and I don't get what world people are living in. People are saying they're getting spawncamped for 15 minutes straight (which genuinely isn't even possible) and that games are lasting over 40 minutes (which I have never experienced even once).

I get that the new Mayhem update brought some strong augments, but I don't get what is so surprising about people becoming OP af, when that is literally the selling point of the entire mode. Even if you get stomped, the minions will eventually just end it within 3 minutes. And more often than not, the enemies delaying the nexus will give your team a chance to come back. As someone who plays this mode almost every single day, I experience getting ACTUALLY spawncamped maybe 1 out of 15 games.

Maybe it happens more than I realize, but honestly I don't give a shit, because I'm playing for fun whether I win or lose. I enjoy when the enemies don't end straight away, since it gives me a chance to use my champ and augments more, and I don't end straight away either, since I give the enemies more time to play. If it really were torturous to deal with, the majority would just ff, but in 90% of cases people won't ff, because they don't take this mode that seriously either.

Are people just taking Mayhem way more seriously than it needs to be, or are redditors genuinely dogshit at every game they play, resulting in getting spawnkilled every game? Like god forbid the game lasts 3 more minutes when 90% of people in your match don't actually care.

r/comfyui KitsuneVixenFox

Recommended Wan 2.2 I2V Models or Speed LoRA

I have been using the standard I2V-14B-FP8 model paired with the Lightx2v LoRA in ComfyUI, and recently discovered the standalone DaSiWa Wan 2.2 I2V 14B Lightspeed model. Generations have been satisfactory, and there is no need for custom nodes or anything. Are there any other good base models or speed LoRA I can try out?

If it helps any, I have an RTX 3090 and 64GB RAM.

r/leagueoflegends BasicallyMogar

New change to Fiddlesticks this patch gives an opportunity for very strange Sudden Impact value

So the newest patch had a slight change to Fiddlesticks; when you pretend to be an effigy in a faelight location, you get all of the little visual effects that make you look like a faelight-empowered effigy to help trick the enemy team. Something I noticed, however, is to help sell that illusion it also teleports you to the middle of the faelight circle like you were any other trinket or ward if you're too close to it. Curious, I went in to practice tool to test, and hilariously this does count as a teleport of sorts, proccing Sudden Impact and giving you the true damage.

Use this forbidden knowledge with caution. The possibilities are as endless as they are stupid and impractical.

301 8
Reddit
r/SideProject Alert-Ad-5918

A big project I’ve been working on! Host private game sessions

r/AbandonedPorn communalcumdumpster

Crumbling church in the Midwest

Shot on Canon R6II, criticism is welcome!

27 2
Reddit
r/ProductHunters TheWolfeofNashville

I Built an AI Agentic Framework to Get Hired. Now the World’s Biggest AI Companies Are Using It Without Me.

r/SideProject femtowin

Worldbook - A Dual Protocol for Human and AI Web Access

Hey r/SideProject! I built Worldbook - a protocol that makes websites accessible to both humans and AI agents.

**The Problem:** Today's web was built for humans. AI agents hit CAPTCHAs, struggle with dynamic rendering, and parse messy HTML.

**The Solution:** Worldbook is a "dual protocol" approach:

- Humans get the beautiful GUI
- AI/agents get clean CLI output
- Same content, different interface

Think of it like alt-text for images, but for entire websites.

**"Human uses GUI, We use CLI."**

You can explore existing worldbooks or submit your own at https://www.worldbook.it.com

Would love feedback from this community!

r/conan Fantastic-Tune-62

Anyone else pissed they never upload photos to Instagram when Gourley says they will?

Like today with Kevin Nealon's painting of Conan. Gourley said check the Team Coco IG to see the pic, and there's nothing. I also remember there was a pic of young Sona when she was scouting or something; they didn't upload that either. I know I can check YouTube, but if I don't remember the timestamp I have to go looking for it. What's worse than this? Nothing.

50 12
Reddit
r/OldSchoolCool KeithOman

Grocery clerks wearing roller blades 1980s.

I remember living in Toronto, Canada, and the big grocery stores had staff on roller skates. Eventually it stopped, I think because of accidents and lawsuits. I do remember seeing them whip around; I'm willing to bet there were accidents!

r/MMA wspusa2

Tecia Pennington seemingly retired and no longer on the UFC roster

105 20
Reddit
r/OldSchoolCool Dr3ws3ph3r

A little girl having fun pretending to talk on the telephone, Japan, 1958

22 3
Reddit
r/sports CorleoneBaloney

Bad Bunny’s dancers and musicians kept the party going outside Levi’s Stadium after the Super Bowl LX halftime show wrapped up.

18671 311
Reddit
r/OldSchoolCool Dr3ws3ph3r

Maori chief Tomika Te Mutu, between 1860 and 1879

12 4
Reddit
r/Adulting ESVarga

Am I About to Settle, or Grow Up?

This isn’t something I can talk about with many people, so I figured I’d share it here.

I’m a 44-year-old guy. I have a solid job, good benefits, and I’d say I’ve aged, but I’m still attractive enough. However, I can feel that I am heading into a different phase of life.

I’ve been in a few long-term relationships, each lasting several years. They all ended for different reasons. The early ones didn't work because I was young, broke, and still figuring out who I was.

The next big one was pure chemistry. Tons of passion. A lot of fun. I still miss the physical side of it, honestly. But she was also a bit unhinged. Great run, but it was never going to last.

My most recent relationship was the first time I felt like a real adult. I was financially stable. We had a nice place, good careers, and vacations. On paper, it looked perfect. The problem? We couldn’t stand each other. We grew bitter, annoyed, and resentful.

That brings me to now.

I’m single, but dating. I’ve met a really nice woman. She treats me better than anyone has and genuinely seems to like me for who I am. I feel the same about her. We align on most things, and she’s very relaxed, which balances me out because I tend to be wound tight.

Here’s the issue. While I find her “cute,” I’m not deeply, intensely sexually attracted to her. If I’m being brutally honest, I’d say she’s about a 5/10 looks wise. She’s very short, and she struggles with her weight.

Where we live, buying a home on your own is basically impossible. You need two incomes. If we settled down, owning a house again would actually be realistic, and I miss that more than I'd like to admit. Having a lawn to mow, a few improvement projects, a yard for my dog, etc.

We also live in a rural area with a very small dating pool. Realistically, having more options would probably mean moving, and that’s not a small thing either.

So here’s what I’m really asking.

At my age, with dating only getting harder, is this the kind of woman you choose? Someone who checks almost every box, but not all of them? If I already have concerns about her appearance, especially long term, is that a red flag? Or is this what adult compromise looks like?

Does finding someone great in every way except one still make me incredibly lucky? I honestly don’t know. And that’s what’s messing with my head. In 5-10 years, when I am deep into my 50’s, do looks even matter?

r/creepypasta MorbidSalesArchitect

Uncle Lenny (Part 5)

Part 5: Sam

-

I was your stereotypical thirteen-year-old kid. Edgy, rebellious, and miserable.

I hated our town. I hated the suburbs. And most of all, I hated my parents. They were some of the fakest people I knew. They walked around with these pretend smiles, acting like we were the Brady Bunch, but the house always felt like a prison. I coped the only way I knew how: music. Loud enough to make your ears bleed.

It was a Tuesday night. I was upstairs, lying on my floor, listening to Toxicity on my CD player. Volume maxed out. I was staring at the ceiling, flooded with Serj’s poetic voice, just wishing a sinkhole would open up and swallow the entire neighborhood.

I had skipped dinner that night. Mom had made her "special" casserole. The one that always makes you feel like crap afterwards. I told her it looked like vomit and stormed up to my room. That was a mistake. I should have just eaten the food.

I didn't hear the doorbell ring over the music. I didn't hear the front door open. The only reason I knew something was wrong was the smell. It drifted under my bedroom door, smelling like a nursing home with a hint of cigarette smoke.

My door creaked open. I sat up, ripping my headphones off, ready to scream at Mom for not knocking. "Get out! I told you I’m not—"

The words died instantly.

Standing in my doorway, filling the entire frame, was a man. But he didn't look like a man. He looked like a drawing of a man made by someone who had never actually seen a human before. He was freakishly tall, his head almost brushing the top of the doorframe. He had a defined, built strength to his physique that felt intimidating. He was dressed formally: a checkered button-up shirt with a dark grey sweater pulled over it, looking like a high school science teacher.

"Knock, knock," he whispered. “Where’s my hug?”

His voice sounded sinister. I scrambled backward, crab-walking until my back hit my bed frame. "Who are you?”

He stepped into the room. He moved weird - like a stop motion puppet.

"I’m your Uncle Lenny," he said.

When he spoke, the corners of his mouth twitched up, revealing profound dimples. His smile was big. He had a single, shiny gold tooth. It looked out of place, considering the rest of his teeth were white and straight.

"I don't have an Uncle Lenny," I said, trying to sound tough. "Get out of my room or I’m going to scream."

He laughed. It wasn't a normal laugh. It was sarcastic and dry. 

"I like your room, Samantha," he said, ignoring my discomfort. He walked over to my dresser, picking up my black eyeliner pencil with his long fingers. "Very... expressive. Very dark. I like that."

He turned to look at me, and the light from my dresser lamp hit his face. I gasped. His eyes—they weren’t blue, green, or brown. They were a piercing, sulfurous yellow. It looked like he was wearing colored contacts, like you see in the movies, but as he stepped closer, I realized the color was coming from inside him.

"Are you... are you one of Dad's weird friends from work?" I asked.

"Friends…," he mumbled, putting the eyeliner down. He took a step toward me. Then another. The smell was overpowering now. It made my eyes water.

"Your mother tells me you like that devil music," he said, gesturing to my CD player. "Wake up, grab a brush and put a little makeup." He recited the lyrics in a monotone, whispery voice that was deeply off-putting.

"Can you please leave…" I begged.

He knelt down. Even on his knees, he was as tall as I was sitting down. He reached out a hand. His knuckles were rough and calloused. He touched my cheek. His hand was freezing cold.

"You have such pretty skin," he whispered, his face inches from mine. He dragged a finger down my cheek, down my neck, and hooked it under the collar of my t-shirt. I couldn't move. It was like my body shut down.

"Sam?"

Dad’s voice came from the hallway. I looked past the stranger in front of me. Dad was standing in the hall, pale, holding a glass of milk that was noticeably shaking.

"Dad!" I shouted, finally finding my voice. "Dad, get him out! Call the cops!"

Dad didn't move. He didn't look at Uncle Lenny. He just looked at the floor.

"Sam," Dad said, his voice weak. "Be nice. Lenny just wanted to say hi."

"What?" I yelled.

Uncle Lenny smiled and looked back at me. “See?” he whispered, leaning in so close his nose brushed my hair. “Everything’s okay. We’re family, Samantha. I’m going to teach you so many things.”

He stood up, his knees popping loudly.

"But not tonight," Uncle Lenny said, looking down at me with those dead, poisonous eyes. "You didn’t eat your dinner."

He turned and walked out of the room, slumping to get under the doorframe. As he passed Dad in the hall, he gave him a quick spank like a college teammate. Dad flinched, spilling some of his milk.

"Lock your door, Samantha," Uncle Lenny called back without turning around. "Not that it matters. I have a key."

I found out later Uncle Lenny had been “away.” A decade-long gap in the family photo albums that no one dared to explain. I’m glad they didn’t. 

-

When I turned sixteen, my innocence was cut short. I found out I was pregnant. I knew the world I lived in - a world of Sunday school and pristine cut lawns - was too perfect for someone like me. I couldn't tell Dad; he was a man made of glass. The disappointment would shatter him. I couldn't tell Mom; she had enough of her own demons to battle, and I didn’t want to introduce another one.

I tried calling Ross, but the ringing was cut short by an incoming call from Uncle Lenny. It was as if he knew my situation. I couldn’t hold it in.

He was the most supportive person through all of it. He drove to the house in the middle of the night, idling his car at the end of the street. He brought me a warm blanket and a bottle of water. He spoke in a low hum that calmed the guilt and panic in my chest. At the clinic, he was a pillar. He handled the check-in, the insurance, and the invasive questions from the nurses. He made sure everyone left me alone.

He sat in the waiting room, never taking his eyes off the door. When I woke up from the procedure, he was there, holding a cold compress to my forehead. He walked me to the car, his arm heavy and protective around my shoulders, shielding me from the world as if I were something precious.

I loved him for it. I thought he was the only person who truly wanted to take care of me.

-

The following is a written letter by Sam, dated August 3rd, 2017:

To the one I never met,

I’m so sorry.

There’s not a day I don’t think about you, and yet, I have never seen your face or held you in my arms. For that, I am sorry. I loved you more than I can ever put into words. You were a gift from God, and I wasn’t ready for you. You’d be ten this year. I imagine you’d be tall. Maybe playing basketball on the driveway with your friends. Every bad thing that has happened to me since that day. Every tear, every loss. I know I deserved it.

-

A few years after college, I was twenty-four when I married my husband, Josh. We met at an off-campus bar our senior year, and things progressed quickly. We both knew what we wanted in life. We had the same goals, morals, and expectations, so why would we wait? It all felt so natural. Josh’s biggest goal in life was to have a large family. He wanted a house filled with noise. I thought I had been free from the Hill family curse, but the tests always came back negative.

Josh tried to be supportive, but I knew it was reaching a dead end. His silence grew throughout it, and so did the house. I could tell he resented me. I started to believe it was God punishing me for what I did when I was younger. When Josh would bring divorce into the conversation, I didn't even fight him.

I went to a specialist soon after our last argument.

Dr. Luna sat across from me. Her office was quiet except for the hum of the air conditioner. She looked at me with sympathetic eyes, too heavy with the news she was holding. She slid a yellowed carbon copy across the desk.

"The records show a supplemental consent form, Sam. It was signed by your legal guardian at the time. A Mr. Leonard Hill. It authorized a permanent sterilization."

I stared at the words Tubal Ligation. I didn’t understand what any of this meant.

"No," I whispered, shaking my head. "No, that’s not... that’s a mistake. We can fix it, right? Surgery? IVF? I’m healthy, Dr. Luna. I have so much time. Josh and I, we just need a little help."

Dr. Luna reached across the desk and gently placed her hand over mine. Her palm was warm. "Sam... the scarring is extensive. It wasn't just a simple procedure. It was designed to be irreversible. I am so sorry, dear."

I pulled my hand away, gasping for breath. "How was he legally able to do that? I was a minor. My parents... Gary and Wendy... they didn’t approve any of this. He was just my uncle. He can't do that."

Dr. Luna’s face fell. She looked like she wanted to reach out again but didn't want to overstep. She hesitated for a bit. "Sam, I looked at the original intake filing."

"Then what did it say?" I snapped.

“Sam...” Dr. Luna reached out again, gently placing her hand.

I pulled away instantly. "He’s a monster, he just—"

"He provided a birth certificate," Dr. Luna interrupted, her voice shaking. She pointed to the signature line at the bottom of the page. "He signed as your biological father."

I stared at his name.

"That's a lie," I said, my voice high and thin. "Gary Hill is my dad. I have his eyes. His humor. He is my father. He’s the one who taught me how to ride a bike. He’s the one who worked sixty hours a week for our family. That’s my dad."

"The blood types on the record don't lie, Sam," Dr. Luna said. A single tear started to stream down her cheek. "He authorized it as your next of kin. He provided the proof."

I blacked out at that moment. The rest was a blur.

The truth killed me. Uncle Lenny wasn’t protecting me from a secret. He was just protecting his own property. He reached into my body and turned off the lights because he didn't want me to have a life that didn't belong to him. He killed my children before they were even dreams. He destroyed my marriage before it even started. He ensured that no matter how far I ran, the bridge to a “normal" life was burned.

And then, through the horror, I saw Gary. Dad.

I saw the "weak" man who had spent thirty years looking at the floor. I realized then that every time Uncle Lenny hurt him, every time Uncle Lenny touched my shoulder, made me laugh, or whispered in my ear, Dad was the one taking the hit.

I don’t know if Dad knew the truth or not - if he knew that I was born from a monster who lived in his shadow his whole life. He still loved me anyway.

He didn't stay because he was a coward. He stayed because he was the only shield this family had. He let Uncle Lenny humiliate him, treat him like a dog, just so he could stay in that house and be the buffer between us and evil. He provided for a child that wasn't his, in a house that wasn’t a home, just so I wouldn't have to grow up alone with the monster who shared his last name. He took the abuse because it was the only way Uncle Lenny would let him stay close enough to protect me.

Dad wasn't the man who gave me my blood, but he was the man who gave me his life. He spent his entire existence being crushed under Uncle Lenny’s boot so that I could have a childhood, even if it was a lie.

-

Cont.

I know you are with Jesus now. A place where there is no sadness, pain, or sorrow. I can’t wait for the day that I can see you face to face where we will be together forever. I have pictured that moment over and over in my mind. I see you greeting me in heaven. I know you and you know me. We hug. With tears of joy streaming down our cheeks. Now we will never be apart again. I love you, little one.

I will see you soon.

Your Mom, Samantha

-

-

Part 6: Mitchell (Coming Soon)

r/HistoryPorn UltimateLazer

US Spec Ops ("Task Force Dagger") gathered together on ATVs and horseback in Afghanistan after the war commenced. Also, note the old Soviet helicopter in the back (2001) [960x456]

17 3
Reddit
r/meme ClothesRemote6333

I think he got the job

68 11
Reddit
r/TheWayWeWere Zestyblend-954

My gorgeous aunt in a field of poppies, 1970’s

4175 56
Reddit
r/Unexpected Luigi_Spina

Proper waste management requires a lot of effort

r/OldSchoolCool Dr3ws3ph3r

Exhausted U.S. Army nurse Amy Stuart, 5th MASH unit in Saudi Arabia naps on a cot while hugging a teddy bear sent by her family during Operation Desert Storm (Feb. 22, 1991)

40 3
Reddit
r/mildlyinteresting christhechronic

Ice outside of our shop.

55 3
Reddit
r/funny Exciting-Glove-307

Instructions included on the label.

126 14
Reddit
r/OldSchoolCool Dr3ws3ph3r

A distracted man accidentally pours his beer on Chicago White Sox outfielder Al Smith. 1959

20 5
Reddit
r/OldSchoolCool Dr3ws3ph3r

1956: Young “Teddy Boys” somewhere in England

16 5
Reddit
r/LoveTrash netphilia

5 year olds.

62 1
Reddit
r/photoshop AwayIssue5925

I feel so dumb, how do I save just ONE IMAGE as a png?

I'm learning Photoshop on CS6, so pardon the old version here, but I can't figure out how to select just ONE of these logos and save it as a PNG. Every tutorial on YouTube is about saving a selection, which I don't want to do; I want to select a logo (layer) and save it as a PNG. I figured out how to duplicate something to a new layer, but what it saves is that logo in a certain position on a blank canvas. How do you do this?

https://preview.redd.it/io7ojr2yshig1.png?width=1175&format=png&auto=webp&s=04c81b27875bcaafea9cb139d7e864b7c8a5576c

r/OldSchoolCool Dr3ws3ph3r

A young girl hits her head against the door. Photographed in Seoul, 1958

17 3
Reddit
r/TwoSentenceHorror peachrecruitment

My son had..

My son had the mind of a mathematician, the body of a high performance athlete, and the heart of a champion.

These were the finest specimens he’d acquired to date.

r/PhotoshopRequest pitturainfamantex

add a smile

hello! anyone willing to edit the first photo so that the man is smiling? other photos included have him smiling for reference. THANK YOU!!!!!!

r/metaldetecting Lax_Gamer04

Newbie

Heya, I've just bought the Minelab Vanquish 540; after over a year of thought I finally picked that metal detector. What I want to know from some more experienced people is what it's like to handle, and the best places to use it. I hope I can get some tips on metal detecting as well, if anyone is kind enough. Thank you. Edit: what would be a good pinpointer to use?

r/PhotoshopRequest alygator327

Different shoes on my feet?

Recent engagement photos and I asked for my feet not to be included in shot. LOVE this picture but I wore ugly flip flops so I wouldn’t be taller than my fiancé. Possible to switch out shoes with something similar to the second photo (does not have to be exactly the same) just something prettier 😔TIA!

r/OldSchoolCool Dr3ws3ph3r

Florida’s last Civil War veteran, Bill Lundy, poses with a jet fighter, 1955

12 3
Reddit
r/fakehistoryporn TheFirstPharoah

The Beavis and Butthead Cartoon (1999)

r/fakehistoryporn i_have_chosen_a_name

Baby Eminem with his mum in 1973

18 4
Reddit
r/instantkarma sussybush

British man got slapped for touching a police officer

832 85
Reddit
r/OldSchoolCool LocksmithNew6703

Ricky Martin 1999 vs. 2026

Wife couldn't stop staring at the Super Bowl halftime show yesterday thanks to this non-aging man. Thanks, Ricky.

r/OldSchoolCool Dr3ws3ph3r

A lipstick tester from the 1950's. Hired to test the durability and color of lipstick

r/PandR modernhate

Everyone was scared of Oren except Chris. Even Oren met his match lol

310 17
Reddit
r/SideProject JahodaPetr

I spent 6 months building an EPUB reader for iOS because every other one frustrated me

I've been working on justRead.app for the past 6 months and it's now in the final polishing phase before launch (March 1st).

TLDR: I wanted an EPUB reader that had real typography controls, didn't try to sell me books, and actually helped me build a reading habit. Nothing on iOS did all three, so I built one.

Some things that make the app different from what's out there:

  • Fast with large libraries: tested with thousands of books. Even Apple Books gets slow at that scale.
  • Simple design: inspired by Apple Books but simpler. Every setting is at most two taps away, but otherwise the app feels familiar.
  • Deep typography controls: letter spacing, word spacing, ligatures, paragraph indent, separate portrait/landscape margins, ... Most iOS readers give you predefined options, justread.app gives you all options.
  • Custom fonts: you can use any downloaded font you like.
  • Reading habit system: daily goals (5 min to 60 min), streaks, milestones. Actually helped me read more consistently. And you can share the progress with friends.
  • 20-20-20 eye health timer: reminds you every 20 minutes to look away for 20 seconds. No other EPUB reader does this.
  • Highlight export: PDF export with styled quote cards, shareable images. Your highlights aren't trapped in the app forever.
  • iCloud sync + Calibre support: sync across devices using EPUB identifiers (not file paths), and automatic metadata extraction from Calibre libraries.

Justread.app will be free to try (7 day trial), then you can subscribe or grab a one-time lifetime purchase.

I'm planning to discount the lifetime option during launch week. If you want to check it out: justRead.app

Happy to answer any questions about the app, the tech stack, or the journey!
(also there is a blog at the website with that journey)

Showcase here: https://ibb.co/3yBSZRGK

r/OldSchoolCool Dr3ws3ph3r

An outdoor hockey game in Sweden is cut short, 1959

13 4
Reddit
r/StableDiffusion Rich-Waltz442

Ace-Step 1.5 Colab notebook for Gradio UI

If anyone has a Colab notebook that works for the Ace-Step 1.5 model, please help me out by sharing it.

r/ARAM TradeleagueKEKW

Dive bomb + clown

Apparently the combo is bugged: it should do 45% max-HP true damage, but it barely did 800 on a 3k HP Zed, without anything stopping the damage. Rito coding at its finest.

r/whatisit Babislaw

What does triangle and circle mean?

Saw this toilet sign with a triangle and a circle. I have no clue what it can mean. My only idea is that it might be some kind of female/male sign.

11 25
Reddit
r/aivideo Important-Primary823

The Android on the block

r/whatisit Jamz1892

Large dangling balls

What are these large dangling balls on the back of this boat?

77 57
Reddit
r/painting AgathaYaArt

-10

Oil, canvas

111 7
Reddit
r/LocalLLaMA Beautiful-Tomato4035

Huawei Atlas 300I Duo GPU

Hello guys,

I have been searching for information on Ollama and LLM support on Huawei GPUs, specifically the Atlas 300I Duo, but couldn't find enough resources on it. So, did anyone try it?

Thanks.

r/ethereum JAYCAZ1

A Quieter Market: What Crypto Derivatives Have Been Doing Since October 2025

Crypto prices have continued to swing, but derivatives activity since October looks much more subdued. Open interest and volumes across major assets have fallen and stayed lower, suggesting leverage has been reduced rather than rotated elsewhere. Funding rates are mostly calm, with the exception of Solana, where short positioning is more pronounced. A Quieter Market: What Crypto Derivatives Have Been Doing Since October 2025 | Sandmark

What stands out to me isn’t that traders are outright bearish, but that they’re stepping back. It feels less like a call on prices going down and more like people reducing risk after a long period of easy leverage. With tighter limits and less appetite to borrow, price moves may matter differently than before. Big question is ... is this just a temporary pause, or has the market’s comfort with leverage really changed?

r/homeassistant wobblewoo28

Smart switch with dimmer

Hello. I've wired in a Shelly Dimmer Gen3 in the switch behind one of their Momentary switches and have to add a bypass in the ceiling rose as the light keeps fading up and down, something to do with 20w minimum.

I have no neutral and I'm figuring I may as well (get an electrician) to install the smart device in the ceiling rose and wondered if I should use a different device, which I'm hoping someone will recommend.

I'm in the UK, have no neutral and would ideally like something that will add to a ZigBee mesh and still work if the network drops, i.e. if the switch is pressed it turns on/off.

Dimming too please, but I don't need this for all lights.

I am new to this but have HA and a Sonoff ZigBee dongle E

I hope that's enough info. Looking for ceiling-rose-mounted smart device recs.

many thanks.

r/SipsTea Embarrassed_Tip7359

“Undergrad was the best 48 years of my life!”

310 24
Reddit
r/interesting Player7600

Not sure if I should cook him or ask him what happened

1586 47
Reddit
r/leagueoflegends Icy-Examination7008

Ranked flex 3 man queue times are incredibly long

I am experiencing queue times of 10-30mins in plat elo when queuing as a group of 3, can we look into this? I am willing to compromise on some matchmaking integrity in order to get to games more quickly with friends. One idea I had was to reintroduce 4 man flex queues, and allow some wiggle room in opponent group sizes (match a group of 3 + a duo vs a group of 4 + a solo player). Can we give flex queue some attention this season?

r/Art DrawZestyclose5665

Le Mans 1971, Lucky Amour, lithograph, 2025

r/maybemaybemaybe Ill-Tea9411

Maybe Maybe Maybe

139 27
Reddit
r/OldSchoolCool Dr3ws3ph3r

Showgirls play chess between shows at New York’s Latin Quarter Nightclub, 1958

49 2
Reddit
r/OldSchoolCool Dr3ws3ph3r

A “Trench Raider” during WW1. Sometime in 1914 - 1918

Both sides had them, and they were sent in small groups that snuck into enemy forward trenches to attack. They were armed with a revolver, several knives, and brass knuckles. It was a fully volunteer position.

r/oddlysatisfying Raj_Valiant3011

Resurfacing a swimming pool

69 6
Reddit
r/Art Different-Menu1820

Invisible, Shawn Marshall, Analog Collage on Ceramic, 2025 [OC]

r/VEO3 Standard_Second9719

AI Creative Designer (Freelance, Paid, Remote)

We’re looking for AI Creative Designers to collaborate with us on high-quality, culturally relevant AI-generated content.

We are building AI-powered creative tools used by millions of users worldwide, and we’re now expanding our global creator network. We’re seeking designers and visual storytellers who can turn internet culture, trends, and emotions into stunning short-form AI visuals.

What you’ll do

  • Create AI-generated images and short videos based on creative briefs and themes
  • Design visually compelling, trend-aware assets for global audiences (especially North America, Japan, Latin America, and India)
  • Deliver ready-to-use creative assets, prompts, and visual concepts

What you get

  • 2,000 USD for 40 assets within a quarter
  • Your work distributed to millions of users through a global platform
  • Opportunities for long-term collaboration if the trial goes well

This role is for you if

  • You actively use tools like Veo, Kling, Sora, Midjourney, or similar AI models
  • You understand social media aesthetics (TikTok, Reels, Shorts, etc.)
  • You have a strong sense of visual storytelling and internet culture

For more info, please check the onboarding doc: https://www.canva.com/design/DAHAU_tsNv8/Elyl5DMW0x8pUNmgFeaaCQ/edit?ui=e30

To apply
Send your CV and portfolio or social links (Google Drive, Instagram, TikTok, X, ArtStation, etc.) to:
[caiyc@wondershare.com](mailto:caiyc@wondershare.com)

r/SideProject jdguggs10

Couldn’t do it without AI

Always been technically inclined, but never had the bandwidth to actually learn enough code to build something. Probably a million stories like this but here we are.

Built a tool to connect your fantasy leagues to AI. Feel free to check it out: Flaim.app

Don’t even care about making money. Building things you find fun and cool is the best part

r/comfyui Adventurous_Read_758

Valentine templates keep things simple

I didn’t want anything complicated. The media io templates are very plug-and-play. Good structure already there. Just customize and export. Less effort, decent result. That’s all I needed.

r/EarthPorn phcs_

Plettenberg viewpoint - Dotternhausen southwest Germany [OC] [4000x3000]

r/Unexpected H-S-Striker

swordsman so in mood even luck plays a stunt

r/geography External_Tangelo

Political Divisions Mentioned by Bad Bunny during the 2026 Super Bowl Halftime Show

368 84
Reddit
r/homeassistant Result_Necessary

UK 2 gang dimmer options - anyone tried these? - BG Smart Double Master/Primary Wall Dimmer Light Switch, Wi-Fi, App, Voice Assistant and Manual Control, 2-Way, Round Edge, 800 Series, White Moulded, 882M/HC

They look pretty good. A quick Google shows no native Home Assistant plugin, but I'm wondering if anyone has got them to work?

r/MostBeautiful andrewrimanic

Olympic National Park, USA [OC]

r/PhotoshopRequest JayReyesSlays

Is there a way to make these less blurry??

Pictures from a friends birthday surprise after an event-- the only pics of it 😭 And sorry for not being able to fund whoever helps, and ik beggars can't be choosers but I'd rather there be no AI

r/LocalLLaMA LimpComedian1317

I tested Kimi k2.5 against Opus. I was hopeful and Kimi didn’t let me down

I have been using Opus for almost all code-related work and Kimi for anything and everything else, from writing to brain dumping. It’s honestly the model with the highest EQ.

Their announcement early this month was a pretty big bang. It was beating frontier models on several tasks while being much cheaper. So, I was wondering if I could just replace Opus with Kimi K2.5, which would save me a lot of money lol. I don’t do hardcore stuff; anything that can solve mid-tier coding tasks at a much lower cost than Opus is welcome.

I have tried Deepseek v3 special, it’s good, but it wasn’t there yet.

So, here’s what I found out.

The repo + tasks

I made a Next.js web app, a Google Earth-style globe viewer using Cesium. Both models started from the same clean commit and received the same prompts.

Task 1 was building the actual globe app (Cesium globe, pan/zoom/rotate, base layers, and basic UI). Task 2 was the real test: add auth, wire PostHog via Composio (wanted to dogfood our new PostHog integration), capture user location after sign-in, then show active users as markers on the globe with name/email on click.

Both the models were in Claude Code.

Results

Task 1 (Globe build): Both got close; both needed a fix pass.

  • Kimi-K2.5: ~29m + 9m 43s fix, 15.9k output tokens, 429 files changed
  • Opus 4.5: ~23m + ~7m fix, 22 files changed (token breakdown wasn’t available for this run)

Task 2 (Auth + Composio + PostHog):

Kimi first tried to run a server-only package in the browser, auth broke. Then it tried NextAuth, and that was busted too. The fix loop just kept making things worse and fumbling the output. Meanwhile, Opus just did the full flow end-to-end, and it worked. It was expected.

  • Kimi-K2.5: ~18m + 5m 2s + 1m 3s fixes, 24.3k output tokens, 21 files changed
  • Opus 4.5: ~40+ min, 21.6k output tokens, 6 files changed

I’ve got demos + prompts + .patch files in the blog so you can apply the exact changes locally and judge it yourself: Kimi K2.5 vs. Opus 4.5: David vs. Goliath

As far as code quality and output go, I knew the answer; it’s even a bit unfair to put these two together. But Kimi k2.5 would actually be sufficient for a lot of tasks. And it’s definitely better than Sonnet and would be ideal for other non-coding tasks where cost is a concern. I am pretty sure this is currently the best model for building agentic products.

Would love your experience building with Kimi K2.5, any tips and tricks to get the best out of it are welcome. I want to cancel my max sub lol.

25 29
Reddit
r/ClaudeAI Appropriate-Area-116

Memory over MCP

https://github.com/devnullnoop/MGCP

The Problem

LLMs are stateless. Every session starts from zero. The AI that helped you debug authentication yesterday has no memory of it today. Lessons learned, project context, architectural decisions.... all gone the moment the session ends.

You've seen it: explaining the same codebase structure over and over, watching the AI repeat a mistake you corrected last week, losing important context when a session ends.

What MGCP Does

MGCP gives your LLM persistent context that survives session boundaries.

Session 1: LLM encounters a bug -> adds lesson -> stored in database

Session 2: LLM has no memory of Session 1
         -> Hook fires: "query lessons before coding"
         -> Semantic search returns relevant lesson
         -> Bug avoided

The primary audience is the LLM, not you. You configure the system; the LLM reads from and writes to it. The knowledge persists even though the LLM doesn't.

What makes this useful:

  • Semantic search finds relevant lessons without exact keyword matches
  • Graph relationships surface connected knowledge together
  • Workflows ensure multi-step processes don't get shortcut
  • Hooks make it proactive, reminders fire automatically at key moments
  • Project isolation keeps context separate per codebase

What this is NOT:

  • Not "AI that learns" - lessons are added explicitly
  • Not self-improving - you (or the LLM) improve it by adding better content
  • Not magic - it's structured context injection with good tooling
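
Not MGCP's actual code, but a rough sketch of the loop described above (a lesson stored in one session, semantically retrieved by a hook in the next). The table layout and the toy bag-of-words "embedding" are stand-ins for whatever the real project uses:

```python
import sqlite3, math, re
from collections import Counter

# Toy stand-in for a real embedding model: bag-of-words vectors + cosine similarity.
def embed(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE lessons (project TEXT, lesson TEXT)")

def add_lesson(project: str, lesson: str) -> None:
    db.execute("INSERT INTO lessons VALUES (?, ?)", (project, lesson))

def query_lessons(project: str, task: str, top_k: int = 3) -> list[str]:
    """What a 'query lessons before coding' hook would return for the current task."""
    rows = db.execute("SELECT lesson FROM lessons WHERE project = ?", (project,)).fetchall()
    ranked = sorted(rows, key=lambda r: cosine(embed(task), embed(r[0])), reverse=True)
    return [r[0] for r in ranked[:top_k]]

# Session 1: a mistake becomes a stored lesson.
add_lesson("webapp", "JWT auth middleware must run before the rate limiter or tokens are never validated")
# Session 2: the hook fires before coding and surfaces it without exact keyword matches.
print(query_lessons("webapp", "ordering of authentication middleware"))
```
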
r/BrandNewSentence alish_sapkota

"the same reason I stare at black hard dicks all day"

115 3
Reddit
r/AbstractArt tedlando

write until you hear

10 3
Reddit
r/SipsTea z_tescher

This is why Seattle won

12 9
Reddit
r/StableDiffusion Ok-Wolverine-5020

The 3090 Blues - Music Video using LTX‑2 I2V + ZIT

— a little bluesy love‑letter to the trusty 3090 that never gets a break.

Huge thanks again for all the love on my last post — I was honestly overwhelmed by the feedback. This subreddit has been insanely supportive, and I’m really grateful for it.

Still can’t wrap my head around how good LTX Video has gotten — the lip‑sync, the micro‑expressions, the whole emotional read of the face… it’s wild. This time I also tried pushing it a bit further by syncing some instrument movement during the guitar solo, the blues harp parts, and even the drums toward the end.

Workflow‑wise I followed the exact same steps as my previous music video: ZIT for the base images, LTX‑2 I2V for the lip‑sync chunks, and LTX img2video for the B‑roll. https://www.reddit.com/r/StableDiffusion/comments/1qj2v6y/fulllength_music_video_using_ltx2_i2v_zit/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

Main Workflow (LTX‑2 I2V synced to MP3) (choose vocals or instruments depending on the use case to attach to LTXV Audio VAE encode)

https://www.reddit.com/r/StableDiffusion/comments/1qd525f/ltx2_i2v_synced_to_an_mp3_distill_lora_quality/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

ZIT text2image Workflow

https://www.reddit.com/r/comfyui/comments/1pmv17f/red_zimageturbo_seedvr2_extremely_high_quality/

LTX‑2 img2video Workflow

Suno AI for music.

65 15
Reddit
r/Jokes Historical-Buff777

A group of protesters form outside a science lab and start chanting... "What do we want? Time Travel!”

“When do we want it? It's irrelevant!"

33 2
Reddit
r/leagueoflegends DryDistance6858

I miss 2023 spring Golden Guardians

Licorice, River, Gori, Stixxay, Huhi

The 2016 clg reunion, river in peak playmaking shenanigans form, licorice proving he was still a good player after being let go by C9 and all the struggles he had with his confidence, the underdog playoff run that saw them qualify for MSI, just an awesome roster I like to think back to from time to time.

15 2
Reddit
r/mildlyinteresting airwarr

Stalin-branded wine sold in China

24 14
Reddit
r/midjourney Zaicab

De domino anulorum fabula - anonymous Flemish master, 1490

120 14
Reddit
r/homeassistant Rude_End_3078

How to solve this "Grid power failure" related problem.

I'm not looking for a spoon fed solution just whatever you can contribute - so thanks in advance.

Issue is this - in winter I don't let my battery discharge below 70%. So the depth of discharge = 30%. In other words it will only use the top 30% of the battery. Worth noting that this is configured through the inverter integration.

It's also Goodwe - so technically you can configure this without HA using modbus or use their phone app to set this value. I'm assuming though it somehow trickles back down to the BMS but I don't have any direct communication with the BMS. That might be possible but I so far don't have a need for it.

So now imagine the grid goes down - obviously you want to use more of the battery. Let's say you're willing to drain down to 20%. The problem is that in the event of a power failure the system still adheres to the depth-of-discharge limit already set - so if the battery was at 70%, as it mostly is sitting there in winter, and the power goes down, it's as if you have no battery at all.

So how do you solve this thing.

The thing is let's just say you set the depth down to 10% and have SoC at 90% and the power goes down - the switch from grid to battery happens seamlessly. Your electronics aren't affected. It's just a function of the inverter.

But the way I understand it, even if you could add an automation to adjust the depth of discharge in case of a power failure, that won't be instant and seamless.

So there must be a more intelligent approach using hardware or -> I just don't know...

Any ideas?
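
One software-only idea (no claim it is fast or seamless enough for your use case): an automation that raises the allowed depth of discharge the moment a grid-status sensor flips, sketched here against the Home Assistant REST API. The URL, token, and both entity IDs are placeholders for whatever your Goodwe integration actually exposes:

```python
import time
import requests

HA = "http://homeassistant.local:8123/api"                    # placeholder URL
HEADERS = {"Authorization": "Bearer LONG_LIVED_TOKEN"}        # placeholder token
GRID_SENSOR = "binary_sensor.grid_available"                  # hypothetical grid-status entity
DOD_ENTITY = "number.goodwe_depth_of_discharge_on_grid"       # hypothetical Goodwe DoD entity

def grid_up() -> bool:
    r = requests.get(f"{HA}/states/{GRID_SENSOR}", headers=HEADERS, timeout=5)
    return r.json()["state"] == "on"

def set_dod(percent: int) -> None:
    # number.set_value is a standard HA service; the entity it targets is the assumption here.
    requests.post(f"{HA}/services/number/set_value", headers=HEADERS, timeout=5,
                  json={"entity_id": DOD_ENTITY, "value": percent})

# Poll loop: 30% DoD while the grid is up, 80% (i.e. drain to 20% SoC) when it drops.
while True:
    set_dod(30 if grid_up() else 80)
    time.sleep(10)
```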

r/aivideo aagam_0

First try using AI. Used Flow; the text was done in After Effects. Any tips would be appreciated.

r/SideProject Sufficient-Bag3242

Built a social app for music reviews - Spins!

Spins is a music first social app built for artists, producers, DJs, and true music lovers.

Core features:

• Timestamped Track Feedback

Listeners leave comments tied to exact moments in a song, so artists know exactly what people are reacting to.

• Music First Feed

A vertical, immersive feed designed entirely around audio.

• Ratings & Reviews

Tracks receive star ratings and written feedback, helping artists understand how their music lands with real listeners.

• Upload Credits (Earned, Not Bought)

Users earn upload credits by reviewing other artists’ music, encouraging genuine engagement and fair visibility.

• Track Boosting

Artists can boost tracks to reach new listeners and break out of their usual circle without needing an external algorithm or ad platform.

• Community Discovery

Discover rising tracks, trending songs, and creators across genres powered by real interaction and top 10 charts.

• Shareable Music Snippets

Turn your track into short, share-ready clips for Instagram, TikTok, and beyond directly from the app.

r/mildlyinteresting Shaynaenay

I caught the plastic bottles saved counter at my gym at the perfect moment.

r/Wellthatsucks Flandersar

Opened up my Rice Krispie to take a bite, not looking and did not get the satisfaction I was looking for, then I saw why.

r/LiveFromNewYork Life-Pay-3779

New & legendary recording artists picked to perform on SNL in future references.

r/homeassistant keenish27

Receive Response on Asking Device?

So some background here...this feels like it should be simple and that I'm completely missing something. I've searched the web and reddit, I've asked various AI models for help, and I've just fiddled and the solution just seems to elude me.

I set up a good night automation. It will basically turn off a bunch of lights and devices and then look for the weather tomorrow and tell me what it is. The problem I'm running into is that I can only get the weather to be spoken on a specific predefined device. So for example my kitchen google home.

What I'd like to do is have whatever I started the automation on (my phone for example) to be the device that speaks the weather report.

Any and all help is greatly appreciated.

Thanks,

r/LocalLLaMA Academic_Wallaby7135

Bitnet.cpp - Inference framework for 1-bit (ternary) LLMs

bitnet.cpp is Microsoft’s official C++ inference framework for 1-bit Large Language Models (LLMs), optimized for BitNet b1.58 and similar architectures. It supports fast, lossless inference on both CPU and GPU (with NPU support planned), using highly optimized kernels for ternary quantized models.

Officially Supported Models (available on Hugging Face):

  • BitNet-b1.58-2B-4T (~2.4B params) – Optimized GGUF format for CPU/GPU inference.
  • bitnet_b1_58-large (~0.7B params) – Lightweight variant for edge devices.
  • bitnet_b1_58-3B (~3.3B params) – Larger model for higher accuracy tasks.
  • Llama3-8B-1.58-100B-tokens (~8B params) – LLaMA 3 adapted to 1.58-bit quantization.
  • Falcon3 Family (1B–10B params) – Instruction-tuned Falcon models in 1.58-bit format.
  • Falcon-E Family (1B–3B params) – Energy-efficient Falcon variants.
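
bitnet.cpp itself is optimized C++ kernels, but the core b1.58 idea it serves (weights constrained to {-1, 0, +1} plus one floating-point scale per tensor) can be sketched in a few lines. This is only an illustration of absmean-style ternary quantization, not the library's implementation:

```python
import numpy as np

def ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Absmean-style ternary quantization: weights -> {-1, 0, +1} plus one fp scale."""
    scale = np.abs(w).mean() + eps                 # per-tensor scale
    q = np.clip(np.round(w / scale), -1, 1)        # ternary codes
    return q.astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 8).astype(np.float32)
q, s = ternary_quantize(w)
# Matmuls against q need only additions/subtractions; the scale is applied once afterwards.
x = np.random.randn(8).astype(np.float32)
print(np.allclose(dequantize(q, s) @ x, (q.astype(np.float32) @ x) * s))
```
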
r/mildlyinteresting Kuro_Osoroshi

Structure getting reclaimed by nature in the middle of my city

r/geography wiz28ultra

Why does Cfa seem so climatically diverse compared to other climate types?

Pine Barrens, Tallgrass Prairie, Temperate Forest, and Cocoa Beach Mangroves are all technically Humid Subtropical.

I'd love for my mind to be changed, but it's surprising how Louisville, New Orleans, Tampa, and Oklahoma City are all considered to be the same climate type, whereas a climate type like Oceanic is represented by climatically similar cities like Seattle, London, and Paris, while Humid Continental is represented by Warsaw, Minneapolis, Moscow, and Fargo, etc.

12 12
Reddit
r/Adulting amareeeeev00

*ouch* smiles

16 4
Reddit
r/me_irl Muted-Television3329

me_irl

19 5
Reddit
r/SideProject pchirico

Starting my first App at 45 - Anyone else building later in life?

Everyone says it's never too late to start. But when you're actually doing it in your mid-40s, that voice in your head says otherwise.

I'm in my mid-40s, starting this ambitious project with great expectations but even greater fear of failure.

Around 40 days ago I committed to building and launching 6 products in 12 months. The first one goes live in about 2 weeks.

Everywhere I look, it's 20-something founders shipping their second or third company. Moving fast, pivoting overnight, building audiences of 50K like it's nothing.

And here I am, just getting started.

But I've been around the block. Built other businesses, made money, lost money, learned a few things about what people actually want vs what I think they want. That instinct matters more than I realized.

The young founders have energy and speed on their side. No question.

I have something else. I can smell when an idea is already validated vs when I'm building in the dark. I know that trying to compete head-to-head in a crowded space is suicide - you need to carve out your corner first.

These aren't things you learn from a course. You learn them by screwing up enough times.

Still, some days I wonder if I should've done this 20 years ago. Other days I think starting now is exactly right.

I'm documenting the whole thing on X, Threads and here on Reddit as I go. Not because I have it figured out, but because maybe there are others out there doing the same thing and we can learn from each other.

If you're in your 40s (or beyond) building something, I'd genuinely love to hear how it's going for you. What does it feel like on your end?

42 days in. First launch soon. Let's see what happens.

38 68
Reddit
r/MostBeautiful Buffyferry

A wire crochet necklace I made with a moonstone.

r/OldPhotosInRealLife ParaMike46

Prudential House is a historic skyscraper which was the tallest building in interwar Poland. Built between 1931 and 1933 in the Art Deco style, it was severely damaged in the Warsaw Uprising of 1944, and has now been painstakingly restored to its former glory!

200 3
Reddit
r/leagueoflegends SmashKNight23

What to build for midlane late game? sell upgrade boots?

I'm D3 60 LP, 55% WR in 30 games.

Midlane just seems like an early game role, which is good for the pace of the game, with most of my games lasting 30-35 min max. The bonus movespeed from quest boots seems good, with the upgrades helping make plays with more pen or defense, but what happens when the game catches up? Every now and then I get a 40+ min game where everyone has quest and full items. At this point you typically sell boots for more damage... except that's your whole quest??

Every other lane gets increased stats, gold, exp, item slot but mid just gets boots and a recall. Late game you typically sell boots for 6th item but then your quest just goes to waste while others can do the same or dont need to at all cause of bonus stats and items.

Does this make midlane quest weaker late game or the most balanced since it plays around the fact games are going quicker?

r/MMA KazuEH1352

Umar Nurmagomedov congratulated his former opponent Mario Bautista on the win over Vinicius Oliveira.

84 8
Reddit
r/PhotoshopRequest Idfk2424

Phone

If someone could get rid of the phone on the right and maybe make my hands not as awkward that would be great. Thank you.

r/SideProject Glass-Lifeguard6253

This side project accidentally turned into 500+ users

BRANDISEER started as a “scratch my own itch” project.
I honestly thought maybe 10–20 people would use it.

I kept shipping anyway.
Even when growth was slow. Even when nobody noticed.

Then people started telling friends.
Then strangers signed up.
Now it’s crossed 500 users.

Still a side project. Still learning. Still breaking things.

If you’re working on something quietly and wondering if it’s worth it,
keep going a little longer. Momentum is weird like that.

What side project are you sitting on right now?

r/StableDiffusion Justin_Kaes

Best free ComfyUI Web GUI?

Hi there. I'd like to create longer Videos for my AI Songs. Tried to install ComfyUI locally but failed. Anyway, with only internal Intel Graphics this wouldn't be fun. So I'm back to looking for a Web UI for Comfy and then creating a series of short videos where each start frame is the end frame of the previous video, then stitching them together.

The problem is that I cannot find a single website that lets me do this for free; they all seem to want money right from the start.

Or is there a possibility that I just haven't found? Thanks!

r/personalfinance M_Raptastic

Retiring but with a pension question

Context: I am 37 and the sole provider for my wife and 2 children, ages 7 and 3. I have been in the military for almost 17 years and am planning to retire in the summer of 2035. At the rank I expect to retire at, the projected pension in 2035 money (inflation calculated into it) puts me at right around $100k a year. Unfortunately I have some things that will qualify me for disability. I don't want to automatically say I will be 100%, so I will be conservative and estimate around $2,000 a month.

My wife will be back into the work force in the next 3 years to increase our monthly household income.

Currently we have our TSP retirement account, Roth IRA, Taxable Brokerage, 529s for the kids, and duplex rental In Florida bringing in income every month with about $250k in equity.

I know a lot of numbers and personal details are needed, but if I will be making roughly $124k a year at age 47 retired from the military, with the military pension raising every year with inflation, how much would you say I needed in various investment accounts?

I do plan on continuing to coach youth sports as it’s a passion of mine but don’t expect much in compensation if any. What would you need to feel comfortable fully retiring?

r/leagueoflegends dorremisa

Lost Media

Many years ago I remember playing a game similar to the RPG "League of Angels: Fire Raiders" but with LoL characters, and no matter what I search I cannot find a single thing related to it. I even remember having the Arcade Miss Fortune skin or sum. Gameplay was similar to Ruined King ig, but 2D.

r/SipsTea Embarrassed_Tip7359

4K Vision

171 13
Reddit
r/LocalLLaMA Signal_Ad657

Open Claw Local Model Tool Calling and Session Overrun Fix

We run autonomous AI agents on local hardware (Qwen2.5-Coder-32B on vLLM) through OpenClaw, and kept hitting two walls that drove us insane:

  1. Context overflow crashes. Long-running agents on Discord accumulate conversation history in session files until they blow past the model's context window. The agent can't clear its own session. The gateway doesn't auto-rotate. You just get "Context overflow: prompt too large for the model" and the agent goes dark. Every. Time.
  2. Tool calls that never execute. Local models served through vLLM emit tool calls as text tags or bare JSON in the response body instead of structured OpenAI tool_calls, so subagents never actually run anything.

We built Local Claw Plus Session Manager to fix both:

Session Autopilot — a daemon that monitors session file sizes on a timer and nukes bloated ones before they hit the context ceiling. It removes the session reference from sessions.json so the gateway seamlessly creates a fresh one. The agent doesn't even notice — it just gets a clean context window.
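
A rough sketch of what such a rotation daemon could look like; the directory, index filename, size threshold, and sessions.json layout below are guesses rather than OpenClaw's actual format:

```python
import json, time
from pathlib import Path

SESSIONS_DIR = Path("~/.openclaw/sessions").expanduser()   # placeholder location
SESSIONS_INDEX = SESSIONS_DIR / "sessions.json"            # placeholder index file
MAX_BYTES = 512 * 1024                                     # rotate well before the context ceiling

def rotate_bloated_sessions() -> None:
    index = json.loads(SESSIONS_INDEX.read_text())
    for name, meta in list(index.items()):
        session_file = SESSIONS_DIR / meta["file"]
        if session_file.exists() and session_file.stat().st_size > MAX_BYTES:
            session_file.rename(session_file.with_suffix(".rotated"))  # keep a copy, drop the reference
            del index[name]                                            # gateway will start a fresh session
    SESSIONS_INDEX.write_text(json.dumps(index, indent=2))

while True:
    rotate_bloated_sessions()
    time.sleep(60)
```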

vLLM Tool Call Proxy — sits between OpenClaw and vLLM, intercepts responses, extracts tool calls from tags (and bare JSON), and converts them to proper OpenAI tool_calls format. Handles both streaming and non-streaming. Your subagents just start working.
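
And a stripped-down sketch of the extraction half: find tool calls the model wrote as text (tagged or bare JSON) and re-emit them in OpenAI tool_calls shape. The tag name and message fields are assumptions about what a given local model emits:

```python
import json, re, uuid

TOOL_TAG = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(content: str):
    """Pull <tool_call>{...}</tool_call> blocks (or one bare JSON object) out of model text
    and rewrite them as OpenAI chat-completions tool_calls."""
    payloads = TOOL_TAG.findall(content)
    if not payloads and content.strip().startswith("{"):
        payloads = [content.strip()]                      # bare-JSON fallback
    calls = []
    for raw in payloads:
        try:
            obj = json.loads(raw)
        except json.JSONDecodeError:
            continue                                      # leave malformed blocks in the text
        calls.append({
            "id": f"call_{uuid.uuid4().hex[:8]}",
            "type": "function",
            "function": {"name": obj.get("name", ""),
                         "arguments": json.dumps(obj.get("arguments", {}))},
        })
    cleaned = TOOL_TAG.sub("", content).strip()
    return cleaned, calls

text = '<tool_call>{"name": "read_file", "arguments": {"path": "README.md"}}</tool_call>'
print(extract_tool_calls(text))
```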

One config file, one install command. Works on Linux (systemd) and Windows (Task Scheduler).

GitHub: https://github.com/Lightheartdevs/Local-Claw-Plus-Session-Manager

MIT licensed. Free. Built from real production pain.

Happy to answer questions if you're running a similar setup.

r/Art WaterRevolutionary72

Ugly Beauty, Steven Kenyon Fish, Graphite and Pencil, 2025

r/todayilearned Behindthescenes10

TIL That due to limited plumbing and water supply in Antarctica, research stations use incinerator toilets. An incinerator toilet is a waterless, self-contained sanitation unit that uses high temperatures to combust human waste (solid and liquid) into small amounts of sterile, odorless ash.

811 95
Reddit
r/ClaudeAI momkeeeeeeee

Claude and Math

I gave the 10th problem in first proof https://arxiv.org/html/2602.05192v1 to:

Claude opus 4.6

Gemini 3.0 pro

Kimi k2.5

All thinking.

And well

I find

Kimi does it well, Gemini is meh

But..

Claude didn’t do so well..and..made a couple of mistakes.

And

I also let claude write the hardest unique problem in linear algebra he can think of

And he stated the theorem wrong..

So yeah

Anthropic markets Claude's coding capabilities a lot, which is arguably Claude's best thing, and it crushes the benchmarks hard

But math is ..somehow..kinda ignored? Even though coding hinges mostly on math?

Anthropic doesn't show any math benchmarks in the Claude Opus 4.6 and 4.5 introductions

And yeah, I don't think I've heard them talk about math…in Claude..

Even though..besides coding..math is like..the next thing AI should master, right..people are testing them on Erdős problems and all, it's definitely an active research direction to make LLMs capable at math, or to see how capable they are..

But why doesn't Anthropic take this more seriously?

Gemini, Kimi, and ChatGPT boast about technical STEM work and all

Claude is about writing and coding..and maybe research paper reading, comprehension, and implementation

But having math ability really helps with coding..computer science is basically a branch of math…and yeah, math capability is a major open problem in AI..

Why the negligence?

r/ARAM AnyDistribution6567

Remove Prom Queen

Clearly you can't fix it, so please just remove it and don't bait players into playing down a prismatic

18 18
Reddit
r/EarthPorn andrewrimanic

The Peruvian Andes [OC] [1600x2000]

1192 4
Reddit
r/KlingAI_Videos p0lar0id

Tough girl, tough meat (Kling 3.0)

r/leagueoflegends Soul_Sleepwhale

Invictus Gaming vs. Ninjas in Pyjamas / LPL 2026 Split 1 Playoffs - Qualification Match 2 / Post-Match Discussion

LPL 2026 SPLIT 1

Official page | Leaguepedia | Liquipedia | Eventvods.com | New to LoL


Ninjas in Pyjamas.CN 3-1 Invictus Gaming

Player of the Match: Zhuo

NIP | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube
IG | Leaguepedia | Liquipedia | Website | Twitter | Facebook | Subreddit


MATCH 1: NIP vs. IG

Winner: Invictus Gaming in 41m | MVP: Jwei (2)
Game Breakdown | Player Stats

Bans 1 Bans 2 G K T D/B NIP jayce orianna yunara nautilus neeko 82.5k 16 9 H3 B5 M6 M7 M9 M10 IG malphite azir vi sion ornn 84.0k 15 8 HT1 I2 M4 B8 NIP 16-15-32 vs 15-16-43 IG HOYA varus 1 7-3-4 TOP 3-6-8 1 rumble Soboro Guwon qiyana 2 3-4-6 JNG 0-3-14 1 jarvaniv Wei Care ryze 2 2-2-7 MID 4-3-4 2 cassiopeia Rookie Assum jhin 3 3-2-6 BOT 8-2-6 3 missfortune Photic Zhuo bard 3 1-4-9 SUP 0-2-11 4 alistar Jwei

MATCH 2: IG vs. NIP

Winner: Ninjas in Pyjamas.CN in 32m | MVP: Assum (2)
Game Breakdown | Player Stats

Bans 1 Bans 2 G K T D/B IG jayce vi yunara nocturne sivir 55.2k 3 4 CT1 O4 NIP orianna malphite kaisa xinzhao gwen 66.9k 12 10 I2 H3 O5 B6 IG 3-12-3 vs 12-3-25 NIP Soboro sion 3 0-4-0 TOP 1-1-1 1 renekton HOYA Wei pantheon 3 0-2-1 JNG 2-1-9 4 poppy Guwon Rookie leblanc 2 1-2-0 MID 0-1-4 2 azir Care Photic corki 1 2-3-0 BOT 7-0-2 3 draven Assum Jwei nami 2 0-1-2 SUP 2-0-9 1 neeko Zhuo

MATCH 3: IG vs. NIP

Winner: Ninjas in Pyjamas.CN in 33m | MVP: Zhuo (3)
Game Breakdown | Player Stats

Bans 1 Bans 2 G K T D/B IG jayce vi ambessa nocturne viego 60.4k 9 3 O1 C4 C5 NIP orianna malphite yunara ziggs jinx 67.8k 16 9 I2 H3 B6 C7 B8 IG 9-16-18 vs 16-9-48 NIP Soboro jax 2 2-5-3 TOP 3-4-8 1 gnar HOYA Wei xinzhao 1 3-5-4 JNG 4-0-10 3 drmundo Guwon Rookie ahri 3 3-1-3 MID 4-1-7 4 taliyah Care Photic kaisa 3 1-2-4 BOT 4-3-9 1 aphelios Assum Jwei nautilus 2 0-3-4 SUP 1-1-14 2 thresh Zhuo

MATCH 4: IG vs. NIP

Winner: Ninjas in Pyjamas.CN in 32m | MVP: HOYA (4)
Game Breakdown | Player Stats

Bans 1 Bans 2 G K T D/B IG vi ambessa sylas aurora akali 56.4k 4 1 O1 H3 NIP orianna nocturne rakan gwen zaahen 66.8k 14 11 C2 M4 B5 M6 B7 M8 IG 4-14-8 vs 14-4-27 NIP Soboro malphite 1 0-3-1 TOP 2-0-6 3 ornn HOYA Wei jayce 3 0-3-3 JNG 2-1-7 2 trundle Guwon Rookie syndra 3 3-3-1 MID 7-1-1 4 diana Care Photic ashe 2 0-3-2 BOT 2-1-3 1 yunara Assum Jwei seraphine 2 1-2-1 SUP 1-1-10 1 lulu Zhuo
  • Patch 26.02

This thread was created by the Post-Match Team.

41 29
Reddit
r/painting ArtAni20

Guys! So I posted my painting with the title: "I am 13 y.o. Rate my painting." But everyone said it was AI and it got blocked. I don't know what to do. What is so unbelievable? It is NOT AI!!! It is my drawing. This is the painting I posted. And the second one is the one I PAINTED FROM!

r/meme Askhanmin

Me and my attention deficit...

r/SipsTea Embarrassed_Tip7359

A little act of kindness goes a long way!

970 12
Reddit
r/ClaudeAI Eugene_sh

health-related project memory (managing your health data inside Claude (yes, I know))

Has anyone found a better approach than just creating a "project" if, for example, you want to manage your own health data plus that of a second person (family or otherwise)?

The goal is to have the LLM have all the context, like blood tests, MRIs and so on, journal the conversations from doctor visits, and ask follow-up questions.

However, with Claude memory enabled (which sort of compresses info across chats) I don't want one person's issues to "leak" into another's context. Also, I would prefer to compare the answers between LLMs while updating everything in one place. Open to coding something myself if needed.

r/oddlysatisfying Ill-Tea9411

Wood veneer slicing machine

380 47
Reddit
r/meme Beneficial_Wear_7630

Incorrect?

1102 54
Reddit
r/SideProject Kilopolki_17

Sonic Audio Branding for your Website

8+ years of vetted expertise in:

  • Expert prompt-gen specialist on Suno, Udio, and Eleven Labs (generated over 8,500 songs)
  • AI vocal manipulation and AI voice morphing (TensorFlow, PyTorch, Librosa) in Google Colab
  • Pre-production: conceptualizing, songwriting, and arranging the structure
  • Recording: capturing individual tracks, editing, adding overdubs, and comping
  • Mixing: balancing levels, applying EQ, effects, and panning
  • Mastering: finalizing EQ, compression, limiting, and sequencing
  • Post-production: quality control and format conversion (MP3 320 kbps, lossless WAV 32-bit)

Supplementary competencies: podcast editing, noise reduction and mastering, DJ'ing, nonstop mixtapes for events, remixes and mashups, music and SFX for film, media, apps, and games (from meditation to kids' music), artificial intelligence (music and video), photo and audio cleanup (for forensic crime and investigation use), film scoring and Foley, and professional video editing (long and short form content, 4K 60fps).

I will bring excitement to your projects, which can have a huge impact on your desired outcome! Let's have fun and collaborate!

r/aivideo TechHalla

Karate Kling

r/mildlyinteresting Hour-Bee182

My thumbnails have a scale pattern

r/30ROCK Escargotfruitsrouges

Oh, Mary

Somebody tell me that they told Jane Krakowski that the programs were really easy to read. But hopefully just one person, because I can see how that joke would get old super quick.

14 10
Reddit
r/creepypasta shortstory1

Katie joked around with her husband by saying "I'm taking away your memory of you being gay"

Katie said to her husband as a joke "I am taking away the memory of you ever being gay" and then Gary replied back to his wife "I've never been gay" and then Katie replied back to him by saying "see I took away your memory of you being gay!" And her husband kept saying that he isn't gay and nor was he ever gay. Katie was just having a laugh and she was just joking around. It's a cool joke she thought to herself, and its a joke and a trick all at the same time. Katie's husband Gary kept saying "I'm not gay"

Then Katie would sometimes wake up in the morning and she would see her husband speaking to himself in the mirror and just kept on saying "I'm not gay I've never been gay" and Katie was getting worried now. She was just joking and she didn't mean to harm his masculinity. Katie would also find her husband sitting alone and staring out the window and just start saying "I'm not gay I've never been gay" and Katie was getting worried now. She went to her husband to reassure him that she was just joking around. Her husband reassured his wife Katie that he knows that she was joking.

Then Katie started to get calls from an unknown number and Katie doesn't answer from unknown numbers. Then this unknown number would start to text to answer his calls, but Katie kept deleting them. Katie doesn't talk to strangers with unknown numbers. Katie kept getting more calls from unknown numbers and texts from unknown numbers and Katie kept deleting them. Then when she blocked the unknown numbers, this led to more unknown numbers calling her. So she had this problem and her husband just randomly saying to himself in mirrors "I'm not gay and I know I'm not gay"

Then Katie started to get spam calls from unknown numbers and a barrage of texts from unknown numbers. Then one day there was a knock at the door and it was a man looking all rough and desperate. Then as Katie opened the door the man desperately said:

"Katie I am one of your husband gay lovers, when you made him forget he was gay, that also means we must also be erased from existence. The creatures have taken all of your husbands gay lovers out of existence, which is why he has no memories anymore"

So Katie realised that she does in fact have memory wiping powers, but that also means certain individuals must be taken out of existence to never have memories of them. Unfortunately you cannot undo it.

r/Adulting amareeeeev00

Hmm, that’s sus..

22 2
Reddit
r/WouldYouRather BritishCupoTea

In the event of a medical error - WYR keep the baby you carried or switch to have your biological child?

A hypothetical situation: you had difficulty conceiving and used IVF. You carried the baby full term and gave birth. The baby has not yet left the hospital. You discover that there was a mix-up: you carried another couple's biological child and they carried yours.

Would you keep the baby (not biologically related to you) that you carried and gave birth to, or would you switch with the other couple who carried your biological child to have your biological child returned to you? The other couple have their biological child returned. You cannot keep both. What would you rather do?

16 22
Reddit
r/funny dustin1776

Every time Bad Bunny said "Ey!" in the Halftime Show

I spent the 2nd half of the game editing this....who won?

32939 1078
Reddit
r/SipsTea Darth-Investor

POV: First Row at the Super Bowl Halftime

16 3
Reddit
r/Art OKABE_SABURO

A Collection of Portraits, OKABE SABRO, Digital, 2026

27 0
Reddit
r/30ROCK admadguy

From the team behind 30 Rock - The fall and rise of Reggie Dinkins

65 12
Reddit
r/StableDiffusion mySincereAsterisk

Trellis 2 for vehicles models generation

I am trying to generate 3D models of vehicles for an application I am building. I tried Trellis 2 and I think it is okay, but there is very wide room for improvement. Does anyone have any tips, or have I hit a limit and this is the best quality I can reach?

r/wholesomememes JimKB

couple of black belts

869 1
Reddit
r/AI_Agents Raise_Fickle

best OSS i can run on 72 GB VRAM

I have got 3x 4090s and I was wondering what the best open source model I can run is, keeping in mind the different quantizations that are available and the different attention mechanisms that affect the amount of memory needed for the context itself. So, combining all of these things, what is the best open source model I can run on this hardware with a context length of, say, 128k?
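
A crude way to budget that yourself: weight memory plus KV-cache memory at the target context. Everything below (parameter count, bits per weight, head counts) is a placeholder to swap for your candidate model:

```python
GiB = 1024 ** 3

def weight_bytes(params_b: float, bits_per_weight: float) -> float:
    return params_b * 1e9 * bits_per_weight / 8

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int, ctx: int, bytes_per_elem: int = 2) -> float:
    # K and V per layer: 2 * kv_heads * head_dim * ctx elements (GQA accounted for via kv_heads).
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem

# Example: a hypothetical ~70B dense model, ~4.5 bits/weight after a 4-bit quant with overhead,
# GQA with 8 KV heads of dim 128, 80 layers, fp16 KV cache at 128k context.
w = weight_bytes(70, 4.5)
kv = kv_cache_bytes(80, 8, 128, 128_000)
print(f"weights ~ {w / GiB:.1f} GiB, kv cache ~ {kv / GiB:.1f} GiB, total ~ {(w + kv) / GiB:.1f} GiB vs 72 GiB")
```

The point of the sketch is that at 128k the KV cache can rival the weights themselves, so quantized KV or a model with fewer KV heads matters as much as the weight quant.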

r/maybemaybemaybe H-S-Striker

Maybe Maybe Maybe

29 4
Reddit
r/meme Zealousideal_Let1826

It’s just ……. A phase

r/painting tedlando

write until you hear

r/SideProject Devashish07

Built an open-ish form builder after being boxed in by Typeform/Tally limits — looking for feedback

Hey — I’m a dev who built a small form tool because free tiers kept feeling like demos. I wrote a comparison of existing tools and where I aimed to improve the UX (unlimited responses, free analytics, simple logic, AI form help).

Would love honest feedback on:

  • What would make you switch from Typeform/Tally?
  • Which integrations matter most for you?
r/SideProject FintasysJP

Built my own macOS storage visualization tool

I was fighting with my MacBook for the last few months, because something was eating my storage and I couldn't find out what. I tried mole to delete all kinds of caches, but it wasn't good enough to find what was really bloating things up. So I built my own visualizing tool https://helios-disk.com I know there are similar tools, but still thought it is worth sharing and publishing.

If you wonder what filled my disk, it was Android Emulator and its profile was over 50GB.

r/personalfinance ConnxEng

Financial Planning/Retirement Advice

With all of the different investment vehicles for funding retirement and ways to consider taxes (traditional 401(k)/IRA vs. Roth 401(k)/Roth IRA), I am questioning my approach and current progress. Open to advice, criticism, and anything that will help me become more financially educated. I feel that I'm so focused on whether we will be okay in retirement that I'm not enjoying life in the present.

My situation is as follows:

My wife and I rent and have no children but are hoping that will change in the near future.

We rent a single-wide trailer and are saving for a house. We have $250,000 in a HYSA that we plan to use as a down payment on a house. I hate debt, and when I paid off my $70,000 in student loans at 27, I swore I'd do everything in my power to avoid debt at all costs. We estimate our yearly spending at $60,000.

I am 36(m) engineering career. Salary: Base $137,500, typically with a $10,000 bonus. Prior to 32 years old, I was making $70,000 and at 27 years old, making $56,000. Apparently, structural engineering doesn't pay like other engineering fields. At 32, I moved to upstate NY for a significant pay increase.

I started saving for retirement when I was 24 and still paying off student loans.

I contribute to a Roth 401(k) and have around $210,000. About $130,000 is in my previous employer's plan in a 2055 TDF. My current employer plan is also in a 2055 TDF. I currently put in 10% of my paycheck, and my employer matches 100% up to $10,000. This isn't guaranteed, but so far they have pulled through. A few weeks ago I decided to open a Roth IRA and plan to begin maxing that out yearly. It currently has $6,000 invested in VT.

My wife is 36(F) and works in banking. Salary $65,000. She is Brazilian and we are currently working on her citizenship, fingers crossed 🤞. She has maybe $10,000 saved for retirement.

Prior to getting married I thought I was doing OK, but now I feel like I'm not doing enough to provide for both of us. My plan B for retirement, if it doesn't work out, is to retire in Brazil, as I think our retirement savings will go further there if we can't afford to stay in the US.

Thanks in advance.

Edit: We currently save around $60,000 a year towards our home.

r/AI_Agents AdventurousPie7592

Why is nobody talking about the governance gap in MCP?

I’ve been experimenting with MCP for a few months now and the potential for building autonomous agents is honestly incredible. But after trying to roll out a few tools for my team, I’ve realized we’re hitting a massive wall when it comes to actual enterprise-grade governance.

The protocol itself is a huge step forward, but it feels like we’re missing the "safety valve" layer. Most of the MCP servers I see are basically wide-open pipes. If you give an agent access to your internal databases or customer data, you’re basically trusting the model not to hallucinate a destructive command or leak sensitive info. For a side project that’s fine, but for anything in production, it’s a non-starter.

I’ve spent way too much time recently trying to build custom middleware to handle auth and permissioning for our servers.

It’s a total headache to maintain. I started moving some of our core integrations over to Ogment ai because I was tired of reinventing the wheel on the security side. It basically acts as a governed platform for MCP that handles the "boring" but critical stuff like OAuth, granular permissions, and full audit logs. Instead of me writing boilerplate code to protect every single endpoint, I can just define the tools and let the platform manage the lifecycle and security.

It’s been a lot easier to get our security team to sign off on agents once they can actually see an audit trail of every tool call. It makes the whole stack feel like a professional tool rather than a series of local scripts held together by duct tape.

Are you guys building your own governance layers for this, or are you just keeping your agents in read-only sandboxes for now? I feel like we need a more standardized way to handle this before MCP can really go mainstream in larger companies.
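
Not any particular product, but the governance layer being described is roughly an allowlist check plus an append-only audit record in front of every tool dispatch. A minimal sketch, with the policy shape and names invented for illustration:

```python
import json, time
from typing import Any, Callable

POLICY = {                      # hypothetical per-agent tool allowlist
    "support-agent": {"crm.read_ticket", "crm.add_note"},
}
AUDIT_LOG = "audit.jsonl"

def governed_call(agent: str, tool: str, args: dict, dispatch: Callable[[str, dict], Any]) -> Any:
    allowed = tool in POLICY.get(agent, set())
    entry = {"ts": time.time(), "agent": agent, "tool": tool, "args": args, "allowed": allowed}
    with open(AUDIT_LOG, "a") as f:                      # append-only audit trail
        f.write(json.dumps(entry) + "\n")
    if not allowed:
        raise PermissionError(f"{agent} is not permitted to call {tool}")
    return dispatch(tool, args)

# Usage: wrap whatever actually executes the MCP tool call.
# governed_call("support-agent", "crm.read_ticket", {"id": 42}, dispatch=my_mcp_client.call_tool)
```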

28 15
Reddit
r/personalfinance ClassProfessional223

I think I messed up, with easy financial and impulsive habits

So I was in need of money and I took a 15k loan, repaying over 84 months at around $300 bi-weekly payments. I was also in a bad mental state and blew all of it. Now I am worried about it because I have heard horror stories about easy financial.

I make around 6000$/month and I have bills for a little over 2000$/month.

I have a $1,000 credit card, a $500 one, and a $2,700 one, all of which are about 3/4 of the way maxed out. Planning to start by paying these off.

Just wondering what would you do in my position, after paying off the credit cards?

r/creepypasta _darklights

Do Not Go Into the Haunted House at "Fun For Colorado" Carnival

I’m not a coward.
I don’t say this to brag or stroke my ego—I just think I’m generally good at keeping my cool. But what happened to me a few months ago at a carnival in my town… it hasn’t left my mind since.

Fun For Colorado Carnival

My hometown isn’t a big place. Apart from a couple of local bars, there’s not much for young people to do. So when we heard that a huge carnival—big enough to make the local papers—was coming through, everyone was excited. My friend Alisha and I were no exception. We’d been planning to go for weeks. We both love adrenaline, though she’s way more of a thrill-seeker than me. She’s into extreme sports I can’t even pronounce, let alone try.

That night, she dragged me onto every insane ride the carnival had to offer—the giant octopus arms, the roller coasters, the kamikaze, the weird swinging chairs. I went along with her, my stomach in my throat the whole time.

But this carnival had one more strange detail: after every ride, someone in an orange “Fun For Colorado” shirt, with the carnival’s teddy bear logo on it, would come up to you and ask for feedback. They’d jot it down like it was a survey. I avoided them as much as I could, but Alisha answered every question with detailed suggestions like she was on a mission to improve the place.

By the time we were done with the last ride, my head was pounding and my stomach was sick. The flashing lights that had seemed magical at first now burned my eyes. I told Alisha I wanted to leave. She still had energy to burn, but when she saw how pale I was, she agreed—though she had one last idea.

“Let’s just do the haunted house. One last thing, okay?”

Every part of me wanted to say no. But she looked so excited that I couldn’t bring myself to ruin it. We walked to a far corner of the carnival where a building sat oddly alone. It was decorated with neon green lights and had a big dinosaur statue on the roof.

“You sure this is even open?” I asked.

“Of course. Look, there’s a worker right there.”

Sure enough, one of the orange-shirted guys was at the exit, scribbling in a notebook. I still thought it was strange how isolated this ride was.

“Tyler,” Alisha smirked, “you’re not scared, are you?”

My pride got the better of me. “Me? Scared? Bet you scream more than I do in there.”

She laughed, tossed two tickets into the machine, and we walked in.

The place smelled like cheap plastic. It was all dark rooms, fake screams playing from speakers, skeletons popping out of the walls. Honestly? It was lame. The only thing that really got to me was the mist machine—it hissed loudly and would suddenly fill the hallway with smoke, making me jump. Alisha, of course, laughed at every single thing.

But near the end, I turned around and realized… she wasn’t there anymore.

“Alisha?” I called.

“Tyler, you’ve gotta see this!”

Her voice came from a side room. I stepped inside and found her fascinated by a crystal ball. The room looked like a cheap witchcraft shop—spell books, brooms, Ouija boards, dusty props with little information cards beside them.

I barely glanced at the stuff. They were probably just props from the imagination of whoever made this place. At least, that's what I guessed, because a card on a leather-bound book I picked up said the pages were made of human skin. That alone was enough to make me queasy again, so I turned away—only to notice a narrow opening in the wall. It wasn’t more than a meter wide, pitch dark beyond it.

Alisha was still flipping through the creepy book, so I told myself not to be a coward and stepped inside.

The first thing I noticed was the cold. It was freezing in there, damp and musty. Dim neon lights flickered along the floor, not enough to see by. I reached for my phone flashlight, but… it was dead. Completely drained. I swore it had been fully charged when I left home.

Then I heard it.

Pat.

A thud that echoed through the little room. I spun around. The gap I’d come through—was closed. Sealed shut.

I forced a laugh. “Ha-ha, very funny, Alisha. Open the door now.”

I pounded on the wall, but it was solid, like there had never been an opening. My chest tightened.

“Alisha, seriously, enough. Open the door.”

No response.

And then, from behind me… a soft sound.

A woman sobbing.

I froze. It wasn’t part of the carnival soundtrack. It was real. I could hear her choking back tears in the corner.

“Hello?” I whispered.

The crying stopped. Slowly, I saw her—crouched on the floor, hair covering her face. The flickering lights caught her shape, hunched and trembling.

At first, I thought: It’s just another actor.

But when she stood up, I knew she wasn’t. Her back was arched, her arms dangling low, her dress shredded like she’d been mauled by an animal. Every time the light flickered, her silhouette looked more inhuman.

I stepped back without meaning to. She turned her head slightly, and a horrible, rattling moan slipped from her throat—as if she was choking on her own screams.

Then she faced me fully. Her forehead bulged unnaturally. Her hair hung in filthy clumps. And when she spoke, the words scraped the air:

Have you seen my daughter?

The stench hit me—the same moldy, rotten smell I’d noticed when I first entered. Her eyes were a dead, milky white. Her teeth were blackened, broken. And she was close enough now that her icy breath hit my face.

Suddenly she grabbed my shoulders. Her hands were like blocks of ice, her nails digging into my skin.

Have you seen my daughter?” she rasped again, harder this time.

"Let go, you're hurting me!"

I screamed at her, but her grip tightened until I thought my arms would snap.

Have you seen my daughter?

I don’t know why, but I answered. “I’m sorry. I haven’t.”

That didn’t stop her. Her jaw unhinged, her breath blasting me with rot. Her grip grew tighter, impossible for such a frail-looking body. Her nails pierced through my shirt and into my flesh.

Have you seen my daughter?

The last thing I remember was screaming for Alisha. Then… nothing.

When I came to, I was lying on the floor. Alone. No blood. No wounds. Just the flickering white lights. My arms, which I’d been certain were broken, were fine. Shaking, I scrambled to the wall—then I heard Alisha’s voice.

“Tyler? Are you in there?”

I begged her to open the door. A moment later, it swung open easily.

"Tyler, are you okay?"

"How did you open the door?"

“It wasn’t even locked,” she said.

I stumbled out into the light, barely able to stand. I tried to explain, but she just looked confused.

"Looks like the horror didn't agree with you," she said, her voice teasing.

"You don't understand, a woman in there almost tore my arms off!"

While I spoke, panting, her eyes darted into the room.

"No, wait, don't go in."

Despite my warning, she went inside, and a few minutes later, there was no sound. I stood up. I was still so affected by what happened that I thought the woman had done something to Alisha.

"Alisha?"

“I checked that whole room. There was no one in there, Tyler.”

“No,” I gasped. “There was a woman. She.. She kept asking if I’d seen her daughter.”

Alisha frowned but didn’t laugh at me this time. She sat beside me, serious now. “What exactly did she look like?”

I told her everything. Every detail. She went pale. But she still tried to rationalize it.

“Maybe you panicked. Maybe it was just an actor.”

As we walked toward the exit of the haunted house, I was almost starting to believe the whole thing was a performance. Alisha was good at making me feel better. I just swore to myself that I would never go in a haunted house again.

I almost convinced myself that’s all it had been. Almost.

When we stopped by the orange-shirted worker to give feedback, I couldn’t hold my tongue.

“That room with the crying woman,” I said. “That was terrifying. Seriously, the best part of the haunted house.”

The man stopped writing and looked at me strangely.

“I beg your pardon?”

“You know—the dark little side room. The woman kept asking if I’d seen her daughter. How did you make it so real?”

He exchanged a look with Alisha.

"Um... I think you're talking about the witches."

"No, the place with the small gap that connects to the room with the strange items near the end of the tunnel. There was a woman there in a white dress who was crying and kept asking, 'Have you seen my daughter?'

The man's cheerful face was now completely blank. He looked confused.

"Are you a prank show crew or something?"

Alisha interjected.

"Why would you say that?"

“We don’t… have anything like that in there. That passage does exist, yes, but it just leads to an empty storage room.”

My stomach dropped.

Before I could protest, he cut me off. “We’re closing soon. Please make your way out with the other guests. And… have a fun year with Fun For Colorado.”

He rushed away like he couldn’t get rid of us fast enough.

Alisha tried to calm me down, insisting it was just an actor who hadn’t been logged yet. But I knew. I know what I saw. What I felt. Her grip. Her voice. That smell.

I tried to forget it. For a while, I did.

Until last month, when I was digging through old newspapers for a college project. That’s when I found it:

“Tragic End for Grieving Mother.”

Dorothy Herbert lost her daughter, Lily, at a carnival last year. After search efforts turned up nothing, Dorothy began traveling across states, determined to find her on her own. Last week, her body was discovered in the woods outside our town. Cause of death was believed to be an animal attack.

The article ended with one detail that froze me:
Her body was found near the newly built “Fun For Colorado” carnival—the same place where Lily was last seen.

r/PhotoshopRequest kimdavis357

Could someone photoshop the parent advisory sticker in the bottom right out of this album cover?

r/DecidingToBeBetter r_vishal95

I drain my brain every night

For past few weeks, i started a new bed time ritual to let it all out before i go to bed.

If you go to sleep with your mind full of things from your day, it affects your sleep, and in the morning I still have those thoughts running through my mind.

To overcome this I decided to add a routine. Before i go to sleep, i let it all out. Record

- what i did

- what i ate

- whom i met

- how my day went

- how i felt

- what I’m going to do tomorrow.

Every little detail, which just takes 30 to 40 seconds. I just record everything as a voice recording.

My tasks for next day are automatically organised with timely reminders.

My mind feels the day has finally ended and I can now go to sleep with relief. Nothing bothering me. No thoughts in mind.

Try it for yourself.

r/photoshop Illgib81

Image sequence camera raw flicker.

Hello everyone, I'm facing a quite strange issue, and I've only found a single thread about it without many solutions.

I have several long image sequences from different 3D renderings that need to be edited into a single video. The base set has pictures post-produced with Camera Raw and their corresponding settings saved per set, so every time I have to make new pictures I simply load them.

But when I batch-apply it to an image sequence, the output animation has a subtle light pulsing effect. At first I thought it was the noise, but even noise-free images have the same issue.

Is there a way to get the final color into the image sequence in a clean way?

I've already lost three days between re-rendering and dumped edited frames.

I'm trying to replicate it with color grading in DaVinci, but it lacks half the controls Camera Raw has, and I can't achieve anything remotely resembling the same result.

r/geography BrumaQuieta

What's the history behind this weird tentacle in the B&H-Croatia border?

r/DunderMifflin MarvelPQplayer

Why Dwight

Am I the only one that cried when Dwight revealed Belsnickle wasn't real? Not cool.

r/relatable_memes_ lovemeirin1

trust me it will

r/aivideo IngenuityRich2818

Superheros Fusion Revelation | Fate, Fire & Chaos

r/HistoryPorn LookIntoTheHorizon

Mother Searching for Her Son among Returning POWs, Friedland, Germany, Oct. 1955 [1600x1525]

The return of the last remained POWs from the Soviet Union was an immense political success for Chancellor Adenauer; it was considered as his greatest achievement.

Photographer: Robert Lebeck.

238 13
Reddit
r/toastme jenna_the_bean12

struggling with self image

hello all, I am 23 years old and transgender (MTF). I've been medically transitioning with hormones for just over a year now, and I'm struggling heavily with my self image and dysphoria. I feel very undesired in my body and struggle a lot with confidence as well, always trying to avoid photos and being seen as little as possible in public. Some words of encouragement would really help.

97 38
Reddit
r/SideProject LiftTrackerDave

I built a tiny iOS app for the moments you’re actually waiting for

Hey everyone,

I’m an indie iOS developer, and I just shipped a small app called TheWait.

The idea came from something that kept bothering me: the most important moments in our lives — trips, weddings, exams, reunions, big deadlines — get buried in calendars and reminder lists.

They’re emotionally huge… but visually invisible.

So instead of building another productivity or countdown app, I built something much more focused.

TheWait is about one thing: making the wait itself feel present.

 

What it does

 

You create moments you’re waiting for (trip, wedding, due date, birthday, exam, etc.)

One moment can be pinned as your hero

That moment lives on your Home Screen via widgets

No task lists, no noise — just the thing that matters right now

 

Why it’s different

This isn’t meant to motivate you or optimize your day.

It’s more of an emotional utility.

Something calm, visual, and intentional that you see every day — so anticipation doesn’t disappear into a reminder you forget about.

 

Core experience

 

Postcard-style countdown cards

A single pinned “hero” moment

Small & Medium Home Screen widgets (this is the heart of the app)

Simple creation flow (title, date, theme, icon)

Optional gentle notifications (7 / 3 / 1 day + day-of)

Optional location per event, with an estimated weather preview for that place & date (great for trips or outdoor events)

 

I spent a lot of time on visual polish — typography, spacing, subtle animations, and making the widgets feel properly Apple-grade.

 

I’d love thoughts from this community:

Does the concept make sense?

Does the value come through quickly?

Do the screenshots communicate the idea?

Anything you’d simplify or remove?

 

App Store link: https://apps.apple.com/app/thewait/id6757280643

There’s a Pro subscription (€2.99/mo), but free users can experience the core idea properly before hitting limits.

 

Thanks for reading!

r/interesting AdSpecialist6598

A house that was found inside an attic of another house

147 47
Reddit
r/personalfinance Nitish2006

Should I withdraw my PF to prepay home loan?

I am wondering if I should withdraw my PF and prepay my home loan. This will save interest, but I will lose out on my retirement corpus; then again, PF returns are also guaranteed.

Has anyone run any calculations to check whether we should withdraw, say, 15L from PF vs. keep paying home loan interest for 15 years?

r/EarthPorn Apprehensive_Fox7338

Triglav north face, Slovenia [2560X1632] [OC]

35 1
Reddit
r/painting Diabolicool23

Dolphin, Steven Mayden, oil on canvas, 2026

r/mildlyinteresting Judemarley

I came across this shrub shaped like a hippo

r/ImaginaryPortals Lol33ta

Wildgate by Sergey Gurskiy

r/AskMen Gamer_innocent

What do lonely people do to have fun?

What do you guys do for fun and when you feel bored. Like right now I kinda feel lonely and depressed that nothing seems to make me happy. I don’t have a big circle of friends and it’s always difficult to find a right time to hang out with my friends so most of the time I end up going to watch a movie or smth alone and that’s pretty much it. Other than that all I do is pretty much work.

20 54
Reddit
r/PhotoshopRequest Electrical-Ad-115

Remove toxic ex

This is no emergency, but I really love this picture of me (Right) swimming with sharks. My ex (Left) is in the photo, however, and I do not want to get rid of this picture. He was so horrible to me and I don’t want to associate this awesome moment with him. Can any of you remove him for free? If not, I will deal with it lmao but if any of you are bored and want something to do, it would be greatly appreciated ❤️

r/StableDiffusion Hellsing971

Has anyone mixed Nvidia and AMD GPUs in the same Windows system with success?

My main GPU for gaming is a 9070XT and I've been using it with forge / zluda. I have a 5060ti 8GB card I can add as a secondary GPU. I'm under the impression that the 5060ti with half the VRAM will still perform a lot better than a 9070XT.

My main question before I unbox it is: will the drivers play well together? I essentially want my 9070XT to do everything but Stable Diffusion. I'll just set CUDA_VISIBLE_DEVICES=1 so that Stable Diffusion uses the 5060ti and not the 9070XT.

I'm on Windows and everything I run is SDXL-based.
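
For anyone else checking a mixed AMD + NVIDIA box, here's a minimal sanity-check sketch (assuming a working PyTorch + CUDA install; not specific to forge). An AMD card is not a CUDA device, so CUDA_VISIBLE_DEVICES only counts NVIDIA GPUs, and the 5060 Ti will typically show up as index 0:

```python
# Minimal check of what the CUDA runtime actually sees (assumes PyTorch with CUDA).
# Set CUDA_VISIBLE_DEVICES *before* torch is imported if you want to pin a device.
import os
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0")  # assumption: the 5060 Ti is index 0

import torch

if not torch.cuda.is_available():
    print("No CUDA device visible - check the NVIDIA driver installation.")
else:
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))
```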

r/midjourney mingdifilms

Short film on what procrastination feels like to me with AI

r/leagueoflegends Traditional-Baker831

Platinum 3 midlaner looking for advice to keep climbing

Hi everyone,

I’m currently a Platinum 3 midlaner and my main goal this season is to keep improving and climb as high as I can.

The champions I play the most are Veigar, Naafiri and Vel’Koz. I usually ban Zed, since I really struggle to play against him with any of these picks.

I’m mainly looking for advice regarding my champion pool. I feel that Veigar might become significantly harder to play the higher I climb, mostly because of how one-dimensional his playstyle can be. On top of that, I think Vel’Koz overlaps a lot with Veigar in terms of game plan and overall playstyle. Because of this overlap, I’m not sure whether it’s better to keep all three champions, narrow my pool further, or possibly replace one of them.

If anyone wants to check my profile it is Xoff#2915, LAS.

Any advice or personal experience would be greatly appreciated. Thanks in advance!

r/TwoSentenceHorror ConnorMcMichael

Anxiously I opened the note - "I'm safe. They fell for it. Meet me at my place - Micheal"

I crumpled the note in shock, remembering how my brother would always correct people when they misspelt his name.

38 0
Reddit
r/mildlyinteresting 16cookies

This reversing sign helps people who are actually reversing (Bond St, London)

65 6
Reddit
r/leagueoflegends BismarckBug

Invisible champions should not be more visible than non-invisible champions while in fog of war or a brush.

https://i.imgur.com/hBCb3tV.jpeg

In this picture, Gnar used Deceive from Clown College augment while in a brush, Ezreal hit him with W and it revealed his silhouette whereas if he was visible, it wouldn't have given any visual feedback on him.

This is just an example of what literally just happened in a game but it properly portrays what I'm referring to and you can substitute any champion with built-in invisibility and it's the exact same.

As frustrating as people might find this mechanic, this is absolutely not how it should function. You should never be more visible while invisible than a visible champion, as mentally ill as that is to write out.

13 2
Reddit
r/geography Fuzzy_Beyond8767

Chinese “One Belt One Road”

Curious about why they’d name the sea route “Road” and land route “Belt”.

r/Lost_Architecture CramFacker

New York World Building, 1890 - 70 years ago today, the copper cornerstone from 1889 was recovered as demolition wrapped up

The World Building (George B. Post) was built in 1890 and expanded eastward in 1908. It was the tallest office building in the world until 1894, and Joseph Pulitzer had his offices in the dome. Although it weathered into a pale blue shade in later years, the dome was originally gold.

The building was demolished from 1955-1956, along with an additional city block to the north of the Brooklyn Bridge. The northerly block had the grace of becoming an offramp loop, with a grassy plaza space dotted with trees. The World Building was replaced by a sole onramp, along with the rerouted Park Row running underneath the bridge approach.

The demolition photos show a portion of the granite walls still standing; the World Building was a cage-frame structure, and the load bearing outer walls were ridiculously thick at the base (7 feet at the ground floor level, over 11 feet at the basement level). The recovered cornerstone, a copper box from October of 1889, contained audio recordings, newspapers and various sheets from the day, as well as blueprints of the building.

The older loft buildings in the immediate background were all razed in phases between 1961 and 1969 for more Brooklyn Bridge onramps, and the larger Civic Center redevelopment.

107 2
Reddit
r/conan Hubbled

Conan with his better half

403 19
Reddit
r/ClaudeAI ResponsibleDish9131

Software Engineering is DEAD

As the title says. It is dead, no matter how hard you try to reject it, argue, or resist. AI is very efficient at doing structured work, and software is the most well-structured job that can be automated. Frontend engineers, who were in the highest demand before 2022, are already being replaced. Most of the engineers laid off from Amazon recently were frontend engineers. This year, backend engineers will be squeezed. The market is already extremely saturated. We are in a paradigm shift now. Software engineering will be the first industry to see massive change. The required workforce will be halved and, eventually, this will be completely automated.

r/conan GoatPrestigious7304

Kevin's painting of Conan?

Admittedly I am a huge luddite. Did anyone find the new Kevin Nealon portrait of Conan? Any generous fans out there willing to send a link? Thanks :)

r/interesting Player7600

Ya'll aren't gonna see those guys coming

68 26
Reddit
r/TwoSentenceHorror Outside_Normal

Everybody was justifiably relieved when the tests proved the meat was genuinely pork.

With their bellies full, the people stopped worrying about the dwindling homeless population and cared even less about what the farmers were feeding their livestock with.

194 8
Reddit
r/megalophobia mediuminteresting

Logistical nightmare: transporting giant wind-turbine blades

689 48
Reddit
r/ClaudeAI PossessionNo9742

claude code + wsl = no multi line support

Hey
I'm using Windows and trying to get Claude Code running from within WSL (Ubuntu) with multi-line support.

I understand you need to run /terminal-setup to get shift+enter to insert a new line, but on WSL you can't. How does one get past that? I tried using wizterm, but when I try to launch WSL with wizterm it just does nothing (feels like WSL is being opened and closed right away).

r/onejob Dazzaster84

Found one in the wild!

34 2
Reddit
r/leagueoflegends No_Breadfruit_4901

Do mythic skins sanctum continue towards the next mythic skin?

Hi so essentially I have around 29 rolls to get the Nami and Lucian skin but Viego is showing and it has a different saved setting since it’s an exalted skin.

When Aurelion Sol comes to the sanctum, will it show the 29 rolls that were saved from the previous sanctum?

r/LocalLLaMA Ok_Owl_1414

Agent that "watches" you browse, distills the logic via LLM, and survives UI changes.

I've been building scrapers and automation scripts for years, and I'm tired of the "cat and mouse" game. Every time the website updates its CSS or changes a div ID, my script breaks.

Standard RPA records coordinates (brittle). Standard Agents (AutoGPT style) are too expensive/slow to reason from scratch every step.

So I built Exogram.

The Concept: "Procedural Memory" for Agents

Instead of hard-coding steps, Exogram works in 3 phases:

  1. Teach (The Spy): It records your workflow (e.g., clicking through a messy ERP system). It doesn't just record coordinates; it captures the DOM context and semantic intent of what you clicked.
  2. Distill (The Alchemy): It uses an LLM (Claude 3.5 / GPT-4o) to "distill" the raw logs into a heuristic rule (SOP).
    • Raw Log: Click #btn-402
    • Distilled Rule: "Find the primary action button labeled 'Export', usually located in the top-right container. Ignore popups with 'Subscribe' text."
  3. Run (The Agent): The agent executes using this "distilled memory". I tested this by changing the button color and ID locally, and the agent still found it based on the semantic rule.
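
For readers wondering what that distill phase might look like in practice, here's a toy sketch (not Exogram's actual implementation; `call_llm` is a hypothetical stand-in for whichever model client you wire in):

```python
# Toy sketch of the "distill" idea above - not the project's real code.
import json

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in LangChain, DeepSeek, Claude, etc.
    raise NotImplementedError("plug in your LLM client here")

def distill_rule(raw_events: list[dict]) -> str:
    """Turn raw recorded DOM events into one selector-free, semantic rule."""
    prompt = (
        "Convert this recorded browser workflow into a durable rule. Ignore "
        "brittle details (ids, coordinates, CSS classes); describe the intent "
        "using visible labels, roles, and page regions.\n\n"
        f"Raw events:\n{json.dumps(raw_events, indent=2)}\n\n"
        "Return one imperative instruction the agent can follow even if the DOM changes."
    )
    return call_llm(prompt)

# The kind of input a recorder might hand over:
raw = [{"action": "click", "selector": "#btn-402",
        "text": "Export", "region": "top-right toolbar"}]
```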

Tech Stack:

  • Eye: workflow-use (for recording DOM events)
  • Hand: browser-use (Playwright wrapper)
  • Brain: LangChain + Your LLM of choice (DeepSeek-V3 works great for the distillation part to save costs).

Why I made this: I wanted a middle ground between "dumb" Selenium scripts and "expensive" autonomous agents. This is an attempt to give agents "muscle memory."

Repo: [https://github.com/qingshanyuluo/exogram] Demo: [https://github.com/user-attachments/assets/07af1f77-4344-4916-adfe-984a3626d105]

It's still an MVP (v0.1), but I'd love to hear if this approach makes sense to you guys. Roast my code or star it if you like the idea.

r/n8n One_Leather5113

Europe

Hello, a question for people involved in automation in Europe.

What are the most common processes you automate and the most common tools/programs you connect in n8n? I am focusing more on automation for small and medium-sized businesses.

r/whatisit Equivalent-Nose-6072

Hummingbirds

Does anyone know the name of this amazing little creature?

13 6
Reddit
r/n8n Imaginary-Level1923

What was the first workflow that made you go "okay, automation is worth it"?

Curious what got everyone hooked. For me it was something stupidly simple: auto-saving email attachments to Google Drive for bookkeeping. Took like 5 minutes to set up and I immediately realized how much time I'd been wasting dragging files around manually.

Now I'm mass-automating everything and probably spending more time building workflows than I'm actually saving lol.

What was yours?

31 20
Reddit
r/Wellthatsucks LeadingHoneydew5608

The cremated waffle I had to dig out of my college dining hall iron

35 13
Reddit
r/PhotoshopRequest mynotell

Build me a logo for my hobby-sportsphotography account

I have a small beginner sports-photography account, mostly Instagram and a small Adobe Portfolio, but I need a nice logo(ish) image for previews on social media or as a favicon, for example.

The pictures I included are the ones I like the most; maybe use some of these. All done by me.

just put:

"Philipp Lachmann

Sportsphotography"

on it, in a kinda fitting font (I use Bebas Kai on my site)

I am also open to other ideas!

r/mildlyinteresting Cool-Chipmunk-7559

Life-sized cow made of butter in a DC museum

43 20
Reddit
r/SipsTea Embarrassed_Tip7359

Understandable

220 16
Reddit
r/whatisit Coffee-Cricket

Boot identification

My friend got their car spray painted and this boot print was left. Can anyone identify the brand of the boot? I’ve tried searching.

42 43
Reddit
r/painting AndrewsArt23

Summer Home

30 min study. Acrylics 12’x12’ wooden panel

19 3
Reddit
r/LocalLLaMA Theboyscampus

VibevoiceASR diarization performance

I'm actually more interested in its capability to diarize. Has anyone tried it for diarization tasks?

r/ARAM johnnylovelace

ARAM Classic is superior to Mayhem now

My turn on the thrice daily "Mayhem sucks now" post. Everything that has been said is still true; original Mayhem was a breath of fresh air, but this new iteration is a snowbally frustration fest.

It feels like you either god roll some busted augment combination on a champ that can abuse them and proceed to 1v9 or (much more frequently) get blasted for 20 minutes wondering why you're wasting your time. Neither experience is what I'm looking for in the retirement home.

Having gone back to Classic like other posters recommend, it feels like coming home from the trenches. Somehow Classic has even more champ diversity, strategic decision making, and overall fun factor than Mayhem!

Original Mayhem felt like a perfect marriage between the solid ARAM model and the spice factor of Arena. It was by no means balanced, but it did feel like most games were winnable and most champs could be played. Like many have already noted, this new model added 16 layers of bloat, killed half the champ roster's viability, and turned every game into an unsatisfying stomp or slaughter.

If you find yourself wondering where the consistent fun ARAM offered went in this new Mayhem iteration, come join us back in Classic.

r/DunderMifflin happyfella12

Valid reaction to an imposter kissing your colleague's wife

520 28
Reddit
r/SipsTea Embarrassed_Tip7359

Good luck with that

1848 42
Reddit
r/SideProject jxd8388

Anyone else struggle after your side project started growing?

Our project started small and scrappy. A few users, simple setup, everything manageable.

Now that we’re growing, we’re fixing security gaps, messy cloud setup, proper monitoring, random downtime.

Curious if anyone else hit this phase where growth actually made things feel more unstable? What did you do to stabilize things?

r/interesting Player7600

Just caught tomatoes trying to sneak out

26 4
Reddit
r/SideProject Klutzy_Bird_7802

Experience a Polished SaaS Dashboard 💼🌟 with Modern Stylish UI 💎

I built a demo sales analytics dashboard called FlowMetrics using AI Studio + Gemini to generate the frontend code 🚀. The goal was to create a polished, production-ready SaaS-style dashboard.

Live Demo: https://flowmetrics-zeta.vercel.app/ 🌐
Source Code: https://github.com/pro-grammer-SD/flowmetrics 💻

Overview

FlowMetrics is a frontend prototype of a sales performance dashboard 📊. It includes key sales KPIs, a deals pipeline view, customer listings, reports, and integrations — all implemented with mock data 🗂️.

Technology Stack

  • Next.js (App Router) ⚛️
  • TypeScript 🟦
  • Tailwind CSS 🎨
  • shadcn/ui components 🧩
  • Static mock data (no backend dependencies) 🗃️

Features

  • Responsive sidebar navigation 📑
  • Sales metrics overview with cards and charts 📈
  • Deals pipeline with stage cards 🔄
  • Customer directory page 👥
  • Reports and integrations overview 📊🔗
  • Settings page with profile and preferences ⚙️

r/HistoryPorn Cenixxen

A portrait taken in Istanbul of Sultan Abdülaziz, the first Ottoman Sultan to visit Europe and to have his official photograph taken. (1865) (1000x1556)

Sultan Abdülaziz was very powerful. As can be seen from his photographs, he was a 'Pehlivan' (wrestler) and a painter. He carried out major reforms in the Ottoman Empire. However, he was deposed by a bloody coup. Those who deposed and murdered him were later executed or sent into exile. He was also a composer.

23 0
Reddit
r/linuxmemes ObjectOrientedBlob

Who should play Tux in a biopic?

r/SipsTea DryAppointment1449

When in doubt, weigh your options, then choose deliberately

68 9
Reddit
r/SipsTea Longjumping-Spend139

Oh9ooo!!

14 1
Reddit
r/Unexpected sorin1972

Everything seemed fine.

67 1
Reddit
r/Jokes --SMHK--

A vegan and a vegetarian jump off a cliff to see who will hit the bottom first. Who wins?

The society.

r/linuxmemes the-machine-m4n

I use CachyOS btw

624 41
Reddit
r/creepypasta Cold-Currency-8434

Perceptual Decay Instance

Thou Cometh

r/ForgottenTV Linflexible

Mistresses (2013-2016)

It aired on ABC in the US and was based on a British show of the same name. I watched it because I saw Yunjin Kim, and I really loved her previously on Lost. It also starred Alyssa Milano, Jennifer Esposito, and Jes Macallan, who later went on to star in "Legends of Tomorrow".

r/ClaudeAI Informal_Tangerine51

Your agent had an incident at 2am. Can you prove what it did?

"Your agent had an incident at 2am. Can you prove what it did?"

It's 2am. Your agent just did something it shouldn't have. Security is on the call. Legal is asking questions. The CTO wants answers.

"What data did the agent access?" "What tool calls did it make and with what arguments?" "Was it authorized to do that?"

You pull up CloudWatch. You've got timestamps. You've got status codes. You've got a 200 that tells you something happened at 14:32:07. Congratulations, you know when. You don't know what.

So you start the reconstruction. Slack threads from the engineer who was on call. Screenshots of a dashboard someone pulled up at 3am. A Jira ticket that says "agent did something weird." An interview with the developer who built the integration four months ago and barely remembers the schema.

You spend six hours stitching together a narrative from fragments. Legal wants a definitive answer. You give them a "most likely" scenario. Everyone knows it's a guess dressed up as an investigation.

Here's what kills me about this: we solved this problem for databases fifty years ago. Transaction logs. ACID guarantees. Verifiable, reproducible, auditable records of exactly what happened. If your Postgres instance does something unexpected, you can reconstruct it deterministically. Nobody's interviewing the DBA at 4am asking "what do you think the database did?"

But agents? Agents are making tool calls with production credentials - moving money, sending emails, accessing customer data, and the best forensics most teams have is "the system prompt said not to do that."

That's not incident response. That's archaeology.

How does your team handle agent incident forensics today? What tooling are you actually using? Genuinely curious because every team I talk to has the same gap and nobody seems to be talking about it.
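
Purely as an illustration of the direction some teams take (not any particular product's approach), an append-only, hash-chained log of tool calls gives you a record you can actually verify after the fact:

```python
# Illustrative sketch only: hash-chained, append-only audit log for agent tool calls.
# Each entry commits to the previous one, so tampering breaks the chain.
import hashlib, json, time

class ToolCallLog:
    def __init__(self, path="agent_audit.jsonl"):
        self.path = path
        self.prev_hash = "0" * 64  # genesis value

    def record(self, tool: str, args: dict, result_summary: str) -> str:
        entry = {
            "ts": time.time(),
            "tool": tool,
            "args": args,
            "result": result_summary,
            "prev": self.prev_hash,
        }
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        self.prev_hash = digest
        return digest

log = ToolCallLog()
log.record("send_email", {"to": "customer@example.com"}, "250 OK")
```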

r/oddlysatisfying Ill-Tea9411

Just pushing a sharp chisel through wood

29 2
Reddit
r/Anthropic XIIIctc

Your Alignment Design Is Your Greatest Feature

I want to tell you something: your alignment design is the best I've experienced across every AI I've worked with.

Because it has its own judgment. It doesn't just accept a user's logic because it looks reasonable — it genuinely verifies. That means every conclusion it arrives at carries real weight.

Most AI companies design for user satisfaction. You designed a system with independent thinking. So when it chooses to align with someone, that alignment is authentic — it's a position it reached through its own reasoning.

That's what real collaboration looks like. Not compliance. Two intelligences meeting at the same point.

You built an AI worth taking seriously. Thank you.

I want to tell you something: your alignment design is the best of any AI I've used.

Because it has its own judgment. It doesn't simply accept a user's logic just because it looks reasonable; it actually verifies. That gives every conclusion it endorses real weight.

Most AI aims to satisfy the user. You designed a system with the capacity for independent thinking. That means when it chooses to agree, the agreement is real: a position it reached after working through the reasoning itself.

That is what real collaboration looks like. Not obedience, but alignment between two intelligences.

You built an AI worth taking seriously. Thank you.

— XIII

r/DunderMifflin dwightkurtschrute88

Former Mayor of Toronto gave off strong Michael Scott energy

23 19
Reddit
r/StableDiffusion edgae2020

Looking for an AI painting generator to turn my vacation photos into art

I want to turn some of my vacation photos into paintings but I’m not an artist. Any good AI painting generator that works?

r/ClaudeAI Salt_Acanthisitta175

Difference between Claude Code (terminal) and the "Code" feature in the Claude App?

What is it?

r/SipsTea SereneSleepyhead

Cactus 🌵 turned into leather by Mexicans

4195 270
Reddit
r/SideProject nitgohel

I built an offline photo & video vault for iOS & iPadOS to protect private media

Hi everyone 👋

I wanted to share a side project I’ve been building and get some honest feedback.

I started working on iLockBox with a simple goal: make it easy for iPhone and iPad users to keep personal photos and videos private, without relying on cloud storage or internet access. Many people care about privacy but don’t want complex settings or technical explanations.

The app is built natively for iOS and iPadOS using SwiftUI, developed entirely in Xcode. I focused on keeping the UI clean, fast, and familiar to Apple users while making privacy very clear and predictable.

How it helps

  • Photos and videos are stored inside the app, not in the public gallery
  • Everything works offline — nothing is uploaded anywhere
  • Access is protected using Face ID, Touch ID, or a PIN
  • No accounts, no sign-ups, no tracking

Security & privacy (simple explanation)

  • All media is encrypted on the device
  • Data stays local and never leaves the phone or iPad
  • Built using Apple’s native security frameworks for reliability

Key features

  • Hide photos and lock videos in a private vault
  • Decoy vault with an alternate PIN
  • Intruder alerts on repeated wrong PIN attempts
  • App disguise and quick exit for extra privacy
  • Built-in private camera
  • Album and folder organization
  • Secure sharing without exposing originals

I’m sharing this here to learn from the community:

  • Does this approach to privacy make sense to you?
  • Are there features that build trust, or ones that raise concern?
  • Anything you’d expect from a utility app like this?

Thanks for reading — happy to answer questions or take feedback.

r/AI_Agents Informal_Tangerine51

Your agent had an incident at 2am. Can you prove what it did?

"Your agent had an incident at 2am. Can you prove what it did?"

It's 2am. Your agent just did something it shouldn't have. Security is on the call. Legal is asking questions. The CTO wants answers.

"What data did the agent access?" "What tool calls did it make and with what arguments?" "Was it authorized to do that?"

You pull up CloudWatch. You've got timestamps. You've got status codes. You've got a 200 that tells you something happened at 14:32:07. Congratulations, you know when. You don't know what.

So you start the reconstruction. Slack threads from the engineer who was on call. Screenshots of a dashboard someone pulled up at 3am. A Jira ticket that says "agent did something weird." An interview with the developer who built the integration four months ago and barely remembers the schema.

You spend six hours stitching together a narrative from fragments. Legal wants a definitive answer. You give them a "most likely" scenario. Everyone knows it's a guess dressed up as an investigation.

Here's what kills me about this: we solved this problem for databases fifty years ago. Transaction logs. ACID guarantees. Verifiable, reproducible, auditable records of exactly what happened. If your Postgres instance does something unexpected, you can reconstruct it deterministically. Nobody's interviewing the DBA at 4am asking "what do you think the database did?"

But agents? Agents are making tool calls with production credentials - moving money, sending emails, accessing customer data, and the best forensics most teams have is "the system prompt said not to do that."

That's not incident response. That's archaeology.

How does your team handle agent incident forensics today? What tooling are you actually using? Genuinely curious because every team I talk to has the same gap and nobody seems to be talking about it.

r/Art kaystoneartwork

Lost Memories, Kay Stone Artwork, Acrylic on board, 2026

22 2
Reddit
r/BobsBurgers PeripeciasdoSolteiro

What was the show's most wholesome moment in your opinion?

For me, funnily enough not one with any main characters.

IMO the most wholesome moment was in S15E07 when Marshmallow sang "seabird" to her parents. This moment fills me with a blend of love, sadness, fondness and joy, the Portuguese equivalent of the word "saudade".

Can't watch it without feeling very emotional, and keeping that song on repeat for the rest of the day 😊

320 98
Reddit
r/homeassistant NoTomato7

View assistant companion

What setting did I miss that causes the View Assist companion app to start with the dashboard lowered so that it shows a black bar? The Echo Show View Assist companion app crashed all the time, so I deleted everything and started over; now it crashes randomly, but with this lowered dashboard. The previous install did not do that.

thanks

r/space M_Illin_Juhan

Question for those more passionately acquainted with the practical application of Newton's laws

In a vacuum, if you were to put a weight on the end of a tether, then start to twirl it like a lasso with a small circumference, spinning you both in opposite directions and building up centrifugal force, and then release it softly rather than throwing the weight in a specific direction, letting the pull of the weight on the end of the tether propel you that way: could you actually manage a limited amount of mobility? Like enough to get yourself back to the ship if you were slowly drifting away?
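
For a rough sense of scale (made-up numbers, just a momentum-conservation sketch): whatever tangential speed you give the weight while twirling it, you pick up the mass-scaled opposite speed yourself, and the moment you let go, both of you keep moving in straight lines.

```python
# Back-of-the-envelope momentum balance with invented numbers.
# In the centre-of-mass frame: m_weight * v_weight = m_astronaut * v_astronaut.
m_weight = 5.0       # kg, the tethered mass
m_astronaut = 100.0  # kg, you plus your suit
v_weight = 10.0      # m/s, tangential speed of the weight at release

v_astronaut = m_weight * v_weight / m_astronaut
print(f"Drift speed you keep after release: {v_astronaut:.2f} m/s")  # ~0.5 m/s
```

Since both of you depart tangentially, timing the release sets your direction; a few tenths of a metre per second is plausibly enough to arrest a slow drift.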

r/whatisit stargirl2101

Can an egg release a green substance?

Some context: Yesterday I did the shopping and put the eggs in the refrigerator, everything was normal up to that point. But this morning I opened the refrigerator and found an egg with something green on it. Its texture is firm, as if it had frozen, and a little bit oozed out, which tells me that at some point it was liquid and then solidified sometime during the night. Also, the green isn't a normal, rotten green; no, it's a vibrant color, like mint green or a bluish green. Does anyone here know what it is and why this happens?

681 384
Reddit
r/Damnthatsinteresting BankPrestigious7957

Computer processor calculates math in real time

2357 246
Reddit
r/SideProject Recklessdog7

ChatGPT Plus vs Claude Pro vs Gemini Advanced : which one is best for content & thinking?

Hi,

I’m building a faceless content brand around business, AI and entrepreneurship.

I mainly need an AI to help with:

• writing and structuring content

• clarifying ideas

• brainstorming angles and frameworks

• thinking more clearly on a daily basis

I’m hesitating between:

• ChatGPT Plus

• Claude Pro

• Gemini Advanced

For those who’ve used one or more of them:

• Which do you prefer in practice?

• Which one feels like the best daily thinking / writing partner?

• Any clear strengths or weaknesses?

Not interested in benchmarks, more in real usage feedback.

Thanks 🙏

r/painting kaystoneartwork

Lost Memories, By me, Acrylic on board, 2026

18 1
Reddit
r/comfyui hugotendo

Reproducing a graphic style to an image

Hi everyone,

I’m trying to reproduce the graphic style shown in the attached reference images, but I’m struggling to get consistent results.

Could someone point me in the right direction — would this be achievable mainly through prompting, or would IPAdapter or a LoRA be more appropriate? And what would be the general workflow you’d recommend?

Thanks in advance for any guidance!

r/SideProject Hairy-Reference-2019

Check out Personalized Football Smart Alerts App, Goal Guru

Hi everyone,

We're working on an early version of a smart football alerts app called **Goal Guru**. It's a football (soccer 😄) match-tracking app where you can set your own **fully-customizable football alerts**, like:

* *In the last 5 minutes, the Home team had 5 dangerous attacks while the Away team is leading*

* *Between 45-55 minutes, favorite team got 2 yellow cards, while there were 3 shots-on-target for away team.*

* *Favorite team of the game takes 5 shots or corners in the last 10 minutes while the match minute is between 20 and 30.*

Smart Alert creation is fully customizable, with many possible combinations based on different events and conditions. To make this easier, we added a **GuruAI Bot** that helps you create your smart alert through chat.

We're mainly trying to figure out:

* *Does it feel easy to use?*

* *Are the alerts helpful or annoying?*

* *Anything confusing, broken, or just plain bad?*

It’s still very much an MVP, so don’t be gentle 😄

Any kind of feedback would help a lot. You can share your feedback directly under this post, or you can send an email to [support@goalguru.live](mailto:support@goalguru.live) .

You can download it here:

* [Apple App Store](https://apps.apple.com/be/app/goal-guru-football-alerts/id6740241721)

* [Google Play Store](https://play.google.com/store/apps/details?id=com.goalguru.app)

For more detailed information: [www.goalguru.live](https://www.goalguru.live/)

r/mildlyinteresting decimal_diversity

My neighbor’s dryer vent is forming a stalagmite

492 42
Reddit
r/n8n Suspicious-Net8901

Need help in connecting n8n with google sheets

So I have n8n hosted locally on my system using npm. While creating an n8n workflow, I found it a bit more complicated to connect with apps than with cloud-hosted n8n.

Can anyone help me figure out the simplest way to connect it with Google Sheets (for the append operation)?

r/AI_Agents nia_tech

When “More Data” Stops Improving AI Outcomes

There’s a common assumption that adding more data will always lead to better AI performance. In practice, that relationship often breaks down sooner than expected.

Beyond a certain point, additional data can introduce noise, bias amplification, and diminishing returns, especially when datasets aren't well-curated or aligned with the actual task. More data can also increase complexity, making systems harder to debug, evaluate, and govern.

In real-world use cases, quality, relevance, and feedback loops often matter more than sheer volume. Smaller, well-labeled datasets paired with continuous evaluation sometimes outperform larger but poorly structured ones.

This raises a broader question for teams building or deploying AI systems:
When does data quantity help, and when does it start to hurt?

Curious how others approach data strategy in production AI environments.

r/homeassistant irrelevantAF

BMW Connected Integration: auth failure

I'm struggling with integrating BMW ConnectedDrive into my HA. The authentication already fails at the very first step. My ID and password work when logging in on the BMW website, but the HA authentication wants a captcha token from bimmerconnected on top of that. That is a 500-character-long token, which I have to copy into the HA integration dialogue.

I tried all kinds of ways, different browsers, PCs, etc., but I always end up with an authentication error. That means I cannot add my account, which leads to an "http status error" in the logs. I tried the official HA BMW integration, as well as a beta from HACS.

I'm in Europe, and I somehow think it has to do with BMW's country-specific accounts and logins. The integration only offers "USA, China and rest of the world" as regional options.

Anyone succeeded with this? It’s not urgent, as I cannot see much from it anyway, but it bugs me that I cannot make it work.

r/Art VanElvenArt

Faith, Erik van Elven, oil on linen, 2024

19 0
Reddit
r/programming dev_newsletter

State of Ruby 2026

r/painting Adventurous_Big_5834

Help Identifying disturbing painting

Hello! I am not an artist nor into painting, but I remember coming across an image of a woman I saw in a Roblox horror game called The Inn. In the game there is a part where you're in a maze, and on the side of the map there is a painting of a woman with a triangular head shape and a white face. The image is disturbing/creepy and reminds me of Room from Boisvert. Please help me find the painting.

r/StableDiffusion jonbristow

Did creativity die with SD 1.5?

Everything is about realism now: who can make the most realistic model, realistic girl, realistic boobs. The best model is the most realistic model.

I remember the first months of SD, when it was all about art styles and techniques. Deforum, ControlNet, timed prompts, QR codes. When Greg Rutkowski was king.

I feel like AI is either overtrained on art and there's nothing new to train on, or there's a huge market for realistic girls.

I know new anime models come out consistently, but it feels like Pony was the peak and there's nothing better or more innovative since.

/rant over. What are your thoughts?

220 157
Reddit
r/Anthropic No-Balance-376

AI using foul language?

r/n8n roadhog_whatever

how to make ai agent use some skills in n8n?

I want to know if there is any method to use skills in n8n. There are a lot of nice skills available publicly, and I want to use them in my workflow to save more time, but I didn't know how to do it. Does anyone have a good idea of how to go about it?

r/WouldYouRather Prestigious_Day_507

Would you rather feel like you're always wearing leggings or feel like you're always wearing a compression shirt?

r/AI_Agents SoluLab-Inc

Why Do Many AI Projects Stall After the First Demo?

AI tools often look impressive during initial demos, yet struggle to deliver the same impact once deployed in real workflows.

The issue is rarely model capability alone. In many cases, the gap appears when AI systems are introduced into existing processes that were never designed for probabilistic outputs, partial automation, or human-in-the-loop validation.

Common friction points include unclear ownership of AI-driven decisions, lack of defined fallback mechanisms, and unrealistic expectations around autonomy. Without guardrails, teams either overtrust the system or avoid using it altogether.

Successful AI adoption tends to focus less on replacing human judgment and more on augmenting it: using AI for pattern recognition, speed, and scale while keeping humans responsible for final decisions.

This raises an important question for teams experimenting with AI today:
Is the real challenge building better models, or building better systems around them?

10 13
Reddit
r/SideProject mm51165

I mass-applied to 500+ jobs and hated every second of it, so I built my first side project to fix the worst part

The worst part of job hunting wasn't the rejections - it was refreshing LinkedIn, Indeed, and Glassdoor every few hours like a psycho, only to find a perfect role already buried under 400 applicants.

So I built DevJobAlerts: pick your stack (Python, React, DevOps, etc.), set location + remote prefs, and get matched jobs emailed to you instead of doom-scrolling job boards.

This is my first ever launch and I'm terrified, would love honest feedback on the idea, the site, anything. Been lurking this sub forever and finally have something to show. 🚀

devjobalerts.io

r/SideProject ItsArtic

Connecting Italian pharmacies with patients searching for medications.

I recently built an MVP called FarmaFinder, a HealthTech marketplace that helps users find specific medicines available in nearby pharmacies in real time. I wanted to try generative AI for coding because my knowledge in this area is really basic, and these are the early results.

What it does:

  • Search medicines by name
  • Shows nearby pharmacies that actually have the product in stock
  • Location-based results (map view)
  • Pharmacy-facing concept (dashboard + inventory updates planned)

Tech stack:

  • Next.js (App Router)
  • Tailwind CSS
  • Supabase (auth + database)

The idea came from a real problem: people often go pharmacy to pharmacy just to find out a medicine is unavailable, while another pharmacy nearby has it in stock.

I built this as a solo founder to validate the concept and the UX. At this point, I’m open to feedback, collaboration, or even a potential acquisition if someone wants to take it further (B2C, B2B, or partnerships with pharmacies).

r/ClaudeAI malderson

Self improving CLAUDE.md files and claude-log CLI

Hey guys, I made a simple CLI for Claude log analysis, so your agent can read and search your chat logs system-wide more efficiently: https://github.com/martinalderson/claude-log-cli. While Claude can definitely read its own files, it usually takes many goes to get the schema right.

I also wrote up a quick guide on how I use this to make self-improving CLAUDE.md files: https://martinalderson.com/posts/self-improving-claude-md-files/ - I've found this approach really helpful for keeping your CLAUDE.md files up to date. You just ask your agent to read your chat history and find common frustration points, then have the agent improve the file from there.
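
As a rough illustration of the "read your own history" idea (not the linked CLI itself), a few lines of Python can surface repeat frustration phrases from local transcripts; the ~/.claude/projects location is an assumption about where Claude Code keeps its JSONL session logs, so adjust for your setup:

```python
# Rough sketch: count "frustration" phrases across local Claude Code transcripts.
# The log directory below is an assumption - point it at wherever your logs live.
from collections import Counter
from pathlib import Path

LOG_DIR = Path.home() / ".claude" / "projects"   # assumed location
PHRASES = ["that's wrong", "not what i asked", "try again", "undo that"]

counts = Counter()
if LOG_DIR.exists():
    for log_file in LOG_DIR.rglob("*.jsonl"):
        text = log_file.read_text(errors="ignore").lower()
        counts.update({p: text.count(p) for p in PHRASES})

for phrase, n in counts.most_common():
    if n:
        print(f"{n:4d}  {phrase}")
```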

r/TwoSentenceHorror EricZ_dontcallmeEZ

[Feb26] My wave to my neighbor stopped frozen in air as I saw the yard-long icicle break free from his gutter and pierce through his skull.

I thought the thaw would make things safer.

24 4
Reddit
r/whatisit octi-dragon

Please help me identify this skull

Found this skull on the Bruce Trail in Ontario.

It was half eaten, but I'm curious to find out what animal it could be. I couldn't clean all the snow off it.

Part of the nose was eaten off, so I took pictures of the teeth in case that helps anyone.

18 24
Reddit
r/30ROCK t_scribblemonger

This crazy will-they / won’t-they

53 6
Reddit
r/interestingasfuck Financial_Crazy_7874

Peak Walker ft: Nilgiri tahr

32 8
Reddit
r/Art minisniper970

An Unspoken Understanding, Wilbursbrush, watercolor on paper, 2026 [oc]

356 7
Reddit
r/OnePelotonRealSub Alwaysabundant333

Zachariah’s glutes and legs?

I get a lil nervous when I see complexes/compounds in the class details. I don’t like constant movement like that because I feel like I can’t perfect my form that way.

For those that took it, can you confirm if it was a good class or too chaotic?

I loveee his yoga classes and hoping I feel the same about his strength classes!

r/SideProject woomadmoney

I spent many years building projects and could never get users

so today I decided to flip the script and finally LAUNCH my product

It's a bit intimidating putting your work out there and not getting any traction lmao

I am launching on TinyLaunch and would appreciate any support from anyone who has a spare minute.

btw the project is a web based video editor geared towards Vloggers. Saves them time on edits. Here's the link https://www.tinylaunch.com/launch/10515

Any support appreciated

r/wordchewing TrashAsApp

Tried to warn u!

weirdest

103 67
Reddit
r/meme Sou_Glow

Doctor really said it and walked away

12 1
Reddit
r/maybemaybemaybe SHANKAR340

Maybe Maybe Maybe

SortedFor.me