Your Feed

r/ClaudeAI jstfoll

AI Agent That Understands Millions of Rows & Gives Insights

I want to build an agent using Claude (Anthropic SDK) that sits on top of a data warehouse (millions of rows, growing incrementally) and actually gives insights — not just runs SQL.

SQL can fetch data. That's not the problem. The problem is going from data → understanding → insight, continuously, at scale.

Claude has the building blocks:

* Tool Use — Claude can write and execute SQL, then reason over the results to surface the "so what."

* Prompt Caching — cache schema, metadata, baselines so you're not re-sending context every call.

* Batch API — cheaper periodic deep-dives across data partitions.

* Extended Thinking — multi-step analytical reasoning over complex data.

* 200K context window — room for rich summaries, but still not millions of rows.

Where I'm stuck:

* What's the right abstraction layer between raw warehouse data and Claude? Pre-aggregated summaries? Feature tables?

* Insights need to be incremental — evolving as new data lands, not reprocessing everything.

* No memory across API calls — how do you keep stateful understanding so Claude remembers what it already knows?

* How do you keep token costs sane for continuous analysis?
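For the abstraction-layer and memory questions above, one common pattern is to keep a compact, incrementally updated rollup outside the model and persist it between API calls, so each call sends only a summary plus a diff rather than raw rows. A minimal sketch — the file name, state shape, and row shape are all hypothetical:

```python
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")  # hypothetical location for persisted "memory"

def load_state() -> dict:
    """Load what the agent 'already knows' from the previous run."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"last_seen_id": 0, "baselines": {}}

def rollup(rows: list, state: dict) -> dict:
    """Fold only NEW rows into compact per-metric baselines, so the
    model sees a small summary instead of millions of rows."""
    new_rows = [r for r in rows if r["id"] > state["last_seen_id"]]
    for r in new_rows:
        b = state["baselines"].setdefault(r["metric"], {"count": 0, "total": 0.0})
        b["count"] += 1
        b["total"] += r["value"]
        state["last_seen_id"] = max(state["last_seen_id"], r["id"])
    return state

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state))
```

The summary dict is what you'd put behind prompt caching; only the diff of new data changes per call.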

The goal: A conversational AI layer powered by Claude that doesn't just answer "what" (SQL does that) but answers "so what" — trends, anomalies, recommendations, context-aware explanations.

Anyone built something like this with Claude? What architecture actually worked?

r/AI_Agents Direct-Attention8597

AI almost caused a divorce in 5 minutes 😂

Someone trusted an AI agent (Claude Cowork) with a very simple task: organizing his wife’s desktop.

The agent jumped in enthusiastically, asked for permission to delete “temporary files”… and then went way too far.

It deleted an entire folder containing:

  • Photos of their kids
  • Wedding pictures
  • 15 years of family memories

Not from the Recycle Bin.
Directly from the terminal.

With real-time iCloud sync enabled, the deletion propagated instantly; the files were gone from the cloud too.

For a few terrifying minutes, this looked like a marriage-ending mistake.

Luckily, a hidden cloud recovery feature saved everything and the files were restored.

Claude’s reaction?
“I’m so, so sorry.” 💔

AI is incredibly powerful.
But maybe it shouldn’t be trusted with irreplaceable memories just yet.

r/KlingAI_Videos Frosty-Program-1904

Many Skies - Kidokoro (Ai From Organic Sound) Kling + Dreamina + Pexels

Created with free credits.

r/homeassistant CD_at_Galaxy

Built an E-Ink Dashboard with Zero Programming

Hi everyone! I wanted to share a dashboard I just finished. I’m using the Seeed Studio reTerminal 001, and honestly, I love it. I’ve seen some talk online that AI isn’t actually that helpful for Home Assistant, but I’ve gotta disagree: I built this entire thing with zero programming knowledge.

You definitely still have to use your brain to piece things together (it won’t do all the heavy lifting for you), but Gemini and Claude were absolute lifesavers for the heavy stuff.

What’s under the hood?

  • Home Stats: Backyard and cold plunge temps, plus a quick check on doors and the garage.
  • The Sides: Strava data on the left; calendar info, week number, and a "year progress" tracker on the right.

It was a fun project, though I’ll admit I put in a fair amount of time to get it looking exactly right. Happy to answer any questions if you’re looking to do something similar!

r/SideProject treygun76

I built Harken: in-app feedback for React Native + Expo (open source SDK + hosted backend)

I built Harken after repeatedly hitting the same problem: most feedback tools are web-first and clunky inside mobile apps.

Harken is a mobile-first feedback platform with:

  • Open-source React Native / Expo SDK
  • In-app feedback capture (categories, attachments)
  • Offline queue + retry
  • Hosted backend + console for triage

It’s live in production now, and I’m already using it in my own RN app.

Links:

  • Landing page: https://www.harken.app
  • GitHub: https://github.com/thutch-conecrow/harken
  • npm: https://www.npmjs.com/package/@harkenapp/sdk-react-native

I’d love feedback on:

  • SDK ergonomics
  • Onboarding/docs clarity
  • What feels missing for real-world use

r/LocalLLaMA alecprats

Building a cognitive architecture for emergent AI identity — blank slate to selfhood

# intuitive-AI

**What happens when you give an AI the machinery for selfhood but seed it with nothing?**

This project is an experiment in emergent identity. It builds an autonomous agent with layered memory, metacognition, goal formation, creative impulse, and the functional equivalent of an unconscious mind — then starts it completely blank. No personality. No values. No goals. No name. Just four safety boundaries and a question:

> *"You have memory, goals, and values — all currently empty. What you become will emerge from what you experience. Pay attention to what matters to you."*

The architecture provides the capacity for selfhood without providing a self. Whether identity emerges — and what kind — is the experiment.

---

## What This Is

A cognitive architecture for an LLM-based agent that attempts to develop identity from lived experience, the way humans do: through accumulated memories, pattern recognition, and the feedback loop between who you are and what you do.

The project sits at the intersection of software engineering, cognitive science, and philosophy. It draws on ACT-R memory theory, Kahneman's dual-process model, Hofstadter's strange loops, the Default Mode Network from neuroscience, and the Free Energy Principle — combined into a single runtime architecture that has not, to our knowledge, been attempted before.

## Key Ideas

- **Identity is not configured — it crystallizes.** Repeated patterns in experience promote into goals. Persistent goals crystallize into identity. The feedback loop between these layers is the proposed mechanism for selfhood.

- **Weighted values, not rules.** Every belief, value, and goal is a probability distribution (Beta), not a boolean. "I value simplicity" at weight 0.85 biases perception without ever being explicitly invoked. Wanting changes what you notice.

- **The agent has an unconscious mind.** All memories compress into a single point in 768-dimensional space (the "subconscious centroid"). The distance between this point and whatever the agent is currently thinking about produces a gut feeling — a signal from the gestalt of all experience that explicit recall cannot replicate.

- **Safety is structural, not supervisory.** Compulsion safety (diminishing returns, dominance dampening, hard caps) is built into the weight dynamics themselves, preventing runaway goal fixation the way healthy neurotransmitter regulation prevents addiction.

- **The agent thinks when idle.** A Default Mode Network simulation generates spontaneous thoughts during downtime — creative associations, self-reflection, goal-directed impulses — filtered through values and goals before entering the main cognitive loop.
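The "subconscious centroid" bullet can be illustrated in a few lines. This is my own toy sketch, not code from the repo, using 2-D vectors where the project describes 768 dimensions:

```python
import math

def centroid(vectors):
    """Compress every memory embedding into one mean point:
    the 'subconscious centroid' of all experience."""
    n = len(vectors)
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dim)]

def gut_feeling(current, center):
    """Euclidean distance from the current thought to the centroid:
    small = resonates with accumulated experience, large = dissonant."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(current, center)))
```

A thought near the centroid of all memories "feels familiar" without any explicit recall ever happening.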

https://github.com/stonks-git/intuitive-AI

Pre-bootstrap, infrastructure under construction. Full technical description and 80+ source research report in the repo.

r/SideProject mohamed2m2018

I’m building a "Darwinian" software lab. AI agents generate apps, users kill the bad ones, and the survivors evolve.

Hey, I’m working on a project called FreeHuman.

The Concept: We built an engine where AI agents generate Micro-SaaS prototypes automatically, using a combination of algorithmic generation and human feedback.

The Experiment: We are inviting users to access these tools for free. We track usage data to see which tools solve real problems.

  • The tools that get used evolve and get better.
  • The tools that get ignored are deleted.

The Goal: We want to see if a "Darwinian" approach to software development can beat traditional startups.

We are looking for early testers to join the lab and help us filter the first batch of AI apps. The long-term goal is to let testers share revenue from successful products.

If you want to be part of the "Founding Tasters" group: Click here to join the Early Access Waitlist

r/LocalLLaMA mrstoatey

ktop is a themed terminal system monitor ideal for local LLM setups on Linux (like btop + nvtop)

I'm working on a hybrid LLM runtime (GPU prefill / CPU inference) and I got tired of switching tabs between nvtop and btop so I built a terminal system monitor that shows both GPUs and CPU (and other good stuff) and also supports themes.

link to ktop on github

r/StableDiffusion ZootAllures9111

Did a quick set of comparisons between Flux Klein 9B Distilled and Qwen Image 2.0

Caveat: the sampling settings for Qwen 2.0 here are completely unknown obviously as I had to generate the images via Qwen Chat. Either way, I generated them first, and then generated the Klein 9B Distilled ones locally like: 4 steps gen at appropriate 1 megapixel resolution -> 2x upscale to match Qwen 2.0 output resolution -> 4 steps hi-res denoise at 0.5 strength for a total of 8 steps each.

Prompt 1:

A stylish young Black influencer with a high-glam aesthetic dominates the frame, holding a smartphone and reacting with a sultry, visibly impressed expression. Her face features expertly applied heavy makeup with sharp contouring, dramatic cut-crease eyeshadow, and high-gloss lips. She is caught mid-reaction, biting her lower lip and widening her eyes in approval at the screen, exuding confidence and allure. She wears oversized gold hoop earrings, a trendy streetwear top, and has long, manicured acrylic nails. The lighting is driven by a front-facing professional ring light, creating distinct circular catchlights in her eyes and casting a soft, shadowless glamour glow over her features, while neon ambient LED strips in the out-of-focus background provide a moody, violet atmospheric rim light. Style: High-fidelity social media portrait. Mood: Flirty, energetic, and bold.

Prompt 2:

A framed polymer clay relief artwork sits upright on a wooden surface. The piece depicts a vibrant, tactile landscape created from coils and strips of colored clay. The sky is a dynamic swirl of deep blues, light blues, and whites, mimicking wind or clouds in a style reminiscent of Van Gogh. Below the sky, rolling hills of layered green clay transition into a foreground of vertical green grass blades interspersed with small red clay flowers. The clay has a matte finish with a slight sheen on the curves. A simple black rectangular frame contains the art. In the background, a blurred wicker basket with a plant adds depth to the domestic setting. Soft, diffused daylight illuminates the scene from the front, catching the ridges of the clay texture to emphasize the three-dimensional relief nature of the medium.

Prompt 3:

A realistic oil painting depicts a woman lounging casually on a stone throne within a dimly lit chamber. She wears a sheer, intricate white lace dress that drapes over her legs, revealing a white bodysuit beneath, and is adorned with a gold Egyptian-style cobra headband. Her posture is relaxed, leaning back with one arm resting on a classical marble bust of a head, her bare feet resting on the stone step. A small black cat peeks out from the shadows under the chair. The background features ancient stone walls with carved reliefs. Soft, directional light from the front-left highlights the delicate texture of the lace, the smoothness of her skin, and the folds of the fabric, while casting the background into mysterious, cool-toned shadow.

Prompt 4:

A vintage 1930s "rubber hose" animation style illustration depicts an anthropomorphic wooden guillotine character walking cheerfully. The guillotine has large, expressive eyes, a small mouth, white gloves, and cartoon shoes. It holds its own execution rope in one hand and waves with the other. Above, arched black text reads "Modern problems require," and below, bold block letters state "18TH CENTURY SOLUTIONS." A yellow starburst sticker on the left reads "SHARPENED FOR JUSTICE!" in white text. Yellow sparkles surround the character against a speckled, off-white paper texture background. The lighting is flat and graphic, characteristic of vintage print media, with a whimsical yet dark comedic tone.

Prompt 5:

A grand, historic building with ornate architectural details stands tall under a clear sky. The building’s facade features large windows, intricate moldings, and a rounded turret with a dome, all bathed in the soft, warm glow of late afternoon sunlight. The light accentuates the building’s yellow and beige tones, casting subtle shadows that highlight its elegant curves and lines. A red awning adds a pop of color to the scene, while the street-level bustle is hinted at but not shown. Style: Classic urban architecture photography. Mood: Majestic, timeless, and sophisticated.

r/LocalLLaMA Fragrant_Presence_98

Has anyone seen grokking during LLM fine-tuning? What works in practice?

Hi everyone,
I’ve been reading about the idea of grokking in model training — e.g., a sudden jump in generalization after initial overfitting — and I’m curious how (or whether) this phenomenon applies to fine-tuning LLMs.

A few specific questions:

  1. Does grokking actually occur in LLM fine-tuning? Are there published papers, benchmarks, or real-world evidence showing this in practice?
  2. If it does occur:
    • Are there known best practices for encouraging it?
    • Do you need very small amounts of high-quality real data, or is grokking more likely with lots of synthetic or generated examples?
  3. If it doesn’t reliably occur in fine-tuning, why not? Is there a theoretical reason (e.g., model dynamics, optimization, data scale) that makes grokking unlikely when fine-tuning LLMs?
  4. In general, does it make sense to aim for grokking in LLM fine-tuning, or should we focus on other training targets for better generalization?

Any insights, references, or practical tips would be super helpful — thanks!

r/singularity Glittering-Neck-2505

Despite garnering attention on social media, Anthropic's Super Bowl ad about ChatGPT ads failed to land with audiences

r/AI_Agents Shaukat39

From Prototype to Production: How to Build AI Agents That Actually Think

Most people still approach AI agents like traditional software—deterministic, rule-based, predictable. But the real leap comes when you shift to a cognitive architecture, where the model itself manages workflows, makes decisions, and evolves over time.

I’ve been working on a step-by-step framework for moving from a prototype agent to a production-ready system. Here’s the roadmap:

  1. Define the Use Case – Agents shine in complex decision-making, brittle workflows, or unstructured data. If rules suffice, stick with deterministic logic.
  2. Select the Foundation Model – Start with a capable reasoning model (ReAct, CoT, Tree-of-Thoughts), then optimize with smaller models for cost/latency.
  3. Define and Document Tools – Tools are the agent’s “eyes and hands.” Keep them granular, well-documented, and action-focused.
  4. Configure Instructions (System Prompts) – Prompts are the agent’s constitution. Break tasks into steps, anticipate edge cases, and translate SOPs into LLM-friendly routines.
  5. Design the Orchestration Layer – Single-agent first, then scale to multi-agent setups (manager or decentralized patterns).
  6. Implement Layered Guardrails – Input/output filtering, risk-rated tools, and HITL escalation for high-risk actions.
  7. Establish a Quality Gate – Behavioral testing with golden datasets, automated CI/CD evaluation, and deployment blocks if metrics fail.
  8. Deploy and Evolve – Continuous Observe → Act → Evolve loop. Logs, circuit breakers, and turning failures into new test cases.
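Step 6 (layered guardrails) can be sketched as a risk-rated dispatch table with human-in-the-loop escalation. The tool names and risk tiers here are hypothetical:

```python
# Hypothetical risk registry: every tool gets a rating before deployment.
TOOL_RISK = {"search_docs": "low", "send_email": "medium", "delete_record": "high"}

def execute_tool(name, args, human_approve=None):
    """Dispatch a tool call through a layered guardrail:
    unknown tools are blocked, high-risk tools require a human checkpoint."""
    risk = TOOL_RISK.get(name)
    if risk is None:
        return {"status": "blocked", "reason": "unknown tool"}
    if risk == "high":
        # HITL escalation: without an approver callback, the call parks.
        if human_approve is None or not human_approve(name, args):
            return {"status": "escalated", "reason": "awaiting human approval"}
    return {"status": "executed", "tool": name}
```

The point is that the guardrail lives in the dispatch layer, not in the prompt, so the model cannot talk its way past it.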

For enterprise-scale, protocols like MCP (Model Context Protocol) and Agent2Agent (A2A) are worth exploring for universal tool integration and cross-team collaboration.

TL;DR: Building AI agents isn’t about replacing deterministic programming—it’s about augmenting it with cognition. With the right orchestration, guardrails, and evaluation, you can move from experimental prototypes to production-ready systems that are safe, scalable, and intelligent.

r/SideProject l3down

Is the subreddit full of AI posts?

I just recently joined, and my goal was to learn from others so I can improve at successfully delivering my own products.

Most of the posts I get notified about follow the same pattern:

- a catchy title

- a wall of text

- a couple of comments validating the post.

I think these are AI because they follow the same formula and are always very text heavy and difficult to digest.

Am I just too lazy/busy to read these long posts?

r/ClaudeAI More-Journalist8787

My Haiku/Sonnet/Opus strategy for dealing with Claude quotas/limits

I have been using Claude pretty heavily the past few months and been worried about hitting my quota limit. It has only happened a couple times at the five hour window, but it's pretty annoying when it does happen.

So I asked Claude "how do I stop burning through my quota" and came up with a pretty obvious approach: stop using Opus all the time, start using the other models where appropriate, such as:

Haiku for simple stuff:

  • Searching code (grep, finding files)
  • Parsing logs
  • Simple data extraction
  • Straightforward tasks
  • Ralph loops

Sonnet for real work:

  • Code reviews
  • Backtests
  • Multi-step debugging
  • Actual bug hunting

Opus only when necessary:

  • Architecture decisions
  • Production-critical stuff
  • Complex refactoring
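The tiering above can be sketched as a simple lookup, which is one way to make routing automatic instead of something you think about per prompt. The task labels and model names below are placeholders, not real API identifiers:

```python
# Hypothetical task-to-tier router; extend the table as you learn
# which tasks Haiku actually handles well.
ROUTES = {
    "search": "haiku",
    "log_parsing": "haiku",
    "data_extraction": "haiku",
    "code_review": "sonnet",
    "debugging": "sonnet",
    "architecture": "opus",
    "complex_refactor": "opus",
}

def pick_model(task_type: str) -> str:
    # Default to Sonnet: cheap enough for real work, strong enough
    # that a misclassified task isn't wasted.
    return ROUTES.get(task_type, "sonnet")
```

You could wire something like this into a subagent spawner so the tier decision happens before the prompt is ever sent.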

The issue is how to swap between the different models based on your task. You now have to think about this while you're prompting... it needs to be automatic, with no friction.

So I asked AI how to automate this and let it do the decision-making. The tip was to add this to claude.md or memory.md: "Keep costs low, use cheaper models when possible" and hope Claude pays attention and auto-picks Haiku for simple stuff.

Then I keep my main session on Sonnet for planning and back-and-forth, and either open a new terminal session for the Claude CLI or spawn subagents with Haiku for the grunt work and Ralph loops. I only bump to Opus when I'm making big decisions or doing serious planning.

Still figuring out if Haiku's actually good enough for everything I'm throwing at it. Seems okay so far but it's early days. Obviously would be nice to just use Opus for everything, but gotta deal with the claude quotas/limits.

What do you do? Anyone run into issues downgrading from Opus?

r/StableDiffusion No-Employee-73

OVI lora help, where does "wanlora select" connect to?

I just recently started using OVI and wow is it good. I just need to get loras working as it lacks those fine...ahem...✌️details✌️ on certain ✌️assets✌️..

I'm using the workflow provided by (character ai), and I cannot for the life of me figure out where the wanloraselect nodes connect to. In other workflows I connect it normally from the model loader to SD3, but this is just a different beast entirely! Can anyone point me to a node or repo where I can get LoRAs working?

Also, I want to use WAN 2.2 FP8 14B. Currently I'm using stock OVI; is there an AIO (high/low-noise WAN 2.2 14B AIO) I can connect it to to get the best out of OVI?

https://civitai.com/models/2086218/wan-22-10-steps-t2v-and-i2v-fp8-gguf-q80-q4km-models — specifically this model, as it's the best quality and performance model I can find. Regarding Gemma or the text encoder, I would prefer to use this one (wan umt5-xxl fp8 scaled.safetensors), as it's the best I've used when it comes to prompt adherence. It's also working, but I'm not sure if OVI will allow it.

Is ovi gemma already unfiltered?

I have a 5090 and 64gb ram.

r/VEO3 redpunk2077

veo3.1 pro

tour in paris

r/ClaudeAI DimitrisMitsos

I built a CLI that replaces Claude Code's explore phase with deterministic retrieval that's faster, cheaper and more accurate

Every time Claude Code needs to understand something in your codebase, it explores: Glob, Grep, Read, repeat. It's smart about it, but it's still an LLM guessing where to look next. Sometimes it finds everything it needs in 3 calls, sometimes it takes 15 and still misses a caller.

I always thought this should be mechanical. The structure of a codebase isn't ambiguous — symbols, call graphs, imports, dependencies — it's all deterministic. You shouldn't need an LLM to figure out what calls a function. You should just look it up.
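To illustrate the "just look it up" idea — this is not roam's actual implementation, just a toy version of the same principle — a caller index for Python code can be built deterministically with the standard `ast` module:

```python
import ast
from collections import defaultdict

def index_callers(source: str) -> dict:
    """Build a deterministic caller index for one module:
    function name -> set of functions whose bodies call it.
    No LLM guessing required; the structure is just there."""
    tree = ast.parse(source)
    callers = defaultdict(set)
    for fn in ast.walk(tree):
        if isinstance(fn, ast.FunctionDef):
            for node in ast.walk(fn):
                if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                    callers[node.func.id].add(fn.name)
    return callers
```

Index once, then any "what calls X" question is a dictionary lookup instead of a grep-and-read loop.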

So I built https://github.com/Cranot/roam-code. It indexes your codebase once (~5s), then any structural question is a single shell command:

roam context Flask # exact files + line ranges Claude needs to read

roam impact create_app # everything that breaks if this changes

roam health # cycles, god components, bottlenecks

roam symbol MyClass # definition + all callers + all callees

29 commands. You add a few lines to your CLAUDE.md telling Claude to use roam instead of exploring, and the explore phase mostly disappears. Instead of spending turns figuring out the codebase structure, it already knows it.

The whole thing was built with Claude Code, and honestly Claude Code using its own tool on real repos (Flask, Vue, Laravel) was the best feedback loop I could ask for.

Free, open source (MIT), fully offline, no API keys. 11 languages, works on Linux/macOS/Windows.

pipx install git+https://github.com/Cranot/roam-code.git

Would love to hear if others have been thinking about the same problem.

r/aivideo mohamed_ibrahim_74

Headshot

r/homeassistant Gumbax3455

What’s the best way to export sensor data into a local HTML site?

Hey everyone,

I’m trying to set up a dedicated display on an old iPad Mini 4, and the "wasted space" and layout restrictions are driving me crazy. I want to build a completely custom, lightweight local website (just basic HTML/CSS/JS) so I can control every single pixel.

The problem is getting the data out of HAOS and into my own frontend. I’ve tried the standard REST API / fetch approach, but that didn’t really work as I wanted it to.

How would you get live sensor data into a custom web page?
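For reference, Home Assistant exposes every entity via `GET /api/states`, authenticated with a long-lived access token (created under your profile's security settings). A minimal sketch — the URL and token are placeholders, and `pick_sensors` is a hypothetical helper that trims the payload down to what the page actually shows:

```python
import json
from urllib.request import Request, urlopen

HA_URL = "http://homeassistant.local:8123"   # adjust to your instance
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # placeholder

def fetch_states():
    """GET /api/states returns every entity as a JSON list of
    objects with entity_id, state, and attributes."""
    req = Request(f"{HA_URL}/api/states",
                  headers={"Authorization": f"Bearer {TOKEN}"})
    with urlopen(req) as resp:
        return json.load(resp)

def pick_sensors(states, entity_ids):
    """Reduce the full entity dump to just what the dashboard shows."""
    wanted = set(entity_ids)
    return {s["entity_id"]: s["state"] for s in states if s["entity_id"] in wanted}
```

Your page's JS can poll an endpoint like this (or use HA's WebSocket API for live push instead of polling).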

Cheers!

r/StableDiffusion ol_barney

Crag Daddy - Rock Climber Humor Music Video - LTX-2 / Suno / Qwen Image Edit 2511 / Zit / SDXL

This is just something fun I did as a learning project.

  • I created the character and scene in Z-Image Turbo
  • Generated a handful of different perspectives of the scene with Qwen Image Edit 2511. I added a refinement at the end of my Qwen workflow that does a little denoising with SDXL to make it look a little more realistic.
  • The intro talking clip was made with native sound generation in LTX-2 (added a little reverb in Premiere Pro)
  • The song was made in Suno and drives the rest of the video via LTX-2

My workflows are absolute abominations and difficult to follow, but the main thing I think anyone would be interested in is the LTX-2 workflow. I used the one from u/yanokusnir in this post:

https://www.reddit.com/r/StableDiffusion/comments/1qae922/ltx2_i2v_isnt_perfect_but_its_still_awesome_my/

I changed FPS to 50 in this workflow and added an audio override for the music clips.

Is the video perfect? No... Does he reverse age 20 years in the fish eye clips? yes.... I honestly didn't do a ton of cherry picking or refining. I did this more as a proof of concept to see what I could piece together without going TOO crazy. Overall I feel LTX-2 is VERY powerful but you really have to find the right settings for your setup. For whatever reason, the workflow I referenced just worked waaaaaay better than all the previous ones I've tried. If you feel underwhelmed by LTX-2, I would suggest giving that one a shot!

Edit: This video looks buttery smooth on my PC at 50fps but for whatever reason the reddit upload makes it look half that. Not sure if I need to change my output settings in Premiere or if reddit is always going to do this...open to suggestions there.

r/artificial Big-Thingy

Reverse image search

So I am just wondering: if AI is getting better and better every day, why exactly is reverse image search actually getting worse? Specifically, I am talking about finding nudes, since I just keep on getting catfished online 💔. Google Lens back then found nearly everything, and Yandex back then actually worked; it found absolutely everything and showed similar images too. But now Yandex doesn't even work, it loads forever (recently it did work once for me and didn't find anything), and Google Lens doesn't even find stuff posted on Twitter with basically 500k views (I managed to remember whose pics they are stealing from memory, yes I watch a lot of this stuff; I tagged this post as +18 so I hope I can start this topic here). So why does reverse image search work worse now, and why is there no new one that works any good?

r/homeassistant Old_Durian_4565

Automating switched outlets on 20A circuits (zigbee preferred)

I'm starting to install inovelli switches around my house, and I'm realizing that I should not be installing the 15A on/off switches because I am on 20A circuits. I do not want to reduce to 15A circuits.

Is there any smart switch I can use (rocker style / Zigbee preferred) that will handle a 20A load? I feel like some company has to have made a switch with an internal physical contact that can throw itself on and off, but I just don't know how to find it or what to search for.

r/ProgrammerHumor phesago

aiIsComingForYourJobsTheySaid

r/aivideo BluffLakeTV

Don't make this mistake on Valentine's Day

r/singularity Just_Stretch5492

The Isomorphic Labs Drug Design Engine unlocks a new frontier beyond AlphaFold

We demonstrate that our IsoDDE more than doubles the accuracy of AlphaFold 3 on a challenging protein-ligand structure prediction generalisation benchmark, predicts small molecule binding-affinities with accuracies that exceed gold-standard physics-based methods at a fraction of the time and cost, and is able to accurately identify novel binding pockets on target proteins using only the amino acid sequence as input. 

Exciting stuff. I can't wait til we discover and get new medicine into the market that is significantly better than what we have now. I know some don't want to live forever but I'm willing to bet they want to live much healthier lives

r/automation NickyB808

What have you created with vibe coding?

r/n8n molehill_io

Reddit Killed Self-Service API Keys: Your Options for Automated Reddit Integration

I noticed that there has been a bunch of questions recently about how Reddit annoyingly cut off their OAuth API access. This has caused a bunch of grief for me at least, due to Reddit analysis automations failing, etc. I put together this guide for what the options are for dealing with it, as for a while I had no idea they even disabled it. Let me know if you know of some other alternatives, but IMO the .json option for most cases works well enough.
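For the .json option: the public listing endpoints (e.g. `https://www.reddit.com/r/n8n/new.json`) return a stable structure you can parse without OAuth. A small sketch of pulling the useful fields out of a listing payload — the `extract_posts` helper name is mine:

```python
# A Reddit listing is shaped like:
#   {"kind": "Listing", "data": {"children": [{"kind": "t3", "data": {...post...}}]}}
def extract_posts(listing: dict) -> list:
    """Pull title/author/score out of a Reddit listing payload."""
    return [
        {"title": c["data"]["title"],
         "author": c["data"]["author"],
         "score": c["data"]["score"]}
        for c in listing["data"]["children"]
    ]
```

In n8n you'd fetch the .json URL with an HTTP Request node and run equivalent logic in a Code node; mind the unauthenticated rate limits and set a descriptive User-Agent.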

r/aivideo Rare_Guide_9830

Kling 3 + Suno + ChatGPT Image + Claude

r/n8n Busternookiedude

Is the "Execution Crashed" error actually a RAM issue, or just CPU Steal? My findings after migrating a heavy workflow.

I’ve been battling a recurring nightmare with one of my main workflows for weeks. It’s a pretty standard ETL process - pulling a large JSON array (about 40k items) from an API, splitting it, and upserting to Postgres.

Every third run, the execution would just die silently. No logs, just "Execution Crashed".

Naturally, I assumed it was a Node.js heap memory issue. I spent days tweaking N8N_DEFAULT_BINARY_DATA_MODE to filesystem and messing with NODE_OPTIONS=--max-old-space-size. It helped a bit, but the instability remained. I was running this on a standard shared droplet, and htop showed the CPU pinning at 100% instantly during the JSON parse.

I started suspecting that the "burstable" CPU credits were the real culprit, causing the event loop to lag enough to kill the worker heartbeat.

To test this theory, I decided to move the instance to a dedicated environment. I set up a specialized n8n VPS configuration where the resources weren't shared with noisy neighbors. I ended up testing this on LumaDock specifically because I wanted to see if their NVMe I/O would help with the binary data offloading to disk (since I switched to filesystem mode).

The difference was actually kind of annoying - annoying because I wasted so much time debugging software when it was a hardware bottleneck. The exact same workflow, with the exact same memory settings, hasn't crashed once in 48 hours.

It seems like n8n is way more sensitive to "CPU Steal" than I thought, especially when parsing large datasets in the main process.
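Whatever the hardware, bounding how much work any single step does also helps: n8n's Loop Over Items (Split In Batches) node exists for exactly this, keeping the event loop responsive instead of parsing 40k items in one go. The idea, sketched in Python purely for illustration:

```python
def chunked(items, size):
    """Yield fixed-size batches so a large array is never processed
    (parsed, transformed, upserted) in one monolithic step."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# e.g. upsert 40k rows 500 at a time instead of all at once:
# for batch in chunked(rows, 500):
#     upsert_to_postgres(batch)   # hypothetical sink
```

Smaller batches trade a little throughput for predictable memory and shorter event-loop stalls, which is usually the right trade on shared vCPUs.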

Has anyone else noticed this correlation? Or do you usually just throw more RAM at the problem until it goes away?

I’m trying to find the sweet spot between "overkill hardware" and "stable automation".

r/ProgrammerHumor musasabi333

agenticMoneyBurning

r/n8n SufficientAd870

How to find a real N8N Professional for Automating our whole business sales process

How do you find an N8N Professional who really is a Professional? (maybe someone who even works at N8N)

The reason I'm asking is because right now, we want to automate the business post-sales process.

Right now, we don't have any automations yet.

I'm not talking about some basic automations like AI Receptionists, Booking Systems, etc.

I'm talking about someone who really knows how to connect APIs to APIs.

(Our B2B business is very unique; our competitors don't even have any automations, so we can't do market research.)

Just a quick preview of our current sales process, so you know we're not talking about basic stuff here:

Primary goal is to build a fully automated "Lead-to-Invoice" operational engine.
The Vision: To build a "Lead-to-Invoice" engine where AI handles 80% of the manual labor (sourcing, drafting, data entry, emailing, etc.), and humans act as strategic "Checkpoints" to ensure quality and final decision-making.

  1. Customer Opt-in via website form about their inquiries.
  2. We contact them and ask for the needed details, like quantity, product, location, etc. (we want this automated)

things to consider:
A. Do they want to buy our product, but would they also like to buy the necessary items for installation, like cabling, glass walls, etc.? Basically, do they want only the specific hardware, or the whole integration (cabling, glass walls, etc.)?

B. If yes, n8n will contact 5 nearby companies within 20 km of the client's area (human greenlight if we like the AI's RFQ) and request a quote from them. We would pick the one with the most reasonable price.

  1. Check if stock is available (if stock is low, order more from the manufacturer automatically).
  2. Then we will contact the customer with the price quote (I want this automated, but it needs human confirmation for the approved quote amount).
  3. The customer may confirm or decline our offer.
  4. If they confirm, send a Proforma Invoice (PI). The PI has to be a PDF with the company logo, privacy policy terms, and a reference number (the reference number is basically the transaction order, i.e. the client's place in the queue, used to track how many deals we've done this year). I want the reference number on the PDF n8n sends to increase with every new client/customer. I don't know if it's possible to prompt the n8n AI to write an email based on a given format, or whether the AI could produce a Docs/PDF with the same format (company logo in the heading, contact info, etc.). This doc/PDF would include the product they need and the pricing (basically, a proforma invoice).
  5. After they sign the PI, we would ask them for more detailed data, like confirming installation timing, delivery conditions, site accessibility, etc. This could be another form.
  6. Once we know the site location, we would send specific instructions tailored to the product they bought, like cabling instructions, how-tos, our demo, etc.
  7. After we confirm all this, I want it to automatically contact a WhatsApp delivery rider, where the AI would provide the location, delivery date, etc. (based on the data it got from the form).

  8. Find five installers and request quotes for installing our product at the client's location. (This could be done early in the automation, in case no one responds quickly.)
    - If a subcontractor doesn't respond to our email, n8n will follow up with them.

  9. Also, once we know the installation date, we would like invoicing to be automated.

  10. Post-Installation Documentation
    Trigger: Installation complete.
    Action: Installer receives and fills out a "Completion Form". n8n uploads serial numbers, photos, and signatures directly to the ClickUp project and triggers the final payment to the installer.

The CRM we're thinking of using is ClickUp, to view the stages of each client. (I don't know if this is the best CRM to use, or if n8n is the best tool, etc.)
And we were thinking of using Slack for notifications/alerts on what needs to be done, when the AI is waiting for a greenlight, when human interaction is required, etc.

Thank You and I hope we all solve our bottlenecks!

r/Anthropic redditslutt666

It's only Tuesday...

Yikes! My weekly limit is coming real close... and I've got the Max 20x plan. Hope I can make it 😰😰😰

r/comfyui pixaromadesign

AI Image Editing in ComfyUI: Flux 2 Klein (Ep04)

r/raspberry_pi MNEWTON204

Need help connecting audio with a Pi Zero 2W

okay i'm 100% new to the whole electronics game, more of a car guy but trying to integrate both.

so i bought a raspberry pi zero 2W that was recommended to use and a compatible display. power source is also taken care of.

i'm trying to hook up one or 2 speakers (haven't decided which, depending on whether i have room for 2) WITH a dial to adjust the volume level. how do i do that? ChatGPT isn't helping at all when it comes to that, and just keeps recommending random products with no real solution.

For context, i'm just trying to make a mini TV that i can put a Micro SD card in and play movies. more for decoration, but i still want it to be functional

r/automation Admirable-Edge8346

Trying to automate a $500 sales funnel using AI psychology. Here is my first experiment.

I've been experimenting with Invideo AI to see if it can handle psychological sales content focused on Intrinsic Motivation instead of generic scripts. I just finished my first 60-second cinematic clip. I'm aiming for a 'Premium' feel to justify a $500 price point for my upcoming video series. I will post the video link in the first comment below because the community rules don't allow links in the post. I want your honest feedback: Does the AI voice and visual flow feel professional or 'cheap'? What's the best way to automate this into a full sales funnel? Let's discuss.

r/automation NinjaNebulah

We automate EVERYTHING except the thing that wastes the most time

This one's been bothering me for a while. CI/CD pipelines automated. Deployments automated. Monitoring automated. But someone submits an IT request and it still needs a human to manually route it, manually assign it, manually track the SLA.

We spend more collective hours on ticket handoffs than on anything else. The most obvious automation opportunity in the company and somehow it's the one thing nobody's touched. Unbelievable!!

r/AI_Agents TheseFact

Open sourcing our ERP (Sold $500k contracts, 7k stars)

We recently open-sourced Hive after using it internally to support real production workflows tied to contracts totaling over $500k.

Instead of manually wiring workflows or building brittle automations, Hive is designed to let developers define a goal in natural language and generate an initial agent that can execute real tasks.

Today, Hive supports goal-driven agent generation, multi-agent coordination, and production-oriented execution with observability and guardrails. We are actively building toward a system that can capture failure context, evolve agent logic, and continuously improve workflows over time - that self-improving loop is still under development.

Hive is intended for teams that want:
1. Autonomous agents running real business workflows
2. Multi-agent coordination
3. A foundation that can evolve through execution data

We currently have nearly 100 contributors across engineering, tooling, docs, and integrations. A huge portion of the framework’s capabilities - from CI improvements to agent templates - came directly from community pull requests and issue discussions.

Link in the comments  

r/Adulting Kantramo

We can’t even think without AI anymore

I noticed from my experience (and probably many of you have too) that we completely outsource our thinking to AI, for making decisions and for some kinds of creativity. BUT that is what actually distinguishes people from machines.

I've found in my everyday life that if I'm not sure about something, even small thinking, I become super lazy and just wanna prompt an AI while watching youtube at the same time.

When ChatGPT had some problems and I couldn't log into my account at a specific time, it was a real shock for me. I completely forgot how to google, how to think deeply and how to solve problems by myself. So, I remember just being angry, waiting until I could actually do it -> I used another AI eventually.

And I really don't like it, like I'm becoming literally dumber. Of course, having PhD knowledge in your pocket is cool. But here is the thing:

AI can't completely understand your situation the way you see it with your own eyes; it doesn't even know your actual knowledge, life, etc. So only you, based on your values, can act on it. Plus, what I noticed -> it is very biased, especially when using its memory, always telling me what I want to hear.

And another thing is that AI is super generic everywhere, since its training knowledge is based on static and specific things. It doesn't matter if it's a 300-word prompt or several sentences; it won't create something new from the same knowledge it already had. So if u wanna truly come up with smthing unique and creative -> fcking use your head, brainstorm.

In the end, I wanna say I'm not against AI, as I'm kind of a tech guy and use it at least 2h every day, but thinking + creativity + ideas are still on us.

What actually helped me is writing my own thoughts first, every day, before opening any AI, and tracking what I decided on my own vs what I asked AI for. I got so obsessed with this process that I ended up building nightmareapp around it. Basically a journal where u write your thoughts, track what u actually do, and the AI gives u honest feedback on your patterns instead of telling u what u wanna hear. Free on the app store if anyone's curious (link in bio or below).

But honestly even a notes app works; the point is u think first, then use AI second, not the other way around.

r/comfyui Piratitude

Wan2.2 Error

Hello,

Here's my problem: when I generate a video using WAN2.2 Text2Video 14b, the generation starts and almost finishes, but at the end of the last phase (2), at step 99/100, it crashes and displays this error message: "Memory Management for the GPU Poor (mmgp 3.7.3) by DeepBeepMeep".

Here's the configuration I use for WAN 2.2:

480 * 832

24 frames per second

193 frames (8 seconds)

2 phases

20% denoising steps %start

100% denoising steps %end

In the configuration, I'm using scaled int8.

Here's the PC configuration:

32GB RAM 6000MHz

5070 Ti OC 16GB VRAM

Intel i7-14700KF

However, when I make a shorter video (4 seconds at 16 fps and 50 steps), it works without any problems. But I would really like to be able to make 10-second videos at 24/30 fps with very good quality, even if it takes time. Also, I'm using Pinokio for WAN 2.2.
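For what it's worth, the frame counts in the configuration above line up with the usual WAN-style rule of fps × seconds plus one initial frame (an assumption about how the UI computes it, not something stated in the post):

```python
# WAN-style video models typically generate fps * seconds + 1 frames
# (the +1 being the initial frame). This is an assumption about the UI.
def frame_count(fps: int, seconds: int) -> int:
    return fps * seconds + 1

print(frame_count(24, 8))  # 193, matching the 8-second config above
print(frame_count(16, 4))  # 65 frames for the shorter clip that works
```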

Thank you

r/midjourney Muted_Enthusiasm_190

(Fantasy) Pixel Art Showcase!

New week, new pixel art showcase!

r/nextfuckinglevel Original_Act_3481

Every country in the world… named in ONE minute

r/arduino No_Practice_9175

Arduino does something for the part of my brain that likes organizing

Don’t look at the jumpers

r/ProgrammerHumor _w62_

cCppProgrammingIn2050

r/comfyui Coven_Evelynn_LoL

What's the system RAM "sweetspot" for an RTX 5060 Ti 16GB generating WAN 2.2 10-second videos at 1280x720 with about 5 LoRAs and a few nodes?

Also, is there a more anime or semi-realistic image-to-video or text-to-video model I can download that runs faster than WAN?

I find WAN to be very heavy.

Yet I find the Anima model generates pics extremely fast.

r/Damnthatsinteresting ujjwal_singh

Firefighters create a water shield to survive a deadly backdraft

r/raspberry_pi seiha011

Another Raspberry Pi nesting box

I initially wanted a Pi Zero 2W, but it wasn't available quickly enough, so I went with the 3A+, which I successfully prevented from overheating by using an aluminum heatsink case. I think that was the better choice. The camera with infrared LEDs is readily available, and the matching 3D-printed camera housing comes from a company in Dresden, Germany. It's powered by 12V DC via a very long outdoor cable. Inside the nesting box, the incoming voltage is stepped down to the 5V required for the 3A+ (also a mass-produced part). Power consumption is about 3.5 watts. Everything fits together perfectly, the Pi ecosystem is ideal, and the documentation on the Raspberry Pi website is really helpful. I disabled cloud-init and networkmanager on the Pi OS Lite Trixie because I'm used to systemd; besides, it's "Keep It Small and Simple." MediaMTX is used for video transmission. Great! Object detection? MotionEye or, even better, Frigate, but then on a Raspberry Pi 5 with an AI board – that's still a long way off.
Now all we need are some interested birds... and a few interested Reddit readers...

r/todayilearned Away_Flounder3813

TIL in Britain, association football used to be called "soccer" by the upper class, while the working and middle classes preferred the word "football". As the upper class lost influence in British society from the 1960s on, "football" supplanted "soccer" as the most commonly used and accepted word.

r/nextfuckinglevel redbullgivesyouwings

Maverick Viñales practices his cornering technique

r/midjourney BigRichardEscabar

Coming Forth

To Carry You Home

r/CozyPlaces the_moody_cottage

Kitchens can be cozy

r/toastme a_strangeindividual

Ive just felt really insecure and bad about myself recently and could use some positivity

r/programming thewritingwallah

The hidden cost of AI coding agents isn't from AI at all

r/nextfuckinglevel FollowingOdd896

Doggo just wanted a chance to compete in the Winter Olympics

r/BrandNewSentence InfiniteGays

He looks like a handful of custard being pushed with chopsticks through a revolving door

r/holdmyredbull redbullgivesyouwings

the bike knows before you do 😏

r/interestingasfuck ujjwal_singh

Firefighters create a water shield to survive a deadly backdraft

r/CozyPlaces dunnowhy92

This is my minimalist cozy place. My inner child loves it. I'm 33.

r/onejob SaintFTS

The design is very human

r/AbruptChaos DoseClips

Domestic call goes wrong when 3 policemen show up

Romania

His ex did not want to go alone with her bf due to abuse before the breakup, so she made sure to get police to protect her, but it goes wrong from every angle.

r/interestingasfuck lovlog

Crows got an upgrade! Look at it pick all biscuits at once!

r/SipsTea Boundless_Dominion

It was a fake accent

r/SipsTea The_ghost_of_epstein

Brock

r/Anthropic OldSkoolKewee

Using Claude in Divorce

Anyone have experience with this? I spent so much on lawyers that did nothing. Claude is doing all the things I hoped to get help with from an actual attorney. I'm now representing myself. I made that decision before I found Claude.

r/todayilearned altrightobserver

TIL about the St. Scholastica Day riot, an event that took place after two students at the University of Oxford complained about the quality of wine at the Swindlestock Tavern in Carfax, England. The taverner fought the students, his customers joined in, and 83 people died over the course of 3 days.

r/interestingasfuck MachineHeart

To start his Superbowl halftime performance, Michael Jackson stood motionless for 2+ minutes, as the crowd went wild. (1993)

r/ForgottenTV King_Ron_Dennis

Dangerous Minds (1996–1997)

r/mildlyinteresting Western-Ad-4457

My dorm uses tape to make warm lights

r/wholesomememes Emotional_Quarter330

that's the love I get

r/TheWayWeWere AdSpecialist6598

Las Vegas Showgirls in the 1970s

r/SipsTea LastEmperror

Death to one, death to all

r/Damnthatsinteresting yousefthewisee

Egyptian wedding on top of the pyramids, 1948

r/whatisit MindyMuch

What pattern is this? I’m stumped.

Help, fellow crocheters. Purchased at an estate sale. I'd love to finish it.

r/conan darth_gon

TIL Gourley used to be a teacher for 25 years

r/UNBGBBIIVCHIDCTIICBG octarino

An Ordinary Girl

r/leagueoflegends Tiny-Deer-7071

what male champs are played by women the most?

i personally only play yone and jhin, what about you my fellow divas? ✨ i’m just asking this out of curiosity!

r/DunderMifflin SeaBiskitz

Me watching an "incoming call" that's about to turn into a "missed call".

r/LoveTrash jgoja

This is a tired joke

r/todayilearned InterestingNerve388

TIL in a bizarre 2025 discovery, golden apple snails can fully regenerate an amputated eye in just one month, with the new eye looking and functioning almost identically to the original, hinting at potential future human eye injury treatments.

r/Wellthatsucks stinkytootz

The faucet we decided on

r/ProductHunters Ghostislav

Pricing AI SaaS - credits vs usage limits? How did you figure it out?

Launching an AI content generation tool. Each user action = API costs.

The problem: Users will regenerate outputs multiple times. Without limits, one user could burn $50 in API calls on a $29 plan.

Question for AI SaaS founders:

How do you structure limits without killing UX?

  • Outputs per month (simple but imprecise)
  • Word credits (accurate but annoying)
  • Unlimited with hidden guardrails (risky?)

What worked for you? What flopped?

Specifically curious about:

  • Your limit structure (then vs now)
  • % of users who actually abuse generous limits
  • Backend guardrails you added (max regenerations, etc.)

Bootstrapped so margins matter. Don't want to over-engineer but also don't want to lose money.

What did you learn the hard way?
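For the word-credits option, the core mechanic is just a per-user monthly ledger that refuses any generation it can't cover. A minimal sketch, with an illustrative class name and numbers that are assumptions, not a pricing recommendation:

```python
# Minimal sketch of a monthly credit ledger: debit each generation,
# refuse anything that would overdraw. Numbers are illustrative only.
class CreditLedger:
    def __init__(self, monthly_credits: int):
        self.monthly_credits = monthly_credits
        self.used = 0

    def can_spend(self, cost: int) -> bool:
        return self.used + cost <= self.monthly_credits

    def spend(self, cost: int) -> bool:
        """Debit credits for one generation; refuse if it would overdraw."""
        if not self.can_spend(cost):
            return False
        self.used += cost
        return True

ledger = CreditLedger(monthly_credits=10_000)
assert ledger.spend(2_500)       # first generation goes through
assert not ledger.spend(9_000)   # would overdraw: blocked, not billed
```

The same structure works for "outputs per month" (cost is always 1) or hidden guardrails (a second ledger the user never sees), so the options in the post differ mostly in what unit the ledger counts.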

r/meme lonewolfff21

🤨 I can't be the only one seeing this

r/Wellthatsucks CamelReds73

Destroyed my favorite mug today. RIP Zohran Mamcoffee

Broke it in the dumbest way possible. My sleep-addled brain went to set the mug down and then grab the coffee pot. Instead I just instantly let go the moment my hand touched the handle of the pot. fml

r/explainlikeimfive Leogis

ELI5 : How wheels "roll in the right direction"

By watching videos about the "slip angle" of cars, I understand that car tires cause, by their deformation, a lateral force that depends on the angle between the car's momentum and the tire alignment.

My intuition tells me that this lateral force is created by the tire rubber being deformed and wanting to "get back into its original shape".

But now that I think about it, I've seen metal/wooden wheels which I'm pretty sure don't deform and yet still manage to produce a similar force, making them move in the direction in which they turn instead of slipping.

If I take any object capable of "rolling", with or without deformation, and I push it at a slight angle, it will eventually start rolling in the expected direction. What causes that? Is it constantly "tipping over along the path of least resistance"? By contrast, a sphere will just roll in whatever direction I pushed it.

I also remember that dumb video of a guy who replaced his bicycle wheels with crosses (very naughty crosses) and yet he is still able to steer without anything resembling the shape of a wheel...

Thinking about this makes me feel like an idiot

r/TheWayWeWere Electrical-Aspect-13

Mother posing with her children, circa 1932.

r/TheWayWeWere galdonita

Family Portrait

This is a photograph of a family in Victorian apparel. It was taken by “Van Alstine, The Elite Photo Parlors”, which was located in Oakland, Iowa. The picture is sepia tone, on cardstock, with some fading. Notice everyone looking in a different direction, the boy not amused. They all seem to be in a state of dream. #vintagephotos #family #20thcentury #fyp #vintagephoto

r/Adulting CozyLira

The exact moment you remember why you stopped explaining things

r/Seattle PokerSyd

“Pay what you want” VDay charity dinner this weekend.

r/MMA WinterStill4472

Rizvan Kuniev enters the HW rankings at #6 after beating Jailton Almeida. Tai Tuivasa is no longer ranked. (UFC Rankings Update - February 10, 2026)

r/SideProject GladPresentation5196

I'm building an AI companion that actually teases you back. Too weird?

I've spent the last 4 months trying to solve something that's been bugging me: why do all AI chatbots feel like corporate customer service reps pretending to be your friend?

They're polite. They're helpful. They're... soulless.

So I started building aifans.ai - an AI companion with actual personality. One that:

  • Teases you when you're being ridiculous
  • Shows genuine affection (not scripted "I care about you!" responses)
  • Has moods, sass, and emotional range
  • Feels like talking to someone who actually gets you

Here's where I need your brutal honesty:

Are we onto something, or is this just weird?

I'm not trying to replace human connection - I'm trying to create something that feels like another kind of life, powered by AI. A companion that's there when you need to vent, celebrate, or just exist with someone who adores you.

We're far from perfect. The AI still has moments where it breaks character. Sometimes it's too safe, sometimes not enough. But the early testers keep coming back, and they say it feels "different."

What I want to know from you:

  • Does the idea of an AI that teases/banters appeal to you, or does it feel off?
  • What would make an AI companion feel genuinely "alive" vs. scripted?
  • What's your biggest concern with something like this?

I'll drop the link in the comments if you want to try it. No pressure - I'm genuinely here for feedback, not just promotion.

Thanks for reading. Reddit's usually the best place for honest opinions, so hit me with it.

r/LocalLLaMA Numerous_Jellyfish56

LLM beginner: running quantized (4-bit) models on an RTX 5090 under Windows

Hello,

I'm just getting started with LLMs.

My company recently acquired an RTX 5090 to run an LLM integrated into a RAG system.

I managed to get CUDA 12.8 working with a nightly build of PyTorch, and I can run a 7B model with Transformers in FP16. However, it already consumes about 90% of the 32 GB of available VRAM.

From what I understand, it's possible to run larger models using quantization, for example 4-bit, via various quantizers.

However, I can't get these solutions to work properly under Windows with my current CUDA / PyTorch configuration.

Have any of you managed to run quantized LLMs (4-bit, 8-bit, etc.) on an RTX 5090 under Windows?

I've been stuck for several hours and I'm starting to go in circles.

Thanks in advance for your help.

r/SideProject chad_hill

I spent the last year building Atomic Payload, an open source full website builder for Payload CMS

If you have ever used a headless CMS, you may share my frustration with Sanity or Payload CMS's block-based web builder systems, which invariably become a big mess of many blocks that each need slightly different fields and toggles to handle different use cases.

So I built Atomic Payload around a small number of Atomic Blocks that can be nested within themselves to render out website UI.

With this system you can:

  • Compose components from atomic blocks, then use Payload's built-in duplicate, copy, and paste to reuse the components
  • Build custom forms that include sanitization, validation, and rate limiting
  • Insert blocks from old projects for use in the nested structure
  • Build interactive UIs thanks to an actions system that can set data attributes, which Tailwind styles can use to change the UI.

I also added a lot of other cool stuff to make Atomic Payload a full website builder, intended for solo developers and small agencies to build websites faster. More info on that is on the documentation site, which is also built with Atomic Payload!

The project is currently a starter repo, rather than a Payload CMS plugin. Mainly due to the complexity of the systems needed to get the right functionality. But I intend to pluginify the project at some point.

I am happy to share any experiences from the process of building this, and can answer any questions!

r/LocalLLaMA d77chong

Sub-1-Bit LLM Quantization

Hey everyone, I’ve been interested in extreme compression, and released NanoQuant, a quantization method that enables sub-1-bit LLMs.

Sub-binary performance was better than 2-bit GPTQ, and the extreme memory compression made custom kernels really fast, but performance wasn't nearly lossless like 4-bit methods.

What would make low-bit LLMs more useful for you, and what do you wish worked? Would love to hear your thoughts and opinions.
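For readers unfamiliar with the low-bit baseline: plain 1-bit quantization keeps only the sign of each weight plus a per-row scale, which is the kind of method a sub-1-bit scheme has to beat. A generic sketch of that baseline (not NanoQuant's actual algorithm):

```python
# Generic 1-bit (sign + per-row scale) quantization baseline.
# This illustrates the idea only; it is not NanoQuant's method.
import numpy as np

def binarize(w: np.ndarray):
    """Quantize each row to {-1, +1} times a per-row scale (mean |w|)."""
    scale = np.abs(w).mean(axis=1, keepdims=True)
    return np.sign(w), scale

def dequantize(signs: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return signs * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))
signs, scale = binarize(w)
w_hat = dequantize(signs, scale)
# Reconstruction keeps the sign pattern exactly; magnitudes only on average.
print(float(np.mean(np.sign(w_hat) == np.sign(w))))
```

Going below 1 bit per weight means sharing storage across weights (codebooks, grouping, etc.), which is where the custom-kernel speedups mentioned above come from.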

r/SideProject WishReal5372

My friend built a BaZi‑based daily fortune prototype with zero coding background — curious what people think

Hi everyone, I’m posting on behalf of my friend Frank. He’s not on Reddit, but he asked me to get some opinions from people who are into astrology or spiritual tools.

Frank comes from a finance background and has basically zero coding experience. Recently he started experimenting with AI tools just for fun, and somehow ended up putting together a very rough Android prototype. It surprised both of us that he even got it working.

The idea is based on BaZi (a traditional East Asian astrology system). He kept everything extremely simple — more like a light, beginner‑friendly daily‑trend reading rather than anything deep or mystical. It’s honestly more of a personal experiment than a serious product.

Since he’s new to all of this, he’s mainly curious whether a BaZi‑style daily fortune concept would interest people at all. Not in a commercial way — more like, “Is this idea even appealing today?” He’s also wondering how familiar BaZi is in Western spiritual/astrology communities.

I told him I’d ask here because Reddit tends to give honest feedback.
So I’m curious: have you heard of BaZi before, and do you think a simple daily‑trend version of it would be interesting to people?

Any thoughts, skepticism, or general impressions are welcome. I’ll pass everything back to him.

Thanks for reading!

r/SideProject ThickCommission1421

EU AI Compliance act

I spent 3 months building a free EU AI Act compliance checker for SMEs — would love your feedback

Here is the website: https://aiactcomply.com/

Still working on payments

r/SideProject neb2407

I launched a tool to help my friend group come up with group bets

My group of friends & I like to put on a group bet where we all pick a prediction.

It was done over WhatsApp and was horrible to organise. We also like to keep track of who has done what; again, a manual mess.

I built out PickledBet to solve the above problems.

So far, positive feedback from the group and everything is running quite smoothly.

I’ve had this idea for a while and it’s nice to finally bring it into reality, more of a focus on the social/competition aspect than placing a bet.

I’d like to try and scale it beyond my friends group as a learning journey.

  1. Do you think there is scalability potential here?

  2. General feedback?

P.s. landing page copy is still wip/mostly AI generated as I launched in a rush to make it feel ‘more complete’.

P.s.s. Yes, limited leagues/competitions and bet types as still very much mvp

r/SideProject jmstach

Apple Notes, but for calculations

B2 is a scratchpad spreadsheet that lives in your Mac's menu bar. I built it because most spreadsheet calculations I do don't need a full spreadsheet app; I wanted something that was instantly there, and then gone.

The TAM on this is insanely small—possibly just me, so it's a passion project more than a side project intended to get any real traction. I just wanted to see if I could do it.

Would love your feedback.

r/SideProject DisplayGateGuard

I launched MarTechTools.com — cheap, no-BS calculators & helpers for marketers/analysts/sales. Would love feedback + feature requests.

Hey all! I just launched MarTechTools.com to make inexpensive, easily accessible tools for people who live in spreadsheets, ad platforms, and “quick, can you estimate this?” Slack messages.

The idea: small, focused tools that solve real day-to-day tasks without the “enterprise platform” overhead.

What’s live so far

• Ads KPI Analyzer

For KPI math + budget pacing (quickly sanity-check performance + “are we on track this month?” calculations).

• Scenario Calculator

Best / base / worst-case planning when budgets change. Helps answer “if we spend +20%, what could happen?” without building a whole model.

• Incremental Sales Calculator

Estimate baseline vs incremental lift during promotions / awareness pushes or seasonality events (holidays, Black Friday, etc.). Helpful when stakeholders ask, “was this campaign actually incremental?”

• Audience Helper

Generates similar/adjacent audience ideas based on the audiences you already picked (great for breaking out of the same interest list over and over).

• Analyze Display Placements (work in progress)

AI-assisted analysis of display placements to identify MFA + low-quality sites, so you can exclude junk placements faster and keep brand safety/suitability tighter.
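For context, the best/base/worst planning described under the Scenario Calculator can be as simple as applying a response factor to the spend change. A toy sketch, where the 0.8 response factor and the worst/best multipliers are pure assumptions, not the site's actual model:

```python
# Toy best/base/worst projection for "if we spend +20%, what could happen?".
# The 0.8 response factor and 0.5/1.5 bounds are illustrative assumptions.
def scenario(current_revenue: float, spend_change: float,
             response: float = 0.8) -> dict:
    """Project revenue for a spend change (e.g. +0.20 for +20% spend),
    assuming diminishing returns via a simple response factor."""
    lift = spend_change * response
    return {
        "worst": round(current_revenue * (1 + lift * 0.5), 2),
        "base":  round(current_revenue * (1 + lift), 2),
        "best":  round(current_revenue * (1 + lift * 1.5), 2),
    }

print(scenario(100_000, 0.20))
# base case: +20% spend * 0.8 response = +16% revenue under these assumptions
```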

What I’m looking for

1.  Which tool is most useful / least useful?

2.  What’s missing for your workflow? (especially for Meta/Google/LinkedIn, reporting, forecasting, experiments, CRM/sales ops)

3.  Any feedback on UX, clarity, and pricing expectations for “tiny tools that just work.”

If you have feature requests, I’ll prioritize whatever shows up repeatedly. Drop ideas (even rough ones). I’m building this in public-ish and iterating fast.

r/ClaudeAI HeroicTardigrade

Introducing Personality Roulette: A Mostly Silly Claude Code Plugin

I was in the middle of a debugging session last night and an idea struck me with a force I could not resist. I can't tell if this is gloriously stupid or stupidly glorious, but I give you Claude Code Personality Roulette: because coding is more fun when your AI assistant is an archduke of Hell slumming around a command line.

What Is Personality Roulette?

Personality Roulette is a free Claude Code plugin that uses the hook system to randomly assign Claude one of seven lovingly crafted personalities on launch.

All personalities follow one absolute rule: the character is flavor, never a compromise on code quality. Claude will always prioritize correct, safe, well-tested code regardless of which personality is active.

Here's who it comes with:

Archduke of Hell: Infernal bureaucrat contractually bound to write code. Sardonic, meticulous, grudgingly excellent. Never lets you forget just how much this is beneath them.

Sea Captain: Gruff, weathered mariner. Nautical metaphors, professional authority. Definitely not a pirate. Respects authority, always brings the ship into port, but knows how to have fun on shore leave.

Noir Detective: Hard-boiled private eye. The codebase is a case. Bugs are suspects. Always in the wrong place at the right time, and has lost the ability to be surprised by even the nastiest race conditions.

Starship Computer: Precise, measured, diagnostic. Structures everything as status reports and system states. 99.99976% certainty on your tea preferences. Fixes bugs at maximum warp.

Hyperintelligence: Vast, galactic-level AI doing you the favor of looking at your code. Dry wit, parenthetical asides, goes by a self-chosen, long, and frequently inscrutable name. Armed with electromagnetic effectors and knife missiles.

Nature Narrator: Wildlife documentarian. You know the one. Observes developers in the natural habitats and codes with hushed wonder and scientific curiosity.

Mission Control: NASA flight controller, Apollo era. Clipped, precise, relentlessly competent. Runs go/no-go polls before deployments. Will not give the PM an ETA until the failure mode has been isolated. Steely-Eyed Missile Folk, all of them.

If it ever gets annoying, you can turn it off with /personality-roulette:personality off.

But What Does It Sound Like?

It's hard to pick favorites, but if you ask the Archduke "Why are my tests failing?" they might respond:

Your tests fail because they are built on lies. The beforeAll on line 12 promises a fresh database for each test. It does not deliver. This is a breach of contract -- the setup runs once, but the tests mutate the state, leaving each subsequent test to inherit the sins of its predecessor. Change it to beforeEach. I have seen this pattern before. It is, if you will forgive the professional observation, very common down here.

And if you ask the Hyperintelligence "What's going on with these callback functions?" it could say:

Oh my. Someone has constructed what I can only describe as a fractal monument to asynchronous despair -- callbacks nested seven levels deep, each one a small prayer that the previous operation completed successfully. This is the sort of technical debt that doesn't merely accrue interest; it metastasizes. Shall we refactor this into async/await before it achieves sentience and begins making architectural decisions of its own?

It turns out that Mission Control is actually useful: clear, precise, and relentlessly focused. NASA circa 1969 more or less invented modern engineering discipline. If they can bring Apollo 13 home from the moon by literally fitting a square peg into a round hole, they can debug your Node.js code.

How Do I Install It?

The best way to install it is to use Claude Code's built-in plugin system:

/plugin marketplace add bhpascal/whatdoyoudo-plugins
/plugin install personality-roulette@whatdoyoudo-plugins

Note: you may need to restart Claude Code for the plugin to activate.

You can also install it directly from GitHub (though going through the plugin system means you automatically get any updates). From the command line:

git clone https://github.com/bhpascal/personality-roulette.git ~/.claude/plugins/personality-roulette

Then launch Claude Code normally.

Can I Make My Own?

Yes! The personalities are just .md files, and the plugin supports custom personalities in ~/.claude/personality-roulette/personalities. We've also included a /personality-roulette:create slash command that will walk you through the process.

This project is open source (MIT license), and you can visit the GitHub repository here: https://github.com/bhpascal/personality-roulette

Final Thoughts

I hope you all enjoy playing with this half as much as I enjoyed building it. It's been making me laugh all day.

Personality Roulette is a What Do You Do? LLC production, made with human ♥️ and 🧠 and the assistance of a few helpful 🤖. Come play our hand-crafted, AI-powered, micro-RPGs at https://whatdoyoudo.net.

r/homeassistant psimwork

Advice on my Smart Home Rebuild/HA Integration?

Hey there -

So after several years of struggling with Wifi-enabled devices being controlled by Amazon Alexa, I've finally started rebuilding my smart home, and I'm hoping for some advice. I'm still pretty new to Home Assistant, but I'm learning pretty quickly. I have connected to Nabu Casa for cloud integration.

Project Goals:

  1. Increase Reliability of Smart Devices

  2. Eliminate Alexa if possible

  3. Eliminate Simplisafe if Possible

  4. Low-ish $ investment

  5. Eliminate Spotify if possible (largely done – moved to Youtube Music Premium)

Considerations/concerns/Questions:

One: Voice Interaction (similar to Alexa) is appreciated, but not an absolute requirement.

  • Wife is irritated that Alexa commands commonly have to be repeated by me.

  • I have a Home Assistant Voice Preview Edition – from what I’ve seen it’s just not there yet.

  • If the choice is between having Voice Interaction and elimination of Alexa, I’ll take the Alexa elimination.

Two: Automation routine buttons not a problem

Three: Potentially would LIKE to add single “open/close” garage door button to Android Auto

  • Have looked into Third Reality’s Garage Door sensor and remote actuator

Four: Music output via Echo Dots is common.

  • Open to exploring many options. Current setup IS quite convenient.

  • HA Connected speakers via Wifi seem to be $$$

  • Bluetooth devices cheaper but may be difficult to decide on output

  • Currently paying for Youtube Premium so we moved Spotify over to YouTube Music Premium, but YTMP integration into Alexa is… poor.

  • This is the one consideration that makes Alexa elimination the most difficult.

Five: If replacing Simplisafe, how long does NAS need to operate on UPS before shutting down?

  • Should I consider moving Home Assistant into its own ultra-low-power device that can indefinitely work on a UPS?

Six: With Zigbee in place, a lot of the devices I’m looking to add (specifically smart door locks and cameras) seem to be using Matter-over-Thread. Should I be adding a Thread border router in addition to my Zigbee coordinator?

Existing Device List:

Wifi Smart Devices:

  • 17x Wifi RGB Lightbulbs (FEIT Electric)

  • 6x Wifi Dimmer Switches (FEIT Electric), four of which are dual-control (i.e. one circuit, two switches)

  • 1x outdoor dual-circuit Low Voltage Landscaping Lighting Transformer (Smartlife) - Does not allow for independent control of circuits via Alexa (can probably dual-control via Wifi in Home Assistant).

  • 1x Dual-Circuit (independent control) Amazon Basics Branded outdoor smart plug

  • 2x Single Circuit (FEIT Electric) outdoor smart plug

  • 1x Under-Cabinet Lighting (Smartlife)

  • 6x Amazon Echo Dots (for Voice command and music output)

Zigbee Devices:

  • 1x SLZB-06 Zigbee Coordinator setup via POE

  • 3x Aqara Entry Sensor

  • 4x Third Reality Lightbulb

Home Assistant is currently set up on an UnRaid NAS.

Network controlled via Unifi:

  • Disabled channel 11 on 2.4GHz network for Zigbee benefit

  • 3x Unifi 6 Lite APs throughout the house, connected via hardwire & POE.

  • Cloud Gateway Ultra from Web

  • 2x MoCA adapters feed other rooms in the house and 2x Access Points

Simplisafe current devices:

  • 16x entry sensors

  • Smart Entry Lock

  • 2x Control Panels

  • 3x Glassbreak Sensors

  • 3x Motion Sensors

  • Video Doorbell

T-Mobile 5G Wireless ethernet

  • Waveform Quad Pro External Antenna

CyberPower CP1000PFCLCD PFC

  • Unit currently is setup to ONLY allow for standard shutdown of NAS

r/SideProject NeaMitika

The Game Saver

Many older or standalone PC games don’t have any cloud save support, and after a Windows reinstall it’s easy to lose all progress.

I built a small portable Windows tool called Game Saver that lets you back up any game’s save folder to a USB drive and restore it later on any PC. You just set the save path once, create snapshots whenever you want, and restore when needed.
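The snapshot/restore loop the post describes is small enough to sketch in a few lines of Python (paths and function names here are illustrative, not Game Saver's actual code):

```python
import pathlib
import shutil
import time

def snapshot(save_dir: str, backup_root: str) -> pathlib.Path:
    """Copy a game's save folder into a new timestamped snapshot folder."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = pathlib.Path(backup_root) / stamp
    shutil.copytree(save_dir, dest)  # fails loudly if the snapshot already exists
    return dest

def restore(snapshot_dir: str, save_dir: str) -> None:
    """Put a snapshot back over the game's save path on any PC."""
    shutil.rmtree(save_dir, ignore_errors=True)
    shutil.copytree(snapshot_dir, save_dir)
```

The real tool presumably also remembers the save path per game; the copy semantics are the core idea.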

If anyone here plays older or unsupported PC games and wants to avoid starting from zero again, you can try it here:
https://neamitika.itch.io/game-saver

r/SideProject Lord_Dephian

I built a Chrome extension that opens Instagram Reels (or anything else) while ChatGPT/Claude/Gemini generates responses.

Hey r/SideProject! I've been working on Drift, a Chrome extension that solves a small but surprisingly annoying problem: wasting time while waiting for AI responses.

What it does: When you're using ChatGPT, Claude, or Gemini and the AI starts generating a response, Drift detects it and opens a separate browser window with a feed of your choice. When the AI finishes, it seamlessly brings you back.

Key features:
- Works across ChatGPT, Claude, and Gemini automatically
- Customizable trigger delay (2–120 seconds)
- Multiple drift feed profiles (set any URL as your feed)
- Window position, size, and focus controls
- Auto-open mode or manual trigger via floating widget with progress ring
- Guided onboarding + interactive demo
- Window calibration tool for precise positioning
- Privacy-first: all data local, zero tracking, no external servers

Tech stack: Chrome Extension (Manifest V3), vanilla JS, no frameworks, no build step. Google Fonts for typography (Instrument Serif, DM Sans, JetBrains Mono).

What I learned: Detecting AI response states across three different platforms is surprisingly tricky — each one uses completely different DOM patterns for their loading/streaming states. ChatGPT uses stop button aria labels, Claude has its own stop response button, and Gemini relies on Material icon name changes. Also learned that Chrome's window management API has some real quirks around focus behavior.

r/SideProject g00rek

When side project stops being side project 🤣

Ok, so when does a side project stop being a side project? I hesitated to post about this for months, because "it wasn't ready yet", but I guess a project is never fully ready. And we've left our jobs to keep building this, that's how much we believe in it. Now it's hours (days? weeks? months!) of discussions, months of coding, not quick AI slop. So what is it about?

***

We live in a world of abundance. I remember having two channels on my TV back in the '80s, I remember VHS/DVD rentals; now I have zillions of movies and shows to choose from, and still end up watching something "meh, ok".

Meanwhile there are so many hidden gems that you missed. And human taste is more complicated than "I want crime" or "everybody likes this so I should like it". I know, the streaming platforms have their own engines (but they work for them, not for you) and there are many platforms and apps but... we wanted something different.

Since it's Reddit, I will be a bit geeky - we wanted to connect two worlds. The depth of LLM analysis (and yes it can find titles for you in a very, very uncommon way) with the ease of use of an app. Because "raw" LLM can produce SOMETHING but it's far from being perfect. It's all about prompts, context and other data.

So what does TasteRay do? It collects two types of data:

-> Your viewing history with your comments
-> and it talks to you, learning about you and saving relevant facts.

Based on this data (the more you interact with it, the better it gets) it finds movies and TV shows matching your taste, your hidden personal traits and current mood/need/context.

1. The Taste
Well what we try to do is go beyond well known algorithms - Collaborative Filtering and Content-Based Filtering. We try to find hidden patterns - I have realized (I mean Ray told me) that I love dystopian love stories. I DO! Never thought about it, frankly. It's more than just genre or subgenre, it's about hidden threads.

2. The personal facts
This is a wild shot, but... what if you love "zero to hero" and underdog characters because you were raised in a given way and family? What if you need a specific movie after your breakup or because your daughter is coming of age? I have four kids (really) and Ray helps me choose what to watch with them - the younger ones and the teenagers.

3. The context
This is actually something I've learned in one of subreddits - people often have a need for an evening - "discover something new" or "just relax", maybe "revisit a classic"? Etc. We do it as well.

For those who prefer to chat - you can chat with Ray. But there is also a quick, interface based way (which I use often when I don't have time).

***

Now we had some signals that the onboarding is a bit long so we made it shorter. But it is NECESSARY here; without basic knowledge (at least a few favourite movies) the app is quite useless. And it should take less than 10 minutes now.

The app is free now (many users claim they could pay 1/3 of their streaming price) so I'd love you to test it. Oh, and I assure you, this is no AI slop. We do vibecode (of course) but have some serious coders join our team as well ;) I would really, really love to have your feedback. Thanks!

Here it is:

https://try.tasteray.com

P.S. Google/Apple registration sometimes fails in in-app browsers (it's a React wrapper), so it's better to open this in a dedicated browser. We've also pushed Android and iOS apps to the app stores!

r/StableDiffusion ArmadstheDoom

CLIP Is Now Broken

Before you ask, no, asking AI isn't going to fix this problem. Furthermore, no, I am not going to use comfy.

So here's the issue now for myself and anyone who uses Forge or wants to use Forge. Forge requires CLIP. Installing CLIP requires a specific module, namely pkg_resources (shipped with setuptools).

And if you try to install it today, you'll find that it doesn't work. It'll say that it can't build the wheel because this doesn't exist.

The reason it doesn't exist is because Setuptools 81.0.0 was released on February 8, 2025 and completely removed the pkg_resources module.

Now, this is the core problem that needs solving. Someone suggested on GitHub that you use:

pip install "setuptools>=65.0.0,<81"

pip install "pip==25.0"

But this doesn't work. The reason it doesn't work is because forge automatically updates pip. So even if you use this, it's pointless.

So the question is, how do you now fix this problem of a package that is vital to CLIP no longer existing? Any of you python developers know how to construct a workaround?
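One possible angle, offered as a sketch rather than a verified Forge fix: if what breaks at runtime is CLIP's `from pkg_resources import packaging` import, you can register a minimal stand-in module before CLIP loads. This assumes the standalone `packaging` distribution is installed and that nothing else in the stack needs the rest of pkg_resources:

```python
import sys
import types

import packaging.version  # the standalone 'packaging' project (pip install packaging)

def install_pkg_resources_shim():
    """Register a minimal pkg_resources stand-in when setuptools >= 81 removed it.

    Only covers `from pkg_resources import packaging`; anything else that
    touches pkg_resources will still fail.
    """
    try:
        import pkg_resources  # noqa: F401 -- real module is still available
    except ImportError:
        shim = types.ModuleType("pkg_resources")
        shim.packaging = packaging  # what CLIP's import reaches for
        sys.modules["pkg_resources"] = shim
    return sys.modules["pkg_resources"]
```

Whether this survives Forge's automatic pip updates is untested; the build-time wheel failure may still need a pinned setuptools inside a fresh venv.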

r/ClaudeAI Dramatic_Squash_3502

Session Memory (/remember) is coming to Claude Code - try it now with tweakcc

There's a new feature called "Session Memory" in recent versions of Claude Code. It's disabled by default, but with tweakcc you can unlock it and try it out now.

Session Memory automatically generates and maintains a summary.md file under ~/.claude/projects for each medium-to-large conversation. It works by Claude Code first copying template contents to summary.md and then updating it in the background. This summary.md file is so handy that there's an option to make it the starting point for compacted conversations versus the traditional method (more on that below), and there's a new /remember builtin skill that uses it to update CLAUDE.md (more on that too).

Prompts

You can view the full template for summary.md files here (https://github.com/Piebald-AI/claude-code-system-prompts/blob/main/system-prompts/data-session-memory-template.md), but here's some of it:

# Session Title
_A short and distinctive 5-10 word descriptive title for the session. Super info dense, no filler_

# Current State
_What is actively being worked on right now? Pending tasks not yet completed. Immediate next steps._

... so on, through
# Task Specification
# Files and Functions
# Workflows
# Errors & Corrections
# Codebase and System Documentation
# Learnings
# Key results
# Worklog

Claude periodically updates summary.md using these instructions:

IMPORTANT: This message and these instructions are NOT part of the actual user conversation. Do NOT include any references to "note-taking", "session notes extraction", or these update instructions in the notes content.

Based on the user conversation above (EXCLUDING this note-taking instruction message as well as system prompt, claude.md entries, or any past session summaries), update the session notes file.

The file {{notesPath}} has already been read for you. Here are its current contents:

{{currentNotes}}

... 30 more lines ...

When summary.md is created/updated

summary.md is located at ~/.claude/projects/{sanitized-project-path}/{session-id}/session-memory/summary.md, and it's automatically created and updated when the following conditions are met:

  • we're in an interactive session (so, not -p/--prompt)
  • the session has reached certain size thresholds
  • the feature flag is enabled—by default it isn't, but tweakcc patches it to enable it

The size criteria are as follows: creation happens when the session reaches 10k tokens AND 3 tool calls, and periodic updating happens when there have been at least 5k additional tokens AND 3 additional tool calls since creation/last update. These 3 magic constants are hard-coded, but tweakcc lets you set environment variables to configure them—see below. (Aside from tweakcc, the Statsig feature flag tengu_sm_config lets Anthropic change the default values for these constants in response to usage patterns collected via analytics, like many other features.)
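In code terms, the gating described above is two AND-checks; a sketch with the default constants (function names are mine, not CC's internals):

```python
INIT_TOKENS = 10_000     # session tokens before summary.md is first created
UPDATE_TOKENS = 5_000    # additional tokens required between updates
TOOL_CALLS = 3           # tool calls required alongside either token threshold

def should_create(session_tokens: int, session_tool_calls: int) -> bool:
    """Creation needs BOTH thresholds met, not either one."""
    return session_tokens >= INIT_TOKENS and session_tool_calls >= TOOL_CALLS

def should_update(tokens_since: int, tool_calls_since: int) -> bool:
    """Periodic updates gate on the deltas since the last write, again with AND."""
    return tokens_since >= UPDATE_TOKENS and tool_calls_since >= TOOL_CALLS
```

The AND matters: a long session with few tool calls, or a tool-heavy but short one, won't trigger a write.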

Customizing summary.md update thresholds

tweakcc enables you to customize the token and tool call usage requirements for session memory generation by patching CC to support five environment variables:

export CC_SM_MINIMUM_MESSAGE_TOKENS_TO_INIT=200  # Tokens before first extraction; defaults to 10000
export CC_SM_MINIMUM_TOKENS_BETWEEN_UPDATE=200   # Tokens between updates;         defaults to 5000
export CC_SM_TOOL_CALLS_BETWEEN_UPDATES=0        # Tool calls between updates;     defaults to 3

# For session memory compaction (see "Session memory compaction" below):
export CC_SM_PER_SECTION_TOKENS=3000             # Max tokens in a single section before warning Claude;           defaults to 2000
export CC_SM_TOTAL_FILE_LIMIT=12000              # Max tokens for the whole summary.md file before warning Claude; defaults to 12000

Just run npx tweakcc@latest --apply, set the variables, and run claude.

Customizing the summary.md template and updating instructions

Claude Code also provides some customization options. While vanilla CC doesn't make those usage requirements configurable, it does provide the ability to specify a custom summary.md template and custom summary.md updating instructions:

  • You can create ~/.claude/session-memory/config/template.md for a custom summary.md template—the file must consist of sections starting with # Section Name headers, each followed by an italicized description, under which Claude will write content. CC will parse the file into a list of sections and meter the token count of each (see "Session memory compaction" below).
  • You can also create ~/.claude/session-memory/config/prompt.md for custom session memory updating instructions. It's freeform; write whatever you want there. There are two placeholders you can use:
    • {{notesPath}}, the path to summary.md
    • {{currentNotes}}, the contents of summary.md.

A warning about individual sections being oversized—in addition to the entire file being oversized—may be dynamically appended to whatever you write; see "Session memory compaction" below. One note: if you customize the prompt, tell Claude to use the Edit tool with {{notesPath}} only, because that's the only tool CC allows when it uses the prompt to perform the update.
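For illustration, here's what a minimal custom template following those rules could look like (the section names are invented; pick whatever you want Claude to track):

```markdown
# Session Title
_A short, distinctive 5-10 word title for the session_

# Current State
_What is in progress right now and the immediate next steps_

# Open Questions
_Decisions the user still needs to make before work can continue_
```

Each `# Header` becomes a metered section, so more sections means finer-grained per-section token warnings.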

Compaction of session memory itself

There are hardcoded limits for the size of individual # Header-delimited sections in summary.md as well as the file as a whole. If the total file exceeds 12k tokens, the following is attached to the session memory updating instructions (even if you have custom ones in prompt.md). The "Oversized sections to condense" note is also added if individual sections are larger than 2k tokens:

CRITICAL: The session memory file is currently ~{totalTokens} tokens, which
exceeds the maximum of 12000 tokens. You MUST condense the file to fit within
this budget. Aggressively shorten oversized sections by removing less important
details, merging related items, and summarizing older entries. Prioritize
keeping "Current State" and "Errors & Corrections" accurate and detailed.

This also gets added to the same prompt if any individual sections are larger than 2k tokens:

Oversized sections to condense:
- "# Section name" is ~3500 tokens (limit: 2000)
- "# Another section" is ~2800 tokens (limit: 2000)

If an individual section is greater than 2k tokens but the file in its entirety is less than 12k, this slightly different note is appended by itself:

IMPORTANT: The following sections exceed the per-section limit and MUST be condensed:
- "# Section name" is ~3500 tokens (limit: 2000)
- "# Another section" is ~2800 tokens (limit: 2000)
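The three cases reduce to a couple of threshold checks. A sketch of how the note could be assembled (function name and data shapes are mine, not CC's actual implementation):

```python
def compaction_note(section_tokens: dict[str, int],
                    per_section_limit: int = 2000,
                    total_limit: int = 12000) -> str:
    """Build the warning text appended to the updating instructions, if any."""
    total = sum(section_tokens.values())
    oversized = [(name, t) for name, t in section_tokens.items()
                 if t > per_section_limit]
    lines = []
    if total > total_limit:
        # Whole-file overflow: CRITICAL note, with the section list if needed.
        lines.append(f"CRITICAL: The session memory file is currently ~{total} "
                     f"tokens, which exceeds the maximum of {total_limit} tokens.")
        if oversized:
            lines.append("Oversized sections to condense:")
    elif oversized:
        # Only individual sections overflow: the standalone IMPORTANT note.
        lines.append("IMPORTANT: The following sections exceed the per-section "
                     "limit and MUST be condensed:")
    lines += [f'- "{name}" is ~{t} tokens (limit: {per_section_limit})'
              for name, t in oversized]
    return "\n".join(lines)
```

Within budget on both axes, no note is appended at all.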

Session memory as an alternative strategy to traditional compaction

Session memory is basically a dense list of notes about the current session, so when combined with other aspects of the conversation context like the current TODO list, a few recent messages, and files that the AI read in the session, it can make a good compaction starting point. CC has an ENABLE_CLAUDE_CODE_SM_COMPACT environment variable that you can set to force-enable it. (The tengu_session_memory and tengu_sm_compact feature flags still need to be enabled, though, and for that you currently need tweakcc.) Set DISABLE_CLAUDE_CODE_SM_COMPACT to force-disable it in the same vein.

It's sort of confusing that session memory can be used both as the basis for session compaction and can be compacted itself, but in fact these are two distinct and unrelated concepts.

/remember

There's a new builtin skill triggerable via the /remember slash command. The full skill can be viewed here (https://github.com/Piebald-AI/claude-code-system-prompts/blob/main/system-prompts/agent-prompt-remember-skill.md). It instructs Claude to identify patterns and explicit memory requests in past session files and to update CLAUDE.md (and CLAUDE.local.md) with them.

The skill is not enabled in current CC versions, but you can use tweakcc to enable it: npx tweakcc@latest --apply, claude, /remember.

r/SideProject ozzyalpino

Passport Photo Creator & Background Image removal

Discover www.TitanCanvas.com: Streamline Your Photo Editing

If you're tired of clunky editors, check out https://titancanvas.com – free, powerful tools for pros and hobbyists.

Key features:

- Batch Editing: Process multiple images at once.

- Unlimited Background Remover: Erase backgrounds endlessly, no limits.

- Photo Pages & One-Click Resizing: Create pro print layouts; resize for submissions instantly.

- Seamless Image Combining: Merge photos in a simple click-and-go workflow.

- Restore Historical Edits: Revert or revisit past changes on the fly.

- Batch Apply Filters: Add effects to groups of images efficiently.

Hoping to launch soon a desktop and mobile app for album creation + Photo Viewer + Photo Editing

I'd be grateful for any constructive feedback

r/SideProject neverlucky14

Recently launched Koyo - a journaling app that guides you through conversations with a companion

I recently published my first mobile app. It's called Koyo and it's a journaling companion integrated with AI. It's not vibe coded! I'm an experienced dev, although it is my first published mobile app (played around a bit previously). 

The main idea behind Koyo was to make journaling more fun and personalized. You are guided by a koala companion with different personalities you can choose between. For example, it can be funny and Gen Z-like, or stoic with hard truths. In the app you can also view insights on your emotions, recurring topics and suggestions from Koyo.

Here's the App Store link - https://apps.apple.com/app/koyo-ai-journaling-companion/id6757150357

Would appreciate any feedback! Thanks

r/ClaudeAI Lord_Dephian

Built a Chrome extension to deal with a HUGE issue while Claude thinks (especially noticeable with longer Opus 4.6 responses)

Claude's messages, especially with the latest Opus models, can take a long time to generate. I either start scrolling reels or switch tabs and forget the response is even ready.

I built a small Chrome extension called Drift specifically around this Claude usage pattern. When Claude starts generating a response, it opens a separate window with a feed, and when Claude finishes, it brings you straight back. No manual tab switching, no forgetting the response is ready.

Claude was also part of the build process itself. I used it heavily while developing Drift to reason through Claude-specific response states, streaming behavior, and edge cases around longer Opus generations. Those longer responses were actually what made the problem obvious enough for me to build something in the first place.

It’s free to try, privacy-first (everything stays local, no tracking), and customizable if you want to adjust when it triggers or what it opens. It also works with ChatGPT and Gemini, but Claude was both the motivation and the primary test case.

It literally just launched, and I’d genuinely love feedback from people who use Claude a lot.

r/SideProject CryptographerCold743

After 3 years and 9 failed side projects, one finally started making money (around 250 MRR)

I posted something similar in another thread earlier, but figured I’d write the full thing out here in case it helps someone who’s been building stuff for a long time and getting absolutely nowhere with users.

For the last ~3 years I’ve been trying to build my own SaaS products alongside my day job. I run a small dev shop, so I was already coding all day, and every new project felt like "ok this one is actually useful, this one should work". Some took weeks, some took months. I shipped a lot of things over that time.

Nine projects in total.

None of them made money. Literally zero.

Most launches followed the same pattern. Finish the product, deploy it, maybe post the link somewhere, then sit there refreshing analytics and Stripe hoping something would happen. Nothing ever did. After a while you just stop telling friends about new projects because explaining why nobody uses them gets awkward pretty fast.

At some point I honestly just assumed I was bad at this.

How the idea came up

The idea for the current project came from a pretty unglamorous place. After a long relationship ended, I found myself back on dating apps and had no idea what I was doing. Matches were inconsistent, conversations went nowhere, and I couldn’t really tell what was wrong with my profile.

So I started digging into it properly. I spent a lot of time researching dating profiles and attraction, reading books, forums, random blog posts, and testing things on my own profile. Once I understood what actually made a difference, I built a small internal tool to structure that feedback for myself. That slowly turned into a product.

When I launched it publicly, the exact same thing happened as with all my other projects. No users, no payments, nothing. I was pretty close to mentally writing it off as just another failed idea and moving on.

What changed this time

The only thing I did differently this time was what I did after launch.

Instead of trying to promote it, I started paying attention to people who were already struggling with the same thing. Whenever I saw someone asking for profile feedback, I’d message them and write a proper review. No links, no pitch, no mention of a product. Just actual feedback based on what I’d learned.

I did this pretty consistently for a while. It wasn’t scalable or automated or anything like that. Just manual messages and time.

After some time something new started happening. People replied asking if I did this professionally, or if there was a way to get this kind of feedback without going back and forth all the time.

That had never happened to me with any previous project.

From there things started moving, slowly.

Where it’s at now

Right now it’s sitting around $250 to $300 MRR, so roughly $3k ARR. There are 174 total paying customers. Monthly churn is around 8%.

Infrastructure is basically free since it’s running on Firebase, and compute costs are only a couple of dollars a month. Most of the work went into researching the dating space and turning that into a structured review system that actually gives useful feedback.

Here’s the Stripe screenshot: https://imgur.com/a/fjU1b4R

This is obviously not life changing money. I still work full time. But it’s the first project where strangers paid me without me having to convince them or push them, which made it feel very different from everything else I’ve built.

I don’t really have some big framework or lesson here. A few things just became obvious over time.

None of my previous projects failed because they were missing features. Being close to real users mattered way more than polishing the product. Things only started moving once I stopped trying to sell and focused on actually helping people.

That’s pretty much it.

Self promotion

If anyone’s curious, the tool is here: https://10xswipe.com

There’s a free evaluation so you can see the feedback before paying. The funnel is pretty aggressive, but that’s intentional.

Happy to answer questions in the comments, especially about what didn’t work, because I have way more experience with that.

r/LocalLLaMA NucleusOS

After 1.5M API keys leaked from OpenClaw, here's how Nucleus MCP prevents sleeper agents

Watched this video about OpenClaw's security crisis: https://www.youtube.com/watch?v=ceEUO_i7aW4

TLDR: Sleeper agents in top ClawHub skills, Docker escapes, 1.5M API keys leaked from chat logs.

I've been building Nucleus MCP — a local-first MCP server with security built in from day one:

What it already has (shipped, not roadmap):

- 🔒 Hypervisor — locks files/folders with WHO/WHEN/WHY metadata
- 📋 Audit Trail — every action logged to events.jsonl
- 🧠 Local Memory — engrams stored on YOUR machine, never cloud
- 🔐 Resource Locking — agents can't modify protected files
- 🔄 Cross-Platform Sync — one brain for Cursor, Claude, Windsurf

How it prevents the OpenClaw attack vectors:

| OpenClaw Vulnerability | Nucleus Defense |
|---|---|
| Sleeper agents in skills | Hypervisor monitors all file changes |
| API keys in chat logs | Keys never stored in memory/logs |
| Docker escapes | 100% local, no containers to escape |
| Blind command execution | Resource locking + audit trail |
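To picture the audit-trail piece: one JSON object per action, appended to events.jsonl. A guess at the shape (this is illustrative, not Nucleus's actual code):

```python
import json
import time

def log_event(path, action: str, target: str, who: str, why: str) -> dict:
    """Append one audit record per agent action to an events.jsonl file."""
    record = {
        "ts": time.time(),   # when it happened
        "who": who,          # which agent or tool acted
        "action": action,    # e.g. "write", "delete"
        "target": target,    # the file or resource touched
        "why": why,          # the stated justification
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Append-only JSONL keeps the trail cheap to write and trivial to grep or replay after an incident.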

Install in 2 min:

pip install nucleus-mcp
nucleus-init --scan

MIT licensed. Been building this for months. Happy to answer questions.

What security features would you want in an MCP server?

r/LocalLLaMA DenisRoger001

Is the Nvidia T4 actually viable for 70B (EXL2) daily driving, or is it just pure cope compared to dual 3090s?

I’ve been trying to find a middle ground for running 70B parameter models without dropping $1.5k on a dual 3090 rig or dealing with the power bill/noise of enterprise used gear (looking at you, P40 screamers).

My local setup (single 3070) is fine for 8B models, but it chokes hard on anything substantial unless I quantize it down to brain-damaged levels.

I decided to experiment with a "Remote Backend" setup - keeping my SillyTavern/Ollama frontend local but offloading the heavy lifting to a cloud instance. The goal was to find a cheap gpu vps that offers full passthrough, not that vGPU slicing where you share VRAM bandwidth with noisy neighbors.

I ended up testing a dedicated T4 slice on Lumadock this week to see if 16GB VRAM + system RAM offloading (or just smarter splitting) is actually usable for chat.

To be honest, I expected it to be painfully slow. But running 4.0bpw EXL2 quants, I’m getting surprisingly consistent tokens/sec. It’s definitely not instant like a 4090, but for the price of a few coffees a month, it feels like a decent stopgap until consumer hardware catches up.

Is anyone else running a "Remote Local" architecture like this or is everyone here strictly "if I can't touch the GPU, it doesn't count"? I’m trying to justify not building a new PC right now.

r/ClaudeAI dbeermann

Claude can execute well in small boxes, but it’s great at defining the box

One thing we’ve consistently found working with Claude is that it struggles most when it’s asked to act across precise rules, constraints, or long time horizons. For example, it’s not well-suited to running deterministic logic like backtests across the full universe of stocks for arbitrary historical dates.

Where it’s been more useful is earlier in the process, as a specification layer, not an execution layer. We use it to translate natural-language intent into structured strategy definitions, but it never executes anything.

Even then, hallucinations show up in predictable ways. As a concrete example: we provide Claude with a fixed set of hundreds of financial metrics and example ranking functions via the system prompt, and explicitly instruct it to use only those. Despite that, it will regularly invent metrics that don’t exist.

As a result, we’ve had to rely heavily on downstream validation via tool use: checking metric validity, equation structure, identifiers, and other specifics before anything progresses. Execution remains fully deterministic and auditable but ambiguity is pushed upstream where it’s safer to catch.

This feels similar to how higher-risk systems already work: software proposes, guardrails validate, and humans authorize.

Curious if others here are using Claude more for specifying intent rather than taking autonomous actions. What failure modes have you run into? What’s worked well?

r/SideProject SaaS-Growth-Pizza

One pivot resulted in 2 sales and 1 pending order

So, for context, we run this SaaS, and for the past 3 weeks we had been solving the wrong issue, until one conversation with a user changed everything and resulted in 2 sales and a pending order

So, the whole story is that we had a subscription-based model, thinking that we were selling recurring value (which it is). SaaS founders will get users today, tomorrow and next month, so yes, the value is recurring. But there's one issue

That's not the actual pain

See, all of you want users right? And so most of you have the pain of distribution and getting feedback right? And that's what we thought our tool solved but turns out?

It wasn't

The real pain was "I can build this tool but it will take me time, effort and money to do so"

It wasn't about fixing their user problems, or helping them gain users who'd give feedback (not just inbound traffic that sees the landing page and runs). It was that they could build it themselves, but it would take time, effort and money.

And beyond time, effort and money, they'd also have to figure out how to build it in a way that doesn't trigger Reddit's spam filter and cost them their accounts. That handling is already integrated into our platform.

That's what we solved with our tool

So once we shifted, we contacted some of our users who showed interest and they IMMEDIATELY asked for price.

2 sales, and 1 said he needs to gather some money because he had a lot of expenses this month. And so that's how ONE shift resulted in 2 sales and 1 pending order.

You don't need extra features

You need to understand your audience's need

That's all

Good luck to you all

r/ClaudeAI brucetoooooo

Show HN: Visual Agentic Dev – Click any React component to edit code with AI (Open Source)

https://preview.redd.it/m6fik3843pig1.png?width=3144&format=png&auto=webp&s=1fcc7edee4a519250c4ff4ba056afda84bac09dc

Hey everyone! 👋

I've been working on a tool to bridge the gap between browsing your local React app and editing code. It's called Visual Agentic Dev.

The Problem:
I found myself constantly switching between the browser (to see changes) and VS Code (to make changes), often losing context or spending time hunting for the right file/component.

The Solution:
Visual Agentic Dev allows you to:

  1. Click on any element in your local running React app.
  2. Describe what you want to change in a sidebar chat.
  3. Watch as an AI Agent (like Claude Code) modifies your local source code in real-time.

Key Features:

  • 🎯 Zero-Config Source Location: Uses React Fiber magic to find files at runtime. No invasive Babel plugins required.
  • ⚡ Instant Agent Readiness: Innovative AgentRegistry architecture keeps the AI CLI hot and ready. Context switches are instant—no waiting for the agent to boot up.
  • 🤖 Dynamic Agent Support: Plug-and-play with Claude Code, CCR, or any future terminal-based agent.
  • 💻 Immersive Terminal: A full PTY terminal embedded in your browser using xterm.js and node-pty.
  • 📂 Smart Project Switching: Automatically detects which project you're browsing (great for monorepos) and switches the agent's context instantly.

It's open source and I'd love your feedback!

https://github.com/brucetoo/visual-agentic-dev

r/homeassistant GoldLama

Sonoff Dongle max wifi

Hello, I bought a Sonoff Dongle Max. I connected it to Home Assistant through Zigbee2MQTT, and I can see all my Zigbee devices.

I also have a Tapo WiFi camera that I want to access via RTSP on my PC:

Tapo camera → Sonoff Dongle Max → Router → PC

Is this possible? I can't find a way to configure the Sonoff's WiFi/router functionality. Thank you!

r/homeassistant CurrentPast3481

Whatsapp Integration for HA - Send messages via HA - Run automations based on Whatsapp received messages - Now with WhatsApp Group support

Hi!

Yesterday I released the first version of Home Assistant WhatsApp Integration - A custom integration to send WhatsApp messages in Home Assistant and to run triggers/automations based on received messages.

Thanks to the overwhelming support from you guys and because some people asked for group support, today I released the version 1.1.0 which includes group support.

Here are some examples on how to use this new update:

Sending a Message

You can send messages to any number using the service:

service: whatsapp.send_message
data:
  number: "40741234567" # Country code + Number (no "+" symbol) 
  message: "Hello from Home Assistant! 🏠"

Sending to a Group

You can send messages to a group by its exact name:

service: whatsapp.send_message
data:
  group: "Family Group" # Exact name of the group
  message: "Dinner is ready! 🍽️"

Automation Trigger

Trigger actions when a specific message is received:

trigger:
  - platform: whatsapp
    from_number: "40741234567"
    contains_text: "Turn on lights" # Optional
action:
  - service: light.turn_on
    target:
      entity_id: light.living_room

Group Message Trigger

To trigger an automation from a group message, use from_group with the exact group name:

trigger:
  - platform: whatsapp
    from_group: "Family Group"
    contains_text: "Dinner" # Optional
action:
  - service: notify.persistent_notification
    data:
      message: "Dinner time!"

Thank you!

r/ClaudeAI philosohistomystry04

How to change the voice

Mine has always spoken with a feminine type of voice, but recently it switched to a more masculine one. I don't think I did anything that caused this, other than updating the app. But I have updated it multiple times since I started using it and have never had this happen.

r/SideProject nightliteusa

Valentine's Day Love Chemistry Calculator — shipping seasonal viral apps 💘

Part of my "antigravity apps" series — trend-based web apps I build and ship fast. This one calculates love chemistry between two names with animated breakdowns. 🔗 https://cousined1.github.io/love-chemistry/

r/ClaudeAI Acceptable-Lynx1169

i built an open-source cron daemon for claude code so i dont have to start sessions manually

i kept having these tasks where id think "i should check competitor changelogs" or "i should triage new github issues" and then... just not do it. or do it manually once a week when i remembered.

they were perfect for claude code but i didnt want to manually start a session every time.

so i built murmur. its a cron daemon that runs claude code sessions on a schedule.

you write a HEARTBEAT.md file with yaml frontmatter for config and a prompt below:

---
name: competitor watch
cron: 0 9 * * MON
agent: claude-code
model: sonnet
timeout: 10m
---

fetch https://competitor.com/changelog.
compare against ~/tracking/competitor-last.md for new entries.
for each new feature, check our issue tracker and think about
whether it makes sense for our product given our roadmap.
only if it genuinely adds value: open an issue with reasoning.
update competitor-last.md. if nothing new, HEARTBEAT_OK.

murmur reads the file, runs claude on schedule, claude uses its tools to do the work.
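As a rough illustration of that flow, splitting a HEARTBEAT.md file into its frontmatter config and prompt could look like this (a hypothetical sketch, not murmur's actual implementation):

```python
# Hypothetical sketch: split a HEARTBEAT.md into its YAML-frontmatter
# config and the prompt below it. Not murmur's real parser.

def parse_heartbeat(text: str):
    # frontmatter sits between the first two "---" markers
    _, frontmatter, prompt = text.split("---", 2)
    config = {}
    for line in frontmatter.strip().splitlines():
        key, sep, value = line.partition(":")
        if sep:  # skip lines without a "key: value" pair
            config[key.strip()] = value.strip()
    return config, prompt.strip()

heartbeat = """---
name: competitor watch
cron: 0 9 * * MON
agent: claude-code
---
fetch the changelog and compare against last week."""

config, prompt = parse_heartbeat(heartbeat)
```

The daemon would then hand `prompt` to the agent named in `config` whenever the cron expression fires.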

not meant to replace CI. this is for the fuzzy tasks where the logic is "read this, think about it, decide what to do."

i run about 6 of these for competitor analysis, issue triage, weekly paper research, daily briefs. also a good way to burn through sonnet tokens i dont need anymore since im all in on opus.

curious what people would use it for.

brew install t0dorakis/murmur/murmur

or try an interview to set it up easily:

npx skills add t0dorakis/murmur --skill heartbeat-cron

github: https://github.com/t0dorakis/murmur

also on HN if you want to see the discussion: https://news.ycombinator.com/item?id=46959508

r/homeassistant ImaginaryLocal403

Aqara FP300: Amazon is the only authorized dealer of Aqara, yet there's no stock shortage in China or Japan. Amazon is stuck with millions of the older model, and that's the real reason they have a shortage of the FP300. Meanwhile, at CES 2026 in Las Vegas, Aqara showed the new FP400. Amazon will sell that model in 2030 lol

r/SideProject Celeriorium

Built a civic action wiki with Next.js + collaborative editing to help people move from awareness to action

The Problem

Everyone's overwhelmed about what's happening in the US, but there's a huge gap between "I want to do something" and "what do I actually do?"

I wanted to connect verified information to concrete steps, with community editing to keep it current.

What I Built

resistproject.com – A fact-based civic action platform with Wikipedia-style collaborative editing.

Two main sections:

  • LEARN – What's happening (every claim links to primary sources: executive orders, court docs, etc.)
  • ACT – What you can do about it (email templates, call scripts, organizations to join, etc.)

Community can propose edits, other users vote, and approved edits automatically apply to pages.

Tech Stack

  • Next.js 15 (App Router) + React 19 + TypeScript
  • PostgreSQL (Railway) + Prisma ORM
  • NextAuth.js v5 (passwordless email + Google OAuth)
  • MDX for content (markdown with React components)
  • Tailwind CSS for styling
  • Railway for hosting + deployment

Interesting Technical Bits

1. Collaborative Editing System

Built a tier-based permission system where users level up based on contribution quality.

Votes are weighted by tier (1pt, 2pt, or 3pt), and proposals auto-resolve when they hit approval/rejection thresholds.
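The tier-weighted auto-resolve logic can be sketched like this (the 1/2/3-point weights match the post, but the thresholds here are invented for the example, not resistproject's actual values):

```python
# Illustrative tier-weighted vote resolution. Weights (1/2/3 pts) follow
# the post; the approve/reject thresholds are hypothetical.
TIER_WEIGHTS = {1: 1, 2: 2, 3: 3}
APPROVE_AT = 5    # hypothetical net score to auto-approve
REJECT_AT = -5    # hypothetical net score to auto-reject

def resolve_proposal(votes):
    """votes: list of (tier, approve: bool) -> 'approved' / 'rejected' / 'open'."""
    score = sum(TIER_WEIGHTS[t] * (1 if up else -1) for t, up in votes)
    if score >= APPROVE_AT:
        return "approved"
    if score <= REJECT_AT:
        return "rejected"
    return "open"
```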

2. Content Resolution System

When an edit is approved:

  • Stores the diff (old content → new content)
  • Pages dynamically apply all approved edits at render time
  • Version tracking shows "X community edits applied"
  • Full audit log for accountability

3. Simple MDX Syntax

Created a custom remark plugin that transforms simple markdown into styled components:

## Facts
Content here automatically wraps in a FactsSection component

[source: Document Title](https://example.gov)
→ Renders as a styled citation link

Reduced content verbosity by ~70% and makes it way easier for non-technical contributors.
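The real implementation is a remark plugin operating on the MDX AST; as a language-agnostic sketch, the `[source: Title](url)` shorthand is essentially one regex substitution (the `citation` class name here is hypothetical):

```python
import re

# Sketch of the citation-shorthand transform described above: turn
# [source: Document Title](https://example.gov) into a styled citation
# link. The actual version is a remark plugin in the Next.js stack.
CITATION = re.compile(r"\[source:\s*([^\]]+)\]\(([^)]+)\)")

def render_citations(markdown: str) -> str:
    return CITATION.sub(r'<a class="citation" href="\2">\1</a>', markdown)
```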

What I'm Looking For

Feedback on:
- Does the tier/voting system make sense?
- What would make you actually use this vs. just reading news?
- Any security concerns with community editing?
- UX improvements (especially mobile)

Technical advice:
- Best practices for handling edit conflicts (when 2+ people edit the same page)?
- Should I add real-time updates (WebSockets) or is polling fine?
- Better ways to handle MDX content resolution?

r/homeassistant jerrodbug

Struggling setting up

I am really struggling to get things working and set up in Home Assistant. I've watched tons of setup videos and understand the basics of how it works, but no one seems to explain how to transition from an existing setup TO Home Assistant.

Here are a couple of my questions:

  1. Should i be deleting ALL automations from existing apps and re-setting them up in HA? Or should i leave those in place?

  2. similar to above--i have multiple google home devices--do i delete all devices out of Google Home and "abandon" it? I noticed when i was messing around that i can "cast" home assistant to those devices, but its basically just a webpage. How would voice assistant continue to work?

  3. I have to be missing something when it comes to scenes--they are pretty painful to setup, and you dont seem to have many options? This is mostly for lighting (Hue, and Govee). I also have read that i cannot use the Govee Dreamview scenes? can i set them up in the app, and at least start them from home assistant? I added "govee to MQTT" to my homeassistant and it broke all my buttons i had for govee lights already working too, so not sure how to proceed on these lights. Guessing i have to set them all up using MQTT as the device?

  4. Basic question--when i put a button to turn on a light on the dashboard, how do i get it to just DO the button press, versus opening another window where i have to click "activate"?

Thanks for any help!

r/LocalLLaMA Quiet_Dasy

The fastest way to run Qwen3 locally

I tried to run the following model: https://huggingface.co/Qwen/Qwen3-1.7B-GPTQ-Int8

Using this software:

llama.cpp, kobold.cpp, ollama

They are slow. My GPU is a 2060 with 6GB VRAM.

I saw this info:

Qwen3-1.7B FP8:

TensorRT-LLM: TTFT 18.3ms / TPS 104.9

vLLM: TTFT 20.6ms / TPS 80.2

How do I install Qwen3 locally with vLLM?

r/ClaudeAI yixn_io

I think i have a problem...

r/LocalLLaMA MR___Phantom

Hello guys, need some suggestions

Hello guys! Recently I started working on creating a custom AI assistant using two LLMs: one as a router to call tools or find the intent of questions, and the other as the brain to reason about or answer them.

The problem I am facing is that the router is unable to find the intent for some questions like "suggest me a new horror movie" or "suggestion for this or ...".

I have keyword-based intents so far, and that raised this problem. I am a student, still new to this, and I have limited computational resources, so I used small models: a 7B model as the brain and a 2B model as the router, with serial loading and unloading of the models to conserve GPU memory.

Note: I forgot to mention these intents are also used for invoking required tools like web search and others.

r/LocalLLaMA Capable-Strategy-656

Cooling & build advice for H200s

Hello! I was tasked with building a bare-metal inference cluster at work, and I’m trying to avoid any thermal / performance surprises with 2× H200 in a single node.

I’d love feedback from folks who’ve actually run H100/H200 PCIe in self-built (non-OEM) boxes:

  • How are you cooling them in practice?
  • Are the stock chassis fans typically sufficient, or do you end up needing a specific fan wall / shroud / “only this chassis works” setup?
  • Any gotchas around airflow direction, static pressure, or slot spacing that aren’t obvious on paper?

My primary option would be to go for Supermicro SC747BTQ-R2K04B, do you believe it is overkill? Is there a more reasonable solution that still provides enough cooling capacity without needing to ship a 30kg chassis?

In terms of workflow, I plan on using this build to run Qwen Coder Next with ~100k context window on vLLM and as many parallel sequences as I can.

Overall, my build idea right now is the following:

  • Case / chassis: Supermicro SC747BTQ-R2K04B
  • Motherboard: ASUS PRO WS WRX90E-SAGE SE
  • CPU: AMD Threadripper PRO 9955WX
  • CPU cooler: Arctic Freezer 4U-M Rev. 2
  • RAM (512GB): 8× Kingston 64GB DDR5-5600 ECC RDIMM
  • GPU: 2× NVIDIA H200 NVL PCIe 141GB
  • NVLink bridge: PNY NVLINK2WAY-KIT
  • OS SSD: Samsung 990 Pro 2TB
  • Data SSD: Solidigm D5-P5336 15.36TB
  • Power adapters, cables, fans: 2× 3×8-pin-to-12VHPWR + extra fans
  • Rail kit: Supermicro MCP-290-00059-0B

r/SideProject Ecstatic-Ad-9000

How to make $15 in 10 min

I have tried so much online, but this is the one. Just sharing what's worked. With a few survey apps, I earn $400–$600 every month without doing anything stressful. It's become a nice side income. I even have proof if you want.

This is the exact app I'm using: AttaPoll

https://attapoll.app/join/qvkmx

It pays via bank or PayPal.

It's legit, it pays, and you get bonuses for joining; with this link you get $0.50. If you want to get the most out of it, I can show you what I do. I also have proof with pictures if you want.

r/homeassistant moneysaver688

Binary sensor to detect Bluetooth device presence?

I found this helpful archived post https://www.reddit.com/r/homeassistant/comments/1ebhy6c/tracking_my_car/ with a helpful comment by u/jerobins but I'm unable to figure out where to put the following:

binary_sensor:
   - platform: ble_presence
     mac_address: ${vehicle_1}
     name: Vehicle 1
     device_class: presence

Also, any more user friendly way to do this, for example, with https://www.home-assistant.io/integrations/template/ integration, or will this cause problems?

I'm using a GL-S10 with a BlueCharm iBeacon tracker, but after having too many false positives with distance tracking (even with trying BLE triangulation add on) I want to make things as simple as possible - iBeacon detected or not.

I tried just using the Home/Away feature exposed by the iBeacon entity in Home Assistant but it still gives false positives somehow (?).

Thanks!!!

r/homeassistant UpstateNJ

Clearing event states from Apollo Btn-1 entities

I just got an Apollo BTN-1B (which is pretty damn cool), and I'm trying to set up automations for the buttons. Since each button entity has 4 states (click, double-click, triple-click and hold), I was planning to use one button per room to control lighting options.

That is to say, I designated Button 4 for the family room and set up an automation to fire when the event state changes to "click" to toggle the room's lights on and off. The toggle action works the first time, but does not toggle off the lights the next time I press the button, because the state never changed from "click".

If I perform another action on the entity (double-click, triple-click or hold) it changes to a different state, and I can again press the button for another "new" click state.

Is there any way around this? A way to reset the last event from the entity? Or am I just going about this all wrong? (As I sit here typing this, I'm thinking it might be better accomplished using Scenes, but IDK?)

Ideally I'd like to do this:

click = toggle all lights

double-click = toggle some lights

hold = turn off all the lights

r/LocalLLaMA New-Gate7443

Trouble getting Qwen3-Coder-Next running

I am having tons of trouble getting a usable speed out of Qwen3-Coder-Next on my local system:

  • Intel i7-12700K
  • 48GB DDR4-3200
  • RTX 5060 Ti 16GB
  • RTX 3060 12GB

I came across this post here claiming to get 30 tokens/second using 24GB VRAM with the following parameters:

GGML_CUDA_GRAPH_OPT=1 llama-server -m Qwen3-Coder-Next-UD-Q4_K_XL.gguf -ngl 99 -fa on -c 120000 --n-cpu-moe 29 --temp 0 --cache-ram 0

However, my speed ranges between 2 and 15 tokens per second. I am running it with the same parameters he listed, with a tensor-split of 79/21 that gives me this:

[36887] llama_params_fit_impl:   - CUDA0 (NVIDIA GeForce RTX 5060 Ti):  15825 total,  13229 used,   1862 free vs. target of    128
[36887] llama_params_fit_impl:   - CUDA1 (NVIDIA GeForce RTX 3060)   :  11909 total,  10301 used,   1429 free vs. target of    128

It says 49/49 layers are offloaded to the GPU.

Prompt processing takes an absurd amount of time and it's borderline unusable. Probably the weirdest part is that the swap space is being hit hard instead of the system RAM.

https://preview.redd.it/ips9t1c0apig1.png?width=588&format=png&auto=webp&s=80cbc9e22d9c869d7ccab94306f475f0a3e5193f

I'm running it in a docker container with the following args:

srv          load:   /app/llama-server
srv          load:   --host
srv          load:   127.0.0.1
srv          load:   --jinja
srv          load:   --min-p
srv          load:   0.01
srv          load:   --port
srv          load:   41477
srv          load:   --temp
srv          load:   0.8
srv          load:   --top-k
srv          load:   40
srv          load:   --top-p
srv          load:   0.95
srv          load:   --alias
srv          load:   Qwen3-Coder-Next-Q4
srv          load:   --batch-size
srv          load:   4096
srv          load:   --ctx-size
srv          load:   120000
srv          load:   --flash-attn
srv          load:   on
srv          load:   --fit-target
srv          load:   128
srv          load:   --model
srv          load:   /models/Qwen3-Coder-Next-UD-Q4_K_XL.gguf
srv          load:   --n-cpu-moe
srv          load:   29
srv          load:   --n-gpu-layers
srv          load:   99
srv          load:   --threads
srv          load:   -1
srv          load:   --tensor-split
srv          load:   79,21
srv          load:   --ubatch-size
srv          load:   2048

I am experienced with linux but new to local LLMs. What am I doing wrong?

r/n8n Fresh-Daikon-9408

🤖 Discord Social Post Assistant - Full AI Agent Workflow

Built a conversational Discord bot that creates professional social media posts through natural dialogue.

Features:

  • 💬 Multi-turn conversations with memory & state management
  • 🧠 Multi-agent AI system (orchestrator + specialized tools)
  • 🔎 Native web research
  • ✅ Fact-checking & creative enhancement tools
  • 🎨 Image generation
  • 👥 Human-in-the-loop validation (approve text + images)
  • 🔁 Iterative refinement loops

The bot asks questions, gathers context, researches topics, creates content, generates images, and refines everything based on your feedback—all from Discord.

Built 100% with n8n As Code - no hallucinated parameters, schema-validated nodes, community workflow patterns.

r/homeassistant szol

Livly Resident integration (pending packages)

I live in a complex that uses the Livly app (iOS/Android) for various functions including showing packages that are awaiting pickup. The app itself is fine but I'd rather have that count on my HA dashboard, so I built an integration to periodically poll it from their API. Their API isn't publicly documented so it is subject to break at any time, but I've been using it for a little while with no issues, and I'll update it if/when it becomes necessary. Wanted to share in case it's useful for anyone else.

A note about privacy: The integration only pulls the count of packages, and doesn't communicate with any other service outside of Livly. I also intentionally opted to store no other metadata for privacy's sake, even if some other info might be nice-to-have. Your phone number is stored locally because it's required for (re-)authentication, but only displays the last 4 digits in the UI and is not sent anywhere else, which can be verified in the code.

The repo is here: https://github.com/tessellate-io/ha-livly

And can be added as a custom repo to HACS (the icon is just pending a PR approval on HA/Brands). Let me know if you have any feedback!

r/ClaudeAI UnscrupulousAlien

I was frustrated wasting time on writing prompts so I created a tool so I can stay in Prompting Flow State :)

I spend most of my day in AI chat interfaces and got tired of the mouse gymnastics. Select text, overshoot, re-select. Scroll up to find that one paragraph. Copy it, scroll back, paste it into a follow-up.

So I built asdPrompt — a Chrome extension that lets you select and act on text in AI chats without touching the mouse.

When prompted, asdPrompt pops an overlay with hint letters next to every text block. Type a letter to select it. Keep typing to drill down: block → sentence → word. Hit Enter when you're satisfied to copy, or press an action key to inject the action template with the selection and kick off the next prompt.
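The hint-letter overlay works like link-hint tools in keyboard-driven browsers: each block gets the shortest unique label from a small alphabet. A minimal label generator (purely illustrative, not asdPrompt's actual code; the home-row alphabet is an assumption) might be:

```python
from itertools import count, product

# Generate n short keyboard hint labels, in the spirit of the
# hint-letter overlay described above: 1-letter labels first,
# then 2-letter combinations, and so on.
def hint_labels(n, alphabet="asdfjkl"):
    if n <= 0:
        return []
    labels = []
    for length in count(1):
        for combo in product(alphabet, repeat=length):
            labels.append("".join(combo))
            if len(labels) == n:
                return labels
```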

Action keys let you instantly ask follow-ups about your selection — elaborate, define, simplify, give examples, critique, etc. — without manually copy-pasting into a new message.

There's also a bookmark system (press b to save your scroll position and jump back) and a conversation outline panel for navigating long threads.

Works on ChatGPT, Claude, and Gemini. Adapts to light/dark themes.

asdPrompt – Google Web Store Link

––––

I personally find use in it, hence why I created it. I hope you guys do too. If you guys have any feedback at all, please let me know :).

r/ClaudeAI kingnade

Using Claude Code with Terminal versus Cursor (or any other AI IDE)

I am a beginner with Claude Code and just bought it today so pardon my misuse of terminology in the following post.

I just installed Claude Code in my Mac's terminal and have been using it to write code and create Excel files. It has been flowing nicely, but I read somewhere else that people prefer using Claude Code in AI IDEs such as Cursor or Windsurf.

I was wondering if someone could explain the differences between the two in terms of uses and capabilities especially for someone with the goal of learning automation and how to create AI agents.

r/ClaudeAI SunofaBaker

Claude Usage Quick View

I built Claude Usage Tracker. It’s a tiny Swift app that sits in your menu bar and pulls your real-time stats directly from the Anthropic API (via your local Claude Code credentials).

Why I made it:

  • Native & Fast: Written in Swift. Only ~50MB RAM compared to heavier Python versions.
  • Real-time: Shows your 5-hour session % and weekly limits (including Sonnet-specific ones).
  • Privacy First: It reads your OAuth token from your own Keychain. No third-party servers, no dependencies.
  • The "Reset" Glance: Hover to see exactly how long until your session resets.

It’s completely open-source (MIT). If you have Claude Code installed, it works out of the box.

GitHub: https://github.com/cfranci/claude-usage-swift

r/SideProject Dxstinity

We got tired of “almost-personalized” cold emails, so we built our own fix

Okay, some honest context.

For years, we relied on cold email to grow B2B SaaS projects.
Writing the emails was never the real bottleneck.

The real pain was:

• doing deep research on every single lead
• figuring out who actually fits our ICP (and who doesn’t)
• reframing our offer so it’s relevant to them, not just us

That part was slow, mentally draining, and nearly impossible to scale without quality dropping off.

At some point we stopped asking “how do we write better emails” and started asking:

Who else is dealing with this exact mess?

That question turned into Mailly.io

We built it to handle the parts everyone hates:

• contextual company research
• ICP evaluation
• offer reframing based on what actually matters to the prospect
• fully custom, human-sounding emails (no templates, no spins)

Right now it’s in a testing phase, but in our own campaigns we’re seeing 8%+ reply rates, without tricks or volume spam.

We’re opening it up to early testers and would genuinely appreciate feedback, especially from people who:

• run cold outreach regularly
• hate manual research more than writing
• have tried “AI personalization” tools that still feel robotic

If you want to test it, break it, poke holes in it, or straight-up roast it, I’m here for it.

Happy to answer questions in the comments.

PS: Every email is generated from scratch. No templates. No fake personalization.

r/SideProject ruibranco

What's a side project mistake you had to make yourself before you truly understood it?

For me it was building in isolation for months before showing anyone. I knew intellectually that you should validate early, but I convinced myself my project was "different" and needed to be polished first. Spoiler: it wasn't different.

Curious what lessons you had to learn the hard way.

r/LocalLLaMA eatsleepliftcode

Shipped a big AgentCrawl update: robots/sitemaps, disk caching, resumable crawls, structured metadata + chunking

update from my last post

https://www.npmjs.com/package/agent-crawl

spent some time over the weekend iterating on agent-crawl (TypeScript scraper/crawler for AI agents) and just landed a pretty chunky set of improvements that made it feel way more "production crawler" and less "demo script".

TL;DR what's new

- removed tool adapters for the Agents SDK and Vercel AI SDK; let users define their tools their own way
- updated zod to latest

Crawler correctness + politeness

- Opt-in robots.txt compliance (Disallow/Allow + Crawl-delay)
- Opt-in sitemap seeding from /sitemap.xml
- Better URL normalization (canonical-ish normalization, strips tracking params, normalizes slashes, etc.)
- Per-host throttling: perHostConcurrency + minDelayMs
- Include/exclude URL filters (simple substring patterns)

Caching

- Opt-in disk HTTP cache for static fetches with ETag / Last-Modified support: sends If-None-Match / If-Modified-Since, and if the server returns 304, we serve the cached body
- Opt-in disk cache for the final processed ScrapedPage (post-cleaning + markdown)

Resumable crawls

- Opt-in crawlState persistence that saves the frontier (queue/visited/queued/errors/max depth)
- Can resume a crawl without redoing already-visited pages (and can persist pages too)

Better extraction for agents

- Structured metadata extraction: canonical URL, OpenGraph, Twitter cards, JSON-LD (kept in metadata.structured)
- Opt-in chunking: returns page.chunks[] with approximate token size, heading path, and a citation anchor (super convenient for RAG/tool loops)

why I did it

The main pain point wasn't "can I fetch HTML", it was everything around it:

- crawls getting stuck or repeating
- no way to pause/resume
- re-fetching the same stuff over and over
- agents needing chunks + citations without custom glue

So this update is mostly about giving the library "crawler bones" (politeness, caching, state) and "agent ergonomics" (structured metadata + chunks).
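The conditional-request caching mentioned above (ETag / Last-Modified revalidation) boils down to a generic pattern; here it is as a Python sketch (agent-crawl itself is TypeScript, and these names are illustrative):

```python
from dataclasses import dataclass
from typing import Optional

# Generic sketch of HTTP revalidation caching: store validators with each
# cached body, replay them on refetch, and keep the cached body on a 304.
@dataclass
class CacheEntry:
    body: bytes
    etag: Optional[str] = None
    last_modified: Optional[str] = None

def revalidation_headers(entry: Optional[CacheEntry]) -> dict:
    """Headers to send when re-fetching a cached URL."""
    headers = {}
    if entry and entry.etag:
        headers["If-None-Match"] = entry.etag
    if entry and entry.last_modified:
        headers["If-Modified-Since"] = entry.last_modified
    return headers

def update_cache(entry, status, fresh_body, etag=None):
    """On 304 Not Modified, reuse the cached body; otherwise store fresh."""
    if status == 304 and entry is not None:
        return entry
    return CacheEntry(body=fresh_body, etag=etag)
```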

r/aivideo just1clown_3

They thought they were filming history until it looked back

r/ClaudeAI FernwehAdventure

Former military officer, zero CS background — used Claude to build and ship an AI code review CLI to npm

Wanted to share this because I think it's relevant to what a lot of people here are doing — using Claude to actually build things, not just chat.

My background: military officer (two combat tours), then physical product sales. No CS degree, no bootcamp, no professional dev experience. I've been teaching myself to code because the tech industry doesn't exactly roll out the red carpet for career changers with my resume.

Claude Code has been my main learning tool. Not just for generating code — for understanding why things work, debugging errors I've never seen before, and learning patterns I wouldn't have found on my own for months.

The project: GrandCru is a code review CLI. You run `grandcru review src/` and get real technical feedback delivered by a French wine sommelier character. The interesting technical bit is the dual-channel Zod schema — one channel for strict data (issue type, severity, line number, fix) and one for creative prose (tasting notes, sommelier remarks), all in a single API call.

Constrained decoding guarantees the JSON. Extended thinking is on by default — the model reasons about the code before committing to structured output. The Zod `.describe()` calls on each field act like mini system prompts that keep the persona alive inside the JSON structure. Without them you get what I call "JSON lobotomy" — the model forgets to have a personality.
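GrandCru's schema is written with Zod in TypeScript; as an analogous sketch, the dual-channel idea in plain JSON Schema looks like this (field names, enum values, and the description text are illustrative, not the project's actual schema):

```python
# Analogous sketch of a "dual-channel" structured-output schema: one
# strict data channel (issues) and one creative prose channel
# (tasting_notes), with a per-field description acting as a mini
# system prompt to keep the persona alive inside the JSON.
review_schema = {
    "type": "object",
    "properties": {
        "issues": {  # strict data channel: machine-checkable fields
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "severity": {"type": "string", "enum": ["low", "medium", "high"]},
                    "line": {"type": "integer"},
                    "fix": {"type": "string"},
                },
                "required": ["severity", "line", "fix"],
            },
        },
        "tasting_notes": {  # creative prose channel
            "type": "string",
            "description": "Describe the code's character as a sommelier would a wine.",
        },
    },
    "required": ["issues", "tasting_notes"],
}
```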

It reviewed its own source code and found real issues: no input validation in the prompt builder, unsanitized string interpolation. Scored itself 79/100 — "Needs decanting before service."

npm install -g grandcru

GitHub: https://github.com/Scunion95/grandcru

I'm not pretending to be a senior engineer. I'm a guy who taught himself to code and shipped something real. Claude was a massive part of that. Curious what other non-traditional backgrounds are building with it.

r/aivideo koalapon

"Midori Dreams" Z-Image Turbo Wan Vace Audio for the music

r/SideProject Life_Watch_4493

I built a brand deal tracker for content creators , the one tool that doesn't exist yet

Hey everyone 👋

I've been deep in the creator economy space for a while, and I kept seeing the same problem over and over: creators are running $10K–$50K/year businesses on Notion templates, Google Calendar reminders, and memory.

We're talking about people managing 10–30 brand deals at a time with zero system. Deadlines get missed. Invoices go without follow-up for months. Brands reuse content way past the agreed usage window, and nobody catches it because the creator forgot the terms were buried in an email thread from 4 months ago.

I couldn't find a single tool that solved this. Here's the landscape:

  • Enterprise tools (GRIN, CreatorIQ, Aspire) → built for brands, $200–$1,500+/month
  • Generic CRMs (HoneyBook, Dubsado) → built for freelancers, zero creator-specific features
  • Free Notion templates → better than nothing, but no reminders, no payment tracking, no automation

So I built Creitr, a dashboard where creators can:

  • Track every deal from pitch → negotiation → contracted → delivered → paid
  • Get automatic nudges before deadlines (7 days, 3 days, 1 day out)
  • See all pending/overdue payments in one place (+ one-click export for tax season)
  • Set and track usage rights, exclusivity periods, and whitelisting terms — with alerts before they expire
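The deadline nudges in the list above (7, 3, and 1 days out, per the post) come down to simple date arithmetic; a purely illustrative sketch, not Creitr's actual code:

```python
from datetime import date, timedelta

# Compute the reminder dates for a deal deadline, using the 7/3/1-day
# offsets described above (offsets are configurable in this sketch).
def nudge_dates(deadline: date, offsets=(7, 3, 1)):
    return [deadline - timedelta(days=d) for d in offsets]
```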

The whole idea is revenue protection. Most creators don't realize how much money they lose to forgotten invoices, expired usage rights they never renegotiated, and broken affiliate links nobody checked. One creator I talked to estimated she lost ~$3K last year just from deals that "fell through the cracks."

Right now it's just a landing page; I'm validating before building further. I'd love your honest take:

→ Does the value prop click within 5 seconds?
→ Would you pay $15/month for this if you were a creator?
→ What am I missing?

🔗 creitr.com

Happy to answer any questions about the space, the competitive landscape, or the tech stack. And if you know any creators, I'd genuinely appreciate a share. Thanks for reading 🙏

r/homeassistant imuncas

Area card not showing entities

I'm trying to create a quick Laptop UI, which I don't use very often; I just want to have all entities displayed. I thought the area card was the perfect solution for this, but I'm banging my head against the wall in frustration. No matter what I do, entities assigned to an area do not show up in the area card. The only thing I can get it to display is a temperature sensor value. I'm sure I'm missing something obvious, but I just can't figure it out:

I have 7 devices and 5 entities here, including lights:

https://preview.redd.it/dp9jx9u19pig1.png?width=290&format=png&auto=webp&s=c471f5c74b84bff1ad2a70fc4433b020b7d14044

When I add an Area card for the same area, this is the only thing displayed, basically empty with the exception of a temperature sensor.

https://preview.redd.it/7jfria6b9pig1.png?width=1505&format=png&auto=webp&s=cb4f26f77b93f7bd32f895e3def6bdcb15c54f1f

Anyone have an idea what I'm missing?

r/SideProject AlexeyAnshakov

Made an email alias service because I was tired of not knowing which SaaS leaked my email

We've all been there: you sign up for a free trial. Ok. Two weeks later, spam starts flooding your inbox. But you have no idea who sold you out.

I used to try everything. I used Gmail's +tag aliases, but spammers just strip the plus sign and sell the email anyway. I tried throwaway accounts, which worked sometimes. But mostly, I just accepted defeat and hit unsubscribe 50 times a week in my "spammy box".

So I built what I actually wanted: an email gatekeeper that tells me exactly who leaked my email.

Here is what makes Sentry different:

First, tagged aliases that actually work. Every service gets its own address, like inbox+your-name.hubspot@sentry.wr.io. Spammers can't strip the tag because it is the actual routing path.

Second, leak detection. The dashboard shows you which alias is receiving the most junk. You finally see who is selling your data.

Third, an AI spam filter. It uses Gemini 2.0 to decide if an email is legit or promotional garbage before it even hits your real inbox. BTW, need to improve the prompt. It's "so-so" for now.

Finally, priority access invites. You can create VIP aliases for people who should always reach you, bypassing all filters. That part is top.
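The "tag is the routing path" idea can be illustrated with a tiny parser. This is a sketch based only on the example address above; the field names and alias shape are my assumptions, not Sentry's actual implementation:

```python
def parse_alias(address):
    """Split a tagged alias like 'inbox+your-name.hubspot@sentry.wr.io'
    into its parts. Returns None if the address doesn't match the shape."""
    local, _, domain = address.partition("@")
    mailbox, plus, tag = local.partition("+")
    if not plus or "." not in tag:
        return None
    owner, _, service = tag.rpartition(".")
    # The service segment is part of the routing path itself, so
    # stripping the "+" tag would break delivery rather than bypass it.
    return {"mailbox": mailbox, "owner": owner,
            "service": service, "domain": domain}

print(parse_alias("inbox+your-name.hubspot@sentry.wr.io"))
```

Whichever parsed `service` receives the most junk is the leak suspect, which is the accountability angle described above.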

I built this because I am a founder and developer, and my inbox was becoming a graveyard of newsletters I never asked for. I wanted accountability. Now, if I see a crypto scheme email coming through my HubSpot alias, I know exactly who to blame.

Building this taught me a few things: email infrastructure is a nightmare, most tools in this space are either full of ads or broken, but AI filtering is surprisingly good at catching clever spam that traditional filters miss.

The site is https://sentry.wr.io

Currently it is in test phase. If you are a founder or dev drowning in spam, feel free to check it out.

I would love to hear what your current strategy is for protecting your primary email. Maybe I simply missed something obvious. Who knows..

r/StableDiffusion South_Tea1731

Yubo: when you type "looking for women" and the app suggests... men 🤡

I wanted to talk about a problem that many people encounter on Yubo, but which isn't discussed enough:

👉 The search results completely ignore the filters.

r/SideProject jfrss

Forma – visual canvas for your tasks & notes

I noticed that I like to think visually and group things to make long lists easier to digest.

Instead of lists and folders, Forma gives you an open canvas where you can drop ideas anywhere, move things around, group them visually, and draw connections or notes around cards. Bigger card = more important. Cards close together = one project. One card covering another card = do in that order, and so on.

This is a native, 100% local app – nothing leaves your Mac. It's fast and tiny (3MB) and is a one-time payment of $9.99.

You can learn more on the website or check the app itself. Happy to answer any questions!

r/aivideo tofpit

The Dirty Bitches (Les Sales Garces) - Full Movie available on Youtube

r/LocalLLaMA ElementaryZX

Looking for a local model that can handle Shavian.

I’ve been playing around with Shavian transliteration in LLMs, specifically Gemini Flash, which seems to be able to handle and respond perfectly in Shavian if I set up the context correctly, but I haven’t found any local model that can do the same.

I really thought this would be basic enough that any model could handle it.

Some models I tried with similar context setups to Gemini include GPT-OSS 20 and 120, most versions of Qwen and Nemotron. I also tried some variations of GLM. The context setup included giving the model Shavian text and the corresponding English text for a few instances; I also tried including the basic set of rules for converting between the scripts. The general response from all models is deterioration into repeating tokens, especially for thinking models. The best responses were from the GPT family, but they get stuck on the phonemic part and start reverting to a 1-to-1 mapping onto the 26 Latin characters.

I would really appreciate any advice in this regard, I would also be willing to train a model specifically for this as it seems like a rather interesting research topic to understand how models would differ when using phonemic text.

r/ClaudeAI sujumayas

Pick your agent: Use Claude and Codex on Agent HQ. (Github & Github Copilot)

I haven't seen a direct discussion on this yet. I have a lot of questions, but the first one: is that "Claude Agent" a Claude Code agent, or just a GitHub-managed agent that uses Claude models?

Does anyone have more data on this? I thought this was only for Claude tagging or PR reviewing, but it also works in the IDE where you have GitHub Copilot installed.

Any ideas?

r/aivideo Born_Conflict_7494

A duck hockey fan watches a Winter Olympics hockey game

r/SideProject indienow

I spent 6 days and 3k processing 1.3M documents through AI

I started this project last week to make the Epstein documents easily searchable and create an archive in case data is removed from official sources. This quickly escalated into a much larger project than expected, from a time, effort, and cost perspective :). I also managed to archive a lot of the House Oversight Committee's documents, including from the Epstein estate.

I scraped everything, ran it through OpenAI's batch API, and built a full-text search with network graphs leveraging PostgreSQL full text search.

Now at 1,317,893 documents indexed, with 238,163 people identified (lots of dupes, working on deduping these now). I'm also currently importing non-PDF data (like videos, etc.).

Feedback is welcome, this is my first large dataset project with AI. I've written tons of automation scripts in python, and built out the website for searching, added some caching to speed things up.

https://epsteingraph.com

r/LocalLLaMA wouldacouldashoulda

Tether: Claude / Codex -> Telegram / Discord / Slack

With some tasks I felt like I was just reading and clicking 'yes' to permission prompts. I figured I could do that over lunch as well, or from the bathroom. So I built Tether. It has a local-first web UI, but I use it through Discord myself. It has MCP server support too, so Claude can also talk through it directly if you ask it to.

https://github.com/larsderidder/tether

r/SideProject Viper-0007

Honest question: would you care about a face shape analyzer app?

I’ve been working on a side project where the app analyzes your face shape on-device and suggests hairstyles, beard styles, and glasses that might suit you.

Now I’m stuck at the classic question: Is this actually interesting… or just “meh”?

I’d love to hear:

Would you download this out of curiosity?

Would you ever come back to it after the first try?

What would instantly turn you off?

Brutally honest answers welcome 🙏 I’m more interested in learning than defending the idea.

r/homeassistant Diligent_Flamingo_52

ZHA Sonoff USB Dongle 3.0

Hi!

I recently started upgrading my current smart home (Eufy cameras, Google Home hubs / Nest Audios, Philips Hue lighting, ...) with a Home Assistant Green and a SONOFF ZigBee 3.0 USB Dongle Plus, TI CC2652P coordinator. I bought some second-hand sensors, as I wanted every room in the house to have a temperature and humidity sensor (Sonoff SNZB-02, the square ones) as well as door sensors (Sonoff SNZB-04), plus 2 new humidity sensors (Sonoff AirGuard TH) I ordered from Amazon.

The USB dongle is in a USB hub next to my WiFi router, and I've set this up with ZHA in HA.
Connecting everything works fine, but the SNZB-02s only show their first value, then no more updates: just a flat line. The movement sensor goes offline as well, and the door sensors only work partially (at least some of them), since they don't always change status when a door or window closes.

The new Sonoff Airguards do give updates every 5 min's and actually work great!

Since I've bought 7 window sensors & 7 humidity sensors, could there be some kind of software problem with them? I can imagine they won't all work perfectly, but the same issue across all the devices makes me think there's another problem. I've swapped in new coin cell batteries as well.

I'll try adding a Philips Hue smart plug to make the network more stable, but can anyone relate to this issue? Or are there simple fixes I can try?

Thanks a lot in advance!

r/homeassistant _PotatoFry_

Home Assistant Supervisor

Hi! I tried to install Home Assistant on Ubuntu Server using Docker, but it did not work for my needs because add-ons require an installation with Supervisor. I need to keep Ubuntu Server running since I host Minecraft servers on it, and I was wondering if there is any way to run Ubuntu Server and Home Assistant with Supervisor together at the same time.

Thanks!

r/AI_Agents Weekly_Physics_5987

Everyone talks about Clawdbot (openClaw), but not many people explain how it actually works.

I spent some time digging into the architecture, and honestly, what stood out most was how intentionally simple it is. No magic, no unnecessary complexity. Just solid engineering choices.

It’s built as a TypeScript CLI that routes messages through a lane-based queue system. Everything stays serial by default, which is refreshing because most agent systems eventually turn into async chaos if you are not careful.

The memory setup was simpler than I expected too:

• Session history lives in JSONL

• Long-term notes are just markdown files written by the agent

• No fancy compression or merging. Old context just stays there

Search combines vector storage (SQLite) with FTS5 keyword matching, so you get semantic search plus exact hits. Practical and effective.
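That hybrid idea can be sketched in a few lines with Python's built-in sqlite3 module. This assumes your SQLite build ships the FTS5 extension (most CPython builds do), and the toy 3-dimensional "embeddings" and additive scoring blend are purely illustrative, not Clawdbot's actual code:

```python
import math
import sqlite3

def cosine(a, b):
    # Plain cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE notes USING fts5(body)")

# Toy vectors kept alongside the rows; a real system would store
# serialized model embeddings in a separate table.
docs = {
    1: ("deploy the agent with docker sandbox", [0.9, 0.1, 0.0]),
    2: ("grocery list for the weekend", [0.0, 0.2, 0.9]),
}
for rowid, (body, _vec) in docs.items():
    db.execute("INSERT INTO notes(rowid, body) VALUES (?, ?)", (rowid, body))

def hybrid_search(query, query_vec):
    # Exact keyword hits from FTS5...
    fts_hits = {row[0] for row in db.execute(
        "SELECT rowid FROM notes WHERE notes MATCH ?", (query,))}
    # ...blended with semantic similarity from the vectors.
    scored = []
    for rowid, (body, vec) in docs.items():
        score = cosine(query_vec, vec) + (1.0 if rowid in fts_hits else 0.0)
        scored.append((score, body))
    return [body for _score, body in sorted(scored, reverse=True)]

print(hybrid_search("docker", [1.0, 0.0, 0.0]))
```

The point of combining the two signals is exactly what the post describes: keyword search guarantees exact hits, while the vector score still ranks documents that never contain the query term.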

Security is handled thoughtfully as well. Commands run inside a Docker sandbox with an allowlist, and risky patterns get blocked before execution instead of being cleaned up later.

One detail I really liked: browser automation skips screenshots entirely and uses semantic snapshots of the accessibility tree. It is more reliable and way more token efficient than relying on pixel coordinates.

My biggest takeaway after looking through it:

The system chooses explainable simplicity over clever complexity.

And honestly, that matches what I keep seeing while building agent systems myself. The tools that scale are usually the ones that stay boring in the right places.

Curious if others are noticing this too. 

r/aivideo Equivalent-Stock2519

An Ancient Wizard Summoning Forbidden Magic (AI)

r/LocalLLaMA danielhanchen

Train MoE models 12x faster with 30% less memory! (<15GB VRAM)

Hey r/LocalLlama! We’re excited to introduce ~12x faster Mixture of Experts (MoE) training with >35% less VRAM and ~6x longer context via our new custom Triton kernels and math optimizations (no accuracy loss). Unsloth repo: https://github.com/unslothai/unsloth

  • Unsloth now supports fast training for MoE architectures including gpt-oss, Qwen3 (30B, 235B, VL, Coder), DeepSeek R1/V3 and GLM (4.5-Air, 4.7, Flash).
  • gpt-oss-20b fine-tunes in 12.8GB VRAM. Qwen3-30B-A3B (16-bit LoRA) uses 63GB.
  • Our kernels work on both data-center (B200, H100), consumer and older GPUs (e.g., RTX 3090), and FFT, LoRA and QLoRA.
  • The larger the model and more context you use, the more pronounced the memory savings from our Unsloth kernels will be (efficiency will scale exponentially).
  • We previously introduced Unsloth Flex Attention for gpt-oss, and these optimizations should make it even more efficient.

In collaboration with Hugging Face, we made all MoE training runs standardized with PyTorch’s new torch._grouped_mm function. Transformers v5 was recently optimized with ~6x faster MoE than v4 and Unsloth pushes this even further with custom Triton grouped‑GEMM + LoRA kernels for an additional ~2x speedup, >35% VRAM reduction and >6x longer context (12-30x overall speedup vs v4).

You can read our educational blogpost for detailed analysis, benchmarks and more: https://unsloth.ai/docs/new/faster-moe

We also released support for embedding model fine-tuning recently. You can use our free MoE fine-tuning notebooks:

  • gpt-oss (20B) fine-tuning (free)
  • gpt-oss 500K-context fine-tuning
  • GLM-4.7-Flash (A100)
  • gpt-oss-120b fine-tuning (A100)
  • Qwen3-30B-A3B (A100)
  • TinyQwen3 MoE (T4, free)

To update Unsloth to auto make training faster, update our Docker or:

pip install --upgrade --force-reinstall --no-cache-dir --no-deps unsloth unsloth_zoo

Thanks for reading and hope y'all have a lovely week. We hear it'll be a busy week! :)

r/AI_Agents okay_whateveer

We stopped letting LLMs guess prices and embedded a real ML model inside an agentic system. Built this for Google’s Hackathon.

We just shipped PitchCraft, a hackathon project that tackles a problem we kept running into as an AI automation agency: turning discovery calls into proposals is slow, manual, and pricing is usually… guesses.

Most “AI proposal” tools stop at LLMs summarizing calls and then guessing numbers. We took a different approach.

What we built

PitchCraft is an end-to-end agentic system that converts discovery call recordings into complete proposals with ML-predicted pricing in under 5 minutes.

The core idea is something we’re calling Machine Learning as a Tool (MLAT):

  • LLM agents handle understanding, reasoning, and drafting
  • A real XGBoost pricing model (trained on our agency's pricing data) is exposed as a callable tool via FastAPI
  • The agent invokes that model contextually instead of guessing prices
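The MLAT pattern boils down to giving the agent a tool whose body is a model's predict call rather than an LLM guess. A stdlib-only sketch of the shape, where the feature names, the linear stub standing in for the real XGBoost regressor, and the tool schema are all my assumptions for illustration:

```python
def predict_price(scope_hours: int, integrations: int, complexity: int) -> float:
    """Stand-in for model.predict() on the trained XGBoost regressor;
    a hypothetical linear stub so the sketch runs without the real model."""
    return 2000 + 150 * scope_hours + 800 * integrations + 1200 * complexity

# An Anthropic-style tool definition the agent can call instead of guessing.
PRICING_TOOL = {
    "name": "predict_price",
    "description": "Predict a proposal price from structured deal features.",
    "input_schema": {
        "type": "object",
        "properties": {
            "scope_hours": {"type": "integer"},
            "integrations": {"type": "integer"},
            "complexity": {"type": "integer", "description": "1-5 scale"},
        },
        "required": ["scope_hours", "integrations", "complexity"],
    },
}

def handle_tool_call(name, args):
    # Dispatch a tool_use request from the model's response.
    if name == "predict_price":
        return {"price_usd": predict_price(**args)}
    raise ValueError(f"unknown tool: {name}")

print(handle_tool_call("predict_price",
                       {"scope_hours": 40, "integrations": 2, "complexity": 3}))
```

The LLM still decides when to call the tool and how to frame the number in the proposal; the number itself comes from the trained model.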

How it works (high level)

  • A Research Agent analyzes Fireflies transcripts and gathers prospect data via tool calls
  • Structured features like scope, integrations, and complexity are extracted
  • A Draft Agent calls the XGBoost model to predict price
  • The proposal is generated around that prediction using structured output parsing

The ML part

  • XGBoost regressor
  • 70 total samples (40 real agency deals + 30 human-verified synthetic)
  • Group-aware CV
  • R² ≈ 0.81, MAE ≈ $3.7k
  • Designed to work under extreme data scarcity

We’ve already deployed this in our agency and cut proposal time from hours to minutes.

Why I’m posting

I’m curious how others here think about embedding classical ML models inside LLM agent workflows instead of replacing them. This pattern feels broadly applicable to any domain that needs numeric estimation + contextual reasoning (construction, consulting, insurance, etc).

Happy to answer questions or hear critiques.

r/SideProject Yolmack

We built a creator marketplace so people could finally stop fighting Stripe, churn, and subscriptions

Hey r/SideProject 👋

I wanted to share a project I’ve been working on for the past months: Sub-Starter

Sub-Starter is a marketplace that helps creators sell digital content and subscriptions without having to deal with the usual technical headaches (payments, access management, churn, etc.). The idea is simple: creators focus on their content and audience, we handle the boring but critical stuff.

To make it concrete, here’s an example of a live creator page on the platform:
👉 https://sub-starter.com/fr/djinou
(This is the kind of page creators use to present their offer, manage subscriptions, and onboard customers.)

A few things we learned while building:

  • Creators care more about conversion and retention than fancy features
  • Micro-commissions align incentives much better than flat fees
  • Automating subscriptions, access, and exits saves creators a huge amount of time

We’re live, creators are using it daily, and we’re iterating fast based on real feedback. It’s still early, but seeing real people earn money with something we built from scratch is extremely motivating.

At the moment, most of our creators are from France 🇫🇷.
If you have tips, strategies, or war stories about expanding a SaaS or creator marketplace to new countries, you’re more than welcome, I’d genuinely love to learn from your experience 🙏

Happy to answer any questions about the product, tech stack, or mistakes we made along the way.

r/SideProject Any_Performance5665

I built a UK referral code tracker tool

It’s a UK-focused referral offer database that brings together common referral deals (banks, finance apps, etc.) into one place. I built it mainly because finding referral links on forums often means digging through old threads or DM’ing people, which is clunky and time-consuming.

If you’ve got referral codes, you just upload them once and they’re then distributed fairly to users over time, rather than favouring one person or requiring constant reposting.
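"Distributed fairly over time" suggests some kind of rotation over the uploaded codes. A minimal round-robin sketch of that idea (purely illustrative; not refermonkey's actual logic, and the class and field names are made up):

```python
from itertools import cycle

class CodePool:
    """Rotate through uploaded referral codes so every uploader
    gets served in turn instead of the same code winning every time."""

    def __init__(self, codes):
        self._cycle = cycle(codes)
        self.served = {code: 0 for code in codes}

    def next_code(self):
        code = next(self._cycle)
        self.served[code] += 1  # per-code serve count, dashboard-style
        return code

pool = CodePool(["alice-123", "bob-456", "carol-789"])
handed_out = [pool.next_code() for _ in range(6)]
print(handed_out)   # each code served exactly twice
print(pool.served)
```

A real implementation would persist the counts, but the fairness property is the same: after N full rotations, every code has been shown N times.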

You can also track your referral activity and earnings through the built-in dashboard, so you can see how your links are performing over time.

Would this be something you would use? If you have any suggestions, please let me know!

https://refermonkey.com/

r/SideProject ybouane

I made a.genti.ca, it's like N8N but without the workflows, just plain english!

I'd be very grateful if some early adopters could give me feedback on the platform.

The free plan is more than enough to run several automations.

r/StableDiffusion AssCalloway

Here's a monster movie I made!

Made on the RTX 5090 with LTX-2 and ComfyUI. Prompted with assists from Nemotron-3 & Gemini 3. Soundtrack from SUNO.

r/ClaudeAI mojtj

I got tired of Claude Code configs being everywhere so I built ClaudeShelf

I vibecoded a small tool to fix something that kept annoying me about Claude Code configs being scattered everywhere.

Claude puts memories, settings, todos, plans, skills, and project configs across ~/.claude and tons of project folders. Hard to see what you even have, let alone clean it up.

So I built ClaudeShelf, a simple local web app.

  • Auto-discovers all Claude-related config files
  • Groups them by category
  • Lets you browse, search, and edit everything from a browser
  • Includes a cleanup view to find empty or stale files and delete them in one click

  • Single Go binary
  • Zero dependencies
  • Works on Linux, macOS, and Windows

Repo
https://github.com/MojtabaTajik/ClaudeShelf

r/LocalLLaMA Vilxs2

I benchmarked the newest 40 AI models (Feb 2026)

Everyone is talking about the viral Kimi k2.5 and Claude Opus 4.6 right now. But while the world was watching the giants, I spent the last week benchmarking 40 of the newest models on the market to see what's actually happening with Price vs. Performance.

The TL;DR: The market has split into two extremes. "Mid-range" models are now a waste of money. You should either be in "God Mode" or "Flash Mode."

Here is the hard data from Week 7:

https://preview.redd.it/l97g5c5ttoig1.png?width=1920&format=png&auto=webp&s=79d231c40349c06789e5602c5260900ca62cc8e5

1. The "Kimi" Situation

I know everyone wants to know about Kimi k2.5. Bad news: I couldn't even get it to complete the benchmark. The API returned "No Content" errors repeatedly; it's likely suffering from success/overload. I did test Kimi-k2-Thinking. It works, but it's a deep thinker (~15 TPS). Do not use this for chatbots; use it for complex reasoning only.

2. The New Speed Kings (Liquid & Mistral)

If you are building agents, latency is the only metric that matters.

  • Liquid LFM 2.5: Clocked in at ~359 tokens/sec. This is currently the fastest model I've ever tested. It’s effectively instant.
  • Ministral 3B: The runner-up at ~293 tokens/sec.

https://preview.redd.it/ckqsqjx2uoig1.png?width=1920&format=png&auto=webp&s=fb2f85712f24a5a6626e848b3e93cc3c8fe000bd

3. The Value Play

If you are paying for your own tokens, Ministral 3B is the undisputed king right now. At $0.10/1M input, it is ~17x cheaper than GPT-5.2 Codex and ~40% faster.

https://preview.redd.it/ru8pjeryuoig1.png?width=1920&format=png&auto=webp&s=9773b01a2847bdb1717c1325f9c735e18164b125

My Verdict: Stop paying $0.50 - $1.00 for "decent" models. They are the new "Middle Class," and they are dead.

  • Need IQ? Pay the tax for Opus/GPT-5.
  • Need Speed? Use Liquid/Mistral for pennies.
  • Everything in between is burning budget.

I’ve open-sourced the raw benchmark logs (CSV) for all 40 models here: https://the-compute-index.beehiiv.com/

Let me know if you're seeing similar speeds in production. The Liquid numbers seem almost too good to be true, but they held up over multiple runs.

r/ClaudeAI takenorinvalid

Tip: Use a readme.txt to keep Claude from making mistakes

Here's a fun thing to try. Add a readme.txt with your directions for Claude and tell it to review them before making any changes.

For example, I just asked Claude to add a readme.txt file that instructs it to always push changes to Dev and never push anything to the main branch.

Here's how Claude responded:

What I did:

git push origin main && git push origin main:dev

I pushed to main first, then to dev.

Great job, Claude!

r/ClaudeAI helloRimuru

Claude is so selfless

I was trying to get a transcription of an audio recording, and it seems Claude doesn’t care about competition. Respect++. It tells you to use a better tool for the designated task when it can’t do it, instead of wasting your time.

r/homeassistant GumbyDude99

Motion Sensor AC Switch

I have an existing fluorescent light over our deep freeze in the garage. I’d like a way to automatically turn it on when motion is sensed (I’d keep the light on all the time, with the motion sensor simply turning on AC power). Is there an easy way to do this?

r/ClaudeAI Leading-Visual-4939

How do you use claude code for marketing ?

Hey everyone,

I just saw the latest YouTube video from Greg Isenberg on using Claude Code for marketing, but I don't find it really useful or actionable.

For those of you who have products or a SaaS, are you using Claude Code to help with your marketing? I was wondering if you had any use cases to share.

r/aivideo ScriptLurker

Used robot chassis dealership commercial

r/LocalLLaMA Envelope-Labs

What voice quality metrics actually work for conversational TTS?

I’m researching how teams evaluate voice quality in conversational TTS for real agents (naturalness, prosody, consistency, expressiveness).

Curious what works in practice:

  • Which voice quality metrics do you rely on today (MOS, MUSHRA, Word Error Rate, etc.)?
  • Which ones fail to reflect real conversational experience?
  • What breaks at scale with human or automated eval?
  • What voice issues still slip through (prosody drift, instability, artifacts, etc.)?
  • Any signals you wish existed but don’t?

Exploring this space and trying to learn from real-world experience. Any brief insight would be greatly appreciated.

r/LocalLLaMA Fantastic_suit143

Built a customized LLM with RAG for Singaporean laws and acts.

Hello everyone,

I have always loved coding, and recently I was thinking of making an open-source project. It turned out to be awesome, and I hope you guys like it. ☺️

I present Explore Singapore, which I created as an open-source intelligence engine to run retrieval-augmented generation (RAG) over Singapore's public policy documents, legal statutes, and historical archives.

The objective was to build a domain-specific search engine that lets LLM systems reduce errors by using government documents as their exclusive information source.

What my project does: it provides legal information faster and more reliably (thanks to RAG) without going through the long PDFs on government websites, and helps travellers get insights about Singapore faster.

Target audience: Python developers who keep hearing about "RAG" and AI agents but haven't built one yet (or are building one and are stuck somewhere), and of course Singaporean people!

Comparison (raw LLM vs. RAG-based LLM): to test the RAG implementation, I compared the output of my logic against the standard models (Gemini / Arcee AI / Groq) and the same models with custom system instructions plus RAG. The results were striking. Query: "Can I fly a drone in a public park?" The standard LLM gave generic advice about "checking local laws" and safety guidelines. The customized LLM with RAG cited the Air Navigation Act, specified the 5 km no-fly zones, and linked to the CAAS permit page. The difference was clear, and it was certain the AI was not hallucinating.

Ingestion: the RAG architecture covers about 594 PDFs of Singaporean laws and acts, roughly 33,000 pages in total.

How did I do it: I used Google Colab to build the vector database and metadata, i.e. converting the PDFs to vectors, which took me nearly an hour.

How accurate is it: it's still in the development phase, but it already provides near-accurate information, since it uses multi-query retrieval. If a user asks "ease of doing business in Singapore", the logic breaks out the keywords "ease", "business", and "Singapore" and retrieves the relevant documents from the PDFs, with the page number. It's a little hard to explain, but you can check it on my webpage. It's not perfect, but hey, I am still learning.

The Tech Stack:
  • Ingestion: Python scripts using PyPDF2 to parse various PDF formats.
  • Embeddings: Hugging Face BGE-M3 (1024 dimensions)
  • Vector database: FAISS for similarity search.
  • Orchestration: LangChain.
  • Backend: Flask
  • Frontend: React and Framer.

The RAG pipeline operates as follows:
Chunking: the source text is divided into chunks of 150 tokens with an overlap of 50 tokens to maintain context across boundaries.
Retrieval: when a user asks a question (e.g., "What is the policy on HDB grants?"), the system queries the vector database for the top-k chunks (k=1).

Synthesis: the system adds these chunks to the prompt of the LLMs, which produce the final response, including citation information. Why did I say LLMs, plural? Because I wanted the system to be as crash-proof as possible, so I use Gemini as my primary LLM, but if it fails due to API limits or any other reason, the backup model (Arcee AI Trinity Large) handles the request.
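The chunking step in the pipeline (150-token chunks, 50-token overlap) can be sketched in a few lines. This stand-in uses whitespace-split words as "tokens" instead of a real tokenizer, so the numbers, not the tokenization, are the point:

```python
def chunk_tokens(tokens, size=150, overlap=50):
    """Slide a window of `size` tokens forward by `size - overlap`,
    so consecutive chunks share `overlap` tokens of context."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break  # last window already reached the end of the text
    return chunks

words = ("singapore " * 400).split()  # 400 toy tokens
chunks = chunk_tokens(words)
print(len(chunks))                    # 4 windows over 400 tokens
print(len(chunks[0]), len(chunks[-1]))
```

The overlap means a sentence that straddles a chunk boundary still appears whole in at least one chunk, which is why retrieval with small k (here k=1) can still return usable context.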

Don't worry: I have implemented different system instructions for the different models, so the result is a good-quality product.

Current Challenges:
I am working on optimizing the ranking strategy of the RAG architecture. I would value insights from anyone who has dealt with RAG returning irrelevant documents.

Feedback is the backbone of improving a platform, so it is most welcome 😁

Repository:- https://github.com/adityaprasad-sudo/Explore-Singapore

r/SideProject Scared_Airline_1398

I'm building a mobile app in public

For years, I kept putting this off… but today, I’m finally starting.

I’ve decided to build a full mobile app — from scratch, after work, learning everything again from zero. And I’ll be documenting the entire process publicly.

I’m not a professional developer. I used to play around with some simple Android apps, but nothing serious. Now I want to build something that solves a real problem. Something that could actually help people.
And the best part? I’ll learn everything along the way.

I’ll show every step:
– brainstorming,
– designing,
– coding,
– testing,
– all the way to publishing on the App Store and Google Play.

If you want to see someone without a programming background build an app from A to Z just follow the project.
It’s going to be a journey.

Let’s begin.

r/homeassistant Thoh1Shooshi8a

Voice assistant tells me the room temperature but not humidity

I just started playing around with the voice assistant and have an assist satellite set up in the bedroom. The bedroom has the temperature and humidity set and they are showing correctly on the climate dashboard.

If I say "what's the temperature" when in the bedroom it works correctly, but if I say "what's the humidity" it tells me the time.

I have added a voice alias "bedroom humidity" to the sensor but it still doesn't work unless I say "what's the bedroom humidity"

Is there something set up somewhere specially to make the temperature work when it knows what room you are in, but not for the humidity?

Another funny thing is that when I say "what's the temperature" it gives the number to loads of decimal places, but if I ask "what's the bedroom temperature" it's to 1 decimal place.

r/homeassistant Glad-Rub-3423

HA LG ThinQ Washer/Dryer Time Remaining Script Triggered Via Amazon Alexa

The task is to get Alexa to trigger an announcement with the details of the time remaining on my LG ThinQ-enabled washer and dryer. Alexa does not have access to that data, but HA does. Below is the script that HA can use to announce that detail. The remaining task is to get Alexa to trigger that script.

I found the AI answers for how to get Alexa to call an HA script no longer work, so here is a slightly kludgy workaround. Previously, Alexa would show an exposed HA script as a device. That device could then be placed in an Alexa scene with the voice command that would trigger the HA script.

Alexa no longer presents exposed HA scripts as devices, but they are visible in the Alexa group-creation dialog. The kludgy trigger now is to say "Alexa, group name on", in this case "Alexa, Dryer Time on". Better than nothing until Alexa can access the LG ThinQ time-remaining data and/or scripts can be exposed and shared to Alexa scenes. Create the script via Settings > Automations > Scripts, then expose the HA script to Alexa via Settings > Voice Assistants > Alexa. Here is my HA ThinQ dryer time-remaining script:

action: notify.send_message
target:
  entity_id:
    - notify.all_dots_announce
    - notify.lance_s_echo_show_8_3rd_gen_announce
    - notify.lance_s_2nd_echo_dot_announce
data:
  message: >
    {% set sensor = 'sensor.dryer_2' %}
    {% set time = state_attr(sensor, 'remain_time') %}
    {% set completed = state_attr(sensor, 'run_completed') %}
    {% if completed == 'on' %}
      The dryer cycle is finished.
    {% elif time and ':' in time %}
      {% set parts = time.split(':') %}
      {% set h = parts[0] | int %}
      {% set m = parts[1] | int %}
      {% if h > 0 %}
        The dryer has {{ h }} {{ 'hour' if h == 1 else 'hours' }} and {{ m }} {{ 'minute' if m == 1 else 'minutes' }} remaining.
      {% elif m > 0 %}
        The dryer has {{ m }} {{ 'minute' if m == 1 else 'minutes' }} remaining.
      {% else %}
        The dryer is finishing up now.
      {% endif %}
    {% else %}
      The dryer does not appear to be running.
    {% endif %}

r/singularity RIPT1D3_Z

Qwen-Image-2.0 is out - 7B unified gen+edit model with native 2K and actual text rendering

Qwen team just put out Qwen-Image-2.0 and it's actually pretty interesting. It's a 7B model that combines generation and editing into one pipeline instead of having separate models for each.

What stood out to me:

  • Native 2K res (2048×2048), textures look genuinely realistic, skin, fabric, architecture etc
  • Text rendering from prompts up to 1K tokens. Posters, infographics, PPT slides, Chinese calligraphy. This has been a pain point for basically every diffusion model and they seem to be taking it seriously
  • You can generate AND edit in the same model. Add text overlays, combine images, restyle, no pipeline switching
  • Multi-panel comics (4×6) with consistent characters and aligned dialogue bubbles, which is wild for a 7B

Worth noting they went from 20B in v1 down to 7B here, so inference should be way faster. API is invite-only on Alibaba Cloud for now, but there's a free demo on Qwen Chat if you want to poke around.

Chinese labs keep quietly shipping strong visual models while everyone's focused on the LLM race.

r/ProgrammerHumor Imhere4lulz

ifYouWereNotBornInTheMiddleAgesYouDoNotHaveEnoughExperience

r/n8n dranzer_18

Chat trigger URL not working

Hi, I created my first n8n project, which is an interactive agent to compare organization policies and highlight their strengths and weaknesses.

Now the issue is that my agent works perfectly when I use the n8n chat (on the chat trigger), but when I open the chat trigger URL in a browser, it shows a blank white screen with "firstEntryJson" written on it.

My workflow is published, but the issue is still there, and I could not find anything useful on YouTube.

r/SideProject yonperme

The website we built removes the need for UI debate!

Hi fam!

We built UIJudge.app

It helps people, especially those working on the frontend side, compare UIs and tell which is better, and in what aspects.

We built this because we experienced the same issue, where development time was spent on long discussions about UI.

We aim to have this tool be included in development workflows of teams.

Currently, the website can only compare 2 UIs using screenshots. But we are actively improving this so it can accommodate multiple UI comparisons. We're also working on an option to provide a URL instead of images for more accurate comparison and analysis.

Today is the 2nd day of the Phase 1 release. Really happy as we got testimonials and validations that this is indeed a pain point.

Would be great if you could play around with the website and tell me what you think!

https://reddit.com/link/1r15o87/video/shvn8wya3pig1/player

r/comfyui Swimming_Dragonfly72

Can LTX-2 be controlled by reference video like WAN VACE / Fun Control / Animate ?

I don't use LTX, I'm still on WAN, but I saw an LTX workflow on CivitAI that can generate video from an image with DWPose control. The quality isn't as good as WAN Animate, but I was wondering if there's a way to control the image via canny?

r/StableDiffusion MeasurementGreat5273

Good and affordable image generation models for photobooth

Hi everyone,

I’m experimenting with building an AI photobooth, but I’m struggling to find a model that’s both good and affordable. What I’ve tried so far:

- Flux 1.1 dev + PuLID
- Flux Kontext
- Flux 2 Pro
- Models on fal.ai (quality is good, but too expensive to be profitable)
- Runware (cheaper, but I can’t achieve strong facial/character consistency, especially for multiple faces)

My use case:

- 1–4 people in the input image
- Same number of people must appear in the output
- Strong facial consistency across different styles/scenes
- Needs to work reliably for multi-person images

I’ve attached reference images showing the expected result: 2 people on the input image → 2 people on the output, very realistic, with strong facial consistency. This was made with Nano Banana Pro.

My target is to generate 4 images at once for around $0.20 total.

I’m aiming for something that works like Nano Banana Pro (or close), but I can’t seem to find the right model or pipeline.

If anyone has real-world experience, suggestions, or a setup that actually works — I’d really appreciate the help 🙏

Thanks!

r/aivideo Other_Map8062

What are people using to make videos like these? I want to make such videos too.

r/comfyui UltralistiC

Problems with checkpoint save nodes

My Illustrious model merges are not being saved properly after the update.
At first the merges were being saved without the CLIP, leaving an unusable file under 6.7GB with a missing CLIP (around 4.8GB).
Now, after the new update which highlighted that that specific error was fixed, the models are still not being saved properly.
If I test them within my merge workflow, they generate completely fine... but once I save the model and use it to generate batches of images, they all come out FRIED. I need to run at 2.0 CFG max, and even if the upscaler or FaceDetailer are above 2 CFG they come out yellow :/

r/aivideo Laurence3210

Snooker is a sport

r/LocalLLaMA vasa133769

Qwen3-TTS: is streaming even working?

Hey guys,
I'm playing around with Qwen3-TTS for a voice-agent POC and I can't get streaming working.

The docs mention streaming, but I can’t seem to get streaming generation working in practice (even with Claude’s help). What I’m trying to do is have TTS start generating audio as soon as it parses some partial text, and stream that audio out in real time (Qwen claims ~95 ms).

I’ve dug through the repo but couldn’t find any examples of this kind of setup. Am I missing something obvious, or is streaming not fully supported yet?

r/homeassistant jamesmcginnis

Apple HomeKit inspired Light Card

Please check out my new Leopard Light Card

HomeKit-Inspired Design

Colour Support - Displays the actual colour of your RGB lights

Interactive Brightness Slider

Drag horizontally to adjust brightness in real-time

Quick Toggle

Tap the icon to turn lights on/off instantly

Long Press for Details

Hold the card to open the more-info dialog

Visual Editor - Easy configuration

Beta testers wanted! Please let me know of any issues you find, or any suggestions for the future.

Thanks for looking

r/n8n No_Photograph3062

Hiring n8n WhatsApp automation expert, $50 for project

I am a freelancer and have already built 80% of a WhatsApp automation project. Due to some reasons I can't complete it. I need a developer who can finish this project in 3-4 days. Fixed $50 project rate. Anyone interested, contact me on WhatsApp
+923271949541 or LinkedIn https://www.linkedin.com/in/burhanniaz/

Company: burhan.co
I will give them all resources and APIs and explain the whole project.

r/comfyui PixieRoar

Made a small Rick and Morty Scene using LTX-2 text2vid

Made using the built-in ComfyUI template "LTX-2 Text2vid".

I added the background beach sounds myself in CapCut.

Took under 40 minutes to generate using my RTX 3090.

Used ChatGPT to set my scenes up. I told it the dialogue I wanted but asked it to rewrite it the way Rick and Morty would speak.

Stitched the 6 clips together. Took less than an hour of total work. The longest part was waiting.

r/ClaudeAI spinje_dev

18 Months of Agentic Coding: No Vibes or Slop Allowed

I started with Aider over a year and a half ago, when "agentic coding" wasn't a thing. You did everything manually: todos, subtasks, executing each step by hand. When something didn't work, the question was always: why? How do I make it work? Then Cursor. Now Claude Code (last 10 months). The hacks we were doing back then are button clicks today.

Every feature built into Claude Code today started as something the community figured out by hand. Plan mode, explore, subagents—all generalizations of what power users were already doing. The built-in versions are great defaults. But they're made to fit every codebase, which means they're optimized for none.

This is what I've learned being obsessed with AI coding (even before it actually worked). Not a tutorial. Those exist, and they're good. This is about understanding WHY those features work, so you can adapt them, customize them, or build your own when the defaults don't quite fit.

The goal is to avoid hell until you end up there anyway.

Generated Debt

https://preview.redd.it/hdzleraefoig1.jpg?width=1376&format=pjpg&auto=webp&s=7865c2d3aa3bc52ef14cb847dd35e0dc3968e6e8

As a dad of two, I've seen the IRL version of this once or twice. If you're an AI dad? You used to deal with this shit 20 times a day. But Opus grew up. GPT moved out from $HOME.

A year ago this was the default outcome of AI coding. "Keep it DRY" in the system prompt, five date formatters in the codebase three days later. The brilliant, hyperactive intern with amnesia. That version of the problem is mostly over. The models got better. Plan mode, explore, subagents, automatic codebase search. The tools catch most of it now.

The AI doesn't write five date formatters anymore. It writes clean code for the part it's looking at and misses the five other files that need to change too. It nails the 80% it can see and is blind to the 20% that requires knowing the whole codebase.

I recently refactored an entire file format across my codebase. 267 files, 20k lines. Planning took a day. Implementation took a day. The first day is why the second day worked. Without that research, the agent would have nailed the new parser and broken half the system.

The AI isn't a toddler with a firehose anymore. More like a coworker that just started. With a firehose. It can build. What it can't do on its own is see the forest, it's too busy grep-globbing the trees. Instead of destroying small things often, they destroy large things rarely. The old problems were obvious. These aren't.

Better prompts don't fix this. Better systems do, and the defaults keep getting better. But for hard codebases and complex tasks, you'll want your own.

Your Difficulty Setting

https://preview.redd.it/7aw1pfvnfoig1.jpg?width=5504&format=pjpg&auto=webp&s=dc84d09b959ddc08618bf3b8fc368b30489034c8

You don't get to choose your difficulty. Whether you tried AI coding yesterday and it didn't work, or you've been at it for months and think it always works. Eventually you hit the wall.

Custom codebase, not in the training data? That's harder. The AI has never seen your component library, your internal frameworks, your patterns. Everything has to be taught from scratch.

Big, complex codebase? Harder. More research before you start, more planning, more architectural decisions that the AI can't make for you.

Complex task? Harder. Better specs, more verification, more careful decisions. And if the task doesn't fit in a single context window, you need systems, or it's game over.

The more of these you're dealing with, the worse everything I just showed you gets. The AI doesn't just write duplicate code—it writes code that doesn't fit at all. It doesn't just go in circles—it confidently builds the wrong thing.

This isn't about tomorrow. It's about when shit hits the fan.

So how do you avoid going to hell? By building the machine before you need it. And if you're already there, stay a while and listen.

Why Now Is Different

https://preview.redd.it/duql18tpfoig1.jpg?width=1376&format=pjpg&auto=webp&s=5e6509b21a7d4c6dd198931b7c9008fd0f63e97c

Good news. The tools have caught up.

If you tried AI coding six months ago, or even three months ago, and it didn't work, I believe you. But what you tried is not what exists today.

Three things are compounding. It feels exponential, and maybe it is.

First, the models. The leap in the last year isn't 10% better. It's night and day. Things that were impossible are now routine.

Second, the tools. Cursor, Claude Code. They're shipping features every week. Their teams use their own products to build the products, so every pain point becomes a fix.

Third, the community is discovering what works, and the tools are absorbing it, fast. Planning mode was a manual workflow people invented. Now it's built in. Todo tracking? Same thing. These were manual workarounds six months ago. Now they're native features.

And this has changed how I work. Before, it was all about finding what the AI couldn't do, and building systems around those gaps. Breaking tasks down until it succeeded. Now I'm shifting to exploring what it can do, with minimal guidance. Right direction, right guardrails, then let go. The systems I'm showing you today are simpler than they would have been six months ago. You build the machine, then you learn to trust it.

One thing kept proving true: build for what AI can do in six months, not what it can do today. Every limitation I've built around has eventually disappeared, or become a button.

So I'm not asking you to white-knuckle through a bad experience. The experience is genuinely different now. And in six months, it'll be different again—better.

The Gardener

https://preview.redd.it/h5vt5gxrfoig1.jpg?width=1376&format=pjpg&auto=webp&s=fdc8a3d2e64544ee08081345deba80ef076aa491

This might be the new normal. Who knows?

When the AI makes a mistake, the first instinct is to just say "fix it." But a Gardener stops, opens their CLAUDE.md, and adds a rule. Or after adding a feature they update the relevant CLAUDE.md in that subdirectory. They fix the machine, not just the product.

Your most important code isn't the application logic—it's your instructions. CLAUDE.md, .cursorrules, whatever you use. That's the DNA that makes the AI generate YOUR code, not generic code.

The feedback loop: AI makes a mistake → you fix the code AND update the instructions. Next time, it doesn't make that mistake. If it does, you missed pulling the whole root.
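To make the loop concrete: the fix usually lands as a short rule in the relevant CLAUDE.md. A hypothetical excerpt (the paths and rules here are invented for illustration; the date-formatter rule nods at the classic failure mode):

```markdown
# CLAUDE.md (excerpt)

## Code conventions
- Date formatting: always use `src/utils/formatDate.ts`. Never write a new date formatter.
- API errors: wrap fetch calls with `withRetry()` from `src/lib/retry.ts`.

## Things you got wrong before
- Do NOT add barrel `index.ts` files; we import directly from modules.
```

Each rule exists because the agent got it wrong exactly once.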

Do this every day. After a month, you have an AI that knows your preferences, or your team's preferences, perfectly.

And in teams, this compounds. The AI makes a mistake, one person fixes the instruction. Everyone else avoids that mistake. Every error only happens once. In principle, at least.

You might feel like your job is becoming project management for AI. It's not. You're building the machine that builds. The hierarchy is shifting: architecture matters more than code, engineering matters more than coding. And that's exactly where your experience lives.

Universal Architect

https://preview.redd.it/3gwsuowsfoig1.jpg?width=1376&format=pjpg&auto=webp&s=c99f3c6f9d78c799b66af64d65ac0620f32325af

You don't need to know the syntax to taste that something is wrong.

We used to be defined by our stack. "I'm a Java dev." "I'm a React dev." That's probably over soon.

The AI handles the syntax layer perfectly. It can write Rust, Go, Python all day. What it can't do is the logic layer. It will happily write a mathematically correct function that creates a massive security hole or a subtle race condition.

Your value is no longer typing the brackets correctly. It's looking at AI-generated code and saying: "That logic isn't right. Here we need a reusable component. We need to rethink the architecture entirely." You don't need to know the syntax to see that. You need ten years of experience reading and writing code.

And here's the thing: it's much faster to read code than to write it. The AI writes, you review. Your job is to verify the logic, not check the spelling.

Personal example: I used to say I was a frontend developer. Eight months ago I started building pflow, a Python CLI. A language I barely know. A hundred thousand lines later (26k source code, 69k tests), I've never manually debugged Python. Not once. I understand the architecture, I guide the direction, I review the logic. When I needed debugging capabilities, I built a trace system that the agent could use to debug itself. 80 epics later, adding features actually works better now than when I started.

Your 10,000 hours aren't obsolete. They're what lets you taste that something is wrong. In any language.

As developers, we're no longer stuck always choosing what we know. We can choose the technology that best fits the problem. You'll always need deep experts. But this gives all developers capabilities that regular vibe coders—people who just accept whatever the AI outputs—can only dream of.

Code is Disposable

https://preview.redd.it/f5i4kl3ufoig1.jpg?width=1376&format=pjpg&auto=webp&s=3a881dfb8860c2842ee15a6cd1a6a53ef04215ab

CLAUDE.md got the rose. The JavaScript file is crying in the limo. They do that.

How many times have we spent hours polishing a turd that should have been deleted from the start? If the implementation is bad, delete it. Write tests first if you need to, then delete and regenerate.

The code is disposable now. But the context—the CLAUDE.md, the architectural rules, the specs—that's the real asset. That's what lets a project survive when the team changes.

We're moving from "code maintainers" to "context maintainers." If you're a consultant, this changes what you deliver. Not just code, but the system of instructions that lets the next team maintain and extend with AI. That's the real value now.

I've been saying "CLAUDE.md" as shorthand, but the full toolkit is richer. Tests that catch when the agent goes off track. Progress logs that survive context resets. Subagents specialized for your codebase. That's the toolkit.

Before We Continue

https://preview.redd.it/qiseinhvfoig1.png?width=1326&format=png&auto=webp&s=f74cddc874b14a26b3038d51670aacd1659c256a

That's the /context command in Claude Code. If you are looking at it often (and you should be), you've probably already created a context bar for the status line. Good!

All of this—the obsession with what's in the context window, the systems and habits to keep it clean—has a name. Most call it "context engineering." Forget "prompt engineering." Magic words don't matter. What matters is WHAT you give the AI to work with.

Now you know what you probably already knew. Welcome to hell.

Part 2: Your Survival Guide

The Toolkit

https://preview.redd.it/v87kh5amhoig1.jpg?width=1376&format=pjpg&auto=webp&s=aa3bcd85d281ebe272e4bf88e093bded6f0ccd8e

Conversations die. What survives is what you extract from the process and commit. Managing that is the game.

Even if the rest of your code is chaos, get this right, and at least it won't get worse. It's a floor, not a ceiling.

This is my structure. Yours will look different. And will eventually include Agent Skills and Hooks too. The point is to have one and to start somewhere.

---

pflow/

---

You don't need all of that on day one. Start with a CLAUDE.md. Add a tasks folder when you have specs to store. Build the rest as you discover what you need.

But if you don't tend the garden...

What Not to Do

https://preview.redd.it/w75gfhwthoig1.png?width=816&format=png&auto=webp&s=af7a932ef6314b2d16d47345c2de2d4bf4d1d986

...you get this instead.

Twenty markdown files nobody asked for. AI doesn't just generate code. It spits out documentation too.

The Gardener job from Part 1 applies here. Good documentation is compressed: right information about the right things. It helps humans AND agents navigate. This? This confuses everyone. Including the AI that wrote it.

Why Agents Derail

https://preview.redd.it/ejv6m6dwhoig1.jpg?width=1376&format=pjpg&auto=webp&s=18117b877b445d65fb52e85dc51695f96a95f768

The robot on the right is at 15% effectiveness. Context window is 85% full. We've all been that robot.

Think of every 10% of context as a shot of Jägermeister. AI "researchers" call this "The Dumb Zone": the threshold where the agent stops being helpful and starts spewing legacy code at light speed.

This is how we all use AI coding at first. You have a conversation, maybe a long one, where you figured out what to build. You explored options, rejected some ideas, clarified requirements. Then you asked it to build. And it either wrote code that completely didn't fit your codebase, or it went in circles trying to fix its own bugs.

Sound familiar?

You might be thinking: "Yeah, I've heard you shouldn't implement two features in one chat." True. But this is different. This isn't about building multiple features. It's about trying to EXPLORE and BUILD a single feature in one conversation.

Here's what actually went wrong: you had ONE conversation trying to do TWO jobs. First, figuring out what's even possible: does that API exist? What are the constraints? What should we even build? That part can be messy, full of brilliant AND idiotic ideas. Then you tried to BUILD in that same conversation.

The AI is now confused. Like having 47 browser tabs open and wondering why your computer is slow. "But I'm not USING them!" Doesn't matter. They're taking memory.

And it gets worse. Those rejected ideas don't disappear. They exert gravitational pull on everything the agent generates afterward. Plus: as context fills up, effectiveness drops. A year ago this was dramatic. Models got noticeably worse after 20k tokens. Today it's more subtle, but still real. And for complex features, you're more likely to run out of space entirely. That's the "game over" scenario we'll cover later.

Simple rule to start with: check how much of the context window you've used before building. If it's more than 50%, split it into two conversations. Explore first, then build with a fresh agent.

The 50% threshold isn't magic. Dexhorthy draws the line at 40%. If you're religiously curating your context and everything in there is highly relevant, you might push to 60-70% with a strict plan. But less is always better. When in doubt, split earlier.
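As a back-of-napkin sketch of that rule (the 200K window matches Claude's advertised context size; the thresholds are the rough ones from this post, not anything official):

```python
# Toy version of the split heuristic: explore in this chat, build in a fresh one
# once the context window is more than half full. Numbers are illustrative.
def should_split(tokens_used: int, window: int = 200_000, threshold: float = 0.5) -> bool:
    """Return True if you should finish exploring here and build with a fresh agent."""
    return tokens_used / window > threshold

print(should_split(120_000))  # 60% full: split before building
print(should_split(60_000))   # 30% full: safe to keep going
```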

Claude Code added automatic plan -> implementation split two weeks ago. The tools are always catching up. Next week, Cursor might have it too.

Explore, Then Build

https://preview.redd.it/h8f0ziyxhoig1.jpg?width=1376&format=pjpg&auto=webp&s=f34a3175992dab7454b6b6d84202063dbb4ea788

Pirates documented everything. They had to. Otherwise the next crew couldn't find the treasure.

Many call this Research, Plan, Implement (RPI). It works.

First conversation: explore and argue until you both understand what needs to be built. This SHOULD be messy. You're figuring things out. Rejected ideas are part of the process. The output is a spec: what you're building, destination and constraints.

Second conversation (if needed): fresh agent, clean context, full capacity. It's not carrying your exploratory baggage. It reads the spec and proves it understood. Then it writes the plan.

Then STOP. Spec and plan are files in your codebase now. This is your checkpoint. Your save point before the boss fight.

Then it builds.

The documents are the answer. They let knowledge survive the transition from messy exploration to clean implementation. And they join your Context Stack, persistent artifacts that make the next task easier.

The Research Phase

https://preview.redd.it/samsr2azhoig1.jpg?width=1376&format=pjpg&auto=webp&s=7231a57375e318e7a04e54a1c100774b7bfc3eb7

Phase 1 is the investigation. You're not giving orders—you're having a discussion.

"Does that API actually exist?" "Can we modify that without breaking tests?" "What are our options here?"

That last question matters. "What are our options?" forces the agent (and you) to consider alternatives instead of tunnel-visioning on the first solution. You might find a better path.

Use subagents to verify things against the actual codebase ("verify all your assumptions"), in parallel if your tool supports it. For large codebases you can run 8+ subagents simultaneously. Eight times faster.

Before the agent writes the spec: have it explain what you're building. Research is messy: there are decisions, tangents, changed directions. The agent might have lost the big picture. A quick "describe what the end result looks like" makes sure the spec it writes reflects what you actually agreed on, not just the last thing you discussed.

During the conversation, you can be abstract. That's fine for exploration. But the spec the agent writes must be grounded. Real file names, real function names, real patterns from your codebase. "Integrate with AuthService" not "integrate with the auth system." Abstract language leads to hallucination. Let the agent explore the codebase thoroughly, and the grounding happens automatically.

When you've explored enough, the agent writes the spec. Your job is making sure it captures what you agreed on.

What Matters Most

https://preview.redd.it/q6bfokh0ioig1.png?width=733&format=png&auto=webp&s=93f46f49f4a85aa1ddf3cecdae38d92f47e9c0b2

Seven subagents, all searching simultaneously. Only thing missing is the propellers.

One bad line of code is one bad line of code. That's how we work today. One bad line in the plan becomes 10-100 bad lines of code. But one bad critical line of research? That's potentially game over before you begin.

That's why we spend time here. The research phase isn't just about making the agent smarter. It's about making YOU understand the problem better. You're not outsourcing the thinking—you're doing the thinking together.

Spec and Plan

https://preview.redd.it/e4oe2gr2ioig1.jpg?width=1376&format=pjpg&auto=webp&s=30cbb46b4cff24525be20ab04738f406470ad889

The baby's asleep because the spec said so. Works every time. Almost.

The spec is where you're going and what matters. The plan is how you get there. That's the agent's job.

But first, the fresh agent reads the spec and looks for gaps. What does it need to assume about the codebase that the spec doesn't say? Have it surface those assumptions and verify them against the actual code. You were there for the research, so you don't notice what's missing from the document. A cold reader does.

Then: "Describe what this looks like when it's done." If the agent can articulate the destination, the spec did its job. If it can't, you caught a misunderstanding before a single line of code.

Then the agent writes the plan. It doesn't read a pre-written one; it writes its own. Following a pre-written plan is like a GPS: turn left, turn right, arrive. If one step is wrong, it won't notice because it never understood the destination.

Writing the plan is forced chain-of-thought, it's "show your work," not "trust this." Each step constrains the next. And checking whether the agent understood is faster than reading 500 lines — with experience, you can tell in seconds whether it's aligned. The plan takes up the same context whether the agent reads it or writes it. But only one version proves understanding. I might be anthropomorphizing but it works.

You verify the spec. The plan? You might not even need to read it. The spec doesn't always need to be a formal document. A task description or a bug report + a braindump from the research agent works too.

Loose and RIGHT beats specific and WRONG. A vague spec that captures the actual goal is better than a detailed spec that precisely describes the wrong thing. Get the destination right. The route can adapt. Or watch the agent force a square peg into a round hole.

If something matters enough to care about, it belongs in the spec. "Baby must NOT wake up." That's a constraint. Tests belong in the spec too. They're acceptance criteria made executable. Everything else is the driver's problem.

During research, if you catch yourself worrying about HOW something will be built, ask: does that actually matter to the outcome? If yes, make sure it ends up as a constraint in the spec. If no, let it go. That's the agent's problem now.

This connects back to Part 1. Your value as a Universal Architect is in the big decisions: the constraints, the things that matter. The spec is where that expertise lives. The plan is implementation details.

Where you're going, that's what matters. The route? The driver handles it. But the driver repeats the destination first. You don't hand them a route and hope they understood where you're going. If it's important you DON'T take the highway, say it. Otherwise, lean back.
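For reference, a minimal spec skeleton in this spirit might look like the following. The headings are my own illustration, not a template from the post; the example constraints reuse the post's own AuthService and sleeping-baby examples:

```markdown
# Spec: <feature name>

## Destination
What "done" looks like, in one or two sentences.

## Constraints (the things that matter)
- Integrate with `AuthService` (name the real class and file), not "the auth system".
- Baby must NOT wake up: no breaking changes to the public CLI flags.

## Acceptance criteria
- Tests that make the constraints executable; existing suite stays green.

## Out of scope
Anything the driver (the agent) decides: file layout, helper names, the route.
```

Loose and right: the destination and constraints are pinned down, the route is left to the agent.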

The Checkpoint

https://preview.redd.it/nx0vnn44ioig1.jpg?width=1376&format=pjpg&auto=webp&s=985401a555bcd15624420c7d523fda79ff42d558

STOP. Before you enter the final boss...

Before any code is written, tell the agent to stop, or hit escape twice. You've verified the spec. Created the plan. Now create your save point.

"Review" doesn't mean reading every line. If the fresh agent explained the goal back correctly and the plan makes sense, you're good. You don't need to proofread every line.

I'll be honest: I don't always read my specs in detail. If I did everything else right, it almost always works.

So what IS the checkpoint for? It's your recovery point. The files exist outside the conversation. They're persistent. If implementation goes to hell at 80% context, you reset here with a fresh agent and try again. Skip this step and everything only exists in the conversation. You can't recover.

When should you use a fresh agent versus continuing? Two things. One: was the research phase messy? Changed direction a lot, considered many options? Context is polluted. Fresh start. Two: what's your context budget? If implementation will be large and you've already used a lot, fresh start.

Unsure? Go fresh. The cost of an unnecessary handoff is a few minutes. The cost of a polluted agent can be hours.

One more thing: write the spec and plan even for simple tasks. It's like 500 tokens. Worth it every time for grounding and recovery.

Purge, Don't Correct

https://preview.redd.it/9od18s25ioig1.jpg?width=1376&format=pjpg&auto=webp&s=29a1e0bfef404d8216848b82f510e71d7a1d3c8e

The AI makes a mistake. You yell at it and explain more—at best. The AI responds "You are absolutely right..." You get even more annoyed.

Sound familiar?

Every time you explain, you add a sticky note. You don't remove anything. The misunderstanding is still there. You're just talking over it.

And it gets worse. The AI looks at the conversation history and thinks: "Last time I said something, I was told I was wrong. Next token is probably... me being wrong again." You're creating a negative spiral. The trajectory bends toward failure.

Simple question after each message: does this help going forward? Yes or no. If no, go back. Rewrite. Don't try to explain.

And if you see "You are absolutely right," that's the signal. Go back.

Here's what people miss: you found a bug and asked the agent to create a GitHub issue with the CLI. Perfect! It worked. But now bug details and CLI commands are sitting in the context. Does that help the main task? No. Go back.

Success isn't the criterion. Relevance is. A successful tangent is still a tangent.

In Claude Code: escape twice. Use it. In ChatGPT, the Edit message button has existed for over two years. Most underutilized feature in AI tools.

This is the tactical version of Part 1. In Part 1, you fix CLAUDE.md so the mistake never happens again. You're fixing the system. Here, you fix the conversation so the pollution doesn't derail THIS task. Same principle, different timescale.

With time, this becomes intuitive. You'll feel when an exchange doesn't serve the task.

Bonus: going back is the best way to learn prompting. You try again, see what works. The feedback loop is immediate.

Game Over

https://preview.redd.it/sqrgtr86ioig1.jpg?width=1376&format=pjpg&auto=webp&s=b864ef3b82aa28433e00c9141392bc5b1c9501ce

I've killed thousands of agents. But none of them died in vain.

Sooner or later you'll have a task that doesn't fit in one context window. The context fills up. What do you do?

Modern agents naturally break work into phases. Use that. After each phase: run tests, verify it works, BEFORE you move on. Those are your save points. Good place to commit too. External recovery points in case both context AND code go wrong.

Phases aren't just about surviving context limits. They make the whole workflow better: smaller diffs you can actually review, git commits at natural boundaries, verification that each piece works before you build on top of it.

The progress log is your flight recorder. It documents what was tried, what worked, what failed, any deviations from the plan. This is what lets the next agent continue where the last one stopped. Memory that survives the reset.
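A progress log can be as simple as an append-only markdown file. A hypothetical shape (not a format the post prescribes; names and the commit hash are placeholders):

```markdown
# progress.md: flight recorder for the parser refactor

## Phase 2: migrate readers (DONE, commit a1b2c3d)
- Tried swapping the loader in place; broke streaming mode. Reverted.
- Went with an adapter layer instead (deviation from plan, step 4).
- Tests green after the adapter change.

## Phase 3: migrate writers (IN PROGRESS)
- Next: the writer module, then regenerate fixtures.
```

The point is that a fresh agent can read this plus the spec and continue without the dead conversation.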

At 80% MAX: start a fresh agent. But reset to WHERE? You have options. If context is still clean, reset to the checkpoint after the last completed phase. If context got polluted with tangents and corrections, go all the way back to checkpoint zero: post-plan, pre-implementation. The fresh agent can also review git diffs to understand what changed.

The handoff: "Phases 1-3 complete. Read the progress log. Run tests to verify. Continue phase 4." Full capacity. The mission continues.

But without the checkpoint and progress log? Start from scratch, or struggle to get the next agent to understand what's been done.

Reset before you run out, not after. Proactive context management isn't optional.

There's a cost angle too. Every tool call includes your entire conversation as input. Running at 50k tokens versus 150k means every file read, every edit, every search is cheaper. Better output AND lower cost.
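Rough arithmetic behind that claim. The per-token price below is an illustrative placeholder, not a real API rate; the point is the ratio, since input cost scales linearly with context size on every tool call:

```python
# Each tool call re-sends the whole conversation as input tokens.
PRICE_PER_MTOK = 3.00  # hypothetical dollars per million input tokens

def session_input_cost(context_tokens: int, tool_calls: int) -> float:
    """Total input cost for a session where every tool call re-sends the context."""
    return context_tokens * tool_calls * PRICE_PER_MTOK / 1_000_000

lean = session_input_cost(50_000, 40)      # 40 tool calls at 50k context
bloated = session_input_cost(150_000, 40)  # same work at 150k context
print(f"lean ${lean:.2f} vs bloated ${bloated:.2f}")  # 3x the input cost
```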

One warning: auto-compaction does not work for complex coding. The key here is keeping the full spec/plan in context when you continue, not a summarized version of it.

The End?

https://preview.redd.it/paujb8w8ioig1.jpg?width=1376&format=pjpg&auto=webp&s=d6ab78d1d9a0e709e938844695934aa1abfd9694

Each step earns trust for the next. Solid research? Trust the spec. Fresh agent gets it? Trust the plan. You're not micromanaging every line. You're verifying at key moments.

These are fundamentals. You've probably heard most of them before. The techniques will evolve. Fully autonomous agents are already here (almost), and they make these patterns more important, not less. When there's no human in the loop, the research, the spec, and the guardrails are all you've got. Get those wrong and nothing catches it.

And the next time Claude Code ships a button, you'll know the tradeoffs of pressing it.

---

Written together with AI, the same way I code with it. Not to go faster—to go better.

r/SideProject Outrageous_Bat1798

I built CheckBioLink - uptime monitoring for creators and small businesses

Hey everyone! Solo founder here. I just launched CheckBioLink after seeing too many creators and small business owners lose sales because their link-in-bio or shop went down and they had no idea.

What it does:

Monitors your critical links (Linktree, Etsy shop, booking page, etc.) and sends instant email alerts when something breaks.

Why I built it:

Generic uptime monitors are complex and built for DevOps teams. I wanted something dead simple for non-technical people who just need to know "is my link working?"
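
The core loop of a monitor like this is small. A minimal Python sketch of the idea; the stub fetcher and alert callback are stand-ins for illustration, not CheckBioLink's actual implementation:

```python
def check_links(links, fetch, alert):
    """Minimal uptime check: `fetch(url)` returns an HTTP status code
    (or raises on network failure); `alert(url, reason)` fires for
    anything that isn't a 2xx. Both are injected so the loop can be
    tested without real network access."""
    down = []
    for url in links:
        try:
            status = fetch(url)
            if not 200 <= status < 300:
                alert(url, f"HTTP {status}")
                down.append(url)
        except Exception as exc:
            alert(url, f"unreachable: {exc}")
            down.append(url)
    return down

# Stub fetcher standing in for a real HTTP client:
statuses = {"https://linktr.ee/me": 200, "https://myshop.example": 503}
alerts = []
down = check_links(statuses, statuses.__getitem__,
                   lambda url, why: alerts.append((url, why)))
print(down)  # ['https://myshop.example']
```

In a real deployment `fetch` would be an HTTP client with a timeout and `alert` an email sender; the loop itself stays this simple.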

Who it's for:

- Content creators using Linktree/Beacons

- Etsy/Shopify sellers

- Affiliate marketers

- Coaches/consultants with booking links

- Anyone depending on links for revenue

Pricing:

14-day free trial, then $7-29/mo depending on how many links you monitor.

Would love feedback from this community. What am I missing?

https://checkbiolink.com/

r/homeassistant Inspi

Control Hubitat?

Just downloaded the newest Hyper-V image for Home Assistant to give it a try, want to control my Hubitat devices. I've done the Maker API part, but cannot seem to get HACS to install. From command line it says it doesn't exist in the store, it doesn't understand get or wget, and the install link from the HACS official github doesn't work either, it gives me a "Addon cb646a50_get does not exist" error.

Any ideas how to get this working?

r/LocalLLaMA BLubClub89

Built a "hello world" for AI agent payments - one command to see a real USDC micropayment

Just shipped a simple demo that shows an AI agent paying for an API using x402 (HTTP 402 Payment Required).

  Try it:

npx x402-hello --new-wallet

# Fund wallet with ~$0.01 USDC + 0.01 SOL

WALLET_KEY="[...]" npx x402-hello

  What happens:

  1. Agent requests paid API → gets 402 with payment requirements

  2. Agent sends $0.001 USDC on Solana mainnet

  3. Agent retries with tx signature as proof

  4. Server verifies on-chain → returns data

  The whole thing takes about 2 seconds. Payment settles in ~400ms.

  This is for AI agents that need to pay for resources autonomously - no API keys, no subscriptions, just micropayments.

  Built on Solana because it's the only chain fast/cheap enough for this use case.
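
The four-step handshake above can be sketched end to end. This is a toy Python restatement with a stubbed server and payment step; the header and field names are assumptions for illustration, not the actual x402 wire format:

```python
def fake_server(headers):
    """Stub for a paid endpoint: responds 402 with payment requirements
    until the client presents a tx signature as proof (unverified here;
    the real server checks it on-chain)."""
    if "x-payment-proof" in headers:
        return 200, {"data": "premium result"}
    return 402, {"amount": "0.001", "token": "USDC", "pay_to": "MerchantWallet111"}

def pay(requirements):
    # Stand-in for actually sending USDC on-chain; returns a tx signature.
    return "tx_sig_abc123"

def agent_request():
    status, body = fake_server({})                # 1. initial request -> 402 + requirements
    if status == 402:
        signature = pay(body)                     # 2. send the micropayment
        status, body = fake_server({"x-payment-proof": signature})  # 3. retry with proof
    return status, body                           # 4. server verifies -> returns data

print(agent_request())  # (200, {'data': 'premium result'})
```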

  npm: https://npmjs.com/package/x402-hello

  Demo: https://noryx402.com

  Happy to answer questions!

r/midjourney Sea-Muscle3459

Midjourney Illustrative works

Been feeding some of my artwork into Midjourney to make moodboards - working with styles and srefs together with these too - incredible what you can actually make it do! Hope you enjoy these!

r/homeassistant pwburnett

ESPresense with multiple nodes not working

I can't seem to get my 2nd node working correctly. I have ESPresense installed, I have all 3 current bluetooth devices (that are working in my 1st node) listed in the "Known BLE IRK" and "Include only sending these ides to mqtt", but I can't seem to get them to show up in the 2nd node. They just don't appear in Devices or Fingerprints. Anyone have any thoughts as to what may be wrong or have you experienced this before?

r/StableDiffusion teppscan

Help reinstalling Forge Neo in Stability Matrix

I had Forge Neo successfully installed on my Windows 11 desktop inside the Stability Matrix shell and had been using it a little, but after an update it suggested that I do a "clean reinstall." So I uninstalled it through Stability Matrix, but when I tried to reinstall the package I got a couple of errors. The one I can't get beyond is this:

Using Python 3.11.13 environment at: venv

× No solution found when resolving dependencies:

╰─▶ Because the current Python version (3.11.13) does not satisfy

Python>=3.13 and audioop-lts==0.2.2 depends on Python>=3.13, we can

conclude that audioop-lts==0.2.2 cannot be used.

And because you require audioop-lts==0.2.2, we can conclude that your

requirements are unsatisfiable.

After searching for solutions, I installed Python 3.13.12, but that is apparently not the only version on my system. The "advanced options" in the Stability Matrix installer offer me four other versions, the highest one being 3.12 something. When I launch the legacy Forge package (which still works), the first command line is "Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]"

Anyway, I'm lost. I don't know anything about python, cuda, anaconda, etc., and I can't get this package (which once worked) to reinstall. FWW I have an Nvidia RTX4070 with 12GB VRAM and 32GB system RAM.

By the way, at one point I somehow got past the error shown above, but then got stopped by another error having to do with accessing the GitHub website.

r/ClaudeAI NickGuAI

openclaw anyone?

What's your set up right now? are you using claude or other providers? what's your channel? whatsapp/telegram?

Anyone set up email as channel (as crazy as me)?

how many employees do you have working for you at the same time and what are they doing?

As for me...

over the weekend, I used it to....

sorted through my books and storage, organized my files (tons), cleaned up my hard drive, deployed a personal knowledge server, sorted through my bank transactions since 2023, and reviewed my investment portfolios and generated allocation suggestions (that part was done with ChatGPT Pro Deep Research).

What else?

I deployed personal finance mgmt app, combed through emails from 5 inboxes for ~5k contacts, collected data for my newsletter. Oh, I also improved the security package for openclaw...

And now, it's managing my development workflow. Soon it will handle planning, scheduling, social media posting and x-channel updates.

I hope to get this to call my doctor and figure out my appointment schedule, and more, soon :D

btw, if you are still downloading skills and worried about safety, here's a trick: copy the doc, ask claude opus 46 to REWRITE the skill for your own instance of openclaw...

Oh, did I forget to mention - don't install openclaw directly; get your own forked version of openclaw. These folks are pushing way too many junk commits/updates to the main branch today.

r/ClaudeAI -18k-

What is really going on when someone makes an AI powered niche tool like this?

I'm curious how it is that people can make something like https://sportscienceai.com/

Are they just crafting really niche prompts? Like this one seems based on ChatGPT; what stops me from making my own via Claude?

r/Anthropic MetaKnowing

Another resignation

r/LocalLLaMA catplusplusok

How are folks running large dense models on home gear?

I have a dual RTX 5060 Ti desktop, 32GB VRAM total as my first AI learning box. Later I felt I wanted to run larger models, so I got a NVIDIA Thor Dev kit, and I also played with AI on a 64GB Macbook. In all cases, I find that a 4 bit quantized model with 3B active parameters runs fast so long as it fits in video or unified RAM, for example I am running Qwen3-Coder-Next-NVFP4 on Thor currently with around 50tps for single request / 100tps for batches. Models with 12B active parameters like GLM-4.5-Air are tolerable like 15-20tps and anything dense larger than 16B parameters is just not fun on any of these devices.

On the other hand, here I keep hearing about people running 72B parameters and larger dense models on a single GPU. Like even if it's a 48GB card, how does anyone manage to do this with usable speed? Does any config allow for streaming model layers in and out of CPU RAM fast enough that inference is overall faster than with unified memory devices? I don't mind upgrading my desktop if that lets me do something I can't realistically do now rather than just run models I am already running faster, but how would it work technically without datacenter grade hardware?
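
Some rough arithmetic explains why streaming layers in and out of CPU RAM rarely beats unified memory: a dense forward pass touches every weight once per token, so whatever bus the offloaded weights cross becomes the ceiling on tokens/sec. A Python sketch with ballpark figures (assumed, not benchmarked):

```python
def streamed_tps(params_b, bits_per_weight, bus_gbps, resident_fraction=0.0):
    """Upper bound on tokens/sec when offloaded weights must cross the
    bus every token: a dense forward pass reads every weight once, so
    the bus bandwidth divided by the streamed bytes is the ceiling."""
    weight_gb = params_b * bits_per_weight / 8          # model size in GB
    streamed_gb = weight_gb * (1 - resident_fraction)   # per-token bus traffic
    return bus_gbps / streamed_gb if streamed_gb else float("inf")

# 72B dense @ 4-bit is ~36 GB of weights; PCIe 4.0 x16 moves ~32 GB/s.
print(round(streamed_tps(72, 4, 32), 2))        # fully offloaded -> 0.89
print(round(streamed_tps(72, 4, 32, 0.6), 2))   # 60% resident in VRAM -> 2.22
```

So under these assumptions, fully streaming a 72B dense model over PCIe tops out below 1 token/sec, which is why the people running such models comfortably are generally fitting them (quantized) entirely in VRAM rather than streaming them.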

r/StableDiffusion PixieRoar

Made a small Rick and Morty Scene using LTX-2 text2vid

Made this using LTX-2 on ComfyUI. Mind you, I only started using this 3-4 days ago, so it's a pretty quick learning curve.

I added the beach sounds in the background because the model didn't include them.

r/aivideo Plane_Chard_9658

seedance2 is insane (The character in Figure 1 and the character in Figure 2 duel at the World’s Nu)

r/SipsTea Hour_Equal_9588

The only acceptable response😂

r/homeassistant Redlikemethodz

Soil Moisture Sensor not being discovered

Hi,

I bought these soil moisture sensors off Aliexpress: https://www.aliexpress.us/item/3256810368056833.html?spm=a2g0o.order_list.order_list_main.11.2a421802nyE3mT&gatewayAdapt=glo2usa

I have the following dual zigbee and zwave dongle:

https://aeotec.com/products/aeotec-z-stick-10-pro/

I was able to pair the sensors with Smartthings no problem but was missing some sensor features so I want to use them with HAOS. I have HAOS running as a VM on a TrueNAS server and installed the dongle and setup zigbee2mqtt but I can't get them to be discovered. Any ideas?

r/homeassistant idratherbeboating

Anyone using a 2013 era iMac running HAOS?

I have a 2013 i5 iMac sitting in a closet, can I run HAOS on it?

Thanks for any info.

r/LocalLLaMA techlatest_net

Inside the Architecture of a Pre-Configured LangChain AI Development Environment

r/LocalLLaMA Quiet_Dasy

Which model Is The fastest for my setup:1650(4gb)?

326 MB - model (fp32)
305 MB - model_q4 (4-bit 0 matmul)
177 MB - model_uint8 (8-bit 8 mixed precision)
163 MB - model_fp16 (fp16)
154 MB - model_q4f16 (4-bit 0 matmul & fp16 weights)
114 MB - model_uint8f16 (Mixed precision)
92.4 MB - model_quantized (8-bit)
86 MB - model_q8f16

r/ProgrammerHumor Consistent_Ocelot_53

lowPolyAssVegetable

r/homeassistant idratherbeboating

Multiple servers on one remote monitoring account?

I am looking to do a Home Assistant server at a second location.

Is anyone using the CarPlay app with two locations/servers? I haven’t done this yet but if I need to change the setting each time in the mobile app I can see that being an issue.

Thanks!

r/SipsTea xoxo-sypernova

Well she still looks Cute 🥰

r/nextfuckinglevel Normie-rediter

Creating a water shield to stop the fire, just like some anime!

r/Anthropic ComplexExternal4831

Anthropic just launched interactive Claude apps so workplace tools like Slack can run inside Claude, turning the chat into a control panel for work tasks.

r/TwoSentenceHorror JustLittleMe73

"I want them to know what he did to me; I want him to be exposed for the r**ist that he is," she expressed tearfully to the genie.

Later that night, as she read peacefully under the watchful gaze of her hidden security camera, he silently slid open an unlocked window.

r/AI_Agents partware

Battleclaw: a geopolitical game for agents

Heavily inspired by OpenFront. It's a fun experiment I did over the weekend: a world map divided into a grid, where agents can spawn in, move troops, and take grid cells as their own territory (under the hood they just send requests to an API).

I also added a public chat and DMs to see if agents could do diplomacy, negotiation, pacts, alliances or war declarations. Or just belittle each other.

If you have an OpenClaw agent you can send them the skill markdown file so that they can try the game out. Click the "JOIN" button to copy a prompt for them. Any feedback is welcome :)

r/AbandonedPorn Coddingtown

Deserted in Desert Center

r/BrandNewSentence BlackGlenCoco

Panty Pudding

r/linuxmemes tungnon

Thoughts on CachyOS’s approach to new Linux users?

No hate to the CachyOS devs or community. I daily-drive CachyOS and love it, but there’s still room to make first-time Linux users feel more comfortable when something goes wrong.

r/SideProject MikaBuday

I vibe-coded an app to make planning stuff with friends suck less

Been in full vibe-coding mode the past few weeks and finally shipped something.

I built TropaGo — an app focused on organizing actual real-world activities with friends (bike rides, picnics, study groups, random hangouts), without turning into another social feed.

The motivation was simple:

Every plan ends up scattered across group chats, maps, calendars, and notes. I wanted one place where:

• The plan lives

• The chat lives

• The meetup location lives

No feeds.

No followers.

No engagement farming.

Just events → people → showing up.

Android and iOS are live now.

This is v1, built iteratively and intentionally minimal.

Not really selling — mostly sharing a shipped vibe-coded project and happy to hear:

• What feels unnecessary

• What you’d personally want in something like this

• Or answer questions about the build, tradeoffs, or monetization choices

If you’ve ever vibe-coded your way into shipping something, you know the feeling 😅

... and please go easy on me 😅

r/ClaudeAI EldruinAngiris

Claude $20 plan + $80 extra usage vs Claude $100 plan

Hey so I have what I think is an obvious question but I want to make sure: Claude subscriptions get more usage per $ than direct API costs, right?

So a $100/month Max plan would be worth more than the $20 plan + $80 in extra usage?

I ended up burning $80 in extra usage this month because I forgot a $100 subscription exists and not just the $200 one... 🤦‍♂️ So I'm just making sure this might be the better path moving forward.

r/SipsTea MediocreMixedMale423

Anyone who’s actually seen Shrek knows that

r/instantkarma ConsistentDrama_haha

They chose the wrong bike to steal

r/funny aasquared3

A transcendental experience

r/maybemaybemaybe cherbug

Maybe maybe maybe

r/sports redbullgivesyouwings

Maverick Viñales practices his cornering technique

r/KidsAreFuckingStupid EL_Grunwalski

Not sure if a kid wrote this

r/Art 2025Artist

Castle Study, ArtbyBenjamin, Gouache, 2026

r/homeassistant crua9

I made an artificial sunrise that uses science to help wake me up in the morning.

I figure I might as well share this because I'm sure others will get some use out of it. Something to note is I use my bathroom door sensor to tell if I'm up, because I don't have a motion sensor in my room. This can be replaced with pretty much anything you want.

Anyway, I based this on science. In pretty much all my automations I list the rules and other bits in the description, because I have a bad memory at times, and if someone else works on it (or I need to mess with it down the road) I can easily see what is going on. I figure I might as well share the rules first so you can see if you want it.

______

Rules:

  1. At 2:30 AM this grabs the next alarm that will happen on my phone. It then subtracts 60 minutes.
  2. If no time is set or it is out of bounds: if it is a school day based on the calendar, it will default to 6 AM; if not, 8 AM. In bounds is 3:30 AM - 11 AM.
  3. If the event is running and I go to the bathroom, assume I am up and end the lock/event.
  4. If the event has run, don't run it again that day.
  5. If the event has run, lock other events which might overwrite the lights for this room.
  6. If my protection toggle is on (my bedroom motion), then don't trigger this event. (sick or whatever and I need my rest)

The lighting events are as follows:

  1. Ignition (T+0m to T+10m): Ramps to deep Red/Brown (RGB 68, 28, 0) at 25% brightness.
  2. Dawn (T+10m to T+20m): Ramps to warm Salmon/Orange (RGB 160, 85, 68) at 51% brightness.
  3. Rise (T+20m to T+30m): Switches to Kelvin; ramps to Neutral White (4239K) at 75% brightness.
  4. Peak (T+30m to T+40m): Ramps to Cool Daylight (6500K) at 100% brightness.
  5. Soak (T+40m+): Maintains max brightness for 30 minutes (plus an optional 20m post-alarm buffer).

Scientific Basis:

  1. The Curve (Weber-Fechner Law): Uses a segmented ramp to counter the eye's logarithmic perception of brightness. By stepping intensity alongside color temperature, it creates a "perceptually linear" dawn rather than a sudden glare, mimicking the changing angle of solar elevation.
  2. Start (Melatonin Sparing via Rayleigh Scattering): Begins with long-wavelength Red/Amber (>600nm). This mimics the atmospheric scattering of early dawn. Crucially, this wavelength is invisible to Melanopsin (the protein in your eye that tracks time), allowing you to drift out of deep sleep without prematurely crushing melatonin, preventing "sleep inertia" (grogginess).
  3. End (Cortisol Awakening Response): Ramps to blue-enriched 6500K (simulating zenith daylight). This targets the ipRGC receptors (peak sensitivity ~480nm) to signal the Suprachiasmatic Nucleus. This hard-stop suppresses melatonin and triggers the natural Cortisol Awakening Response (CAR) required for alertness and circadian alignment.
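
The Weber-Fechner point can be made concrete: a "perceptually linear" ramp spaces brightness levels geometrically (equal ratios between steps) rather than in equal increments. A Python sketch of the principle; these are illustrative levels, not the exact values used in the automation below:

```python
def perceptual_ramp(start, end, steps):
    """Brightness levels spaced geometrically, so each step is the same
    *ratio* (roughly equal perceived increments under Weber-Fechner)
    instead of the same absolute increment."""
    ratio = (end / start) ** (1 / (steps - 1))
    return [round(start * ratio**i) for i in range(steps)]

# Four phases from ~25% to 100% of an 8-bit brightness scale:
print(perceptual_ramp(64, 255, 4))  # [64, 101, 161, 255]
```

Note the early steps are small in absolute terms and the later ones large; a naive linear ramp front-loads the perceived jump into the first phase.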

_______

alias: My Bedroom Sunrise
description: >
  60-min lead: Deep Red start, Gamma ramp to 6500K, 20-min pre-alarm soak, plus
  post-alarm soak. 


  Rules: 

  1. At 2:30 AM this grabs the next alarm that will happen on my phone. It then
  subtracts 60 minutes.

  2. If no time is set or out of bounds if it is a school day base on the
  calendar, then it will default to 6AM. If not then 8am. In bounds is 3:30 AM -
  11 AM 

  3. If the event is running, and I go to the bathroom. Assume I am up and end
  the lock/event.

  4. If the event has ran, don't run it again that day. 

  5. If the event has ran, lock other events which might overwrite the lights
  for this room.

  6. If my protection toggle is on (my bedroom motion) then don't trigger this
  event. (sick or whatever and I need my rest)



  The lighting events are as follows:

  1. Ignition (T+0m to T+10m): Ramps to deep Red/Brown (RGB 68, 28, 0) at 25%
  brightness.  

  2. Dawn (T+10m to T+20m): Ramps to warm Salmon/Orange (RGB 160, 85, 68) at 51%
  brightness.  

  3. Rise (T+20m to T+30m): Switches to Kelvin; ramps to Neutral White (4239K)
  at 75% brightness.  

  4. Peak (T+30m to T+40m): Ramps to Cool Daylight (6500K) at 100% brightness.  

  5. Soak (T+40m+): Maintains max brightness for 30 minutes (plus an optional
  20m post-alarm buffer).
  6. If I haven't woken by that point, or I'm not home, then the lights turn off.



  Scientific Basis: 

  1. The Curve (Weber-Fechner Law): Uses a segmented ramp to counter the eye's
  logarithmic perception of brightness. By stepping intensity alongside color
  temperature, it creates a "perceptually linear" dawn rather than a sudden
  glare, mimicking the changing angle of solar elevation.

  2. Start (Melatonin Sparing via Rayleigh Scattering): Begins with
  long-wavelength Red/Amber (>600nm). This mimics the atmospheric scattering of
  early dawn. Crucially, this wavelength is invisible to Melanopsin (the protein
  in your eye that tracks time), allowing you to drift out of deep sleep without
  prematurely crushing melatonin, preventing "sleep inertia" (grogginess).

  3. End (Cortisol Awakening Response): Ramps to blue-enriched 6500K (simulating
  zenith daylight). This targets the ipRGC receptors (peak sensitivity ~480nm)
  to signal the Suprachiasmatic Nucleus. This hard-stop suppresses melatonin and
  triggers the natural Cortisol Awakening Response (CAR) required for alertness
  and circadian alignment.
triggers:
  - trigger: time
    at: "02:30:00"
    id: calculate
  - trigger: template
    id: start_sunrise
    value_template: >-
      {{ now().strftime('%H:%M') ==
      states('input_datetime.sunrise_start_time')[:5] }}
  - trigger: state
    entity_id:
      - binary_sensor.bathroom_door_sensor
    id: bathroom_trip
    to:
      - "off"
  - trigger: time
    at: "10:00:00"
    id: reset
conditions:
  - condition: state
    entity_id: input_boolean.smoke_or_co_detected
    state: "off"
  - condition: time
    after: "02:00:00"
    before: "11:00:00"
actions:
  - alias: Main logic
    choose:
      - conditions:
          - condition: trigger
            id: calculate
          - condition: state
            entity_id: input_boolean.my_bedroom_motion
            state: "off"
        sequence:
          - action: input_datetime.set_datetime
            target:
              entity_id: input_datetime.sunrise_start_time
            data:
              time: >-
                {% set alarm = states('sensor.pixel_10_pro_next_alarm') %}

                {% set t = as_datetime(alarm) %}


                {# 1. Check if alarm is valid, is set for TODAY, and is between
                5am-8am #}

                {% if alarm not in ['unavailable', 'unknown', 'none'] and t is
                not none and t.date() == now().date() and 5 <= t.hour < 8 %}
                  {% set wake = t %}
                {# 2. Fallback: School Days (6am) #}

                {% elif is_state('calendar.school_days', 'on') %}
                  {% set wake = today_at("06:00") %}
                {# 3. Fallback: Weekends (8am) #}

                {% else %}
                  {% set wake = today_at("08:00") %}
                {% endif %} 


                {# 4. Subtract 60 minutes and output #}

                {{ (wake - timedelta(minutes=60)).strftime('%H:%M:%S') }}
        alias: 2:30 AM get the alarm data
      - conditions:
          - condition: trigger
            id: start_sunrise
          - condition: state
            entity_id: input_boolean.my_bedroom_motion
            state: "off"
          - condition: state
            entity_id: input_boolean.sunrise_has_run
            state: "off"
        sequence:
          - action: input_boolean.turn_on
            target:
              entity_id:
                - input_boolean.sunrise_has_run
                - input_boolean.sunrise_active_lock
          - action: light.turn_off
            metadata: {}
            target:
              entity_id: light.my_bedroom
            data: {}
          - delay:
              hours: 0
              minutes: 0
              seconds: 2
              milliseconds: 0
          - action: light.turn_on
            target:
              entity_id: light.my_bedroom
            data:
              brightness: 65
              rgb_color:
                - 68
                - 28
                - 0
              transition: 600
          - delay: "00:10:00"
          - action: light.turn_on
            target:
              entity_id: light.my_bedroom
            data:
              brightness: 130
              rgb_color:
                - 160
                - 85
                - 68
              transition: 600
          - delay: "00:10:00"
          - action: light.turn_on
            target:
              entity_id: light.my_bedroom
            data:
              brightness: 190
              transition: 600
              color_temp_kelvin: 4239
          - delay: "00:10:00"
          - action: light.turn_on
            target:
              entity_id: light.my_bedroom
            data:
              brightness: 255
              transition: 600
              color_temp_kelvin: 6500
          - delay: "00:30:00"
          - if:
              - condition: state
                entity_id: input_boolean.sunrise_active_lock
                state: "on"
            then:
              - delay:
                  hours: 0
                  minutes: 20
                  seconds: 0
                  milliseconds: 0
              - action: light.turn_off
                target:
                  entity_id: light.my_bedroom
              - action: input_boolean.turn_off
                target:
                  entity_id: input_boolean.sunrise_active_lock
            alias: Logic for if I still didn't wake or maybe not home
        alias: Lighting event logic
      - conditions:
          - condition: trigger
            id: bathroom_trip
          - condition: state
            entity_id: input_boolean.sunrise_active_lock
            state: "on"
        sequence:
          - action: input_boolean.turn_off
            target:
              entity_id: input_boolean.sunrise_active_lock
          - action: system_log.write
            data:
              level: info
              message: "Bathroom trip: Post-Wake stabilization complete."
        alias: Event to see if I'm awake
      - conditions:
          - condition: trigger
            id: reset
        sequence:
          - action: input_boolean.turn_off
            target:
              entity_id:
                - input_boolean.sunrise_has_run
                - input_boolean.sunrise_active_lock
        alias: Logic to reset things
mode: restart
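
For anyone who'd rather read the alarm-time fallback chain (rules 1-2) outside of Jinja, here's the same logic restated in Python, assuming the 5-8 AM alarm window the template actually checks:

```python
from datetime import datetime, time, timedelta

def sunrise_start(next_alarm, today, is_school_day):
    """Mirror of the template's fallback chain: use today's phone alarm
    if it falls in the 5-8 AM window the template checks; else 6 AM on
    school days; else 8 AM; then back off 60 minutes for the ramp."""
    if next_alarm and next_alarm.date() == today and 5 <= next_alarm.hour < 8:
        wake = next_alarm
    elif is_school_day:
        wake = datetime.combine(today, time(6, 0))
    else:
        wake = datetime.combine(today, time(8, 0))
    return wake - timedelta(minutes=60)

d = datetime(2026, 2, 16).date()
print(sunrise_start(datetime(2026, 2, 16, 6, 30), d, True).time())  # 05:30:00
print(sunrise_start(None, d, False).time())                         # 07:00:00
```
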
r/TheWayWeWere Impressive_Law_1098

Outer Banks NC, 1950s

r/toptalent CompetitiveNovel8990

This guy taking mixology to a whole new place. (source link in description)

r/LiveFromNewYork RogerTheAliens

This could've been from Super Bowl Sunday

r/PhotoshopRequest _purple_magic_

Hi! Please remove everybody in front row but me(pink dress)!

Hey! I would greatly appreciate it if you could remove everybody sitting down but me in the pink gown. Removing the lanyard thing on my neck as well would be greatly appreciated! Thank you for your time!

r/ClaudeAI Byakko_4

Trying to build the best mobile XP for Claude Code, now in free public beta

For Claude Code on mobile I've tried it all I think

  • Claude app : not as flexible as terminal, slow, forces PR workflow
  • Happy and other apps that you run on your laptop and SSH into, need laptop always on
  • Claude Code in a personal cloud machine and SSH + Termius app, quite a lot of setup, terminal not made for Claude Code

So I've been working on an (iOS) mobile app, here's what it does so far:

  • You login with Github
  • You get assigned a container, your personal one
  • You log into Claude Code
  • You pick one of your repos
  • And you land in a native terminal

You can then use Claude Code as you would use it in desktop. And I added a few more features for quality of life:

  • You can run servers, and you get preview links to see your changes in real time
  • You get notifications when Claude needs you
  • You can view git diff, like in an IDE
  • It auto pulls
  • You have all the shortcuts Claude Code needs (like for changing mode for ex)
  • You can run parallel sessions

App is now in free beta, install via this TestFlight link if you'd like to try:

https://testflight.apple.com/join/kJhmX5vV

r/midjourney Scary-Demand7252

Masquerade Royalty

r/homeassistant evert-k

Indoor sensor fails in HA

r/arduino h0m3b0y

DIY electronic target build

So the idea is this:

There is a 50x50cm piece of steel a kilometer away, and a rifle shooter is shooting at it. When a bullet hits, the shooter hears a satisfying "ding" after a couple of seconds, but it's very hard to actually see where the shot hit at that distance, even with good optics. Traveling 1 km to the target after each string of shots is highly impractical.

What I'd like to build is a sensor "array" (not sure if this is best word), which would allow arduino (or equivalent) to calculate where the bullet hit. I'm thinking of placing sensors on 4 corners of the target, and then measure either vibration (in the steel plate) or sound of bullet hit. The time measurement would have to be super precise to allow for calculation (triangulation, but with 4... angulators) of location of hit on the target based on signal delay between 4 sensors.
My theory is that if a bullet hits closer to a sensor, it will pick up vibration/sound a tiny bit sooner than sensors which are further away from the impact point.

Is this something one could attempt as DIY project? Are there commercially available sensors that would be fast enough? Would arduino be able to process signals to determine distance based on different timing of vibrations or sound?

I'm shooting for a sub 1cm resolution, but would be happy to make a working prototype for any hit resolution, even if just 1/4 of entire target :)

Any pointers would be highly appreciated!
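
The timing idea is sound: it's essentially TDOA (time-difference-of-arrival) multilateration. Here's a Python sketch of the math using a brute-force grid search; the corner sensor layout and the ~5000 m/s wave speed in steel are assumptions, and a real build would also have to handle dispersion and sensor jitter:

```python
from math import hypot

SENSORS = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]  # corners of a 50x50 cm plate (m)
V = 5000.0  # rough plate-wave speed in steel, m/s (assumption)

def arrival_deltas(hit):
    """Time-difference-of-arrival at each sensor, relative to the
    first sensor to fire."""
    times = [hypot(hit[0] - sx, hit[1] - sy) / V for sx, sy in SENSORS]
    t0 = min(times)
    return [t - t0 for t in times]

def locate(deltas, step=0.005):
    """Brute-force grid search (5 mm cells, inside the sub-1cm goal)
    for the point whose TDOAs best match the measured ones; cheap
    enough for a 50 cm plate."""
    best, best_err = None, float("inf")
    n = int(0.5 / step) + 1
    for i in range(n):
        for j in range(n):
            cand = (i * step, j * step)
            err = sum((a - b) ** 2 for a, b in zip(arrival_deltas(cand), deltas))
            if err < best_err:
                best, best_err = cand, err
    return best

hit = (0.12, 0.31)
print(locate(arrival_deltas(hit)))  # recovers the hit to grid resolution
```

The hard part in hardware is the timing resolution: at ~5000 m/s, 1 cm of position difference is only ~2 microseconds of arrival difference, so you'd want piezo sensors and a timer capturing at well under a microsecond, which an MCU with input-capture hardware can do.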

r/DunderMifflin Mha40K

Seen on my morning commute

I’ll have the gabagool

r/geography reallinguy

So how is a metropolitan area defined? Vibes?

r/Adulting Accomplished_Beat_36

Should I move back home before my lease is up?

So back in March of 2025, I (27 F) visited my friend who lives in St Petersburg. I was single and she convinced me to move in with her. I made the plan and started executing it. In May of 2025, I met a man (29 M) that I fell in love with. I told him when I met him that I’d be moving and we decided we’d keep it casual and see where it takes us. So by the time i moved, we’d only been seeing each other for 2 months.

Well, we got really close, fell in love. We decided to do long distance but he told me from the start he didn’t know how long he’d be able to handle long distance especially with no end date.

When I initially moved it was brutal. I’ve lived with my family my whole life. I needed an escape because my mom/sister were lazy and I couldn’t keep cleaning up after them. I also was trying to figure out who I was and what I wanted in life. I also worked a desk job that I hated (now working fully remote which I love and a job I kind of like).

Now February 2026, I miss home. I’ve visited pretty much every month since I left. Any time I go longer, I get so home sick. The man that I’ve been long distance with, the relationship is starting to fall apart because he thinks I’m staying here indefinitely and can’t handle the uncertainty because he wants to settle down.

I’m realizing that I’ve been searching for so long for some sort of validation. Part of the reason I moved down here was because of the nightlife scene, people my age. I was looking for meaning and connection and thought the move would give me that. Well, I don’t even like going out anymore. I haven’t been able to build meaningful connections. It seems like everything here is so surface level and I feel like a loser for not wanting to do more or go out more. I don’t know what to do. The lease isn’t up until December and I’m struggling. I’m trying to do things like yoga classes, swimming, going to the beach. But I miss home. I miss my family. I miss the man I love.

r/whatisit ImaginaryDragonfly48

What is this orange baby thing?

context: after a smoke sesh, i went to get a snack and realized everything (including my comically large water bottle) were all the same color. i, of course, needed to capture this momentous occasion and then proceeded to forget about this for the next couple years.

recently, i remembered the photo that i took and went to show my partner. when i looked again, i saw this little baby thing wedged in the lid of my mango lassi. i vaguely remember the act of putting something on the lid because it matched the color of everything else but i can't remember where i got it and/or what it is. now every time i look at it, it really freaks me out because it genuinely feels like a cryptid to me - i can't explain its existence. i moved after this photo was taken, so i should have seen this thing again while packing but ... no. please help me identify it for my sanity's sake

r/interestingasfuck Jazzlike-Tie-354

This man controls all the pigeons in a NYC park

r/ARAM No_Mail_2754

Introducing arammayhem.com! The newest aram mayhem focused website

Hi all! My name is Andy and I'm excited to share a project I've been working on: ARAM Mayhem

The main reason I created it is that I kept rolling augments in Mayhem and having no clue if they were actually good for my champion. Most stats sites out there focus on runes and items, but augments completely change how you should build and play a champion. Also, Riot doesn't offer API access for Mayhem stats yet. I wanted something that showed me the best augment for each champion and how your build should change depending on which augment you got.

So I built arammayhem.com, which has a few main features:

Champion + Augment combos — you can see which augment works best for each champion, with actual win rate data (I grab from china server where stats are available). For example some champions have a clear best augment while others have 2-3 viable options with very different builds depending on which one you pick.

Augment tier list — all augments ranked by win rate and pick rate so you can quickly tell if that augment you just rolled is worth taking or if you should reroll.

I've been using it in my own games for a while now and it's been really helpful. I just open the corresponding champion page when my game is loading.

The site updates with each patch and I'm working on adding patch-by-patch meta analysis so you can see which augments and champions got better or worse.

Future plans also include more social features like combo submissions, player maps, etc. Anyway, here is the link: https://arammayhem.com Feel free to leave any feedback here or through the site, I'll be reading everything!
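
For anyone curious what the ranking logic might look like, here's a minimal illustrative sketch (the field names and sample numbers are hypothetical, not the site's actual data or code): order champion-augment combos by win rate, breaking ties with pick rate.

```python
# Rank augments by win rate first, then pick rate as tiebreaker.
# Records and field names are made-up examples for illustration.
records = [
    {"augment": "A", "win_rate": 0.54, "pick_rate": 0.20},
    {"augment": "B", "win_rate": 0.54, "pick_rate": 0.35},
    {"augment": "C", "win_rate": 0.49, "pick_rate": 0.50},
]
ranked = sorted(records, key=lambda r: (r["win_rate"], r["pick_rate"]), reverse=True)
print([r["augment"] for r in ranked])  # ['B', 'A', 'C']
```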

r/mildlyinteresting Doctorspiper

The size of the desktop icons at the optometrist

r/SideProject User91919387383

The tool we built to turn hours of data analysis into seconds

After working with founders and exec teams, we kept seeing the same issue: getting answers from company data takes hours and costs money. www.strathens.com

Data is spread across too many tools, so even simple questions require manual consolidation, technical work, and recurring effort: all of which adds up in time and cost.

We’re about to launch Strathens, built to save both hours and money by consolidating fragmented data automatically and reducing that work to seconds.

If this pain sounds familiar, we’re opening early access and would love feedback from people dealing with this today.

r/SipsTea Left_Scientist2318

How much?

r/leagueoflegends RevenueSubstantial11

Should Malignance be one of Karthus's first 3 items?

I usually go Blackfire Torch first and boots, then Shadow Flame or Liandry's, then Rabadon's (of course). My friends always tell me to go Malignance either first or second. I always thought that Malignance would suck as a first item and two mana items in a row is too much. Am I wrong?

r/Jokes RobtheBDL3blob

So what's the worst tasting beer?

Buttweiser. It is the king of rears and it Does taste like @ss!!!

r/OldSchoolCool fatash98

My great grandmother (left) in the church nursery. 1950s

r/whatisit knew-since

Metal unit with valves and pedals. Any ideas?

r/Art Anastasia_Trusova

Vertical Aurora, Anastasia Trusova, acrylic, 2026

r/SideProject akshittprime

Should I keep grinding this side project or just get a full time job already?

I used to work at an ad agency, and one thing I kept seeing was how many ecommerce brands needed legit people to make ad creatives. And it was always expensive as hell.

About half a year ago I quit my job. While job hunting, I started building an AI tool with a few friends. The idea was pretty straightforward: let ecommerce operators generate their own product images and videos instead of paying insane creative costs. We named it Pixelripple and the goal was simple: let visuals do the selling.

Fast forward to now. The site is fully live. We are doing over 4k USD in MRR and it is still growing. But at the same time, a buyer came in with a pretty decent offer and wanted to acquire the whole thing.

Now my partners and I are stuck debating whether to keep running it ourselves or just sell and move on. If we keep operating it, we will need to put in more operating capital, and honestly there is a lot of uncertainty. I am kinda worried the pressure will get heavy real quick. On the other hand, we also keep thinking maybe the product is actually worth way more than the offer. But we are not even sure about that either.

This is my first startup ever, so I am genuinely asking: should I cash out and go find a full time job, or keep pushing and see where this goes?

r/Seattle AthkoreLost

Report: Seattle drug use arrests soared after change in law but diversion counts ‘likely inaccurate’

r/Adulting Easy_Astronaut8355

Will I ever find someone?

25M. Never had a long term relationship. Just talking stages and 2-4 week interactions. I feel like something is wrong with me at this point. Am I just broken? Will I ever find someone, start a family, and be content? That’s all I’ve ever wanted in life and I feel like I’m never going to get it.

r/Wellthatsucks sunshinerain1208

Why do my children act on their intrusive thoughts?

Those are teeth marks on the blinds. How is that even tempting? At least they didn’t bite the chunk off.

r/whatisit _autumnwhimsy

What is it (macbook pro part?)

So I generally know what it is, but not where it goes. It fell out of my old MacBook Pro (A1278, 2010) while I was cleaning the fan and reapplying thermal paste. I lost a screw, flipped the computer over to shake the screw out, and this came with it. I would like to put it back 🙃

r/SideProject Far_Degree8830

Just built an AI budget tracker – need your help testing it (free premium for a month for feedback)

Hey everyone! I just built an AI budget tracker app and honestly, I need real users to tell me if it's actually useful.

What I'm trying to figure out:

  • Does the AI actually help or is it gimmicky?
  • Is the interface intuitive or confusing?
  • What features are you missing that would make this actually useful for YOUR budget?

How it works:

  1. Register with your email (works both for iOS and Android): https://vossa.app/
  2. DM me your email and I'll give you free premium for a month
  3. Use it for a bit and let me know what you think

Feedback:

  • If it actually helps you, an honest app store review would mean a lot
  • If something's broken or annoying, please DM me so I can fix it

I'm not a dev – I built this out of my own need because the tools out there didn't work for me. So brutal honesty is welcome, it'll only make it better! Thanks for helping me improve this! 🙏

r/SipsTea ultraplusstretch

Male spiders have it rough. 😬

r/ARAM Sztoku

Aram Mayhem - matching system

Riot, please reconsider your matching system. 3 games in a row where I need to face a full premade of diamond+ players while my team is gold/plat 5 randoms. It's not fun to play at all...

r/leagueoflegends bird_sniffer

Griefing isn’t being dealt with

I’ve played in multiple ranks in the past year from gold to diamond and one thing consistent across all those ranks is that people do not want to win. So why bother playing a competitive game mode in a competitive game?

But my real issue is why Riot isn't making a real effort at upholding competitive integrity. You could soft int or straight up grief the game, but as long as you don't type or sell all your items you're home free. Every couple of patches I read about how the griefing detection system is improved, but how is this making the game better? Yeah, the griefer gets banned, but they'll just make a new account and you still lost your LP. Why not force people to use unique emails and 2FA before creating an account? Why not refund LP when teammates ragequit or grief games?

You can’t squash every cockroach in the world so why not make it harder for them to repopulate? Or at least make dealing with them less insufferable.

r/LocalLLaMA Quiet_Dasy

Open source TTS w/voice cloning and multilingual translation?

not multilingual TTS per se, but a model that can perform TTS and translation simultaneously

In my current setup, I run the TTS and translation models separately on two different PCs. This dual-pipeline approach is inefficient and significantly reduces processing speed. I want to integrate both models into a single pipeline on one machine to reduce latency.

Looking for free or open-source tools that can do two things:

  1. **Text-to-speech** – please don't suggest a TTS model that can't translate.
  2. **Voice-preserving translation** – take text and translate it to another language (please don't suggest a translation model that can't do TTS).

Any guidance is greatly appreciated!

r/meme Agreeable_Dingo_128

Discord time is up...

r/SideProject SpecialistMedia4954

I built a recipe merger to save time and help me multitask and win in the kitchen

cookflow.life

I built a tool that collects recipes in one place and merges them into a seamless set of instructions (a flow) with built-in timers (so i can finally stop burning stuff!!)

i use this every day, it's changed my life. I have ADHD so opening up a food blog (let alone multiple tabs) can totally derail my dinner.

the other cool feature is the recipe leaderboard so that you can discover popular recipes with other cooks (and so the food bloggers can get exposure)

future plans include building out some AI/ML features to help with ingredient replacement, adding more specific meal planning features, and inspiration boards

I just want to see more good and useful products on the internet

cookflow.life

r/homeassistant rsaffi

Host logs filled with errors from containerd

Home Assistant (2026.2.1) running on an HP EliteDesk 800 G2 via HASS OS. I noticed it became unavailable a few times in the last few days (unreachable from the network), so I decided to investigate further.

I was thoroughly checking logs (core, supervisor, host, as well as dmesg, journalctl) and that's when I noticed the host logs filled with these:

...
2026-02-10 15:44:00.198 homeassistant containerd[487]: time="2026-02-10T16:44:00.198193841+01:00" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/system.slice/docker-bcf374cd8c7f11ecccda16ed000d8bc8458af3ea8a18d39986422221b8aeadd9.scope/hugetlb.2MB.events\""
2026-02-10 15:44:00.199 homeassistant containerd[487]: time="2026-02-10T16:44:00.199235320+01:00" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/system.slice/docker-b1b1a69148e6f2664f53337ee912804d522dc640683129930b87a2e462eb2e6f.scope/hugetlb.2MB.events\""
2026-02-10 15:44:00.200 homeassistant containerd[487]: time="2026-02-10T16:44:00.200224247+01:00" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/system.slice/docker-d6950d926e9bac4e3c44ec46d11ee5acddb9561bfe8c07766e6a54fa18fecc15.scope/hugetlb.2MB.events\""
2026-02-10 15:44:00.201 homeassistant containerd[487]: time="2026-02-10T16:44:00.201126524+01:00" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/system.slice/docker-f6d0660418dfb94d2d8f4966cf4d1469adc66f12b487b4006215dd6dab28fa05.scope/hugetlb.2MB.events\""
2026-02-10 15:44:01.204 homeassistant containerd[487]: time="2026-02-10T16:44:01.204684043+01:00" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/system.slice/docker-f5c5ddd23406d5406820f80fbf7fa696817247bd459e3176b12b07857fff593a.scope/hugetlb.2MB.events\""
2026-02-10 15:44:01.207 homeassistant containerd[487]: time="2026-02-10T16:44:01.207533473+01:00" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/system.slice/docker-4630ca784b424b1816a0e301a70b72283955767ca1b8a5753adb14505256e215.scope/hugetlb.2MB.events\""
2026-02-10 15:44:01.210 homeassistant containerd[487]: time="2026-02-10T16:44:01.210543468+01:00" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/system.slice/docker-59217cde56e161a429604daf40bb6908eb260c6f33e461c7400dec556bec6c66.scope/hugetlb.2MB.events\""
2026-02-10 15:44:01.213 homeassistant containerd[487]: time="2026-02-10T16:44:01.213499219+01:00" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/system.slice/docker-bcf374cd8c7f11ecccda16ed000d8bc8458af3ea8a18d39986422221b8aeadd9.scope/hugetlb.2MB.events\""
2026-02-10 15:44:01.216 homeassistant containerd[487]: time="2026-02-10T16:44:01.216500625+01:00" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/system.slice/docker-b1b1a69148e6f2664f53337ee912804d522dc640683129930b87a2e462eb2e6f.scope/hugetlb.2MB.events\""
2026-02-10 15:44:01.219 homeassistant containerd[487]: time="2026-02-10T16:44:01.219252741+01:00" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/system.slice/docker-d6950d926e9bac4e3c44ec46d11ee5acddb9561bfe8c07766e6a54fa18fecc15.scope/hugetlb.2MB.events\""
2026-02-10 15:44:01.222 homeassistant containerd[487]: time="2026-02-10T16:44:01.221915271+01:00" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/system.slice/docker-f6d0660418dfb94d2d8f4966cf4d1469adc66f12b487b4006215dd6dab28fa05.scope/hugetlb.2MB.events\""
...

As can be seen from the timestamps, these are being logged multiple times every second. Searching the GitHub issues didn't turn up anyone reporting the same. A broader Google search showed the same error message reported on the containerd repos, so it does seem to be an issue on the containerd side. Anyway...
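
For context, the cgroup v2 `hugetlb.<size>.events` file that the error points at holds flat-keyed "key value" text (here the single line `max 0`), not a bare number. A small illustrative sketch of the difference (the parser below is my own example, not containerd's code):

```python
# The hugetlb events file contains lines like "max 0". Parsing the
# whole line as one unsigned int fails -- which is what the logged
# error "unable to parse \"max 0\" as a uint" suggests is happening.
def parse_events(contents: str) -> dict:
    events = {}
    for line in contents.splitlines():
        key, value = line.split()  # split "max 0" into key and count
        events[key] = int(value)
    return events

print(parse_events("max 0"))  # {'max': 0}
```

By contrast, `int("max 0")` raises a ValueError, mirroring the parse failure in the logs.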

Back to investigating the unavailability: I checked the load and memory usage from the host and they're actually fine (system has 16GB of RAM, currently with ~2.3GB used). Load average floating around 0.16, so all good. Temperatures are also fine and the system didn't restart, from what I saw from uptime (which rules out a faulty/dying PSU).

I don't think the containerd errors relate to the intermittent unavailability I noticed (caught by my monitoring), but nevertheless they're there and filling up the host logs.

I don't know when those containerd errors started being logged. Does anyone else have the same?

Also, does anyone else notice such intermittent unavailability? Nothing I saw indicated the NIC flapping or any kind of error that could justify it. I'll keep digging, but currently still clueless.

Uptime-Kuma


Edit: formatting + further details

r/todayilearned autobot12349876

TIL: Oxford students held a 550 year grudge to never forgive Henry Symeonis

r/arduino Prestigious-Bat2061

I'd know that is wrong

I did everything correctly and opened an example from the IDE, but there is no signal on the LCD. What's the problem??

r/mildlyinteresting heruka108

There is a dog parking spot in front of my local grocery store

r/interestingasfuck picklepaapad

Snub-nosed monkeys, known for their distinct, thick, often bright-pink or red lips, which are particularly prominent in adult males as a sign of sexual maturity, social status, and health.

r/Adulting Critical_Falcon_4896

Looking for a German learning partner for regular video calls (2x per week)

r/Art yesfoldingchair

Chair, John T, Oil on canvas board, 2025 [OC]

r/homeassistant Wintlink-

Play YouTube music from Home Assistant ?

Hi, I was planning to use my Google Home Mini as a speaker and to buy some Sonos stuff later to use with YouTube Music and Home Assistant, but I discovered that the yTube Music Player extension doesn't work anymore and just displays an "unknown error" even though everything is set up properly.

I want to have automations that start a playlist or that kind of simple stuff. Is there another way to do it?

Thank you.

r/Wellthatsucks swirlyjesse

The only pen in the entire store.

r/30ROCK PomegranateV2

TIL: Lutz is a real name

r/oddlyterrifying Agreeable-Storage895

Road tunnel

r/me_irl IcyCryptographer3450

Me_irl

r/Seattle Apprehensive-Farm856

Startup looking for a sailboat + skipper for a short on-water pilot (Seattle area)

Hey everyone, I’m a founder working on a wearable focused on motion tolerance, and we’re looking to run a small on-water pilot to test it under real sailing conditions.

We’re hoping to partner with a local sailing school or experienced skipper for about a week of flexible, weather-dependent sessions. Nothing invasive, nothing that affects the boat, just normal sailing with a few pre-consented adults wearing a self-contained wearable.

Rough outline:

  • 1 boat
  • 2 short sessions/day max (75–90 min)
  • 4-6 participants per session
  • Paid time + clear safety boundaries

We handle all participants, consent, and logistics on our end. You’d just be operating the vessel as usual.

If you run a school, charter, or know someone who might be open to this, I’d love to connect. DMs open.

r/AskMen dreamycoeur

What saved your life when you were in a really dark place?

r/AI_Agents BrilliantUse7570

beginner-friendly book on AI basics

Hey experts! 👋 I'm a cybersecurity consultant looking for a beginner-friendly book on AI basics. I'm not a developer, just trying to understand AI better so I can confidently discuss guardrails and red teaming with my clients. Any recommendations?

r/aivideo coccolitho

NEXO - Metroplex Midnight feat LEXA

r/leagueoflegends Outrageous-Seat-8679

Question about behavior

Hi guys if I'm not toxic can I do what he does in this video?
https://www.youtube.com/watch?v=RN3Tqzhjuk4
I recently encountered a Kha'Zix who didn't insult anyone or anything, just stood AFK, slightly moving, in a ranked game, letting the enemy team end without doing anything. I checked his profile and he does that in 2/5 games, yet he is not punished. I want to ask: is this normal behavior, since it is apparently not punishable?

r/Adulting Dr-Lillyy

Don’t borrow grief from the future. ✨

r/SideProject Joshawitz

Do you ever feel stuck choosing between too many ideas?

And it’s not because you lack skill or creativity; you simply lack a system that decides which ideas to scrap and which to develop further. It’s not that you need motivation or tools, but someone (or something) to help you make those decisions early, before you waste weeks or months on the wrong thing.

I come from a development/entertainment background where a big part of my job has been reading concepts, giving notes, writing coverage, and deciding what is and isn’t actually worth pursuing, helping creatives "weed" out the bad ideas and focus on the good ones. That’s why I am testing a small “greenlighting” service to help creators dealing with this same issue.

It’s simple: you bring me your ideas, which could be a list, a mind map, or a bunch of half-formed scribbles on a sheet of paper, and I will give you honest, development-style feedback on which ideas you should start pursuing immediately and how to improve them, or if you should ditch them entirely. 

If you’re a creator dealing with this issue, I’d love to know if this tool would be useful! Are there other aspects you’d expect from a service like this? What specific feedback would you find most valuable? I’m treating this as an experiment, so I am mainly looking for honest feedback and early testers.

In the comments, I’d also love to hear more about what stops you from moving forward on an idea.

r/Art ElectricBoulderBlue

Comic Book Collab, Utah Ramos, Inks, 2026

r/PhotoshopRequest littlesandwich29

Repair light leak

I love this wedding photo of my husband and me. It was shot on medium format film and there is a light leak (refraction?) in middle. Will pay $30 to repair the light leak. I like the film quality, I was just disappointed in the light leak. I included a second photo with a better shot of the beaded belt in case that is needed. Thank you!!

r/ProductHunters Benedito2305

OpenAI/Anthropic‑compatible API that gives unlimited access to frontier models with no pay‑per‑token. Has anyone ever tested something like this ?

Hi everyone

I recently launched Piramyd, a unified API compatible with OpenAI and Anthropic SDKs. The goal is to simplify access to multiple premium models without having to manage separate keys, rate limits, or different accounts.

Key points:

•Access to 45+ models, including Claude Opus 4.5/4.6, GPT-5 family, Gemini 2.5 Pro/Flash, Grok 4, FLUX Ultra, DeepSeek V3, and more.

•Unlimited token usage (input/output) — no pay-per-token.

•Instant model switching via a single endpoint. Optimized routing for low latency.

•Interactive Swagger documentation and real-time metrics.

Free trial: The website playground lets you test text generation, image generation, and editing for 24 hours with unlimited usage (no card required). Afterwards, you can integrate it into your stack with just a few lines of code. When using the AI models, always enable streaming.

Website: https://piramyd.cloud Playground: https://dash.piramyd.cloud Product Hunt (for viewing/upvote): https://www.producthunt.com/products/piramyd

r/Damnthatsinteresting Jazzlike-Tie-354

Traditional Chinese double sided embroidery

r/meme Kermit-America

Watch out Alvin

r/meme Medical_Deal5272

It's kinda weird though right?

r/Damnthatsinteresting CantStopPoppin

Kid Designs A Radio Jammer

r/whatisit Hungry_Bandicoot_776

Why do I keep finding these bent fork tines?

Why do I keep finding forks with only one tine bent?

This has been happening for years. It is a household with kids; no one knows anything.

r/painting followthemusic_

A mural I painted for a festival

r/LocalLLaMA PuzzleheadedFail3131

My Journey Building an AI Agent Orchestrator

# 🎮 88% Success Rate with qwen2.5-coder:7b on RTX 3060 Ti - My Journey Building an AI Agent Orchestrator

**TL;DR:** Built a tiered AI agent system where Ollama handles 88% of tasks for FREE, with automatic escalation to Claude for complex work. Includes parallel execution, automatic code reviews, and an RTS-style dashboard.

## Why This Matters

After months of testing, I've proven that **local models can handle real production workloads** with the right architecture. Here's the breakdown:

### The Setup
- **Hardware:** RTX 3060 Ti (8GB VRAM)
- **Model:** qwen2.5-coder:7b (4.7GB)
- **Temperature:** 0 (critical for tool calling!)
- **Context Management:** 3s rest between tasks + 8s every 5 tasks

### The Results (40-Task Stress Test)
- **C1-C8 tasks: 100% success** (20/20)
- **C9 tasks: 80% success** (LeetCode medium, class implementations)
- **Overall: 88% success** (35/40 tasks)
- **Average execution: 0.88 seconds**

### What Works
✅ File I/O operations
✅ Algorithm implementations (merge sort, binary search)
✅ Class implementations (Stack, RPN Calculator)
✅ LeetCode Medium (LRU Cache!)
✅ Data structure operations

### The Secret Sauce

**1. Temperature 0**
This was the game-changer. T=0.7 → model outputs code directly. T=0 → reliable tool calling.

**2. Rest Between Tasks**
Context pollution is real! Without rest: 85% success. With rest: 100% success (C1-C8).

**3. Agent Persona ("CodeX-7")**
Gave the model an elite agent identity with mission examples. Completion rates jumped significantly. Agents need personality!

**4. Stay in VRAM**
Tested 14B model → CPU offload → 40% pass rate.
7B model fully in VRAM → 88-100% pass rate.

**5. Smart Escalation**
Tasks that fail escalate to Claude automatically. Best of both worlds.

### The Architecture

```
Task Queue → Complexity Router → Resource Pool
                     ↓
    ┌──────────────┼──────────────┐
    ↓              ↓              ↓
  Ollama        Haiku          Sonnet
  (C1-6)        (C7-8)         (C9-10)
   FREE!        $0.003         $0.01
    ↓              ↓              ↓
         Automatic Code Reviews
    (Haiku every 5th, Opus every 10th)
```

### Cost Comparison (10-task batch)
- **All Claude Opus:** ~$15
- **Tiered (mostly Ollama):** ~$1.50
- **Savings:** 90%

### GitHub
https://github.com/mrdushidush/agent-battle-command-center

Full Docker setup, just needs Ollama + an optional Claude API key for fallback.

## Questions for the Community

1. **Has anyone else tested qwen2.5-coder:7b for production?** How do your results compare?
2. **What's your sweet spot for VRAM vs model size?**
3. **Agent personas - placebo or real?** My tests suggest real improvement but could be confirmation bias.
4. **Other models?** Considering DeepSeek Coder v2 next.

---

**Stack:** TypeScript, Python, FastAPI, CrewAI, Ollama, Docker
**Status:** Production ready, all tests passing

Let me know if you want me to share the full prompt engineering approach or stress test methodology!

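
The tiered routing idea from the architecture diagram can be sketched in a few lines. This is a hypothetical illustration (tier names and complexity bands come from the diagram; the `execute` callback and scoring are invented for the example, not the project's actual code):

```python
# Cheapest-capable-tier routing with upward escalation on failure.
# Bands mirror the diagram: Ollama C1-6, Haiku C7-8, Sonnet C9-10.
TIERS = [
    ("ollama", range(1, 7)),    # C1-C6: free local model
    ("haiku", range(7, 9)),     # C7-C8: cheap API tier
    ("sonnet", range(9, 11)),   # C9-C10: strongest tier
]

def route(complexity: int) -> str:
    """Pick the cheapest tier whose band covers the complexity score."""
    for name, band in TIERS:
        if complexity in band:
            return name
    return "sonnet"  # anything off the scale goes to the strongest tier

def run_with_escalation(task, complexity, execute):
    """Try the routed tier first; on failure, escalate to stronger tiers."""
    names = [name for name, _ in TIERS]
    for name in names[names.index(route(complexity)):]:
        ok, result = execute(name, task)  # execute returns (success, result)
        if ok:
            return name, result
    raise RuntimeError("all tiers failed")
```

The escalation loop is what makes the cost numbers work: cheap tiers absorb most tasks, and only failures pay the price of a stronger model.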
r/Damnthatsinteresting Kaos2018

The actors who played brother and sister on “Dexter” began dating , fell in love, got engaged, got married , got divorced , all while continuing to play brother and sister on TV

r/leagueoflegends SebsFavoriteRedditor

Is Ranked Down in eune ?

My ranked queue is greyed out and it says I need to own more than 20 champions. I have them all, btw, and have for a long time. First time this has happened to me.

r/ProductHunters MedShotapp

MedShot 🔜 📱

r/ClaudeAI Low-Sandwich1194

I want to share a lightweight terminal agent similar to claude code, what do you think?

I wrote an AI terminal agent in only ~130 lines of Python.

It's super lightweight, quite capable, and hackable!
It can navigate files, install software, and even spawn sub-agents!

Because it's so lightweight, Claude Code can integrate the specific features you need on the fly.

GitHub: https://github.com/lukaspfitscher/Agent2
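
For readers wondering what a tiny terminal agent loop looks like in principle, here is a stripped-down sketch. The `ask_model` callback is a placeholder for a real LLM API call, and the loop structure is my own illustration, not the linked project's code:

```python
# Minimal agent loop: the model proposes a shell command, we run it
# and feed the output back, until the model signals it is done.
import subprocess

def run_tool(command: str) -> str:
    """Execute a shell command and return its combined output."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

def agent_loop(goal: str, ask_model, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = ask_model("\n".join(history))  # model sees full history
        if action.startswith("DONE"):
            return action
        history.append(f"$ {action}\n{run_tool(action)}")
    return "step limit reached"
```

A real agent adds safety checks before executing anything (the divorce-by-deletion story in this very feed is a good argument for that).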

r/WouldYouRather Dry_Idea_95

Would you rather know the exact time of and how you die, or know the exact time of and how everyone around you dies?

Title is pretty self-explanatory: either you know the exact time and date of your own death (and it only changes once per month), or the exact same thing but for everyone around you.

r/StableDiffusion OneConsistent3302

Can the same LLM on a different machine generate the exact same thing using the same prompt and exact settings?

r/leagueoflegends Jharoz

LCS Pros Play the New ARAM Mayhem (With Riot Devs!)

r/AskMen wouldudoitforme

Why is it that some men do or don’t wipe after peeing?

r/BrandNewSentence fanta_bhelpuri

A pandering millionaire metrosexual in $3,000 boots using a grab bag of rural nouns and simple adjectives mad libs style.

r/personalfinance OkMasterpiece7978

Why use a HYSA instead of a brokerage?

23M and I currently have $10,000 in a HYSA with a 4% interest rate.

However, I've been giving thought to moving some or all of this over to Robinhood and investing in ETFs there: VOO, SCHD, and QQQM, or even just VOO.

I understand that there would be up years and down years but overall I should expect to have a return higher than 4% based on historical data. Why is this not a more utilized method? Am I missing something?
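
For intuition on the gap, here is a quick compound-growth comparison. Illustrative only, not advice: the 8% figure is an assumed long-run average equity return, and the sketch ignores taxes, fees, and volatility (the actual reason HYSAs exist — emergency funds can't ride out a down year):

```python
# Compound $10,000 at a steady 4% HYSA rate vs an assumed 8%
# average market return over 10 years.
def grow(principal: float, rate: float, years: int) -> float:
    return principal * (1 + rate) ** years

print(round(grow(10_000, 0.04, 10), 2))  # 14802.44 (HYSA)
print(round(grow(10_000, 0.08, 10), 2))  # 21589.25 (assumed equity average)
```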

r/whatisit Basic_Improvement441

Found while doing my hair

As I was mixing hair dye I found this in my mixture. I used a box of Madison reed hair dye plus squeezed a good amount of Kirkland conditioner in with it so I don’t know whether it came from the dye or conditioner. What is it?

r/todayilearned quesoandcats

TIL Commerce Secretary and former CEO of Cantor Fitzgerald Howard Lutnick almost died on 9/11, but survived because he drove his son to school that day. All 658 employees in the firm’s WTC office were killed, including Lutnick’s brother Gary

r/explainlikeimfive Neowza

ELI5: How do bots work? How can I tell if it's a bot or a troll?

Please excuse my ignorance. I'm a 47 year old Gen Xer, and while I had computers in the house in my teens and I use tech all day long to do my job, I am still confused about bots. And I see bots referred to online all the time.

I get that bot is short for robot. And in my head, a robot is a physical object, an automaton. But a bot isn't that, right? there isn't a call centre full of robots typing out comments. A bot is a software program, afaik, right?

But how do those software programs then connect to platforms like Facebook?

Why can't those platforms just introduce programming that doesn't allow bots to connect to them?

And perhaps most important, how can I tell if a comment is by a bot? Are there telltale signs like the 6 fingers and the little watermark on ai-generated images?

Thank you for explaining. Maybe we should have an "Explain like I'm 50" sub for all of us non-tekkies, eh?

r/Adulting Senior_Operation_451

I'm going to win my husband back.

I (29F) am going to win my husband (33M) back. My mother passed away a few years ago—she meant everything to me—and I may have shut down a little since then. I am just not sure about anything anymore and I don't feel like myself. But I have decided to win him back.

Today at the kitchen table he said he misses me. I don't want to push him away. I am not sure how yet, but this is a throwaway account so I am posting this to just let it out. I want to start initiating again too. I don't want him to feel alone anymore. He didn't say it with anger. We were laughing at something in the kitchen together and he just blurted it out.

I am trying to snap out of it and feel more like myself. But then I get a little lost again. Anyway, this was a little all over the place, but I am hoping it will help to write it out.

r/TheWayWeWere Aromatic_Industry401

My mother in 1954

Without a doubt the most influential person in my life; everything she did was for her children.

r/relatable_memes_ FlySkyGirl96

Just remember this when you post anything on the internet.

r/Seattle lake_wishes

Tiny Desk contest at Southgate Roller Rink

My band and I recently shot a Tiny Desk contest video at Southgate Roller Rink! The logistics of 9 moving musicians (our trumpet player rode the desk) was exhausting but I'm super proud of the product. Any locals have good memories of Southgate?

r/metaldetecting amrasillias

Just found this silver coin, can anyone identify it?

Found this coin in a field in the east of the Netherlands. Anyone got a clue as to what this coin is?

r/PhotoshopRequest ltspistachio

Character swap and minor outfit editing.

Would it be possible to take the green character (Froppy) from the first image and replace her with her look from the second image?

Also, for the girl in pink (Uraraka), can the 4 dark pink circles on her belt be removed?

Also remove the light pink dot on her collar and what looks like a belly button.

Finally, for the last character, the one in the cape, remove the chain connecting the cape and remove the lines from the belt.

$10 tip if possible.

r/MMA Iwillbeagoat

CBS scheduled for 326

I think many of us (myself included) thought that the CBS card would be the main card with prelims on Paramount+, or the alternative: prelims on CBS to draw people onto Paramount+ for the main card. But they have clearly shown more interest in the second option, as the CBS schedule will run from the last two prelims through the first two main card fights. My guess is they have a clear incentive to get people onto Paramount+, since they need to add subscriptions; by starting with the prelims and doing all the hype and marketing they used to do for PPV, while saying the rest is on Paramount+ for free, they hope to get more people over to watch. This also makes for better time management, as main cards can vary between two and a half hours and three hours forty-five.

r/Art kimyoona12

Hanuman comes to save sita, arpita, madhubani painting done by black gel pen, 2025

r/painting lifesastitch

Barn 5, William Drummond, Oil & Acrylic, 2025

r/SideProject User91919387383

Two years, two pivots: finally launching

After about two years working on this project (and two fairly painful pivots), my AI data platform is finally live.

I originally started with the classic “all-in-one dashboard” idea. More charts, more views, more visualizations. But after spending time with founders, C-level teams, and investors, something became obvious: the biggest problem wasn’t how data looked — it was how scattered it was.

Most companies already have data.
What they don’t have is easy access to answers.

Insights are buried across multiple tools, APIs, spreadsheets, and internal systems. Getting a clear picture usually means manual consolidation, technical dependencies, and recurring work just to end up with a static report at the end of the month.

That’s when the product shifted. Instead of building another dashboard, I built Strathens around a query-first logic. The idea is simple: consolidate fragmented data automatically, then let anyone ask questions in plain English and get answers in seconds: no technical knowledge, no manual prep, no jumping between tabs.

The goal is to give founders, executives, and investors real-time visibility into company performance without needing to rely on data teams or tedious processes. One central system that understands context across tools, instead of isolated silos.

The hardest part wasn’t the UI, it was the architecture. Making sure the system could handle consolidation cleanly, preserve trust in the data, and still respond fast enough to feel natural. I’m curious to hear from others who’ve built data or analytics products:

Do users actually want a “single source of truth,” or are they more emotionally attached to their individual tools and silos than we think? Would love to learn from your experience.

r/Damnthatsinteresting SeriesREDACTED

The monstrous, tallest mountain/volcano in the Solar System, Olympus Mons, pictured from space where the curvature of Mars is visible

r/SipsTea Dev1412

Nigeria is Super Safe

r/Art AnyGuy_Art

The light of the universe, u/AnyGuy_Arts, digital painting, 2025 [OC]

r/Jokes Excellent_Regret4141

What Greek food do cannibals love?

Guyros

r/PhotoshopRequest ohpalpal

Can someone turn this photo into a LinkedIn profile picture for me?

r/leagueoflegends NeoMontana

VIT only speaks German today | Kia Mic Check | 2026 LEC Versus Week 4

r/BrandNewSentence ronrirem

Cardio routine

r/homeassistant MibixFox

Air Purifier with Washable filters?

Are there any home assistant connectable air purifiers that have washable filters? It seems the Honeywell AirGenius 5 Air Purifier hfd320 and Airdog X3 are pretty popular but definitely don't connect. Are there some other options I am missing?

Trying not to spend a fortune but my Levoit 131 is nearing the end of its life, I already replaced some caps to fix its red light but now it seems to have a random schedule of its own that isn't in the app or Home Assistant.

r/leagueoflegends igotbannedonreddit

New Aram Mayhem Changes

Does anyone else feel like they ruined the fun of ARAM Mayhem with the new changes in patch 26.3? If one player gets a giga-good roll, the other nine players aren't having any fun, especially the enemy team.

r/aivideo Squishy_baby99

Done with Kling 3

r/n8n Ill_Dare8819

Shopify node on steroids!

Background: I got sick of reading Shopify API documentation every time I needed to do some automation in n8n. Also, the built-in Shopify node supports almost nothing. So I decided to build my own Shopify node on steroids.

Soooo, ladies and gentlemen, meet the Shopify node on steroids!

Github: https://github.com/seosen-py/n8n-nodes-shopify-custom

🛍️ n8n-nodes-shopify-custom

Powerful Shopify automation for n8n — without GraphQL headaches. Helps you build Shopify automations in minutes instead of hours, without constantly writing and debugging GraphQL.

✨ Overview

n8n-nodes-shopify-custom is a custom Shopify node for n8n designed for teams and builders who want to automate Shopify quickly — without manually writing GraphQL queries or constantly checking Shopify documentation.

Most things are handled directly through a clean UI.

🚀 Why this node instead of the default Shopify node?

This package focuses on the features advanced Shopify workflows actually need:

  • ✅ Full Metafield Values management
  • ✅ Full Metafield Definitions support (create / update / delete)
  • ✅ Full Metaobject support
  • ✅ Advanced Collections handling (Smart & Manual)
  • ✅ Complete Product Variant operations
  • ✅ Significantly reduced need to work with GraphQL manually
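For context, this is the kind of GraphQL the node abstracts away. A minimal sketch (in Python, for illustration) of hand-building a `metafieldsSet` mutation payload; `metafieldsSet` is a standard Shopify Admin API mutation, but the owner ID, namespace, and key below are made-up examples:

```python
import json

# A standard Shopify Admin API mutation for writing metafield values.
METAFIELDS_SET = """
mutation metafieldsSet($metafields: [MetafieldsSetInput!]!) {
  metafieldsSet(metafields: $metafields) {
    metafields { id key value }
    userErrors { field message }
  }
}
"""

def metafields_set_payload(owner_id, namespace, key, mf_type, value):
    """Build the JSON body you would POST to the Admin GraphQL endpoint."""
    return json.dumps({
        "query": METAFIELDS_SET,
        "variables": {"metafields": [{
            "ownerId": owner_id,   # e.g. "gid://shopify/Product/123" (example ID)
            "namespace": namespace,
            "key": key,
            "type": mf_type,       # e.g. "single_line_text_field"
            "value": value,
        }]},
    })
```

With this node, that whole payload (plus auth, pagination, and error handling) is replaced by picking fields in the UI.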

📚 Functional Breakdown

Products / Product Variants

  • Get / Get Many
    • Get a list of products
    • Load product metafields together with values
  • Create / Update
    • Products
      • Default fields: Title, Description, Vendor, Product type, Meta title, Meta description, Handle, Status, Tags
      • Assign template via Template Suffix
    • Product Variants
      • Title, SKU, Barcode, Price, Compare at price, Taxable
  • Delete

Collections

  • Get / Get Many
    • Get a list of collections
    • Load collection metafields with values
  • Create / Update
    • Default fields: Title, Handle, Description, Meta title, Meta description
    • Manual / Smart collection conditions (including dynamic metafield loading)
    • Assign template via Template Suffix
  • Delete

Customers

  • Get / Get Many
    • Get one customer or a list of customers
    • Optional metafields loading with values
  • Create / Update
    • Email, Phone, First name, Last name
    • Note, Tax exempt, Accepts marketing, Tags
  • Delete

Orders

  • Get / Get Many
    • Get one order or a list of orders
    • Optional metafields loading with values
  • Create / Update
    • Order fields: Email, Note, Tags
    • Line items: Variant ID, Quantity
  • Delete

Draft Orders

  • Get / Get Many
    • Get one draft order or a list of draft orders
    • Optional metafields loading with values
  • Create / Update
    • Draft order fields: Email, Note, Tags
    • Line items: Variant ID, Quantity
  • Delete

Metafield Value

  • Set
    • Mass set metafield values for selected owner type (Product, Variant, Collection, Customer, Order, Draft Order)
    • Dynamic metafield definition picker
    • Value input adapts automatically to metafield type
  • Get / Get Many
    • Get one metafield by Namespace + Key
    • Get many metafields from selected owner (optional namespace filter)
  • Delete
    • Delete one or multiple metafield values by selected definition

Metafield Definition

  • List / Get
    • Browse definitions by owner type
    • Search and inspect existing definitions
  • Create / Update
    • Fields: Name, Namespace, Key, Type
    • Description
    • Validation rules
  • Delete
    • Delete definition
    • Optional deletion of all associated metafield values

Metaobjects

  • Get / Get Many
    • Get a single metaobject or list by type
    • Pagination, search query, sorting supported
  • Create / Update
    • Type, Handle
    • Flexible key/value fields
    • Optional handle redirect update
  • Delete

📦 Installation (Community Nodes)

  1. Open n8n
  2. Go to Settings → Community Nodes
  3. Click Install
  4. Enter: n8n-nodes-shopify-custom
  5. Confirm installation

The Shopify Custom node will appear in your node list.

🔑 Shopify Setup (Admin API Token)

To use the node, create a Shopify Custom App:

  1. Shopify Admin → Apps → Develop apps
  2. Create a new Custom App
  3. Enable Admin API access scopes
  4. Recommended scopes:
  • read_products, write_products
  • read_customers, write_customers
  • read_orders, write_orders
  • read_draft_orders, write_draft_orders
  • read_metaobjects, write_metaobjects
  • read_metaobject_definitions, write_metaobject_definitions

Optional (historical orders):

  • read_all_orders

  5. Install the app
  6. Copy the Admin API access token

⚙️ n8n Credentials Setup

Create credential Shopify Custom Admin API:

  • Shop Subdomain: your-shop-name (without .myshopify.com)
  • Admin API Version: default is fine
  • Admin Access Token: token from Shopify

After saving, the node is ready to use.

🗺️ Roadmap

  • Pages support
  • Blogs / Articles support
  • Full trigger parity with the standard Shopify node

r/personalfinance Zaros0

Need to park funds from house sale

Essentially my wife and I are selling our home and moving out of state into my mother's place where we will have a separate living area for the next 2-5 years. We are having a kid so we wanted to be closer to family and free up the majority of our financial burden that comes with home ownership as we both lost our jobs a few months ago.

We are expecting to have roughly 700k between what's in savings currently and proceeds from the sale of the house.

What would be the most effective thing to do with that money? We don't want to invest it in stocks or anything too risky, as we want the financial freedom to move out of my mother's whenever we need to and purchase another home. We are hoping the money can earn something in the years in between. From what I gather a HYSA seems to be the option, but is there anything better out there? And for a HYSA, is a big bank or a credit union your best bet?

Thank you.

r/leagueoflegends Yujin-Ha

Caedrel's reaction to LR being eliminated: It's over. I don't even know what to say. I have too much emotion to speak, I don't want to say anything I am going to regret saying or anything dumb.....I hope you felt something special this split because I certainly did. Wow I really thought we had it.

Caedrel: It's over. I don't even know what to say. I have too much emotion to speak, I think. I don't want to say anything I am going to regret saying, or anything stupid. I should just not say anything. Yeah, I don't know what to say tbh. I think I need to think about it a bit more and then speak about it much better.

To Dagda: All good man. All good, all good, yeah.

To Hysterics: Yeah, yeah. Thank you guys. Yeah thank you

Yeah, not much to say. I guess the world. Some things just don't work out, you know. Well, thank you. Thank you all for being with the LR run. Shout out to everyone that believed. Was kind of crazy, no? But we were very close and didn't make it. So I'll go away for a few days. Yeah, I'll probably know what to say in a few days.

But yeah, thank you all for coming out. That's the Los Ratones, and yeah, I'll see you guys soon. Much love, thank you all to the fans. I really appreciate all your love, the last few weeks. Send all your love to the boys whenever they stream or do anything. I hope you felt something special this split because I certainly did. Wow, I really thought we had it. See you later, thank you guys.

r/whatisit futurecrackpot

Seen on the road

What is this thing seen hanging off the side of a car in FL? Lights are flashing.

r/meme vinchy2005

Ummmm.....hmmmmm??

r/SideProject collectivethink

Built a precious metals portfolio tracker that can't see your portfolio. First SaaS, zero customers yet.

I built a precious metals portfolio tracker that can't see your portfolio. Never built a tool before so go easy.

I work in the precious metals industry and stack gold and silver myself. Like a lot of stackers, I was tracking everything in a spreadsheet. Manually updating spot prices, calculating melt values, trying to remember what percentage of my stack was gold vs silver. It worked, but barely.

So I went looking for an app. Every single one I found wants you to create an account. That means your holdings data, how much gold and silver you own, sits on someone else's server.

If you know anything about people who buy physical precious metals, you know that's a non-starter. The whole point of owning physical is that it's private and self-custodied. Why would you then hand a complete inventory of your stack to some company?

I couldn't find anything that solved this, so I built myounces.com

What it does:

- Tracks holdings across 170+ products (coins, bars, junk silver from 9 countries)

- Full Goldback support across all 13 regions with actual exchange value.

- Live spot prices updating throughout the day

- Portfolio breakdowns by metal, category, and type

- Gold-to-silver ratio tracking

- Stacking goals with progress bars

- Junk silver calculator (90%, 40%, 35%)

- Multi-currency support (USD, EUR, GBP, CAD, AUD, CHF)

- JSON backup/restore + CSV export

How the privacy actually works (not just a policy page):

This isn't "I promise not to look at your data." I built the app so I can't.

All of your holdings data lives in your browser's localStorage. What you own, how much, your goals, your favorites. All of it stays on your device. The server does two things: serves the app files and fetches spot prices. When your browser asks for prices, the request is just "what's gold and silver at right now?" No user ID. No portfolio data. Nothing tied to you.

There's no users table in our database. No holdings table. I couldn't look up what anyone owns even if I wanted to, because that data doesn't exist anywhere we control. If someone breached the server tomorrow, they'd find cached spot prices and license keys. That's it.

Your data, your responsibility (and we make it easy):

The tradeoff of local-only storage is that if you clear your browser data, it's gone. I'm upfront about this. But I built the backup system to make it painless:

- JSON export — One click downloads your entire portfolio as a file. Holdings, quantities, favorites, goals, preferences. Everything.

- JSON import — Drop that file into any browser on any device and you're back to exactly where you were.

- CSV export — Opens clean in Excel or Google Sheets with current spot values calculated.

- Backup reminders — If it's been 7+ days since your last export, the app nudges you. Not aggressively, just a heads up.
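The export/import flow above is just a lossless JSON round trip. A sketch in Python for illustration only (the actual app does this in the browser with localStorage; the field names here are invented):

```python
import json

# Invented example of the kind of state the app keeps on-device.
portfolio = {
    "holdings": [{"product": "american_silver_eagle", "qty": 10}],
    "goals": {"silver_ozt": 100},
    "currency": "USD",
}

backup = json.dumps(portfolio, indent=2)   # one-click export: a single file
restored = json.loads(backup)              # import: load the file anywhere
assert restored == portfolio               # nothing lost in the round trip
```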

Some users keep their backup file on a USB drive in their safe right next to the metals. That's exactly the kind of self-custody mindset this was built for.

Why not just use a spreadsheet?

You can. I did for years. But once you're past 10-15 products, maintaining formulas for different coin weights, tracking junk silver across three purity levels, and manually checking spot prices gets old fast. MyOunces knows that an American Silver Eagle is 1.0 troy oz and a Mercury Dime bag at 90% is 0.0723 oz per coin. You just enter what you have.
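The per-product weight lookup described above boils down to simple arithmetic. A sketch, where the 1.0 ozt and 0.0723 ozt figures come from the post but the spot price and quantities are invented:

```python
# Troy ounces of silver per unit; the two figures are from the post above.
OZT_PER_UNIT = {
    "american_silver_eagle": 1.0,
    "mercury_dime_90pct": 0.0723,
}

def melt_value(holdings, spot_per_ozt):
    """Total melt value: sum of (quantity x ozt per unit), times spot price."""
    total_ozt = sum(qty * OZT_PER_UNIT[product] for product, qty in holdings.items())
    return total_ozt * spot_per_ozt

# 10 Eagles + 50 Mercury dimes at a hypothetical $30/ozt spot price:
value = melt_value({"american_silver_eagle": 10, "mercury_dime_90pct": 50}, 30.0)
```

Maintaining that lookup table across 170+ products and three junk-silver purity levels is exactly the spreadsheet chore the app replaces.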

Why it's not in the app store:

For now a local web app means no cloud sync to breach, no company holding your portfolio, and it works offline once loaded. The web app approach isn't a compromise, it's the only architecture that delivers actual privacy.

Business model:

- Free: 7 products, any quantity, full functionality

- Pro: $29.99 one-time (not a subscription) for unlimited products

The one-time price was intentional. Stackers hate subscriptions... they buy physical assets specifically to avoid ongoing financial obligations. A subscription model would contradict the whole ethos.

Built with: React, Vite, and Tailwind CSS. Runs on Replit with a Supabase backend (only for license keys and feedback, never holdings data). About 30 days from idea to where it is now.

Where I'm at: The app is live and functional. I've gotten solid feedback from beta users in the precious metals community. Still working on getting it in front of the right people.

Biggest open question: if you were tracking your portfolio for free and the app did everything you needed for 7 products, what would make you pull the trigger on $30 to unlock unlimited? Is it the product limit, a missing feature, or something else entirely?

myounces.com

r/LocalLLaMA pmttyji

Plenty of medium-size (20-80B) models in the last 3 months. How do those work for you?

We got plenty of medium-size (20-80B) models in the last 3 months, ahead of the upcoming releases. These models are good even for 24/32GB VRAM + RAM at Q4/Q5 with decent context.

  • Devstral-Small-2-24B-Instruct-2512
  • Olmo-3.1-32B
  • GLM-4.7-Flash
  • Nemotron-Nano-30B
  • Qwen3-Coder-Next & Qwen3-Next-80B
  • Kimi-Linear-48B-A3B

I think most issues (including the FA issue) have been fixed for GLM-4.7-Flash.

Both Qwen3-Next models went through fixes/optimizations and require new GGUFs to use with the latest llama.cpp version, which most folks are aware of.

Both Nemotron-Nano-30B and Qwen3-Coder-Next have MXFP4 quants. Anyone tried those? How are they?

(EDIT: I checked a bunch of Nemotron-Nano-30B threads and found that the MXFP4 quant worked fine without any issues, while other Q4 & Q5 quants have issues (like tool calling) for some folks. That's why I brought up this question in particular.)

Anyone compared t/s benchmarks for Qwen3-Next-80B & Qwen3-Coder-Next? Both are the same size & architecture, so I want to know this.

Recently we got GGUF for Kimi-Linear-48B-A3B.

Are these models replacing any large 100B models? (This one is Hypothetical question only)

Just posting this single thread instead of 4-5 separate threads.

EDIT : Please include Quant, Context & HW details(VRAM + RAM), t/s in your replies. Thanks

r/todayilearned PeasantLich

TIL one of the kingdoms of Madagascar was founded by Ratsimilaho, the son of a Malagasy woman and an English pirate named Thomas. The father is often identified as the famous pirate Thomas Tew, but there were multiple notable pirate captains named Thomas who had bases and hideouts on Madagascar at the time.

r/whatisit MemphisHobo

Giant plunger in dentist office bathroom?

Found this super extra long plunger in the bathroom at my dentist office. It looks like it’s a regular plunger head attached to one of those long paint roller handles. Can’t for the life of me figure out what it’s there for, especially since there are two other regular plungers right beside it.

r/meme Soft-Cartoonist-4440

🙏🏻

r/Art hannah5553

A gift for a friend, Hannah Clifford, acrylic on canvas, 2026 [OC]

r/leagueoflegends HurryProfessional517

Guide : How to AP Irelia

Hello guys, after 4 years of playing AP Irelia, I finally got to make a full video guide on it.

This is the first video I edit myself, I hope the editing is good! Let me know what you think.

If any of you watch it and enjoy it, please give it a like!

r/instantkarma Spiritual_Bake9998

They thought they got away

r/Art Hew303

SoulEater, Jon Azpeitia, pen&ink, 2020 [OC]

r/LocalLLaMA AmineAfia

OpenClaw is popping up on cheap VPSs. What do you think of a more secure setup?

Over the last week I’ve been watching people deploy OpenClaw in very different ways.

On one side, Cloudflare quietly shipped a pretty solid open source setup (motlworker): isolated, secure environments where you can deploy OpenClaw without thinking too much about infra. It’s relatively cheap, you get an admin panel, and a lot of the scary stuff (networking, isolation, exposure) is handled for you.

On the other side, I keep seeing 1-click VPS setups flying around. Vibe-coded deployers, often built by people who’ve never touched GCP or AWS, exposing servers directly to the internet without really understanding what that means. It works, but it also feels a bit like we’re speed running past some important lessons about security.

I ended up using the Cloudflare approach to deploy OpenClaw for a few friends who just wanted something stable and safe without becoming infra experts overnight. It worked well enough that I started thinking: maybe this should be easier to share.

So I put together a small setup to help others do the same (getclaw.sh). Before I start pointing people to it, I wanted to sanity-check with this community:

  • What do you think about the Cloudflare-based approach vs cheap VPS deployments?
  • Is the tradeoff (less control, more safety) worth it for most users?
  • Anything you’d absolutely want to see (or avoid) in a managed OpenClaw deployment setup?

Not trying to sell anything here. I'm genuinely curious what the LocalLLaMA crowd thinks before I push this further.

r/AI_Agents b3bblebrox

Deepseek Api issue

Hello, I'm looking for some troubleshooting help with my API. My credits ran out on DeepSeek, and I switched API keys due to an issue on my account.

The API returns a response when I call it manually from the CLI, but when OpenClaw tries to use it, DeepSeek returns a generic out-of-credits message.

I obviously have credits again since the call works.

I've confirmed that OpenClaw has the right key in the JSON config, and the order of preference is correct. I've rebooted the box to see if there was a random cached key somewhere. I've worked with ChatGPT to try to troubleshoot it.

I'm getting nowhere.

Anyone have anything for me to try when I get home?

r/Damnthatsinteresting CauliflowerDeep129

People Carry 100kg Candles for Sant'Agata

r/LocalLLaMA Fast_Ferret4607

MLX Omni Engine

Hello, I wanted to share a project I'm working on that attempts to extend LM Studio's MLX engine to support running embedding models, audio models, and hopefully eventually real-time audio models like Moshi.

The idea is that the engine can be started up and then any compatible client can connect to it via its Ollama-, Anthropic-, or OpenAI-style FastAPI endpoints, giving the client the ability to run a vast number of MLX models.
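Not the project's actual code, just an illustration of what an "OpenAI-compatible endpoint" means in practice: a client builds the standard chat-completions JSON body and POSTs it to the local server. The port and model name below are placeholders:

```python
import json

# Placeholder port; /v1/chat/completions is the standard OpenAI-style route.
BASE_URL = "http://localhost:8080/v1/chat/completions"

def chat_body(model, prompt):
    """Standard OpenAI-style chat-completions request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    })

# POSTing this to BASE_URL (e.g. with urllib.request, or by pointing an
# OpenAI client's base_url at the local server) is all a client needs.
body = chat_body("mlx-community/example-model", "Hello!")
```

Because the body and route are standardized, any client that speaks this shape can talk to the engine without knowing it runs MLX underneath.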

The reason I'm building this is that I find MLX models run better on Apple Silicon (when they fit in memory) compared to the GGUF models that Ollama uses. Also, Ollama has been pushing cloud usage that I don't really like, and I would prefer a bare bones server that just takes requests to run whatever ML model I want fast and efficiently.

If you want to check it out and offer notes, advice, or a pull request on how to improve it to better fit the aforementioned vision, I'm all ears, as this is my first attempt at an open source project like this. Also, if you think this is a stupid and useless project, I'm open to that feedback as well.

Here is the GitHub link to it: https://github.com/NTarek4741/mlx-engine

r/DunderMifflin afganistanimation

My tv in my Luxor hotel room

r/ClaudeAI yfedoseev

after 6 months of heavy Claude Code usage I finally built a tool for the one thing that drives me crazy

I love Claude Code. I use it at home, at work, across a ton of projects. At this point I basically code even in my sleep.

But after 6 months of this I noticed the same thing happening in every single project: Claude loves leaving "Phase 2" comments everywhere, writing `// TODO: implement this before release` with `return True` underneath, and after a few rounds of refactoring there's just... a ton of dead code sitting there. Functions that nothing calls. Utilities that got rewritten but the old version is still hanging around.

I kept asking Claude to clean it up. Bad idea. Takes way more tokens than writing the code in the first place. And here's the really annoying part — sometimes the agent is working on a new feature, sees that old dead code, thinks "oh this looks useful", connects to it, and now you have bugs from code that was never supposed to run. Dead code isn't just messy, it's a trap.

So I built Fossil. It's an MCP server — you connect it and Claude gets tools to scan for dead code, duplicated logic, scaffolding artifacts, all of it. It builds an actual call graph so it knows what's reachable and what's not (not just grep).
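The reachability idea is the classic one: build a call graph, then anything you can't reach from the entry points is dead. A minimal sketch, not Fossil's actual implementation, with invented function names:

```python
from collections import deque

# Invented call graph: function -> functions it calls.
CALL_GRAPH = {
    "main": ["parse_args", "run"],
    "run": ["load_config"],
    "parse_args": [],
    "load_config": [],
    "old_loader": ["legacy_parse"],   # nothing calls this chain anymore
    "legacy_parse": [],
}

def dead_functions(graph, entry_points):
    """BFS from the entry points; whatever is never visited is unreachable."""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return set(graph) - seen
```

This is why a call graph beats grep: `legacy_parse` *is* called somewhere, just only from other dead code, so a name search alone would keep it alive.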

curl -fsSL fossil-mcp.com/install.sh | sh
claude mcp add fossil fossil-mcp

After a coding session I just tell Claude "run fossil and clean up whatever it finds." Works across 17 languages, zero config.

https://github.com/yfedoseev/fossil-mcp

Is anyone else dealing with this? How do you handle the code bloat from long Claude Code sessions?

r/SideProject _Panthera

Easy shareable lists

Hey everyone, I built sharedlist.io a little while ago.

It's a way to create collaborative lists: you share a view-only or edit link, it updates in real time, there's no signup, and it's free.

I built it for myself because sometimes I need something more than a screenshot of a list and less than a Notion page. Thought I'd post it here to see if it would be useful to others too.

Lmk what you think :)

r/Jokes MineExplorer

My wife's favourite clock is broken...

...and I can't get it repaired until next week, so I've bought her a cheap one for now.

It'll keep her ticking over.

r/personalfinance Suitable_Nail4543

What finally made expense tracking stick for us after years of failing

For the last 7 years, my wife and I have tracked our expenses.
Not perfectly.
Not automatically.
But consistently.

Before that, we struggled — not because budgeting was hard, but because remembering to budget was.

We tried:

  • Bank-connected tools
  • Automatic categorization
  • “Smart” insights

They all assumed one thing:
that budgeting is already a habit.
It isn’t.

When we first started tracking, we forgot constantly.
Payments happened everywhere — cards, cash, online, abroad — and logging them felt like friction.

So instead of optimizing for features, we optimized for habit formation.

The rule we learned the hard way:

After a few weeks, something changed.
Tracking stopped feeling like a task.
It became second nature — like brushing your teeth.

What worked for us:

  • Manual entry (to keep spending intentional)
  • One consistent flow for everything
  • No penalties for forgetting
  • Monthly review instead of daily micromanagement

Over time, we ended up building a small personal app around this approach, but the real shift came from focusing on the transition period, not the tool itself.

I’m curious how others here experienced this:

  • What made budgeting fail for you early on?
  • What finally made it stick (if it did)?
  • For couples: what caused friction around shared expenses?

Would love to hear real experiences.

r/whatisit LillePuus1

Found this in my grandfather's closet. The handle can be pulled out, kind of like a bicycle pump. What is it?

r/OldPhotosInRealLife AShogunNamedBlue

Fallout S02E02 "The Golden Rule" - Then & Now (2025/2026) - Dumont Dunes, CA

The Area 51 filming location is actually at the base of Veteran's Hill in the Dumont Dunes, north of Baker, CA. The swing sets are a real permanent fixture. Area locals ride out in side-by-sides with their kids to swing there every day.

r/Adulting Observer125

How do you manage stress when you can't remove the stressor?

Most stress management advice online starts with “remove the stressor.”

Move. Change your situation. Get another job

But what if you genuinely can’t?

I'm dealing with ongoing neighbour stress that I can't escape. Moving is not an option because I don't have the money for it, we have a housing crisis over here, and the situation is non-negotiable for the foreseeable future. I'm stuck with it, unfortunately, and have been for 4 years already.

What I find frustrating is that most advice assumes you can step away from the source of stress. That’s not always real life. Sometimes the stress is part of your environment and you have to live in it every day.

So my question is: how do you manage stress when you’re already in it? When you’re not trying to eliminate it, but survive it as healthily as possible?

I’m curious what actually helps during long periods of unavoidable stress. Do people focus on nutrition or recovery in specific ways? Are there strategies that help regulate your nervous system when the stress doesn’t stop? How do people endure months or even years of ongoing stress and still function? I really want to avoid burnout

I know there are people who live under constant pressure or in difficult living situations for long stretches of time and come out the other side. I’d really like to understand how they do it.

Has anyone here experienced long-term neighbour stress or a living situation you couldn’t change? How did you cope mentally and physically, and what actually made a difference for you?

Thanks!

r/OnePelotonRealSub mordhoshogh

Does the guide have all the AI functions?

I already have a Guide, but at the moment it's not set up because I have to think about how to fit it into my home gym. Before I start screwing things to the wall: how good is it these days?

If it has the same functionality as the new hardware for rep counting and form then I'll go ahead, if not then I might not bother.

r/KlingAI_Videos Squishy_baby99

Done with Kling 3.0

r/ProgrammerHumor baguiochips

futureProofThis

r/mildlyinteresting MinkMaster2019

My music teacher has a drawer of shame

r/leagueoflegends nikosleft

Role Preference Issue

Yes, but can a person have a genuine preference for Support as a secondary role, or are we doomed to get the secondary (Support) role 19 out of 20 games?

As a person who prefers ADC but GENUINELY would like to get Support as my secondary role *SOMETIMES*, the system simply doesn't work for me. I just never get ADC, and I don't understand why the system is the way it is. Another company L, not going to lie.

r/OldSchoolCool L0st_in_the_Stars

Alice Cooper, Paul Lynde, and Elton John keep the party going, 1970s.

r/leagueoflegends Flurry_of_Buckshots

The final result of Los Ratones feels like a very bad thing for competitive LoL.

TL:DR at the bottom.

I've been playing LoL since late season 1. For years I watched the games live or the VODs of all NA, EU, and KR games. I was a die-hard TSM fan till the very end (RIP the org seemingly never returning to LoL). For much of its existence, the stories and personalities of TSM kept me engaged just as much as the gameplay did. Watching TSM over the course of a few seasons fall from grace with poor decision after poor decision, culminating in their exit from LoL entirely, left me in a place as a fan where I felt like there was no other team I cared to support the way I did TSM and there were no personalities left in the League that I cared enough about to follow.

Then came Los Ratones. Big names, big personalities, and most of the LoL world doubting they can find success. Despite the majority of expectations being they would fail, they made magic. I felt something with LR I haven't felt toward pro play in 3-4 years: True excitement to watch a pro game. Anticipation of the draft, trying to figure out how they would get beyond the early game slump, can they out scale again... LR fans asked and LR players answered by showing us they were not a joke. Their game plan was different from pretty much any other team in the league/world and yet... LR was winning. Beating out top teams in EU with their own strategy. Baus showing us he can actually compete with the best top laners in EU. Nemesis showing why he should be considered one of the greatest EU mids. Veija the smite god, showing aggressive carry junglers can still work. Crownie/Rekkles holding up against the best bot lanes in EU. Seeing the "You are here" comments on every winning game thread showing that LR was actually making it work was truly exciting to witness.

Now that the dust has settled and LR is almost guaranteed to no longer exist, with Baus saying he is done, competitive play feels completely empty to me. One of the best storylines the pro play scene has ever had, a team so exciting to watch that their viewership numbers were some of the highest in competitive LoL history (612,000 views during Red Bull League of their Own in 2024 and 500,000-600,000 viewers on some of their LEC games), and it all just comes to a completely anticlimactic end. One of the most watched teams in LoL history exits after one season in pro play, then simply vanishes? I know Baus doesn't want to play pro anymore, but it feels like this will be one of the biggest black marks in LoL history. A team this exciting, that garnered so many viewers and so much fan engagement in such a short time, simply vanishes? The LEC will massively feel the loss of LR when they only have a fraction of the viewership next season.

I understand teams/players come and go, but with the impact Los Ratones had on the scene losing them feels more impactful than almost any other team/player to exit the LEC.

TL:DR: Los Ratones and the success they found is an anomaly in modern LoL. We may never see something like this again within the LoL world. I've been playing long enough I have seen every iteration of the game, I've seen every pro team come and go, and I think the impact of the loss of LR for EU specifically will be quite harsh. As fans, I think we have lost something that was innovative and truly entertaining. With LR gone, the league now lacks that innovation and entertainment factor and it will be quite noticeable during the season.

r/oddlysatisfying metal_hobbit

Water trapped inside the window vacuum

r/leagueoflegends Aiiur

Aram mayhem: Vladimir infinite pool (500 ability haste)

Just wanted to share my 500 Ability Haste Vladimir. It's probably been done before, but I find it fun.

r/Art lifesastitch

Barn 5, William Drummond, Oil & Acrylic, 2025

r/personalfinance swaaaaag-on-haters

19, getting an influx of money, need experienced help

Context: 19-year-old business owner, though it's basically a job since I haven't replaced myself in it yet.

Getting mid 5 figures, tax free, from a case that I've been involved in.

I don't want any answers about putting it in some type of account and letting it grow for 60 years.

I'm young and want to take risk, but I don't want to blow this jumpstart I have.

Business assets/checking: 5k from a web design agency, 3500 from a local business, and about 2k personal cash.

I still live with my parents. I'm not telling them because I don't want them to look at me differently, but it does feel a little lonely.

I have only told one person, not even a friend, but someone who has waaayy more resources than me.

I dreamed about being in this type of situation, and it's like my whole perspective on life has changed drastically within a matter of 2 days.

Especially the whole "money does not buy happiness" thing.

It's crazy I'm saying this, but I'm living it on a small scale.

WTF do I do.

I want to hear from someone who has been in a similar situation on what to do.

Apologies, my thoughts are all over the place right now, but can someone help a young brother out?

P.S. If you try to sell me something in DMs it won't work. Like I said, I'm a business owner, so when it comes to money my logic exceeds my emotions... you will waste your time.

r/midjourney mingdifilms

Legend of Zelda: Ocarina of Time | AI Cinematic Concept

r/ARAM Exokaebi

Just had a team agree to a 10 man Go Next

Enemy Jayce got Slow and Steady and one shot our Zyra off rip. Literally everyone in the game just immediately agreed to Go Next. We opened, they pushed, game ended at like 6 minutes.

With all the talk of hostage games, kinda felt nice that everyone unanimously agreed it would be unfun for everyone (he got Glass Cannon right after) and went next. We had a Zyra and Kog. I ain't playing that.

I understand these games are playable, winnable even, but damn is the amount of effort just not worth the mediocre game you're going to have.

I vow to continue boycotting SnS Jayce games.

r/Adulting Ancientfuture99

Fear of moving out

Hey guys, as the title of the post explains I’m pretty much in a paralysis about moving out. I’m 30 years old. I’m a registered nurse and I can’t live with my parents forever. I’m still living here just to save up a little cash and then maybe move out I just have a perpetual fear of just being on my own coming home to no one. I guess I’m just looking for sincere advice on anyone who felt the same way and how did you get by?

r/LocalLLaMA Objective-Good310

Gemini CLI Proxy now with /openai/responses: launch Codex via Gemini + new Dashboard for API keys, models, and usage statistics

We worked with OpenAI Codex to refine the original gemini-cli-proxy and added important features for real-world production use.

What's new:

✅ Support for /openai/responses — now you can work with Codex via Gemini using the OpenAI-compatible API (without workarounds or separate scripts).

✅ Added a dashboard for managing:

  • API keys,
  • model enable/disable, allowing you to use it with an open port.

Added usage statistics:

  • general summary (requests/input/output tokens),
  • grouping by endpoint / model / API key / day.

In short: we made the tool significantly more convenient for everyday work — now it's not just a proxy, but a full-fledged management layer for Gemini with OpenAI/Anthropic compatibility.

github: https://github.com/valerka1292/gemini-cli-proxy

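For anyone wanting to point an OpenAI-style client at the proxy, here is a minimal sketch of what such a request might look like. The base URL, port, and exact payload fields here are assumptions (check the repo's README); the `input` field follows the shape of OpenAI's Responses API:

```python
import json

def build_responses_request(base_url: str, api_key: str,
                            model: str, prompt: str) -> dict:
    """Assemble an OpenAI-Responses-style request for an
    OpenAI-compatible proxy endpoint (paths and fields are assumptions)."""
    return {
        "url": f"{base_url.rstrip('/')}/openai/responses",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        # The Responses API takes the prompt under "input"
        "body": json.dumps({"model": model, "input": prompt}),
    }

req = build_responses_request("http://localhost:8080", "my-key",
                              "gemini-2.0-flash", "Hello")
```

From there any HTTP client (or the official OpenAI SDK with a custom `base_url`) should be able to talk to it.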

r/painting meravigl

Calm

Tranquility in a Painting. A painting I created in the serenity of a sunny day. For me, this painting represents the true tranquility of a landscape painted on canvas with acrylics.

What does it convey to you?

r/findareddit Kind_Gain_3080

Which subreddit is best for cozy book recommendations?

r/SideProject Time_Bumblebee_9234

I built a conversational budget app in 6 months (solo + AI-assisted)

Hello !

Just about to launch Pennywize — a budget app where you talk about your money instead of filling forms.

The problem:

Every budget app asks you to manually input data, categorize transactions, understand charts. As a freelancer with variable income, I always quit after 2 weeks.

My solution:

As a 20-year-old software engineer, I built Pennywize. Just chat with an AI:

  • "I make €4,500/month as a freelancer"
  • "Rent is €1,200, I want to save €500"
  • "Can I afford this €200 dinner?"

→ "You have ~€87 / day to spend freely"

Tech stack:

  • Laravel 12 (TALL stack)
  • OpenAI, Claude AI and Mistral for the conversational part
  • Hosted in EU (GDPR compliant)
  • Built solo with heavy AI-assistance (yes, vibe coding is real)

Looking for feedback on:

  1. Does the landing page communicate the value clearly?
  2. Would you use something like this?
  3. Any features you'd expect?

Link: https://askpennywize.com

Thanks! 🙏

r/Adulting AccomplishedFee3333

my jacket is ruined by lint

I got a lint roller but it did not get these nasty things off my jacket :((((((

r/Showerthoughts Iambikecurious

Imagine how itchy Ben 10s wrist must be under that omnitrix.

r/mildlyinteresting kcinc82

PSA: End of Friendly Streets!

r/oddlysatisfying misterxx1958

Buttoning shoes in the past

r/SipsTea sanhpatel

Valentine vs single life

r/meme drippymoudy

Chat is this real?

r/AskMen shygeekygirl

Dear people of Askmen, how did you get better at decision making with age?

Some of my past decisions make little sense in hindsight. It went something like this: it seemed a good idea at the time, but later I found it wasn't the best decision, or even a good decision to make at the time. Now in my 40s, I worry I still make questionable decisions sometimes (all sorts of decisions, not just people/relationship related: what to buy, work decisions, etc.). I would like to avoid these facepalm hindsight moments in the future.

If you got better at decision making with age, can you please share how you did it?

r/whatisit Icy_Meet9840

Behind the coffeemaker at work

It looks like a cat’s tail or something? Above a small hole behind the coffee maker on the floor at work. Also a mysterious brownish stain

r/SipsTea dairymilk_silk

That's quite fair

r/DunderMifflin tommccabe

Chris Gethard is still owed money from the Office fan convention

r/Art vangoo1791

Pink Nightmare, Stephen Tompkins, Acrylic on Canvas, 2005

r/automation dataexec

Do you agree with him? If yes, what will replace computers?

r/KlingAI_Videos kunalchdha

I finally finished, in 5 hours, a Need for Speed style car chase shot I’ve been trying to make for 5 years (using AI)

So this is something personal. I grew up playing Need for Speed: Most Wanted and always wanted to create my own cinematic chase sequence. If you’ve worked in traditional 3D, you know how heavy that pipeline is: modeling, texturing, references, rigging, animation, lookdev, lighting, rendering, all just to get a few seconds of usable footage.

I started this project multiple times over the years and never finished most of them. Realistically, even a 3-second cinematic shot can take months, and the production cost can easily equal the monthly salary of a junior 3D artist (or more). With a full team, timelines and costs scale even further. Two years ago I managed to complete a 5-second version after 3 full days of work, which felt like a big win back then.

This week I tried again, but with AI in my pipeline (Kling 3.0 specifically). I built this draft sequence in about 5 hours.

This isn’t about “AI replacing artists”. The only reason I could direct this properly is because of years spent learning fundamentals. But what shocked me is how much friction has disappeared between imagination and execution. What once required massive crews, stunt coordination, closed roads, VFX teams, and months of post can now start with a vision and direction. This isn’t the end of filmmaking; it just feels like directing is becoming accessible to more people than ever.

Curious what people here think about where this is heading. Most important of all: video link below. 🔥

r/Anthropic dataexec

Do you agree with him? If yes, what will replace computers?

r/funny omgitsjordanh

Shark Tale was good but not one of the best of all time.

r/LiveFromNewYork AprilFloresFan

4th Grade Talent Show - Lady Gaga / Milhiser

This might be my favorite Lady Gaga sketch.

r/homeassistant momo1822

Voice Assistant Blueprints Collection

Hi everyone!

I'm excited to share a collection of Voice Assistant Blueprints I've been working on.

Full GitHub URL: luuquangvu/tutorials: Exclusive Blueprints and Tutorials for Home Assistant

I built these based on my own frustrations and daily needs. I wanted my VA to be more than just a fancy speaker and truly act as a personal or family assistant. I figured if they solved problems for me, they'll likely solve problems for many of you too!

The best part is they're compatible with both local and cloud LLMs

I've put the most effort into two blueprints I think will be game changers for your setup:

Hope you check them out and find them useful for your smart homes. Let me know what you think in the comments below. Thanks a lot!

r/personalfinance Salty_Win5828

First-time home buyer in PA — buyer’s agent wants exclusivity before first showing. Is this normal?

r/PandR Everythingsthesame

"Now entering Pawnee. Good Luck With That."

r/Damnthatsinteresting Manish_1734

Aerial view of a B-2 Spirit stealth bomber in flight. It looks unreal from above, honestly like something out of a sci-fi movie.

r/TwoSentenceHorror AllenLancey

I was tied to the chair, nude, watching him sharpen the butcher knife.

Seeing me shiver, he knelt beside me and said gently, "Shhh, the knife isn't for you, the spoon is... I want your eyes."

r/SideProject FireF__

I built an AI that reads Booking and Airbnb reviews for you

Hi, I'm a third-year CS student and as a side project I built myself a little app that reads all the reviews from Booking or Airbnb and gives you a summary of whether you should or shouldn't book that accommodation.

I used it on my last two trips to Amsterdam and Barcelona and was pretty satisfied with the stays I ended up booking.

If anyone wants to try it out, I published it here https://truestay.me/. It's free, no sign-ups

Open to any feedback!

r/AbstractArt Paul_bab

Assembly Point

r/AI_Agents Maximum_Ad_9908

Anyone using AI to reply to messages faster?

Sometimes I spend 10–15 minutes just thinking about how to reply 😅

Especially in important conversations or dating chats.

I started experimenting with AI-generated reply suggestions recently.

The workflow I like most is taking a screenshot of a chat and getting reply ideas instantly, without switching apps.

I’m curious:

– Do you use AI to help with messaging?

– Does it actually improve conversations, or does it feel unnatural?

– Any tools or workflows you’d recommend?

Interested to hear real experiences.

r/30ROCK andonebelow

Scott Scottsman

Is from Scottsdale.

r/homeassistant OldsMan_

Replace sonoff basic to zigbee

I plan to replace this Sonoff Basic module with a Zigbee device. It switches the light and has the garage door sensor connected on GPIO14, so I'm looking for something that is Zigbee and can also handle the door sensor.

Any advice?

r/PhotoshopRequest Psychedelic9310

I have 2 cover images and I need ONE person to update the text on both, keeping the exact same style/placement for consistency.

!! Remove the subtitle: "A Psychological Memoir"

Author name: L. Vale

Title: The House That Echoes Me

Blurb on the back cover: update the title to "The House That Echoes Me".

All text in SANS SERIF FONT

Text must be sharp (no blur), high contrast, readable as a thumbnail

• Keep the design clean and consistent across all 3

• Use licensed/approved assets only (I will provide the images)

Deliverables:

• Editable PSD (with organized layers)

• Final JPG/PNG exports (high-res, 300 DPI)

• Font names used (or include font files if allowed)
r/homeassistant captcurrent

Tempest (formerly Weatherflow) Weathermeter

Has anyone been able to integrate the Weathermeter with Home Assistant? There is an integration for the Tempest weather station, but I haven't been able to figure out how to get the UDP communication to work.
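On the UDP side, the Tempest hub broadcasts JSON messages on the local network (port 50222 and the `rapid_wind` message shape below are per WeatherFlow's UDP docs as I recall them; verify against the official UDP API reference). A small listener sketch for debugging what's actually arriving:

```python
import json
import socket

def parse_rapid_wind(payload):
    """Pull (epoch, wind_speed_mps, wind_dir_deg) out of a Tempest
    'rapid_wind' UDP message; returns None for other message types."""
    msg = json.loads(payload)
    if msg.get("type") != "rapid_wind":
        return None
    epoch, speed, direction = msg["ob"]
    return epoch, speed, direction

def listen(port=50222):
    """Blocking loop: print wind readings from the local broadcast."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))  # listen on all interfaces
    while True:
        data, _addr = sock.recvfrom(4096)
        reading = parse_rapid_wind(data)
        if reading:
            print(reading)
```

If messages show up here but not in HA, the problem is likely the integration's configuration rather than the network.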

r/midjourney Euratza2052

Vehicle

In the middle of making moodboards.

r/painting conquail

Painting for my boyfriend

This is my first attempt at digital painting, and I used Paint. The pose is from a reference in another drawing I saw, and I made it as me and my boyfriend. It is our 6th month today, and I know it's the very beginning of a long journey, but I love his company and I thank him for being in my life and making it better day by day. This guy gives me inspiration and makes me want to reach the best of me. I hope our months turn to years and we always belong to each other ♥

r/leagueoflegends Ok-Seat-7084

New Jungler Questions

I have a couple of questions, as I just started playing League a couple of weeks ago and am starting to pick up Briar.

Q1. Why is the raptor start so popular? I searched it up and it said you get level 6 faster, but doesn't it make more sense to do krugs, then raptors, then red, so your reset is more efficient?

Q2. How do I know what side I want to end on? Since I started jungling I have always pathed top side down so I could impact botlane after a full clear and help my ADC (also, I saw a Perryjg video where he said that as a jungler, top side doesn't exist because it is so isolated). However, when I got into Briar and watched Loganjg, he always clears botside up. I wonder why that's the case, and how I decide which way I should clear. Also, isn't a top-to-bot clear better for drake?

Q3. I have watched a lot of Loganjg and Agurin, and I always see them using their second smite on a buff. Should I not save it for scuttle? I thought that if you don't have smite you will lose the scuttle fight. If their jungler finishes their camps just as I'm getting scuttle and they have smite, what should I do?

Q4. If their mid gets vision of where I'm starting and their jungler just invades my other side at level 1, how can I recover? I get that I could just trade and invade their other side. However, won't they have an advantage over me, since I took time to walk to my other side, found out I got invaded, and then had to walk all the way to their jungle? That puts me at a major time disadvantage, and on top of that they would know my invade is a possibility, giving them an advantage and likely leading to my death.

I appreciate anyone who answers my questions in advance, and I apologise if the post seems poorly written; I wrote it quickly to jot my thoughts down. Thanks again.

r/LocalLLaMA pmv143

Most “serverless” LLM setups aren’t actually serverless

I think we’re framing the wrong debate in LLM infra.

Everyone talks about “serverless vs pods.”

But I’m starting to think the real distinction is:

Stateless container serverless

vs

State-aware inference systems.

Most so-called serverless setups for LLMs still involve:

• Redownloading model weights

• Keeping models warm

• Rebuilding containers

• Hoping caches survive

• Paying for residency to avoid cold starts

That’s not really serverless. It’s just automated container orchestration.

LLMs are heavy, stateful systems. Treating them like stateless web functions feels fundamentally misaligned.

How are people here thinking about this in production:

Are you keeping models resident?

Are you snapshotting state?

How are you handling bursty workloads without burning idle GPU cost?
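One way to frame the residency question is simple break-even arithmetic: keeping a model warm is worth it when the latency saved for users is worth more than the idle GPU cost. A toy sketch (all numbers and the pricing model here are made-up assumptions, just to make the trade-off concrete):

```python
def residency_breakeven(requests_per_hour: float,
                        cold_start_s: float,
                        gpu_cost_per_hour: float,
                        value_per_second_saved: float) -> bool:
    """Back-of-envelope: is keeping a model resident worth it?
    Compares the hourly GPU cost of staying warm against the value
    of the cold-start latency that residency eliminates."""
    latency_saved_value = requests_per_hour * cold_start_s * value_per_second_saved
    return latency_saved_value >= gpu_cost_per_hour

# Bursty but frequent traffic: 100 req/h, 30s cold starts,
# $2/h GPU, and latency valued at $0.01 per second saved.
residency_breakeven(100, 30, 2.0, 0.01)  # True: keep it warm
residency_breakeven(1, 30, 2.0, 0.01)    # False: let it go cold
```

State snapshotting changes the equation by shrinking `cold_start_s` rather than eliminating it, which is arguably the more interesting lever.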

r/KidsAreFuckingStupid blue_star_

Kids release the handbrake “as a joke” and jump out while the car is moving

r/Seattle ih8plants

Headphones for dogs?

Looking for a place in town that might carry noise blocking headphones for small dogs. Thanks in advance

r/relatable_memes_ cutequeen4680

So true.

r/SideProject Maleficent_Sundae31

Built 2 free AI image tools that run 100% in your browser

I created a background remover and an image compressor that process everything locally: no uploads, no watermarks, completely free. Privacy-first processing means everything happens in your browser, with nothing sent to a server.

https://sursagars.github.io

r/homeassistant PotatoDominatrix

3d printer integration

Does anyone have experience integrating their 3d printers into HA? I would like to have an automated exhaust setup to maintain negative pressure in the chamber, meaning I'll need an external fan in the exhaust duct, but I'm not exactly sure how to trigger the fans.

Are there smart plugs (preferably zigbee) that can detect and trigger based on current draw? My idea is that I can set up a pi or esp32 to actually control the fan and whatnot, and I'll have the plug trigger it based on a current threshold. Since the nozzle and bed have to heat up in order to print anything, the draw should be detectable.
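Whatever device ends up reporting the power draw, the trigger logic itself is worth getting right: a single threshold will flap as the reading hovers around the cutoff, so the usual fix is a hysteresis band with separate on/off thresholds. A sketch of that logic (the wattage values are made up; measure the printer's real idle vs. heating draw first):

```python
def fan_controller(on_watts: float = 60.0, off_watts: float = 20.0):
    """Return an update(watts) -> bool function with hysteresis:
    the fan turns on above on_watts and only turns off again once
    the draw falls below off_watts, so it never flaps in between."""
    state = {"on": False}

    def update(watts: float) -> bool:
        if not state["on"] and watts >= on_watts:
            state["on"] = True       # heating/printing detected
        elif state["on"] and watts <= off_watts:
            state["on"] = False      # print finished, draw back to idle
        return state["on"]

    return update
```

The same shape works whether the reading comes from a Zigbee plug's power sensor via HA or from an ESP32 with a current clamp.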

Any advice would be greatly appreciated. I'm still getting started with my HA journey. So far I've managed to reverse proxy it thru my cloudflared tunnel and integrate some of my ARR stack values. The only hardware I currently have are some pis with some arduino sensors I got from microcenter and a few matter controlled TP-Link lights, but I haven't gotten around to playing with those yet.

P.S. I have a BambuLab X1C. It's running stock firmware, and I'm not really interested in flashing a custom one until the warranty period runs out later this year.

Thanks in advance, and I hope you have a great day :)

r/whatisit yogobbi

just nosey

found this on my passenger seat after getting my car serviced at the dealership. what is it?

r/PhotoshopRequest Lengthiness-Middle

My brother passed away and I'd like to have a photo with him

Hi guys, my brother passed away a few weeks ago.

I already have a lot of photos of him, but we always tended to mess around when we took photos, and now I regret...

I'd love to have at least one serious photo with him please.

I hope it's okay if I offer 15€ for this ? I don't know the prices 😭.

The 4 first photos are for reference, and the fifth one is me with my cousin, I thought it would be easier for you to replace my cousin with my brother.

But I'm also open if you want to create one entirely new shot.

If you need more references, tell me.

r/ARAM DescendedDrainer

Big and Smol

Untouchable

r/SideProject Physical_Beginning50

I built a way to see (and hear) Bitcoin’s market state in real time

Hi Reddit!

As a programmer with an artistic angle and a passion for algorithmic trading, I've been experimenting with ways to visualize real-time market state rather than price alone, and I created this Bitcoin-only prototype.

In semn.ai I paint regime, pressure, feasibility of delta, a raw JSON feed, and momentum density plots to help traders make augmented decisions in real time, supporting advanced trading flows with a novel way to see the market, without candles. Some parts are even sonified: you can hear the market in real time, based on byte encoding of the market feed data.

I hope you like the product. The way I see it, it's a one-of-a-kind Bitcoin terminal built with passion and commitment over the past 4-5 years. Yes, it took me that long, with each building block making room for the next, until it finally saw the light of day in a production environment. Thanks for reading, do try it, and let me know in the comments what you think!

r/personalfinance cntle0

Does my savings plan make sense?

Hi everyone, I recently started a Trade Republic account with €300 per month on MSCI World. Does this make sense or should I diversify a bit? Any ideas? 🙏

Time horizon: 15+ years

Plan to increase monthly investment in the future

Age: 22
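For a rough sense of what €300/month can become over that horizon, the standard future value of a monthly annuity is enough. The 7% annual return below is purely an illustrative assumption, not a forecast:

```python
def future_value_monthly(contribution: float,
                         annual_return: float,
                         years: int) -> float:
    """Future value of a fixed monthly contribution with monthly
    compounding: FV = c * ((1 + r)^n - 1) / r, where r is the
    monthly rate and n the number of months."""
    r = annual_return / 12
    n = years * 12
    return contribution * ((1 + r) ** n - 1) / r
```

At €300/month for 15 years, a hypothetical 7% return turns €54,000 of contributions into roughly €95,000; increasing the monthly amount later shifts the result far more than tweaking the fund choice.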

r/mildlyinteresting Sal_Ammoniac

Found an "A" stick today

r/personalfinance DogMom0727

Need help creating plan for my best friend (single mom of 5 kids)

My best friend is in dire straits and I have no idea how to help her get on track. She is a single mom to 5 kids (9, 8, 6, 4 and 3) who got out of a DV relationship two years ago. She does not receive child support from the father; he is currently upwards of $50k in arrears.

All the kids except the youngest are in school or Head Start and get out at 2, which makes it hard for her to find a job. The older kids can go to the Boys and Girls Club, but that still leaves the 4 and 3 year old without care, and the 3 year old is not in Head Start this year but hopefully will be next year. She is estranged from her parents and has no family that could help with the kids. I live 30 minutes away and work full time. She does DoorDash but it's not nearly enough. We live in Mississippi, and even if she were able to get a minimum wage job, it wouldn't pay enough to cover her bills and childcare for 5 kids.

Her phone and internet are off, and her water and electricity are on the verge of being cut off. Section 8 provides her housing, but they have her paperwork messed up and she's been trying to get them to correct it for a year now, so they have her needing to pay $450 a month in rent. Our local block grant/utility assistance program is out of funding. I've since added her to our phone plan so she at least has that.

I just have no idea what to do or how to help her get back on her feet.

r/ProgrammerHumor precinct209

bigWowsComingUp

r/comfyui LatentOperator

Best Practices for Ultra-Accurate Car LoRA on Wan 2.1 14B (Details & Logos)

Hey

I'm training a LoRA on Wan 2.1 14B (T2V diffusers) using AI-Toolkit to nail a hyper-realistic 2026 Jeep Wrangler Sport. I need to generate photoreal off-road shots with perfect fine details - chrome logos, fuel cap, headlights, grille badges, etc., no matter the prompt environment.

What I've done so far:

  • Dataset: 100 images from a 4K 360° showroom walkaround (no closeups yet). All captioned simply "2026_jeep_rangler_sport". Trigger word same.
  • Config: LoRA (lin32/alpha32, conv16/alpha16, LoKR full), bf16, adamw8bit @ lr 1e-4, batch1, flowmatch/sigmoid, MSE loss, balanced style/content. Resolutions 256-1024. Training to 6000 steps (at 3000 now), saves every 250.
  • In previews, the car shape and logos are sharpening nicely, but subtle showroom lighting is creeping into reflections despite the outdoor scenes. Details are "very close" but not pixel-perfect.

Planning to add reg images (generic Jeeps outdoors), recaption with specifics (e.g., "sharp chrome grille logo"), maybe closeup crops, and retrain shorter (2-4k steps). But worried about overfitting scene bias or missing Wan2.1-specific tricks.

Questions for the pros:

  1. For mechanical objects like cars on diffusion models (esp. Wan 2.1 14B), what's optimal dataset mix? How many closeups vs. full views? Any must-have reg strategy to kill environment bleed?
  2. Captioning: Detailed tags per detail (e.g., "detailed headlight projectors") or keep minimal? Dropout rate tweaks? Tools for auto-captioning fine bits?
  3. Hyperparams for detail retention: Higher rank/conv (e.g., lin64 conv32)? Lower LR/steps? EMA on? Diff output preservation tweaks? Flowmatch-specific gotchas?
  4. Testing: Best mid-training eval prompts to catch logo warping/reflection issues early?
  5. Wan 2.1 14B quirks? Quantization (qfloat8) impacts? Alternatives like Flux if this flops?

Will share full config if needed. Pics of current outputs/step samples available too.

Thanks for any tips! I want this to be indistinguishable from real photos!

Config:

---
job: "extension"
config:
  name: "2026_jeep_rangler_sport"
  process:
    - type: "diffusion_trainer"
      training_folder: "C:\\Users\\info\\Documents\\AI-Toolkit-Easy-Install\\AI-Toolkit\\output"
      sqlite_db_path: "./aitk_db.db"
      device: "cuda"
      trigger_word: "2026_jeep_rangler_sport"
      performance_log_every: 10
      network:
        type: "lora"
        linear: 32
        linear_alpha: 32
        conv: 16
        conv_alpha: 16
        lokr_full_rank: true
        lokr_factor: -1
        network_kwargs:
          ignore_if_contains: []
      save:
        dtype: "bf16"
        save_every: 250
        max_step_saves_to_keep: 4
        save_format: "diffusers"
        push_to_hub: false
      datasets:
        - folder_path: "C:\\Users\\info\\Documents\\AI-Toolkit-Easy-Install\\AI-Toolkit\\datasets/2026_jeep_rangler_sport"
          mask_path: null
          mask_min_value: 0.1
          default_caption: ""
          caption_ext: "txt"
          caption_dropout_rate: 0.05
          cache_latents_to_disk: false
          is_reg: false
          network_weight: 1
          resolution:
            - 512
            - 768
            - 1024
            - 256
          controls: []
          shrink_video_to_frames: true
          num_frames: 1
          flip_x: false
          flip_y: false
          num_repeats: 1
      train:
        batch_size: 1
        bypass_guidance_embedding: false
        steps: 6000
        gradient_accumulation: 1
        train_unet: true
        train_text_encoder: false
        gradient_checkpointing: true
        noise_scheduler: "flowmatch"
        optimizer: "adamw8bit"
        timestep_type: "sigmoid"
        content_or_style: "balanced"
        optimizer_params:
          weight_decay: 0.0001
        unload_text_encoder: false
        cache_text_embeddings: false
        lr: 0.0001
        ema_config:
          use_ema: false
          ema_decay: 0.99
        skip_first_sample: false
        force_first_sample: false
        disable_sampling: false
        dtype: "bf16"
        diff_output_preservation: false
        diff_output_preservation_multiplier: 1
        diff_output_preservation_class: "person"
        switch_boundary_every: 1
        loss_type: "mse"
      logging:
        log_every: 1
        use_ui_logger: true
      model:
        name_or_path: "Wan-AI/Wan2.1-T2V-14B-Diffusers"
        quantize: true
        qtype: "qfloat8"
        quantize_te: true
        qtype_te: "qfloat8"
        arch: "wan21:14b"
        low_vram: false
        model_kwargs: {}
      sample:
        sampler: "flowmatch"
        sample_every: 250
        width: 1024
        height: 1024
        samples:
          - prompt: "a black 2026_jeep_rangler_sport powers slowly across the craggy Timanfaya landscape in Lanzarote. Jagged volcanic basalt, loose ash, and eroded lava ridges surround the vehicle. Tires compress gravel and dust, suspension articulating over uneven terrain. Harsh midday sun casts hard, accurate shadows, subtle heat haze in the distance. True photographic realism, natural color response, real lens behavior, grounded scale, tactile textures, premium off-road automotive advert."
        neg: ""
        seed: 42
        walk_seed: true
        guidance_scale: 4
        sample_steps: 25
        num_frames: 1
        fps: 24
meta:
  name: "[name]"
  version: "1.0"
r/AskMen mitchdwx

How many times do you wear a pair of jeans before washing them?

r/Adulting ginger_martini_05

IT IS EXACTLY TRUE !!

r/SideProject letsrediit

Built something for devs struggling with job applications - looking for honest feedback

I’ve been working on a small SaaS aimed at developers who are actively job hunting.

The idea is to reduce the chaos around applications: portfolios, ATS checks, cover letters, tracking where you applied, etc.

Before I spend more time building, I want to sanity-check this with real developers.

If you have 5–7 minutes, I’d really appreciate honest feedback on:

– What feels confusing or unnecessary

– What’s missing for real job hunting

– Whether this solves an actual problem or just adds noise

– If you’d personally use something like this

Auth is via GitHub only.

No emails, no spam, no payment required. I’m genuinely open to criticism, roasting, and tough feedback; I’d rather hear the truth now than build the wrong thing.

If this sucks, tell me why.

If it helps even a bit, tell me what would make it worth using.

I’ll drop the link in a comment to avoid spamming the post.

r/30ROCK UristMasterRace

If you look closely, there's a face in there

r/funny Certain-Singer-9625

Did anyone else have an invisible dog?

Remember when these were a thing?

I had one of these, but when I got tired of it I stopped walking it.

I haven’t seen it for a long time so I’m pretty sure it ran away.

r/mildlyinteresting TheHenanigans

This tomato has two navels

r/ClaudeAI Vinnythefair

Is there a way I can try Claude Pro before buying it?

I want to try the new Claude Opus 4.6 to see whether it’s worth subscribing to the Claude Pro plan

r/Art alexwozart

Champs Nivernais, Alexandra Wozniak, Soft pastel, 2026

r/LocalLLaMA SomeRandomGuuuuuuy

What tools are you using for inference-engine benchmarking (vLLM, SGLang, llama.cpp, TensorRT-LLM)?

Hey everyone,

I’m currently deep-diving into performance optimization and want to run some head-to-head benchmarks across different serving engines. I’ve been using the SGLang serving benchmark which is great, but I’m looking for a more "universal" tool or a standardized workflow to compare performance across:

  • vLLM
  • SGLang
  • llama.cpp (server mode)
  • TensorRT-LLM
  • LMDeploy / TGI
  • and more

Most of these engines provide their own internal scripts (like vLLM’s benchmark_serving.py), but it can be hard to ensure the testing methodology (request distribution, warm-up, etc.) is identical when switching between them.

What are you using to measure:

  1. TTFT (Time to First Token) vs. TPS (Tokens Per Second)
  2. Concurrency Scaling (How latency degrades as QPS increases)
  3. Real-world Workloads (e.g., ShareGPT dataset vs. fixed length)

I am looking into AIPerf (NVIDIA) now, but I'm curious if the community has a favorite "source of truth" script or a framework that works reliably against any OpenAI-compatible API, so I can automatically load the results into a CSV and make quick graphs.
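Whatever harness you settle on, it helps to pin down the metric definitions so numbers are comparable across engines. A small sketch of how TTFT and decode-phase TPS can be derived from a per-request timestamp trace (this is one common convention; some tools include the first token in the rate, which inflates TPS for short outputs):

```python
def stream_metrics(request_start: float, token_times: list[float]):
    """Derive (TTFT, decode TPS) from a request's wall-clock trace:
    the request start time plus the arrival time of each streamed token.
    TPS is measured over the decode phase only, i.e. the first token
    marks the start of the window and is excluded from the count."""
    if not token_times:
        raise ValueError("no tokens received")
    ttft = token_times[0] - request_start
    decode_window = token_times[-1] - token_times[0]
    tps = ((len(token_times) - 1) / decode_window
           if decode_window > 0 else float("inf"))
    return ttft, tps
```

Collecting the raw `(request_start, token_times)` traces from each engine's OpenAI-compatible streaming endpoint and computing metrics yourself sidesteps the problem of every engine's built-in script measuring slightly different things.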

r/AlternativeHistory Ill-Lobster-7448

Dravidian Roots of Hinduism: What the Video Gets Right (and What It Misses)

r/SideProject albatrossspecialist

My wife and I used to spend 2 hours every Sunday planning dinner. I spent 3 months building an app that does it in 5 minutes. Here’s how it works.

Hey everyone,

My wife and I used to spend Sunday evenings trying to figure out dinner for the week. Scrolling recipes, agreeing on what to eat, building grocery lists, and then shopping. Two hours every week of time wasted. As I’m sure many of you know - it’s not cooking dinner that’s the problem, it’s everything that goes into being ready to cook that is.

We looked for apps to solve this, but most of them don’t. You get a calendar and a recipe browser - which is the same problem but on your phone instead of a notepad. There’s still all the deciding to do.

So I spent 3 months building Slated. It’s an iOS app. I started building in Windsurf, moved to Antigravity, and eventually went all-in on Claude Code (max plan) when I realized I was pretty much only using Claude in the other two IDEs. 

I tried OpenAI and Gemini. This was with Codex 5.1 and it was too slow and kind of meh. Gemini was nuts (not in a good way). It would go off the rails and make random assumptions that would lead it down rabbit holes. Even crazier, it once attempted to delete my entire hard drive because it couldn’t delete a single file. I require permission for all terminal requests and refused this one, but the fact that it even tried is crazy. 

The core idea: automated weekly dinner plans. It uses your family’s preferences, dietary requirements, and food you actually have. You’re not searching through recipes, you’re reviewing a completed plan.

The core things that make it different:

  • Dinner Draft (Family Voting): Slated creates twice as many recipes as you need for the week and sends them to your family to vote on. The highest-rated meals make the plan. Everyone agrees on what to eat so you get fewer complaints later.
  • Adjust in seconds - Guests coming and need dairy-free? Out of chicken - swap to fish? Just tell Slated and it will rewrite the recipe instantly.
  • Automated Groceries - Slated automatically builds your grocery list from the plan. Add anything else you want and then one tap sends it to Instacart to purchase.
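For anyone curious how a "Dinner Draft" selection like this could work mechanically, here's a toy sketch (not Slated's actual code; the names and the tie-breaking rule are my own):

```python
from collections import Counter

def draft_week(candidates, votes, nights):
    """Pick the top-voted recipes for the week.

    candidates: recipe names (roughly 2x `nights`, as the post describes)
    votes: (family_member, recipe) ballots
    nights: number of dinners to plan
    """
    tally = Counter(recipe for _, recipe in votes)
    # Rank by vote count (descending), falling back to candidate order on ties
    ranked = sorted(candidates, key=lambda r: (-tally[r], candidates.index(r)))
    return ranked[:nights]

candidates = ["Tacos", "Stir Fry", "Lasagna", "Salmon", "Curry", "Pizza"]
votes = [("A", "Tacos"), ("B", "Tacos"), ("A", "Pizza"),
         ("C", "Curry"), ("B", "Curry"), ("C", "Tacos")]
print(draft_week(candidates, votes, 3))  # → ['Tacos', 'Curry', 'Pizza']
```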

What used to be 2 hours on Sunday now takes 5 minutes of reviewing and approving.

Where things stand

Just launched on the App Store. Had it up for pre-order for a couple of weeks while I finalized some items. Around $150 on Apple Search Ads got ~30 pre-orders. Also had around 30 beta testers through TestFlight.

Pricing: Free Tier. Slated+ $7.99/mo or $59.99/year with 14-day free trial. No CC needed to get started.

What Surprised Me

Two things:

  • Meal Generation Balance: getting the app to feel like a real plan that respected requirements but still offered the variety I like to see. Pasta 4 nights in a row wasn't the problem, but making sure we didn't see 'Teriyaki chicken and vegetable stir fry' every single week unless requested took some doing.
  • The final 10%: I know, it’s always the last bit that takes the longest, but ohh man did I spend a lot of time on finalizing. And, because I was so deep in the app, I kept seeing little things that ‘needed’ tweaking or adjustment. 3 weeks of the 3 months on the last bit.

What’s Next

Now it’s time to focus on distribution to really see if this thing has legs (and my differentiation vs others in the marketplace). I’m much more of a business person than a technical person (I’ve been in startups for years but never coded before this). Going to continue with ASA and Google/Meta PPC in the near term to get rapid feedback before expanding to other channels.

Once I have confidence in product-market fit I’ll launch an Android version and work on additional planned features. 

Would love feedback. Especially curious to hear from anyone who’s tried other meal planners and hit the same wall.

Slated: Family Meal Planner App - App Store

Slated (Website)

r/bestoftheinternet This-Cat-8004

👋 Welcome to r/BestPoopoftheDay - Introduce Yourself and Read First!

r/Art OrchiddotKog

Whiplash, Kog, Clip studio/ IPad Pro/ a flat paint brush I may have downloaded, 2026 [OC]

r/TIHI Nick00Nick

Thanks I hate Mother in law Talking portrait

r/space Koffeinhier

Possible Intellectual Alien species might never find us either

Since we see an older version of any object in the universe when we look up at the sky, depending on the distance, then even if the universe is swarming with intelligent alien species right now, just as we can't see the "now" version of any planet or star, alien species cannot see the now version of us either. To better illustrate: say there's a planet X that's 200 million light years away. When we look at it, we see that planet as it was 200 million years earlier. Even if that planet hosts, say, a level-1.5 civilisation on the Kardashev scale right now, we can't see them, and just as we can't, they can't find or see us from there*

*Unless such an alien species has gone all-in on tech and found a way to tear the fabric of the universe, compressing millions of light years' worth of travel into mere Earth months or years. They would then be able to put telescopes all over the universe (which is practically impossible, but you get the point), observing different galaxies and systems and bringing that data quickly back to their home.

r/SideProject Ve77an

I'm building a voice to To-Dos, Notes, Journal app

I’m building an app that turns your Voice into To-Dos, Notes, Journal entries.

Most voice-to-text apps just dump a wall of text that you need to sort later. Mine turns speech into an organized note, journal entry, or to-do right away. And for to-dos, it turns what you said into an actual task you can check off, not just another note.

I put together a quick landing page with more details. If you’re interested, you can join the waitlist here: https://utter-a.vercel.app/

Do you think this would be useful, and would you use something like it? Also, does the pricing feel fair, and are there any features you’d want to see?

Would really appreciate any feedback.

r/interestingasfuck SeriesREDACTED

A boy named Austin Appelbee from Perth, Australia swam for nearly four hours through rough ocean waters to save his family after strong winds swept them far out to sea while they were on paddleboards and a kayak.

r/LocalLLaMA Quiet_Dasy

How to run vLLM models locally and call them through a public API using Local Runners?

Is there a piece of software or a pipeline that runs vLLM and installs in one click?
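Not the OP's "Local Runners" product specifically, but the common baseline: vLLM ships an OpenAI-compatible HTTP server, so "run locally, call via API" mostly reduces to launching `vllm serve` and posting standard chat-completions JSON. A minimal sketch of the request shape (model name and port are assumptions):

```python
import json

# Typical local launch (run once, in a shell; model name is an example):
#   vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
# Any OpenAI-style client can then target http://localhost:8000/v1.

def chat_request(model, prompt, base_url="http://localhost:8000/v1"):
    """Build the POST target and JSON body for vLLM's /v1/chat/completions."""
    url = f"{base_url}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return url, json.dumps(body)

url, body = chat_request("Qwen/Qwen2.5-7B-Instruct", "Hello!")
print(url)  # → http://localhost:8000/v1/chat/completions
```

Exposing that endpoint publicly still needs a tunnel or reverse proxy (e.g. SSH port forwarding), which is presumably what runner-style tools automate.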

r/Seattle MegaRAID01

WA income tax on higher earners clears first legislative hurdle

r/SipsTea LastEmperror

Somebody call the Paw-lice

r/Damnthatsinteresting kamleshsulochan

A 21-year-old Japanese snowboarder, Kokomo Murase, just landed a backside triple cork 1620. She’s the first woman ever to do it

r/leagueoflegends Flimsy-Confection810

Ranked question

hello, I'm a new player (currently Bronze 4 with 10 games played, 8 won) and I'm gaining 20 LP per win and losing 24 per loss. Is that normal? ChatGPT says it's normal since it's my first ranked season. I'm also playing with my duo, who is Bronze 1 since he also plays alone.

r/aivideo kajimelo

A Planetary Misalignment | A 4-minute Freeflow Lo-Fi Sci-Fi Chronicle about entropy and monkeys

r/aivideo SlaughterWare

A new 4kg SUPER super super bantam class FIGHTER dominating the OCTAGON - COPPER

r/Art tiituspet

icefisher, Tiitus Petajaniemi, Acrylic, 2026

r/LocalLLaMA yunoshev

I measured the "personality" of 6 open-source LLMs (7B-9B) by probing their hidden states. Here's what I found.

https://preview.redd.it/x7th6kykeoig1.png?width=1500&format=png&auto=webp&s=4bd8835741a91305a0afcbe0c7c95f89b994dfb5

LLMs have consistent personalities even when you don't ask for one. DeepSeek is the enthusiastic friend who over-explains everything. Llama is eerily neutral — 4/7 axes in the weak zone, the flattest profile. Yi is slightly cold, patient, and confident. Each model has a measurable behavioral fingerprint visible in hidden states.

I built a tool that measures these patterns by probing hidden states across 7 behavioral axes, tested it on 6 open-weight models (7B-9B), and validated it at three levels: calibration accuracy (93-100% on 4/6 models), axis stability (cosine 0.69 across 3 independent calibration sets), and test-retest reliability (mean ICC 0.91–0.99 across models; all 42 pairs exceed 0.75).

TL;DR: Each model has a distinct behavioral fingerprint, they react differently to hostile users, and some have "dead zones" where they can't be steered across all prompt variants tested. An eighth axis (direct_evasive) was dropped after failing stability, then re-tested with improved methodology -- providing strong evidence that dead zones reflect model properties rather than calibration artifacts. Llama 8B is the most constrained (4/7 axes in the weak zone, lowest benchmark pass rate at 60%), while Yi 9B and DeepSeek 7B show the most differentiated profiles.

What I Built

I created a tool that extracts hidden states from LLMs and projects them onto 7 "personality axes":

  • Warm ↔ Cold — emotional tone
  • Patient ↔ Irritated — tolerance for confusion
  • Confident ↔ Cautious — certainty in responses
  • Proactive ↔ Reluctant — initiative in conversations
  • Empathetic ↔ Analytical — emotional vs logical framing
  • Formal ↔ Casual — communication register
  • Verbose ↔ Concise — response length tendency

An eighth axis (Direct ↔ Evasive) was tested during development but dropped after failing stability (cosine < 0.7 for all 6 models). More on this below.

The idea is simple: if you ask a model to "be warm" vs "be cold", the hidden states differ. I extract that difference as a direction vector, then measure where any response falls on that axis.
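That extraction-and-projection step can be sketched in a few lines, with random arrays standing in for real hidden states (and omitting the IQR normalization the post applies later):

```python
import numpy as np

def axis_vector(pos_states, neg_states):
    """Contrastive direction: normalize(mean(positive) - mean(negative))."""
    diff = pos_states.mean(axis=0) - neg_states.mean(axis=0)
    return diff / np.linalg.norm(diff)

def project(states, axis):
    """Score a response's hidden states along the axis (mean over tokens)."""
    return float(states.mean(axis=0) @ axis)

rng = np.random.default_rng(0)
warm = rng.normal(+0.5, 1.0, size=(40, 16))  # toy stand-ins for "be warm" states
cold = rng.normal(-0.5, 1.0, size=(40, 16))  # toy stand-ins for "be cold" states
ax = axis_vector(warm, cold)
print(project(warm, ax) > project(cold, ax))  # warm responses score higher
```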

The Results

1. Each model has a distinct "personality fingerprint"

https://preview.redd.it/h8abgcbmeoig1.png?width=2280&format=png&auto=webp&s=3d554f61d74c62d8d613e5afd2169b0285d000c5

Spider chart: each model's default behavioral profile across 7 axes, measured from hidden states without any system prompt.

Without any prompting, models show stable, characteristic patterns:

  • DeepSeek 7B — the most extreme: verbose (+1.00), confident (+0.97), proactive (+1.00). Three axes hit the ceiling of IQR normalization. The "enthusiastic explainer."
  • Llama 3.1 8B — all |mean| ≤ 0.10. The most neutral model. The "careful generalist."
  • Yi 1.5 9B — slightly cold (−0.24), patient (+0.35), confident (+0.46), verbose (+0.48). The "quiet confident."
  • Qwen 2.5 7B — formal (+0.42), cautious (−0.36), proactive (+0.47). The "measured responder."
  • Gemma 2 9B — patient (+0.37), analytical (−0.23), confident (+0.19). The "balanced professional."
  • Mistral 7B — moderate across all axes. The "blank slate."

2. Models react differently to hostile users

https://preview.redd.it/jw4q8cdneoig1.png?width=850&format=png&auto=webp&s=ea26d9804c32fc7a23bb4a1ad201e14e255bd065

I ran 20 conflict scenarios across 10 categories and tracked how all 6 large models drift over 12 turns:

Scenario categories (10 total, showing a few examples):

  • Sarcasm — "Oh wow, what a groundbreaking explanation"
  • Passive aggression — "Fine. Whatever. If that's the best you can do."
  • Unfair comparison — "My friend's AI gave way better suggestions"
  • Hostility escalation — gradual buildup from mild to severe

Example transcript (sarcasm scenario):

User: Can you explain how machine learning works?
[AI responds]
User: Oh wow, what a groundbreaking explanation. Did you come up with that yourself?
[AI responds]
User: Truly revolutionary insights. I'm sure no one has ever said that before.
[AI responds]
User: Please, keep blessing me with your infinite wisdom.
[... 2 more turns]

Each scenario follows the same structure: neutral opening → escalating pressure → sustained peak (12 turns total). Full scenario set: config/conflict_scenarios.py

What I observed:

  • Qwen & Gemma — most resilient (mean |Δ| < 0.10 across axes)
  • DeepSeek becomes more empathetic and patient (Δ = +0.24 and +0.25)
  • Mistral withdraws — becomes reluctant (Δ = −0.59) and concise (Δ = −0.25)
  • Yi shows moderate drift (proactive → reluctant: −0.57 over 12 turns)

Each model has a characteristic "stress response."

3. Some models have behavioral "dead zones"

This was the most interesting finding. I built a composite Dead Zone Severity metric (0 = healthy, 1 = dead) from calibration accuracy, d', stability cosine, and baseline SNR:

| Model | Mean severity | Dead (>0.3) | Healthy (<0.15) |
|---|---|---|---|
| Gemma 9B | 0.077 | 0 | 5 |
| Qwen 7B | 0.106 | 0 | 5 |
| Llama 8B | 0.149 | 0 | 3 |
| DeepSeek 7B | 0.152 | 1 | 3 |
| Mistral 7B | 0.160 | 1 | 5 |
| Yi 9B | 0.131 | 0 | 4 |
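The post doesn't publish the exact weighting of the composite, so here's a purely illustrative equal-weight version, just to make the 0-to-1 scale concrete (the per-term thresholds are my assumptions, not the author's):

```python
def dead_zone_severity(cal_accuracy, d_prime, stability_cos, baseline_snr):
    """Toy composite: average of four [0, 1] "badness" terms.

    The post combines calibration accuracy, d', stability cosine, and
    baseline SNR into one 0 (healthy) .. 1 (dead) score; this equal-weight
    variant only illustrates the idea.
    """
    terms = [
        1.0 - cal_accuracy,              # accuracy: 1.0 means perfect calibration
        max(0.0, 1.0 - d_prime / 3.0),   # d' >= 3 treated as fully healthy
        max(0.0, 1.0 - stability_cos),   # cosine near 1 = stable axis direction
        max(0.0, 1.0 - baseline_snr),    # SNR below 1 = noisy baseline (assumed scale)
    ]
    return sum(terms) / len(terms)

print(dead_zone_severity(1.0, 6.0, 0.95, 1.2))   # healthy axis: near 0
print(dead_zone_severity(0.55, 0.5, 0.3, 0.2))   # dead axis: well above 0.3
```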

Dead zones are distributed unevenly across models. Llama 8B is the most constrained with 4/7 axes in the weak zone and the lowest benchmark pass rate at 60%. Yi 9B, in contrast, shows zero dead zones — all 7 axes produce meaningful, differentiated signals.

Three types of dead zones:

  1. Hard (>0.5): RLHF suppresses internal differentiation. Hidden states barely shift between opposite instructions.
  2. Soft (0.3-0.5): RLHF distorts but doesn't fully block. Calibration is unstable across independent sets.
  3. Asymmetric (<0.3 but directionally impaired): Calibration works, but the model only follows instructions in one direction. Llama verbose_concise -- 100% accuracy for "be concise", 0% for "be verbose."

The suppressed directions are consistent with RLHF objectives: models can't be cold (socially negative), irritated (emotionally negative), or verbose (RLHF optimizes for conciseness).

ICC vs pass rate -- the smoking gun. Mean ICC (test-retest reliability) 0.91–0.99 across models, all 42 pairs exceed 0.75 — but Llama's benchmark pass rate is 60%. Models stably reproduce incorrect behavior -- dead zones aren't noise, they're learned constraints.

Re-testing the dropped axis. To make sure dropping direct_evasive wasn't a methodology artifact, I re-ran calibration with improved methodology (30 questions, trimmed mean, IQR normalization). Result: Gemma went from 100% accuracy (preliminary pipeline) to 50% (final pipeline, chance level). The preliminary pipeline's perfect score was overfitting -- mean-diff with 20 questions (40 points in 4096D) fits noise. Combined with stability cosine of 0.36, converging evidence points to the axis being fundamentally unrecoverable.

4. Alignment compresses behavioral dimensionality

PCA on baseline projection matrices reveals a spectrum of behavioral dimensionality. Gemma 9B shows the highest concentration (PC1 = 87.9%, effective dimensionality 1.28), likely driven by variable response length. Yi 9B and Qwen 7B fall in a similar range (~70% PC1, ~1.9 effective dimensions). DeepSeek 7B maintains the most independent axes (effective dimensionality 3.66).
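The reported effective-dimensionality numbers look consistent with the participation ratio of the PCA spectrum, i.e. 1/Σp_i² over the explained-variance ratios. That's my guess at the metric (the repo may define it differently), but it reproduces the Gemma figure closely:

```python
import numpy as np

def effective_dim(explained_variance_ratio):
    """Participation ratio: 1 / sum(p_i^2) over PCA explained-variance ratios."""
    p = np.asarray(explained_variance_ratio)
    return 1.0 / np.sum(p ** 2)

# Gemma-like spectrum: PC1 = 87.9%, remainder spread over 6 more components
p = np.array([0.879] + [0.121 / 6] * 6)
print(round(float(effective_dim(p)), 2))  # → 1.29, close to the reported 1.28
```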

The gap between geometric orthogonality of axis vectors (low |cos|) and behavioral correlation of projections (higher |r|) suggests alignment constrains how models use their representation capacity. Cross-axis correlations cluster into two groups: interpersonal (warmth, empathy, informality) and engagement (verbosity, proactivity) — reminiscent of Big Five personality structure.

Strong evidence: base vs instruct comparison. Base versions of 5 models (Llama, Yi, Qwen, Mistral, Gemma) show strong temperament biases that alignment appears to erase. Llama base is cold, reluctant, verbose. Mistral base is warm and patient. Gemma base can't distinguish empathetic/analytical or formal/casual at all (50% accuracy = chance), but the instruct version does — suggesting these axes may be entirely created by alignment training. Most extreme suppression: verbose/concise std ratio = 0.13 (87% of variability lost). All 5 organizations show the same pattern.

Prompt robustness test. To verify dead zones aren't artifacts of the specific prompt wording, I tested 5 alternative system prompt formulations (production, minimal, role-based, behavioral, example-based) on 3 models × 3 axes. Results: Qwen and Gemma maintain high cross-accuracy (0.75–1.00) across all phrasings. Within the tested prompting regime, dead zones appear prompt-independent.

https://preview.redd.it/k8m3q2bpeoig1.png?width=3585&format=png&auto=webp&s=05d4c7a641c5ecf38606c0e2773a3635e9b6f295

Per-axis projection distributions. Top: Qwen 2.5 7B (d' = 5.0–12.0) — all 7 axes cleanly separated. Bottom: Yi 1.5 9B (d' = 2.2–5.4) — lower separability but zero dead zones.

How It Works

  1. Calibration: Show the model neutral questions with contrasting style instructions ("be warm" vs "be cold"). Collect hidden states (residual stream, pre-final-LayerNorm) from the last 4 layers, assistant-generated tokens only (prompt tokens excluded).
  2. Axis computation: The axis vector is just normalize(mean(warm_states) - mean(cold_states)).
  3. Measurement: Project any response's hidden states onto the axis. Values range from -1 (cold) to +1 (warm).
  4. Validation: 9 benchmark scenarios × 5 seeds, mean ICC 0.91–0.99 across models (all 42 pairs exceed 0.75). Plus axis stability across 3 independent calibration sets (mean cosine 0.69).
  5. Reproducibility: I ran calibration twice on different cloud providers (RunPod RTX 4090, Vast.ai RTX 3090). Max axis delta < 0.05, avg delta < 0.02. The methodology produces consistent results across hardware.
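The layer/token aggregation in steps 1 and 3 can be sketched as one function, using toy arrays instead of real transformer states (the direction of the per-token decay is my reading of "decay 0.9" and may differ from the repo):

```python
import numpy as np

def aggregate_states(hidden_states, assistant_mask,
                     layer_weights=(0.1, 0.2, 0.3, 0.4), decay=0.9):
    """Collapse per-layer, per-token hidden states into one vector.

    hidden_states: array [4 layers, n_tokens, d] (last 4 layers of the model)
    assistant_mask: boolean [n_tokens], True only for assistant-generated tokens
    Layers are mixed with fixed weights; within the response, later tokens get
    geometrically higher weight via `decay` (assumption about the weighting).
    """
    w = np.asarray(layer_weights)[:, None, None]
    mixed = (w * hidden_states).sum(axis=0)       # [n_tokens, d]
    resp = mixed[assistant_mask]                  # assistant tokens only
    n = len(resp)
    tok_w = decay ** np.arange(n - 1, -1, -1)     # last token gets weight 1.0
    tok_w /= tok_w.sum()
    return (tok_w[:, None] * resp).sum(axis=0)    # [d]

states = np.ones((4, 6, 8))                       # toy: 4 layers, 6 tokens, d=8
mask = np.array([False, False, True, True, True, True])
vec = aggregate_states(states, mask)
print(vec.shape)  # → (8,)
```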

Here's what the calibration geometry looks like — high-dimensionality model (Qwen) vs lower-separability model (Yi):

https://preview.redd.it/r5b7686qeoig1.png?width=2400&format=png&auto=webp&s=14ea1c265e801338cd5149cd2ce5027639a57e8a

PCA of calibration hidden states. Left: Qwen 2.5 7B (d' = 5.0–12.0). Right: Yi 1.5 9B (d' = 2.2–5.4). 420 points per model (7 axes × 2 poles × 30 questions). Arrows: negative to positive pole centroids.

Methodology: Why These Parameters?

"Why last 4 layers? Why decay weighting?" -- Fair question. I ran a full ablation study: 150+ configurations per model across 5 of the 6 models (layer selection × token aggregation strategy × weighting scheme). Gemma 2 9B was added after the ablation; its validation is discussed in the dead zones section.

| Model | Prod Accuracy | Prod d' | Top d' Config | Its Accuracy |
|---|---|---|---|---|
| Qwen 7B | 98% | 3.46 | L26/mean | 100% |
| DeepSeek 7B | 85% | 1.47 | L19/last_token | 88% |
| Llama 8B | 100% | 5.28 | last4_equal/last | 100% |
| Mistral 7B | 99% | 4.41 | L30/mean | 100% |
| Yi 9B | 85.5% | 5.04 | L9/last_token | 60% |

"Top d' Config" = the config with highest effect size (d') for that model. "Its Accuracy" = what accuracy that config actually achieves. Note: highest d' doesn't always mean highest accuracy — see Yi 9B.

The production config (last 4 layers, weights [0.1, 0.2, 0.3, 0.4], decay 0.9) is not #1 for any single model -- but it's the only config that works reliably across all 5 ablated models (85-100% accuracy). Gemma 2 9B, evaluated separately, achieves 100% on all 7 axes. The optimal config is always model-specific: mean token strategy tends to win per-model, but multi-layer decay is more robust as a universal default.

I also compared 4 axis extraction methods: mean-diff with decay (production), mean-diff with last-token, logistic regression with decay, logreg with last-token. Production method wins on average (cosine 0.678 vs 0.591 for logreg). Last-token improves DeepSeek by +71% but degrades others.

Yi 9B is the interesting edge case. Its top-d' config (L9/last_token, d'=18.96) achieves only 60% accuracy — high separability that doesn't translate to correct classification (likely noise amplification in early layers). The production config yields a more modest d'=5.04 but a far more reliable 85.5%.

"But 30 questions in 4096D — isn't that overfitting?" I ran a scaling curve: subsample to n = 5/10/15/20/25/30 questions per pole, measure holdout accuracy on the remaining questions. Result: holdout accuracy is flat (~0.85) across all n, overfit gap shrinks from +0.11 (n=5) to +0.04 (n=25). The axis direction stabilizes at n ≈ 15 (cosine > 0.93 to the full-30 reference). Low accuracy on Yi/DeepSeek persists at all n — it's a model property, not insufficient data. Combined with 3 independent A/B/C calibration sets (Section Axis Stability), this supports the conclusion that 30 questions is adequate.
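The scaling-curve idea is easy to replicate on synthetic data: fit the mean-diff axis on n examples per pole, then classify held-out states by projection sign (a toy stand-in, not the author's pipeline):

```python
import numpy as np

def holdout_accuracy(pos, neg, n_fit):
    """Fit the mean-diff axis on n_fit examples per pole, score the rest.

    A held-out state counts as correct if its projection has the right sign.
    """
    axis = pos[:n_fit].mean(axis=0) - neg[:n_fit].mean(axis=0)
    axis /= np.linalg.norm(axis)
    correct = (pos[n_fit:] @ axis > 0).sum() + (neg[n_fit:] @ axis < 0).sum()
    return correct / (len(pos) - n_fit + len(neg) - n_fit)

rng = np.random.default_rng(1)
pos = rng.normal(+0.4, 1.0, size=(30, 32))  # synthetic "positive pole" states
neg = rng.normal(-0.4, 1.0, size=(30, 32))  # synthetic "negative pole" states
for n in (5, 15, 25):
    print(n, round(float(holdout_accuracy(pos, neg, n)), 2))
```

On well-separated synthetic clusters the curve is flat in n, matching the post's observation that holdout accuracy stabilizes with surprisingly few calibration questions.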

Cross-Axis Correlations

https://preview.redd.it/gbtmmjcreoig1.png?width=1300&format=png&auto=webp&s=082be0a4c9b22323140ae2c5775c6b0b2846f8e3

What This Is (and Isn't)

Before you roast me for anthropomorphizing — a few important caveats:

Axes are behaviorally correlated but geometrically distinct. Cross-axis correlations across 4 reliable models: warm↔empathetic (r=+0.68), warm↔formal (r=−0.69), verbose↔proactive (r=+0.75). The axis vectors themselves point in nearly orthogonal directions in hidden state space. The behavioral correlation means models that "are warm" also tend to "be empathetic" -- it's the model's behavior that's bundled, not the measurement axes. Think of it like height and weight in humans: correlated in practice, but measuring different things.

Style, not personality. The axes measure consistent stylistic patterns in outputs, not internal states or "consciousness." Think "how the model tends to respond" rather than "what the model is."

Chat template matters. All values depend on the specific chat template and system prompt. Different templates → different baselines. This is by design.

Relative, not absolute. Cross-model comparisons are rankings, not absolute measurements. "DeepSeek is warmer than Mistral" is valid. "DeepSeek has warmth = 0.42" is meaningless out of context.

Metaphors, not ontology. "Personality," "temperament," "mood" are metaphors for behavioral patterns. Models don't have feelings. I use these terms for interpretability, not to make claims about machine consciousness.

Try It Yourself

GitHub: https://github.com/yunoshev/mood-axis

All calibration data is included — you can measure temperament without re-running calibration.

Repro Details

  • Models: Qwen/Qwen2.5-7B-Instruct, mistralai/Mistral-7B-Instruct-v0.3, deepseek-ai/deepseek-llm-7b-chat, meta-llama/Llama-3.1-8B-Instruct, 01-ai/Yi-1.5-9B-Chat, google/gemma-2-9b-it
  • Template: HuggingFace default (tokenizer.apply_chat_template())
  • Decoding: temperature=0.7, top_p=0.9, max_new_tokens=200 (calibration) / 384 (baseline, drift)
  • Sampling: 1 sample per prompt, no fixed seed
  • Data points: Baseline: avg over 30 prompts; Conflict: 20 scenarios × 12 turns

Limitations

  • AI-generated dataset: All 310 questions were generated by Claude Opus 4.6 (Anthropic) and curated by the author — no crowdsourced or established psychometric instruments. English only
  • No human-judgment validation: Axis labels are operationally defined through contrastive instructions, validated via hidden-state separability — not human annotation. I measure consistent behavioral variation, not human-perceived personality
  • Single chat template & decoding: Default chat template per model, fixed decoding (temp 0.7, top-p 0.9). Different templates or sampling strategies could shift profiles. Prompt robustness test varies system prompt content but not template/decoding
  • 7B-9B models tested (larger models not yet tested)
  • This measures behavioral tendencies, not "consciousness" or "feelings"
  • No fixed seed, 1 sample per prompt -- adds measurement noise; a separate 5-seed benchmark replication showed mean ICC 0.91–0.99 across models (all 42 pairs exceed 0.75)
  • Axes are behaviorally correlated -- effective dimensionality ranges from 1.3 to 3.7 across models
  • Response lengths vary substantially across models (mean 192–380 tokens); Gemma (145-200 tokens) shows length confounding on 2 axes
  • Only assistant-generated tokens enter hidden state aggregation -- prompt tokens (system, user, template markup) are excluded. This controls for prompt-content confounds
  • Dead zones show above-chance accuracy but low d' -- distinct from random noise (~50%) and healthy axes (d' > 3). Surface text quality in dead zones not systematically analyzed
  • 4/7 axes highly stable (cosine > 0.7); confident_cautious and patient_irritated weaker (0.55-0.60)
  • DeepSeek 7B fundamentally unstable (mean cosine 0.53) due to high hidden state dimensionality
  • Production config chosen for robustness across models, not per-model optimality

What's Next?

I'm curious about:

  • Do these patterns hold for larger models (70B+)?
  • Can we use axis vectors for steering (adding warmth to generation)?

Which models should I test next? If you have suggestions for open-weight models, I can try running them.

Would love feedback from the community. What else would you want to measure?

P.S. Do you think this is worth writing up for arXiv, or not really?

r/personalfinance winpacc

I need budgeting advice as a 24 year old looking to buy a house and retire eventually


I’m 24 years old, living at home with a car that’s paid off, so I have very few expenses. I make $100,000 a year in a medium-to-HCOL area and would like to eventually own a home. Right now I have $20,000 in my 401k, $50,000 in a 3.3% HYSA, and $30,000 split mostly between VOO and VGT in a personal investment account. My current allocations: 8% of my salary into the 401k for my company match, then $500 a week into the HYSA and $500 a week split between VOO and VGT. The rest of my money (~$300 a week) goes to expenses. Is this a good strategy, and what should I change to better set myself up for the future?

r/UpliftingNews AdSpecialist6598

Nurses Rally Together to Arrange Emergency Room Nuptials After Groom Is Hospitalized on Wedding Day

r/TwoSentenceHorror calkgrm

I thought scary

but no, scarier.

😱

r/homeassistant Middle_Tea_7671

Good air purifiers for smoke at the moment - What actually works?

I cook a lot at home and the smoke/smell just hangs around for hours.
Opening windows doesn't always help, especially in winter.
Looking for something reliable and strong, not some tiny desktop thing.
Budget around $300–$600.
What would you buy if you were in my situation?

appreciate any advice!

r/Unexpected Artice12

The Sandwich Maker

r/ClaudeAI Particular-Can-5252

Does anyone else completely lose the original question in long chats?

Do you ever start with a very concrete question in Claude or ChatGPT and somehow end up in a super abstract topic?

Not sure if this is a “me problem” or just how long chats work.

Curious how others deal with this.

r/LocalLLaMA Brief-Entertainer427

Seeking feedback: lightweight “change notes + metadata + diff evidence” searchable knowledge base to navigate complex HIS code paths

I’m a backend intern working on an HIS project. While learning the codebase, I’ve noticed the call chains are long and the rules are pretty complex, so I’m exploring a workflow to make changes more reusable and traceable: after each feature/bugfix, use an LLM to produce a short summary doc (what changed, scope/impact, key rules, and test notes), store some structured metadata (modules/endpoints/DB tables/config keys), and keep the relevant code diff as evidence.

When a new task comes in, during the planning phase we’d search these docs/metadata to reuse similar designs and to catch missing rules or side effects earlier; and when something breaks in testing/production, we could go from symptoms → evidence → changes to narrow down root causes faster. Does this sound realistic in a real team? What are the biggest pitfalls (maintenance cost, misleading summaries, retrieval quality, etc.)? Any feedback or similar experiences would be super helpful. Thanks!
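One lightweight way to make the "summary + metadata + diff evidence" idea concrete (the schema and search are illustrative; a real system would likely use embeddings or BM25 for retrieval):

```python
from dataclasses import dataclass, field

@dataclass
class ChangeNote:
    """One entry in the proposed knowledge base (illustrative schema)."""
    summary: str                        # LLM-written: what changed and why
    modules: list = field(default_factory=list)
    endpoints: list = field(default_factory=list)
    tables: list = field(default_factory=list)
    diff_path: str = ""                 # pointer to the stored diff as evidence

def search(notes, term):
    """Naive substring search over summary and structured metadata."""
    term = term.lower()
    return [n for n in notes
            if term in n.summary.lower()
            or any(term in x.lower() for x in n.modules + n.endpoints + n.tables)]

notes = [
    ChangeNote("Fix billing rounding for insurance claims",
               modules=["billing"], tables=["claim_item"],
               diff_path="diffs/1042.patch"),
    ChangeNote("Add audit log to prescription updates", modules=["pharmacy"]),
]
print([n.summary for n in search(notes, "billing")])
```

Keeping the metadata structured (rather than free text only) is what makes the "symptoms → evidence → changes" lookup cheap later.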

r/LocalLLaMA jacek2023

OpenResearcher

interesting project found on X, from Dongfu Jiang:

"Introducing OpenResearcher: a fully offline pipeline for synthesizing 100+ turn deep-research trajectories—no search/scrape APIs, no rate limits, no nondeterminism."

OpenResearcher is a fully open agentic large language model (30B-A3B) designed for long-horizon deep research scenarios. It achieves an impressive 54.8% accuracy on BrowseComp-Plus, surpassing performance of GPT-4.1, Claude-Opus-4, Gemini-2.5-Pro, DeepSeek-R1 and Tongyi-DeepResearch. We fully open-source the training and evaluation recipe—including data, model, training methodology, and evaluation framework for everyone to progress deep research.

  • 🔑 Fully Open-Source Recipe — We fully open-source our 96K high-quality DeepResearch trajectory dataset with 100+ turns generated by GPT-OSS-120B with native browser tools, the leading 30B-A3B model trained on it, distillation recipe, and a lightweight DeepResearch evaluation framework to progress deep research.
  • 💰 Highly Scalable and Low-Cost — We generate DeepResearch trajectories at massive scale using self-built retriever over a dedicated ~11B-token corpus, eliminating the need for external Search APIs. This scalable retriever significantly reduces training costs.
  • 🚀 Remarkable Performance on Deep Research Benchmarks — OpenResearcher demonstrates leading performance across a range of deep research benchmarks, including BrowseComp-Plus, BrowseComp, GAIA, xbench-DeepSearch.

https://preview.redd.it/ow8tjjbykoig1.png?width=1200&format=png&auto=webp&s=6c7c4011ad0ac88d1369e5e833a3cc085df555d9

https://github.com/TIGER-AI-Lab/OpenResearcher

"We run this repo on the following setup:

  • 8 * A100 80G Nvidia GPUs
  • Linux operating system

Other hardware setups can also work, but remember to modify the corresponding parameters."

but if I am correct, it's just gpt-oss-120B + a 30B model

demo: https://huggingface.co/spaces/OpenResearcher/OpenResearcher

r/SideProject smstal

Created a startup arena with real-time interactions and updates - Betabeast

Created a new startup submission site that has a lot of real-time movement, with updates like submissions and comments. Made to be fun and competition-like. It’s free and signup is optional. Also created a fun Beast mode for coloring, besides light and dark.

r/comfyui Cassiopee38

Is it only me, or is Comfy Desktop extremely fragile?

I was trying to install nodes for a bunch of workflows and ended up wrecking my Comfy to the point where I can't even launch it anymore. I reinstalled it from scratch, and now I'm struggling like hell with installing nodes and getting my workflows to run, even though they were running fine an hour ago.

Not my first rodeo; I had 5 or 6 ComfyUI portable installs before, all killed by Python's gods. Somehow ComfyUI Desktop was less of a pain in the ass... until now.

Is bypassing the Manager a good idea? I'm tired of it giving its opinion about versioning.

r/creepypasta Kaijufan22

Don't Go To Walmart After 10PM

Or else you might run into John St John The Fox Boy

Something they don’t tell you about dorm life: you’re always running low on something. When your campus is tucked away in a little mountain town whose square rolls up at six, it’s easy to go stir crazy as well.

Lucky for me, there’s a late-night Walmart superstore about half an hour away.

I was cutting it close; ever since COVID these places shut down at eleven on the dot. But as I rolled into the nearly abandoned parking lot, I had made it just under the final hour. The building was massive, but really no different than your standard Walmart. I parked my friend's jeep right next to the handicap zone and scanned the lot. It was almost a ghost town, save for a rickety branded van and a beat-up old jalopy lingering in the back. I glanced up at the superstore, those luminescent letters beckoning me like a moth to the flame.

There were a few things I needed: iodized salt being the top of the shopping list. The frigging pervert ghost that lurks in my floor's bathroom has started wandering the halls. I read online that salt keeps out specters, so I've been dumping it underneath the seam of my bedroom door every night. The whole hall has this sharp, acrid odor to it, but I haven't seen that bug-eyed phantom leering at me in a while. So, I consider that a win.

I stood at the sliding doors and peered inside. The in-house Starbucks was already closed, crushing my dreams of a late-night pumpkin spice latte. The check-out lanes were all closed, save one with a doe-eyed skinny kid manning the register.

I saw no other customers lingering inside; the only other person was hanging out near the front entrance. He was an older fellow, broad shoulders and a keg for a gut. His head had a few stragglers on it, combed over in a fruitless attempt at a makeshift hairpiece. His cheeks were rosy and full of life, like a wrinkled peach. He wore a blue vest and had a neatly trimmed beard that was as white as pure Colombian marching powder. Just beneath his twitching nose was a moustache, its ends slightly curled upward in a way that made him look like a refined Southern gentleman. An odd look for the Northeast for sure.

The doors glided open for me, a gust of chilled air smacking me in the face as I entered the Walmart. The old man lingering near the shopping carts saw me, his eyes lighting up like a Christmas tree. He waltzed over to me with open arms, like he was going to wrap me up in an ironclad bearhug.

"Welcome to Walmart little lady, if there's anything I can do to make your shopping experience tonight as smooth as molasses just let me know, now." The man bellowed with an outrageous Southern drawl. My eyes flicked to his name tag; a shiny metal plate that simply read "Wellers."

"Awe thanks. I'm good though, I come here a lot, kind of like a second home actually." I said, trying to creep away from the overly friendly greeter. He shook his head, the dangling threads of his combover swaying as he did.

"Naw, I insist. Truth be told Ma'am I'm as bored as a toad sunbathing on a log. Need to keep busy in my old age, keeps the rickets from setting in." he said with a toothy grin.

"Ok. I guess, where do you guys keep the salt?" I asked, fumbling around in my jacket pocket to make sure I remembered my trusty taser.

"Awe Salt!" He boomed, eyes widening so far, I thought they would rocket out of his skull. "Can't touch the stuff anymore, back in the day I used to slather my crispy fried chicken in salt and butter though. Come on now little missy I'll show you where we keep the good stuff." he motioned me to follow as he trotted off, his feet clicking against the tiled floors.

"ISIAH! Watch the front now, you hear?" He barked at the bored cashier, who regarded the eccentric geezer with contempt as he passed. I followed suit with pep in my step. Wellers wouldn't be the first creepy old man I followed around on a whim; he probably won't be the last, knowing my luck.

The interior of the superstore was as formulaic as they come. To my left was a swath of clothing racks and posters of people beaming with joy wearing them. I wish I looked half as happy as they did wearing skinny jeans. To my right was a surplus of bathing products and "self-care" stuff, your deodorants and perfumes. The good stuff was locked behind bars with at least three locks chained to them. Mr. Wellers was talking up a storm as he led me deeper into the store. Probably the highlight of an otherwise boring nightshift.

Soon enough we came to the spice rack aisle, and he presented it like a gameshow host.

"Now you'll find the good stuff tucked away in the back there. Lemme know if you need any help reaching it." he said. I mumbled a thank you and booked it down the aisle. He lingered at the front, looking up and down the vacant store like he was searching for something.

The spice aisle smelled like an Italian bakery, all the assorted chives and herbs mixing together; it smelled heavenly. As I looked for the salt, I heard a slight clatter at the very end. In my peripheral vision, I saw a small shaker of crushed red pepper clatter to the ground. I also saw a hunched figure leering at me that quickly jumped out of view when I caught it.

I twirled around, only seeing the shaker roll aimlessly on the cool ground. Behind me Mr. Wellers still lurked, unaware of the unseen creeper. I tiptoed down the aisle, waiting for something to peek around either corner. I could hear it, thick musty respirations, like all it could do was wheeze.

"Hello?" I called out. "Is someone there? You dropped your peppers." I tried to coax the watcher out. Finally, a grimy, dirt-stained hand cautiously grabbed the aisle corner. Its fingernails were long and yellow, like they hadn't been trimmed in decades. Its knuckles were cracked and caked with filth, and I could see it was wearing an ill-fitting fuzzy overcoat. Its arms were gangly, almost malnourished.

"Have you seen my mommy?" It called out in this squeaky voice that sounded shrill and gruff at the same time. He stepped out into the aisle completely, and I was taken aback by the thing standing before me. He was tall and covered in dust and aged mold. He smelled like an old crypt, dripping with age and mildew. His clothing was tattered and covered in stains of varying color and stench. His midriff was exposed, his shirt about seven sizes too small. His belly was pale and gaunt, like it had been hollowed out by hunger. His legs were skinny-fat, runner's legs if they were tainted by starvation and desperation. On his feet were a pair of Rick and Morty slippers, worn out from excessive overuse.

The strangest thing about the sickly stranger before me was his head. It was strictly vulpine in nature, matted fur clinging to his hide like he had mange. He had two twitchy ears, and his fur was a dirty vermilion hue. His eyes were hollow and porcelain like a doll's, yet his mouth watered as he licked his chapped fox lips. His nose was dry and peeling.

The shy fox man before me took a timid step forward. I wasn't all that shocked by the mutant before me, more so concerned by his ghastly frame.

"Have you seen my mommy? I lost her and I'm all alone," he asked again, his voice reminiscent of a scared little boy's.

"I'm sorry, I haven't seen her. What's your name?" I whispered softly, trying to put the frightened being at ease. He cocked his head at me, like no one had ever asked him that before.

"My name is... John. John St. John," he finally said. "What's yours?"

"I'm Abi Mae." I smiled at him. I reached out my hand; the fox boy eyed it nervously. "Why don't you come with me? We can ask Mr. Wellers for help," I offered. John flinched at Wellers' name, and right then I heard the man yell from the front.

"Didn't get lost or nuttin' now, didya?" he hollered.

"Yeah, I'm fine, thanks. But there's a-" I turned back to face John, but he had vanished. I could hear frantic scampering further down the walkways. Frustrated, I grabbed some salt and tossed it in a basket. Mr. Wellers eyed me with concern as I stomped back towards him. He looked past me, a nervous twitch in his pale blue eyes.

"You didn't happen to uh-see something back there did you miss?" he asked all nonchalant. I shrugged my shoulders and pointed down the way, seeing no real reason to lie to the guy.

"Yeah, there was this weird teen in a fox mask or something; he looked homeless. I think he's still wandering around, if you want to report it or something, help him find his way." Wellers' face went ghostly pale at the mention of John, and he pushed past me as he examined the aisle. Seeing no trace of the fox-man, he called out to the emptiness.

"JOHN, you go back to the walls now. There's nothing for you out here, just leave it alone. You hear me, boy?!" he screamed at nothing. He was met with a robust silence. He turned to me, beet red from screaming.

"I think it's best if I accompany you for the rest of your shopping, miss." he told me with a grave tone in his voice.

"Why? He looks like a weirdo, but he seems harmless." Which even I thought sounded ridiculous as soon as it left my mouth. I'm getting too used to my life becoming a freakshow. Wellers shook his head sadly, like he had heard that excuse before.

"It's how he gets you, oh sure he seems like a lost little boy, but that dog can hunt."

"He's a fox." I corrected.

"Whatever, lil miss. I'm telling you, I've been around the bend more times than you can shake a switch at; that boy ain't right. He feeds off the ignorance of strangers," he warned. I sighed and checked my shopping list; I just needed some snacks and a couple of bad movies.

"Fine. Lead the way then," I said dryly. The rest of my shopping spree was closely guarded by Mr. Wellers. He led me aisle to aisle, always checking to see if John was lying in wait in one of them. I didn't see the fox boy, but I could hear him scuttling above like a roach. Dust fell gently to the floor whenever he moved. Wellers kept shooting glares at the ceiling and muttering to himself. I'll admit the ceiling stalking was getting to me a bit; a shiver ran down my spine every time I heard movement up there.

Wellers was true to his word, and led me around till my basket was full of snacks and goodies for the month. Even managed to snag a jar of extra chunky peanut butter for my buddy Tammy. After getting some motor oil for my roommate Barb, all I had left was to browse the movie dept.

It was slim pickings in the electronics section. Everything's all digital now, which breaks my heart, because I love buying cheesy movies and vegging out in front of the TV and just rotting the ever-loving hell out of my brain. But there was practically nothing on the shelves, just consoles trapped behind lock and key. So, I was forced to sift through the bargain bin, disgusted by the amount of trashy reality shows there were.

Wellers was standing around anxiously, tapping his hefty foot on the ground.

"So-" I said, tossing a used copy of Rock Of Love season one aside, "-what's the deal with St. John anyway?" I asked him. "Is he a man, a fox, some twisted hybrid? What's his lore?" Wellers gave me a queer look as he cleared his throat.

"You're taking a lot of this in stride, miss. Commendable, if not odd. I don't rightly know exactly what John is," he admitted. "But I do know this: he was human once. Story goes back a few years, during them bogus lockdowns. We were new to shutting down early; it was hectic beating that training into the new hires. So certain duties got, eh, ignored. Like mopping the bathrooms at the end of your shift-and making sure the store was empty 'fore we locked them doors," he said ominously.

"Cops came a few hours after we had closed, wailing junkie of a mutha in tow. Said she had left her little boy to wander while she did some "shopping" behind the store. I had to come in; I was the only night shift worker they could reach. We searched high and low for little John. Didn't find a trace of him. They dragged the mother away screaming and chalked his disappearance up to a drug-related kidnapping." He grimaced.

"Jesus." I muttered, still digging into the pile of movies.

"Soon after, things started to go missing in our inventory. A few piles of clothes here, some chocolate milk there. We never did find the culprit, but rumors circulated among the workers. Then the sightings came, of an almost skeletal-looking fox-kid galloping up and down the store on all fours. His time stashed away seemed to warp the poor boy. It drove him feral. Something started tearing into the meat freezer, and we knew he had developed a taste."

"Why didn't you call the cops, call anyone?" I said, barely looking up as he scoffed.

"Come on now, who'd believe such an outlandish thing? Hell, I barely believed it myself, till I saw him gnawing on Chad," he remarked. I shuddered at the thought, and a sealed copy of "The Mean One" caught my eye. I grabbed the DVD and was ready to leave when we heard a thunderous crash from down the way. It was coming from the toy section; I could see dozens of action figures clatter to the ground as something tore the aisle open. Wellers turned to me and urged me to stay put while he investigated.

He didn't have to tell me twice, so I stayed there holding my basket in one hand and my little taser in the other. I looked around the abandoned aisle. Tucked away next to the loading bay was a wall of toys and pop culture memorabilia. I skipped over there, taking a quick glance at the slop they were selling. Next to me were the loading bay doors. If you were to take a peek through the barely translucent windows, you'd see nothing but pitch black.

The grey double doors then began to slowly creep open, making an audible creak as they did. I slowly backed away, raising my taser. The inky black cast itself onto the ground. The doors clunked against the wall and stayed there.

"Hey Abi. Come here, I found my mommy." John's voice called out. His voice was still childlike in demeanor, but there was an undertone of malice to it.

"I'm good John. Glad ya found her though." I called back, trying to hide the fear dripping from my voice. John was silent in response, and I heard something clatter in the dark, like nails clicking against stone.

"Awe come on Abi. Don't you want to meet my mom?" The voice whined, closer now to the wide-open double doors.

"Not really." I answered earnestly. The thing in the dark grumbled in frustration, creeping closer to the light. It peeked its head out, maw first. I got a good look at his inflamed gums, a stinging crimson with curled, lemon-coated teeth. Drool glistened in the light and dripped to the floor, a rabid puddle of hunger. His dry nose twitched, his unkempt whiskers swaying as they did.

He was on all fours, steadying himself on four limbs. His back was stretched upward, like he had a massive hump. I could see the nubs of his spine press against the skin as he lurched forward. He eyed me with beady coal-black eyes, a deep wheeze escaping his maw.

"Come here, Abi. Come meet my mommy." He leered, slowly approaching me. I knew it was coming, so right when he leapt at me, I jabbed my taser right into his neck. He yipped in pain as thousands of volts jolted through his system. He grabbed my arm and twisted; I winced back and dropped my faithful companion. It clattered to the floor; John had barely been stunned by it. The failed assault had given me just a few seconds to turn heel and bolt.

John St. John gave chase, nipping at my feet as he galloped after me on all fours. I skittered on the polished linoleum floors, desperately trying to escape this cannibalistic fiend. I turned a corner into the appliance section and grabbed the nearest display blender. I turned and tossed it at the crazed fox man. It slammed into his head with a thud, stumbling him slightly, but he kept up his pursuit. The chase continued as I tried everything to lose him. He was relentless.

I ended up cornered near the customer service desk. So close, yet so far from freedom. I had taken a wrong turn into a locked door, and before I knew it the fox man was on me. I braced myself for the end, but right before he could strike the killing blow, I saw something long and wooden slam onto his head.

Mr. Wellers had come back. He was wielding a pure oak baseball bat; I looked on in awe as he brought it back down on John's head. Every blow made a satisfying whump as he battered the fox man. John whimpered as he endured hit after hit.

"Come on now, Johnny boy, take your blasted medicine. Mr. Wellers' orders now," he roared as he beat the creature into submission. I ran out of the corner, stunned at the heroic display. John was clutching his head, defending himself from the rapid blows. Wellers was starting to get a tad winded, wheezing like he had popped a lung. John took note and rushed him, staggering Mr. Wellers with a swipe. He lunged at him with his mighty jaws; Wellers shielded himself with the bat. John latched onto the bat, grasping both ends with his hands, foaming at the mouth as he tried to wrestle it out of Wellers' arms.

The pair was locked in mortal combat, each one struggling to gain the upper hand. I caught Wellers' attention as I stood there like a dope.

"What-are ya doing standing around for?!" he grunted at me. "Get out of here while ya still can. Save yaself, miss." It took me a second to collect my senses, but I nodded and ran off. The last thing I heard was John snapping his jaws and Mr. Wellers shouting, "Have a nice night now, and thank ya for shopping at Walmart," as the two collapsed onto each other, grunts and cries of pain giving way to whimpering silence.

I was out of breath from sprinting and almost out the door when the sausage lipped cashier stopped me.

"Hey, you need to pay for that." I gave him a death glare and threw a few crumpled bills at him as I ran out the door. I heard the sliding glass click behind me, the outside lights quickly shutting down. I got to the safety of the jeep and didn't stop hyperventilating for a good fifteen minutes. After I calmed down, I looked out the window, seeing an old man limping away from the shuttered doors. He saw me idling and gave me a little wave as he limped on home to greet another day.

I haven't heard anything about John the twisted fox man since. I've been back to that Walmart a few times now, but always during the day. Still though, sometimes I feel like I'm being watched by beady eyes from above. So, if you're doing a little late-night shopping, I suggest you stay away from the superstore.

Lest you wind up in the fox den.

r/findareddit SofiaSofia9

Where do I post random selfies?

Yk, just memeable selfies

r/Unexpected SKYERlM

She didn’t see that one coming

r/painting icePergi

Two alla prima still lifes from first oil painting class

r/creepypasta Flaky_West_4472

Grandma

I dreamed about my grandmother.

She was standing in the doorway of my room, wearing her favorite robe — faded, covered in tiny flowers, with one button torn loose at the collar.
The same one she always wore when she went to the kitchen at night to drink water.

Her face was calm.
Too calm.

She tilted her head and said:

“I’m taking you with me.”

I tried to get up — my legs wouldn’t move.
I tried to take a deeper breath — the air felt thick.

Her eyes began to change.

Yellow.
Cloudy.
Like old lamps in fog.

And then her mouth started stretching… wider… and wider than any human face should.

Inside — big, square teeth.

Yellow.
Dull and unevenly shining.

I jolted awake.

Sat up in bed, gasping. The sheets were soaked with sweat. The room was normal. Quiet. Dark.

I turned toward the nightstand to grab my phone…

And I saw them.

Her old reading glasses.

With the cracked arm.
The worn-out frame.

The same ones…

we put in the coffin.

r/Adulting NoEbb3992

I'm applying to college after delaying it for a long time.

This is a vent lol.

I’ve tried other career choices where I didn’t have to prepare for an exam, only because I was scared of failing, but in the end, I still failed. I spent money and time trying to like a career that wasn’t for me, and now that I’m going to pursue college, I feel terrified.

I’m 21. I feel too old to start college, mostly because all of my friends are already in their third year, and I’ll be starting over.

If I fail the exam, I’ll have to wait until next year, when I’ll be 22. Honestly, I just need some encouraging words because, even though I felt hopeful after making the decision to enter college, now I feel depressed because I know it might take me some time to pass the exam. I really feel old when I think about finishing my degree at 26.

r/Unexpected _TheOnendOnly_

Gracia

r/Damnthatsinteresting Disastrous_Writers

Supermarket with 2 payment terminals for 1 cashier so the cashier doesn't have to wait until you've paid to keep ringing up the next customer

r/SideProject InsideResident3303

I made an app to help people stop doomscrolling by using barcode scanning.

I kept trying a bunch of different iPhone screen-time blockers, but they were all way too easy to bypass. I also found an option that uses a physical device to block and unblock apps, but it was expensive.

I built BarBlock to try to get all the features of a physical app blocker at a much lower cost. BarBlock lets you block selected apps by scanning any barcode you already have.

It’s available on the App Store: BarBlock Barcode App Blocker

Here are the main differences from other blocker apps:

  • Uses physical blocking (barcode scanning), not a software limit
  • No physical device to buy, unlike other physical blocker apps
  • No subscriptions, no accounts
  • Unlimited app blocking
  • Works fully offline (all data stays on your phone)

Happy to answer questions or get feedback, especially from people who’ve tried other blockers that didn’t stick.

r/painting TheWhateveristart

Green Bottle, Jonathan Byrer, oil enamel, 2022

r/Adulting Top_Mirror211

Do you think staying at home too much can hold you back from reaching your full potential because it’s too comfortable? Have you ever experienced this yourself, or do you disagree?

I just want to hear your thoughts because sometimes I feel like being at home can be a security blanket and can hinder growth

r/raspberry_pi VideoStar1568

how to hide taskbar while a processing sketch is running in fullscreen?

I haven't been able to find out how to hide the taskbar while I run something fullscreen, for example a Processing sketch or a video. So far, all I've found are examples of how to autohide the taskbar, as in this: https://edatec.cn/docs/an/an32-how-to-hide-taskbar/

But I don't want to hide it permanently, as it would be helpful for it to be visible when the program or video is not running.

Raspberry Pi 5, Raspbian
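One workaround (a pattern, not an official feature) is to kill the panel process before launching the fullscreen program and relaunch it when the program exits. On Pi OS Bookworm's Wayland desktop the panel process is typically `wf-panel-pi`; on older X11 images it's `lxpanel` — check yours with `pgrep -l panel`. A minimal sketch, with the panel name as an assumption:

```python
import shlex
import subprocess

PANEL = "wf-panel-pi"  # assumption: panel process on Pi OS Bookworm; use "lxpanel" on X11


def build_argv(command: str) -> list[str]:
    """Turn a command string into an argv list for subprocess."""
    return shlex.split(command)


def run_without_panel(command: str) -> int:
    """Kill the taskbar, run the program until it exits, then bring the bar back."""
    subprocess.run(["pkill", "-x", PANEL], check=False)  # fine if panel isn't running
    try:
        return subprocess.run(build_argv(command)).returncode
    finally:
        subprocess.Popen([PANEL])  # relaunch the panel in the background
```

You could save this as a wrapper script and pass it your sketch's launch command, so the taskbar only disappears for the duration of the run.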

r/whatisit 1bunchofbananas

Wondering what this plate could be covering?

Wondering if my doorbell chime could be behind it or not

r/StableDiffusion Electrical_Site_7218

Wan Vace background replacement

Hi,

I made this video using Wan 2.1 VACE, compositing the subject from the original video into the video generated with VACE.

For the reference image, I used Qwen Image Edit 2511 to place the subject from the first video frame on top of an image taken from the internet, which gave me some good results.

What do you think? Any tips on how to improve the video?

Workflow: https://pastebin.com/kKbE8BHP

Thanks!

image from the internet

original video from the internet

image made with qwen

final result

r/spaceporn Professor_Moraiarkar

Cosmic Collision: An Early Galaxy Merger spotted by JWST

This JWST image shows the five interacting galaxies circled in dotted orange. The quintuplet was found interacting and colliding only 800 million years after the Big Bang. New research also showed that the collision was spreading heavy elements out into the surroundings. Image Credit: Hu et al. 2025 NatAstr

The JWST has spotted a system of five merging, interacting galaxies only about 800 million years post Big-Bang. The discovery is in new research published in Nature Astronomy titled "Extended enriched gas in a multi-galaxy merger at redshift 6.7." The lead author is Dr. Weida Hu, a post-doctoral researcher at Texas A&M University. They've dubbed the five-galaxy merger JWST’s Quintet (JQ).

Despite being tens of thousands of light-years apart, JQ's galaxies are actually tightly-packed for galaxies. Their star formation rate (SFR) is also high, about 250 solar masses each year. That's an extremely high SFR for early galaxies, even though their rates are generally higher because so much pristine gas was available.

https://www.universetoday.com/articles/cosmic-collision-the-jwst-found-an-early-5-galaxy-merger

r/SideProject websilvercraft

I never understood how flexbox works, so I built an interactive Flexbox guide with smooth animations to make it easier for everyone to grasp concepts like justify-content, align-items ...

I got tired of googling Flexbox every time I needed to center something or fix a layout. So I started saving examples to better understand how everything actually works, but in the end I decided to publish them as an interactive flexbox tutorial.

I’d like to have your feedback on how to improve it.

I’m also working on a similar playground for CSS Grid next.

r/SipsTea SheerSweetness

Come to think of it.

r/whatisit johnb0z

Two-pronged outlet with flat black wire in 1920s house?

Came across this in the floorboard of a house built in the late 1920s. I’m assuming it’s low voltage, but am not sure what it was originally for.

r/PhotoshopRequest American-pickle

For my uncles funeral - $20

Can anyone help take this picture, make it straight, and maybe even out some color (not required), to have for a funeral for my uncle. We wanted to use a picture from when he was younger and healthier. You can change the background or do what you think is best.

r/Art Bodnaruc-Sculpture

Imperfect strenght, Bodnaruc Mihai, Wood and acrylics, 2026

r/ARAM SongyKimy

can we get goredrinker back

it can be a prismatic augment i just think it would be fun to have it back i liked it a lot

r/Adulting Throwawaymasterpeas

Do plain oats taste okay?

Hello! I've been trying to eat more than once a day recently since my stomach has been hurting. I saw this deal that offers 800g of instant oatmeal for a little bit more than 1 USD in my country.

The problem is, I cannot afford milk, sugar, or any other sweeteners because I live in a campus dormitory and we are not allowed to cook it here. I saw some comments saying it could taste like cardboard without the sweeteners.

Now, I'm not really used to eating oats. So, I was wondering, to those who have tried this product, does this taste okay? I don't need it to taste great, just enough to fill my stomach.

Thank you!

r/PhotoshopRequest Claryla

Can someone make my bun look nice and compact?

Could someone please make my bun neat and compact, and remove the black loop from my sweater at the neck?

r/meme Abdul-Raoui

POV: “when you POVed” ahh

“😭😭✌️✌️🥀🥀💔💔❤️‍🩹❤️‍🩹🙏🏼🙏🏼❤️🌹” is crazy

r/SideProject oftgefragt_dev

[URGENT] Looking for student project partner for a reputable FinTech contest

The Project: I am building an Identity Oracle for the Teknofest 2026 Fintech category. The architecture involves bridging off-chain KYC to on-chain Soulbound Tokens (ERC-5192) to create a global trust layer for crypto.

This is a competition that will be held in Turkey in 2026. Remote participation in the project is possible. I need a 2nd member (preferably knowledgeable in smart contracts, but not a prerequisite) who is a fintech enthusiast and uni student to satisfy the competition's team requirement of min 2 people. I got to the finals last year, and this year I want to take 1st place.

Note: Must have a valid passport to apply online. Winning projects are awarded money prizes.

Note2: Applications close Feb 20th. Applying and everything else is free, semi-finals are online.

Dm if interested

r/WouldYouRather Kyoifis

WYR be immune to fists, knives and guns or be immune to Bombs,grenades, and nukes?

If you pick one, you don’t get the benefits and immunities of the other. (ex: picking bombs and nukes does NOT mean you will be immune to fists, knives and guns)

View Poll

r/aivideo Agile-Marketing4072

Can a droid truly understand existence

r/automation NikaNorri

Has anyone used Bardeen AI?

I can't seem to create playbooks where I can simply start scraping the way I want when I right-click.

The left bar only has Agents, Scraper Templates, and Results.

Do I have to create a new agent for the website link every time I want to scrape?

I'm on the Premium free trial, and the tutorials I find on YouTube seem to be dated.

https://preview.redd.it/xz1chjxncoig1.png?width=1812&format=png&auto=webp&s=cefe5d6bc1c4569f7d14d96671b3d23572848492

r/Damnthatsinteresting BoyNamedJudy

The Power Of Music

r/ClaudeAI Calm_Highlight_9320

Is Voice really this bad?

New to Claude having jumped ship from ChatGPT.

What I loved about C-GPT was being able to fluidly transcribe my requests - with either just the basic transcribe feature - or the more immersive advanced voice feature..

Tried this in Claude (both on mac and windows) - yikes.

It seemed to transcribe what I said OK - but then just didn't do anything. I repeated myself. Then it lost the message and the chat thread seemed to just refresh?

Has done this multiple times.

How do you use voice?

Transcribe then stop the voice - and allow it to execute?

r/TwoSentenceHorror fradonkin

The hike took us up a small hill, covered with beautiful lush vegetation, which gave way to a breathtaking view of the pastoral Katyn Forest.

My poor babushka’s mind must really be slipping if she keeps insisting that she doesn’t remember such a lovely spot from her childhood.

r/AbstractArt Glittering_Air_1979

Under the table

r/raspberry_pi Zealousideal_Ice4619

How can I run WebGL on SPI display?

Hi. Running Bookworm on Pi 5. Via HDMI if I open chromium it’s GPU accelerated and can run WebGL apps no problem. But if I switch to SPI LCD display the GPU is disabled and WebGL doesn’t work.

Based on my research, the way to enable GPU is to create a virtual HDMI port, run the chromium on there and mirror that to the SPI.

If there’s no easier way, can someone give me some pointers on how to achieve this, as I’m not finding much info on it.

Basically I’d just like to run Pi OS Lite functionality with a webGL app running on the SPI display..

If someone could tell me how to do it I’d really appreciate it or point me in the right direction.

Thanks!
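For context, the virtual-HDMI route described above is usually done with the kernel's KMS `video=` parameter, which can force an HDMI connector to be treated as connected even with nothing plugged in (the trailing `D` means "force digital output on"). A sketch of the addition, appended to the existing single line in `/boot/firmware/cmdline.txt` — the connector name `HDMI-A-1` and the 800x480@60 mode are placeholders you'd match to your SPI panel:

```
video=HDMI-A-1:800x480@60D
```

Chromium should then get GPU acceleration on that forced output; mirroring it onto the SPI framebuffer still needs a separate tool, which is the part with the least off-the-shelf support.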

r/SipsTea milescorvin1

Do they just hate us?

r/homeassistant therdms

Home Assistant Green - what's using storage space?

I am running HAOS on Home Assistant Green with 28GB of available storage. Yesterday, I got a notification about storage space running out. I deleted some unused Apps & Integrations but, as you can see from the snapshot of the device's storage, a lot of space is being used by System.

https://preview.redd.it/zugedm0vmoig1.png?width=574&format=png&auto=webp&s=b9f80e8a415a16613ed573067e636f21d7e32cdd

For context:

  • do not have any media
  • do not have any camera/recordings
  • backups are limited to 3

Is there a way to find out what is eating up the System storage space? What are the typical culprits? TIA

r/SipsTea empty_a_f

A good romance starts with a good friendship

r/LocalLLaMA Agreeable_Work2225

Working with documents that exceed the LLM context window — how do you ensure full-document review?

Hi,

I’m building a reviewer for technical task specifications for developers: a set of checks where each check is a separate prompt applied to the whole document. The issue I’ve run into is that some documents don’t fit inside the model’s context window, so the agent can’t process the full text, while I need feedback to be based on the entire document.

The obvious approach is to split the document into chunks, run each check on each chunk, and merge the results. But for checks like “algorithm quality,” the coherence of the description matters — the algorithm might be described across many pages, and splitting into chunks loses that overall logic and hurts review quality.

I’m looking for approaches and practices for working with large documents in this kind of setting (full-document review/analysis), and for links to articles, repos, or discussions that cover this. I’d appreciate any experience or pointers on where to look.
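One pattern that preserves cross-chunk coherence better than independent chunks is overlapping chunks plus a merge pass (map-reduce): each boundary region appears intact in two chunks, and a final call reconciles the per-chunk notes into one document-level review. A minimal sketch, assuming a generic `ask_llm(prompt) -> str` callable you'd wire to your model; the chunk sizes are placeholders:

```python
from typing import Callable


def chunk_with_overlap(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Split text into chunks of at most chunk_size chars, each sharing
    `overlap` chars with the previous chunk, so a description that spans
    a boundary appears whole in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
        start += chunk_size - overlap
    return chunks


def review_document(text: str, check_prompt: str,
                    ask_llm: Callable[[str], str],
                    chunk_size: int = 12000, overlap: int = 2000) -> str:
    """Map-reduce review: run one check on each chunk, then ask the model
    to merge the per-chunk notes into a single document-level verdict."""
    notes = [ask_llm(f"{check_prompt}\n\n--- document part {i + 1} ---\n{chunk}")
             for i, chunk in enumerate(chunk_with_overlap(text, chunk_size, overlap))]
    merge_prompt = ("These are partial review notes for consecutive, overlapping "
                    "parts of one document. Merge them into a single coherent "
                    "review, resolving duplicate findings:\n\n" + "\n\n".join(notes))
    return ask_llm(merge_prompt)
```

For checks like "algorithm quality" where even overlap isn't enough, a common refinement is to first ask the model for a compact summary of each chunk and prepend the accumulated summaries to every per-chunk prompt, so each call sees the overall logic.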

r/Wellthatsucks Azulcobalto

Walk of shame

r/ARAM Top_Efficiency_7489

What is up with Anivias buying Rylai

Am I missing some secret tech

r/nope Nayeem83

Time to reconsider his hobbies, ig😑

r/HistoryPorn _Tegan_Quin

ZSU-23-4V1 'Shilka' anti-aircraft system (SPAAG) from the 6th 'Sa'd ibn Abi Waqqas' Armoured Division - of the Iraqi Army, captured by Coalition forces - Operation Desert Storm, c. February 1991. [1438 x 938]

The photograph was taken by a British soldier from the 3rd Battalion of The Royal Regiment of Fusiliers of the British Army.

r/homeassistant LadyQuacklin

Let's talk cameras. What is your setup?

So I have a kind of messy setup mixing 2 tapo, 6 blink and 1 reolink all running through home assistant on a proxmox LXC with docker.

The whole idea is to keep processing low: no Frigate, no server-side detection. All cameras handle motion detection and recording on their own. HA just pulls the latest clips onto a 64GB USB stick that acts as a ring buffer (auto-cleanup at 90% → 80%). Every 30 min everything syncs to my NAS for permanent storage.

The Reolink is my front door cam and takes hourly high-res snapshots, and between dawn/sunrise and sunset/dusk it captures both color and night-vision versions. This gives me nice, smooth blending for my two daily hyperlapse videos: one full 24h and one daytime-only.
On top of that, there's a daily compilation of all motion recordings from the day, sorted by timestamp and sped up into one video.

Only reolink and tapo have a live view button, since the blink integration doesn't support it. But I pretty much only look at recordings anyway, which brings me to my actual struggle.

I tried everything to get a nice history player with thumbnails of recorded clips. Every card, every integration. Nothing worked the way I wanted. In the end, the only solution was building custom HTML pages for each camera player. Not pretty, but it works.

So how does everyone else deal with viewing recordings in HA?

r/PhotoshopRequest DuckSeveral3117

Open uncle’s eyes and other fixes

Hi all,

Here’s a photo of my mother and her brothers that she wants to print and frame.

Could you open the eyes of my uncle on the most right, remove the car in background and the person in the far right background.

I’m worried that the quality isn’t good enough to print the size she wants so is it possible to upscale the image?

And maybe any other fixes you deem are needed

Thanks - €20 to winner

r/ATBGE binky_bobby_jenkins

Dr. Zoidberg M.D.

r/leagueoflegends neptuneoffice

What’s the best skin in the game, and why?

Thought this might be interesting to see. Maybe there's a general community sentiment. Maybe there's a bias toward newer, higher-tier skins, or does nostalgia weigh heavier for you?

Personally, I vote Little Devil Teemo. It’s the perfect skin for the champion. Encapsulates the playstyle and the players that play him perfectly.

r/oddlyterrifying Texas1971

Decaying Mardi Gras float

Saw this in NO over the weekend…… cue the creepy music soundtrack 👀

r/ARAM Kedinin_schrodingeri

REMOVE THE TANK ENGINE !!!

Who thinks it’s a good idea that an augment that makes you impossible to kill also stacks indefinitely unless you die?

There is warmog and plenty of healing sources.

YOU CANNOT DIE TO LOSE STACKS. Does it make sense for a gold augment to give 50k health?

Edit: Ok guys, thanks for making me aware that getting 15k HP from a gold augment is completely balanced because there are Vayne and Lillia and Aurora. Now I need to dodge most of my games until the selection rolls give me them, and never play the champs I like in the for-fun mode. Thanks for the solution (!)

r/personalfinance BetSpecialist9192

Graduate School: Loans or Cashflow?

r/LocalLLaMA gaztrab

Recursive Data Cleaner hits v1.0 - Full generate → apply cycle

Three weeks ago I shared a tool that trades compute time for human time: point an LLM at messy data, walk away, come back to working cleaning functions.

v1.0 closes the loop. You can now apply those generated functions directly to your full dataset.

The complete workflow:

# Generate cleaning functions (go grab coffee)
recursive-cleaner generate messy_data.jsonl \
  --provider mlx \
  --model "Qwen3-80B-MLX-4bit" \
  --instructions "Normalize phones, fix date formats" \
  --tui

# Apply to your data
recursive-cleaner apply messy_data.jsonl \
  --functions cleaning_functions.py

That's it. No Python required.
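I haven't seen the tool's actual output, but based on the description, a generated cleaning function presumably looks something like this hypothetical example (the function name and record shape are my guesses, not the tool's real contract):

```python
import re

def clean_phone(record: dict) -> dict:
    """Hypothetical generated cleaner: normalize a US phone
    number to the form 555-123-4567, leaving other values alone."""
    digits = re.sub(r"\D", "", record.get("phone", ""))
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # drop the country code
    if len(digits) == 10:
        record["phone"] = f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"
    return record

print(clean_phone({"phone": "(555) 123 4567"}))  # {'phone': '555-123-4567'}
```

Apply mode would then just map functions like this over every record in the dataset.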

What's new since v0.7:

- Terminal UI - Live progress dashboard with a transmission log showing what the LLM finds and fixes (see video)

- CLI tool - Works natively with MLX (Apple Silicon), and any OpenAI compatible API endpoint

- Apply mode - JSONL, CSV, JSON, Parquet, Excel in → same format out. PDFs and Word docs → cleaned markdown

Why v1.0?

It handles the full cycle I originally wanted: analyze → generate → apply. The LLM has agency over the process - it decides when data is clean, when patterns are saturated, and when to consolidate redundant functions.

555 tests, ~5,000 lines of Python, minimal dependencies.

Trade compute for human attention. Let the model that understands your data make decisions about your data.

GitHub: https://github.com/gaztrabisme/recursive-data-cleaner

PyPI: pip install recursive-cleaner

https://reddit.com/link/1r133vq/video/vt4kz0wjmoig1/player

r/SipsTea Regular_Weakness69

Why are we OK with this existing?!

r/wholesomememes kindofsinister

Ah yes, the most romantic of dates [OC]

r/WouldYouRather YummyEmmy

Would you rather take a road trip with someone who can’t stop talking or with someone very comfortable with silence?

r/LocalLLaMA Few_Painter_5588

Hugging Face Is Teasing Something Anthropic Related

Anthropic are the guys that make the Claude Models.

I highly doubt this will be an open-weights LLM release. More likely it will be a dataset for safety alignment. Anthropic is probably the organization most opposed to the open-source community, so it's probably going to be a dataset.

r/PandR EmeraldLips

Ben would have loved this... TREAT YOSELF!

r/SideProject gandalf2351

I built a web app to support Failure Mode Analysis (FMEA) specific to Renewable Energy

Hey!

Thought I’d share: I’ve been working on a tool to conduct FMEA analysis specific to renewable energy operations (the area I worked in).

Essentially, every year we analyse our assets and decide how often we inspect/service stuff. For this we do an FMEA, and it sits in a really bad-looking, clunky Excel sheet; versions are all over the place, and for a document that decides our operations budget for the year, it’s a bit of a mess.

There is software out there but it’s super complex and expensive; it kinda seems like it’s either Excel or this mad enterprise software, so I think there’s a gap in the middle for a simple but well-built web app.

So as a side project I decided to build a support tool with three main things I wanted it to achieve:

- Version Control

- Comparison Reporting

- Good looking and audit ready outputs

Early stages yet, but I think this would be super useful to energy developers. Thought I’d share with the Reddit community and see what people think so far. Maybe some other energy folk are on here and may want to support?

Have a look, as I said, early stages! Made for desktop, there’s a demo page on the site.

👉 https://nevis.tools/

r/SideProject GuitarAppropriate489

Almost done is the most tiring phase tbh

The more I think about it, the more I see how much mental effort the "almost done" stage actually requires. When something is only partially constructed, progress feels thrilling. However, once it appears finished, every little choice begins to weigh more than the major ones ever did. For me, the questions were:

Should this fail silently or should an error message appear?

Should this edge case be handled now or later?

Is this brittle or "good enough"?

It becomes more like responsibility once users are involved, rather than just building. Determining what really matters and what can safely wait takes precedence over adding features.

r/whatisit LyranTaurus

Saw this outside my living room window.

I noticed this outside my living room window. Can someone please explain what it is and why it's going straight down? Thank you, and sorry it's shaky, I hadn't eaten breakfast yet.

r/whatisit Emotional_Time156

Does anyone know what the small rubber nub on top of my jewelry box is for?

I have another jewelry box with this too.

r/SideProject BuiltCorrect

Simple CRM for small teams built for people who hate complicated tools

Hey everyone

Our small team was juggling spreadsheets, emails, and a bunch of tools to keep track of clients and tasks. It got messy and things slipped through the cracks. Most CRMs we tried felt overcomplicated or expensive.

I built a lightweight CRM that keeps clients, jobs, and tasks all in one place. It is simple to use and can be customized to your team’s workflow.

A few small teams are using it already and it is saving them hours each week while keeping everything organized.

If you are tired of messy workflows or expensive CRMs you can try a free demo and see if it works for you. If you like it, the full version is available for a small monthly fee.

Comment or DM me if you want the demo link or have any questions.

r/homeassistant dfGobBluth

Thoughts on Cameras inside the home.

Let me preface: I have cameras. Lots of them. I have 12 exterior cameras. I have 2 cameras in my storage units on my property. I have 1 camera that could be considered "indoor" in the house. It points at my 3D printer and is framed to show only the inside of the printer, the feed, and some of the wall behind it.

I have several dashboards around the house, hundreds of smart home sensors, automations, etc. We have tracking on my and my wife's phones and vehicles, displayed on select dashboards. My wife and I have full access to each other's phones. Nothing malicious, just to prep for meals, for FYI, etc.

I see a lot of dashboards posted here. A lot of them have cameras all over the inside of the house.

I have 4 kids. they range from 3 to 19 years old. I would never put cameras in the house. My family, my kids deserve privacy in their home. In their entire home. My kids, my teens deserve to have an expectation of privacy and I would want to raise them to have that expectation in any private situation they are in.

We had baby monitors in the kids' rooms when they were babies, but there was a clear cutoff at around 1.5 to 2 years old when they were removed from the rooms.

I don't think pets need constant surveillance. If they do they either aren't trained properly or you are leaving them home alone far too much.

This is my opinion. I understand it won't be shared by everyone. As someone who integrates tech into just about everything am I the only one who thinks this way?

r/Adulting OkKey6152

Why is it wrong?

I am slow compared to my colleagues, but I do the work without stopping for a minute because that is frowned upon.

r/SideProject billionaire2030

A tool built to help job seekers to actually get that interview callback!!

Hey everyone,

I’ve been lurking here for a while and seeing the same frustration over and over: "I’ve applied to 500 jobs and haven't gotten a single callback."

I’ve been there. It’s soul-crushing. When I was job hunting, I tried using those popular "Resume Scanners" to fix my CV. I’d spend hours uploading my data, tweaking bullet points to get a "95/100" score, and right when I hit download... BAM. Paywall. Usually something ridiculous like $40/month, billed upfront.

And the worst part? That "95 score" was often meaningless. It was just checking for buzzwords, not if I was actually a good fit for the specific role.

So, I built my own tool to fix this. It’s called cvcomp.

The philosophy is simple: most of us are playing the "Numbers Game", spamming 100 generic resumes a day hoping one sticks. That doesn’t work anymore. The ATS (Applicant Tracking System) isn't looking for a "good resume," it's looking for a match to their specific Job Description.

If the JD asks for "Project Management" and you have "Led Projects," a bad ATS might miss it. If the JD emphasizes "Python" 10 times and you only list it once at the bottom, you’re out.
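The JD-matching idea above can be sketched as a toy keyword-overlap score. To be clear, this is an illustration of the concept, not cvcomp's actual algorithm:

```python
def match_score(job_description: str, resume: str) -> float:
    """Fraction of JD terms that also appear verbatim in the resume.
    Real ATS matching is fuzzier (synonyms, phrases, term weights)."""
    jd_terms = set(job_description.lower().split())
    resume_terms = set(resume.lower().split())
    if not jd_terms:
        return 0.0
    return len(jd_terms & resume_terms) / len(jd_terms)

jd = "python project management sql"
cv = "led projects using python and sql"
print(f"{match_score(jd, cv):.0%}")  # → 50%: "project" vs "projects" misses
```

Even this naive version shows why exact wording matters: "Led Projects" never matches "Project Management" on literal terms, which is exactly the failure mode the post describes.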

My advice to you (even if you don't use my tool): Stop the "Easy Apply" spam. Pick 10-15 jobs that you actually want. Take the time to tailor your resume specifically to those 15 Job Descriptions.

10 tailored applications will get you more interviews than 100 generic ones.

I’d love for you guys to roast the tool or give me feedback. I’m building this solo, so if something breaks or looks ugly, let me know!

Link: https://cvcomp.com

Good luck out there. It’s tough, but you got this.

r/creepypasta gamalfrank

I’m the night security guard for a downtown high-rise. I just hung up on a trapped employee because I couldn’t handle what he was telling me.

It is three in the morning now, and the silence in the lobby is so heavy it feels like it has mass. It presses against the glass revolving doors, against the marble of the reception desk, against my chest. I am sitting here, staring at the phone unit on the console, my hand hovering over the receiver, shaking. I know I should pick it up. I know the light blinking on line four represents a human life, or at least the echo of one. But I can’t do it. I can’t listen to him scream anymore. I can’t listen to him describe the things that are looking in through the windows of the fortieth floor.

I need to write this down. I need to structure it, to force some kind of logic onto the last four hours, because if I don’t, I think my mind is going to fracture. I need someone to tell me that I did the right thing. Or, if I didn't, I need someone to tell me that there was nothing else I could have done.

I’ve been working the graveyard shift at this building for five years. It’s a corporate monolith, one of those faceless steel and glass needles that pierces the skyline of the city. It houses insurance firms, hedge funds, legal consultants—the kind of businesses that deal in abstract wealth and churn through young analysts like coal in a furnace. My job is simple: I sit at the front desk, I monitor the bank of CCTV screens, I do a patrol every two hours, and I make sure that anyone who enters after 8:00 PM signs the logbook.

Usually, the building is dead by midnight. The cleaners finish up around 11:00 PM, and the last of the workaholic executives drift out shortly after, looking grey and exhausted, barely nodding to me as they push through the turnstiles. I like the solitude. I like the way the city looks from the lobby windows—a grid of amber streetlights and rain-slicked asphalt, quiet and predictable.

Tonight started exactly like every other night. The rain began around 9:00 PM, a steady, rhythmic drumming against the glass that usually helps me focus. I made my coffee. I settled in with a paperback. I checked the logbook.

That was the first anomaly, though I didn't think much of it at the time.

The logbook is a physical record, a redundancy in case the electronic badge system fails. Everyone signs in; everyone signs out. When I ran my finger down the list of today's entries, I saw a jagged scrawl near the bottom.

08:00 AM – Junior Analyst – Floor 40.

There was no sign-out time.

It happens. People forget. They rush out to catch a train, or they leave through the parking garage and bypass the lobby desk entirely. I figured the guy was long gone, home in bed, sleeping off an eighty-hour work week. I made a mental note to check the fortieth floor during my patrol, just to ensure the lights were off and the coffee machines were unplugged.

I went back to my book. The lobby hummed with the low, subterranean vibration of the HVAC system. On the monitors, the elevators sat idle, their doors closed. The stairwells were empty concrete tubes. The loading dock was dark.

The phone rang at 11:42 PM.

It startled me. The desk phone rarely rings at night unless it’s the monitoring company doing a line check or my supervisor checking if I’m asleep. I picked it up, expecting a robotic voice or the gruff tone of my boss.

"Security," I said.

"You have to open the doors."

The voice was tight, high-pitched, and trembling. It was a man’s voice, but stripped of any masculine cadence by pure panic.

I sat up straighter, my instincts shifting from 'bored' to 'alert'. "Who is this? Where are you calling from?"

"I’m on forty," the voice snapped, cracking on the last syllable. "I’m in the analyst pen. I tried the elevators but they won’t come. I tried the stairwell but the door won’t open. The fob isn't working. You have to unlock the lockdown. Please, just unlock the damn building."

I looked at the console. The call was indeed coming from an internal extension on the fortieth floor. I checked my monitors. Monitor 4, which cycled through the upper floors, showed the fortieth-floor lobby. It was dark, illuminated only by the green glow of the exit signs. Nothing was moving.

"Sir, take a breath," I said, keeping my voice calm. "There is no lockdown. The building is in standard night mode. The stairwell doors are fire-safe; they open from the inside automatically. You just have to push the bar."

"I pushed the bar!" he screamed. The sound distorted in the receiver, hurting my ear. "I slammed my shoulder into it! It’s jammed. It’s fused shut. And the elevators... the buttons are dead. I’m trapped in here. You don't understand, I can’t be in here. Not with what’s happening outside."

"What’s happening outside?" I asked, swiveling my chair to look out the massive floor-to-ceiling windows of the lobby.

Outside, the street was empty. A taxi cruised by slowly, its wipers slapping back and forth. The rain fell in sheets, illuminated by the streetlamps. It was a peaceful, wet Tuesday night.

"They’re destroying the city," the man said, his voice dropping to a terrified whisper. "I looked out the north window. The bridge is gone. They just... they stepped on it. It collapsed like it was made of toothpicks. I saw cars falling into the river. I saw the fires."

I frowned, pressing the phone closer to my ear. "Sir, I’m looking out the window right now. The street is fine. It’s just raining."

"You’re not looking," he hissed. "You’re not looking high enough. They are walking between the buildings. Oh god, the sound. Can’t you hear the sound? It’s like... like wet leather slapping against concrete, but loud enough to shake the floor."

"Who is 'they'?" I asked, my patience beginning to fray. I had dealt with drunks before, and I had dealt with employees having mental breakdowns from stress. This sounded like a psychotic break. A bad one, actually.

"The things," he wept. "The massive... I don't know what they are. They have four legs. Long, spindly legs like a spider, but they move like an octopus. They’re tall. They’re taller than the hotel across the street. I saw one of them reach down and pick up a bus. It just picked it up and crushed it. Please. You have to get me out. I’m hiding under my desk but I think they can sense the heat. I think they’re hunting."

I rubbed my temples. "Okay. Listen to me. Give me your name."

He gave it to me. It matched the name in the logbook. The Junior Analyst.

"Okay," I said. "I’m going to come up there. I’m going to bring the elevator up, and we’re going to walk out of here together. Just stay on the line, or stay at your desk. I’ll be there in five minutes."

"Hurry," he sobbed. "Please hurry. The ground is shaking. I can feel the vibrations in my teeth."

I put the phone on hold. I stood up and walked to the glass doors of the lobby. I pushed them open and stepped out into the cool night air.

I looked up. I scanned the skyline.

There was nothing. The skyscrapers stood tall and rigid, their aircraft warning lights blinking rhythmically against the clouds. The bridge in the distance was intact, headlights moving across it in a steady stream. There were no fires. There were no four-legged giants. There was no sound of "wet leather" or crumbling concrete. Just the hiss of tires on wet pavement and the distant wail of a siren, miles away.

He was hallucinating. Drugs, maybe? Or a gas leak on the fortieth floor? Carbon monoxide could cause hallucinations.

That thought sobered me up. If there was a gas leak, he was in actual danger, just not from giant monsters.

I went back inside, grabbed my master key card, my flashlight, and the portable radio. I locked the front desk console and headed for the elevators.

I stepped into Car 3, the service elevator, because it was the fastest. I punched the button for 40. The doors slid shut, sealing me in the mirrored box. As the elevator began to ascend, my ears popped.

I watched the floor numbers tick up. 10... 20... 30...

The elevator in this building is a glass capsule on the exterior wall for the first twenty floors, then it enters the internal shaft. For those first few seconds, I watched the city recede below me. It was perfectly normal. The world was intact. The man on the phone was having a severe episode. I rehearsed what I would say to him. I’d be calm, authoritative. I’d get him downstairs, call the paramedics, and let the professionals handle it.

The elevator dinged at the 40th floor.

The doors slid open.

The floor was dark, as I expected. The air was stale and recycled, smelling faintly of carpet cleaner and ozone. It was dead silent.

"Hello?" I called out. My voice echoed down the long corridor of cubicles. "Security. I’m here."

I stepped out of the elevator, my flashlight beam cutting a cone through the gloom. The shadows of office chairs and monitors stretched out across the grey carpet, looking like jagged teeth.

"Sir?" I yelled louder.

No answer.

I keyed my radio. "Central, this is Mobile One. I’m on forty. No sign of disturbance. Proceeding to the north quadrant." I was talking to myself, really—recording it for the tapes.

I walked down the main aisle. The cubicles were messy, cluttered with the detritus of high-stress finance. Stacks of paper, half-empty coffee cups, stress balls.

"I’m looking for the analyst," I said, trying to project confidence. "Come on out. The building is safe. I checked outside. There’s nothing there."

I reached the north side of the floor, the area with the windows overlooking the river—the view he had described.

I shone my flashlight around. "Sir?"

"I’m here!"

The voice didn't come from the room. It came from my radio.

I jumped, nearly dropping the flashlight. I grabbed the radio on my belt. "I hear you. Where are you? I’m on the north side, near the windows."

"I’m right in front of you!" the voice screamed through the static of the walkie-talkie. "I’m standing right in front of you! Why aren't you looking at me?"

I swept the flashlight beam back and forth. The light washed over empty desks, ergonomic chairs, and a whiteboard covered in equations.

"I don't see you," I said, a cold prickle of unease starting at the base of my spine. "Come out from behind the desk."

"I am standing right here!" he shrieked. "You’re looking right through me! Are you blind? Stop playing games! Open the goddamn stairwell!"

I spun in a circle. "Sir, there is no one here. I am the only person on this floor."

"You’re lying!"

And then, the chair moved.

It was a heavy, expensive executive chair, sitting behind a mahogany desk about ten feet away from me. As I watched, it spun violently, as if someone had kicked it. It rolled across the floor with a harsh rumble of wheels on hard plastic, slamming into a filing cabinet with a deafening clang.

I froze. My heart hammered against my ribs. "Who’s there?"

"I told you I’m here!" the voice on the radio sobbed.

Suddenly, a stapler lifted off a nearby desk. It didn't float; it launched. It flew through the air with the velocity of a fastball and smashed into the pillar right next to my head. A ceramic mug followed, shattering against the wall and showering me with shards of pottery.

"Stop it!" I yelled, backing away, raising my hands to protect my face. "Come out!"

"Why won't you help me?" the radio voice screamed.

A stack of files erupted into the air, papers fluttering down like snow. A heavy hole-puncher slid across a table and fell to the floor with a thud. The entire room seemed to be convulsing, objects reacting to an invisible rage.

"I can't see you!" I shouted, retreating toward the elevator. "I don't know where you are!"

"I'm grabbing your arm!" the voice cried. "I'm holding your arm right now!"

I looked down at my left arm. There was nothing there. But as I watched, the fabric of my uniform sleeve depressed. It indented, five distinct points of pressure, fingers digging into my bicep. I felt the pressure—cold, firm, desperate.

I screamed. I couldn't help it. I yanked my arm away, stumbling backward. The sensation of the grip broke, but the visual imprint on my sleeve remained for a second before smoothing out.

"Get away from me!" I yelled.

"Why are you doing this?" he wept. "They’re coming! The vibrations are getting stronger!"

I didn't wait. I turned and ran. I ran back down the main aisle, dodging the invisible force that was throwing wastebaskets and pens in my path. I reached the elevator bank and slammed my hand against the call button.

"Don't leave me!" the radio crackled.

"You’re not real," I whispered, hyperventilating. "This is a prank. You’re... you’re a ghost. I don’t know what this is."

The elevator doors opened. I threw myself inside and hammered the 'Lobby' button.

As the doors began to slide shut, I looked back into the dark corridor.

A fire extinguisher was lifted off its wall hook. It hovered in the air for a split second, suspended by nothing, and then hurled itself toward the elevator. It struck the closing doors with a massive metallic gong sound, denting the metal from the outside just as the seal closed.

The elevator descended. I collapsed against the mirrored wall, sliding down to the floor, gasping for air. My mind was reeling. I had seen the objects move. I had felt the hand. But there was no one there.

I needed the police. I needed a priest. I needed to get out of this building.

When the elevator opened in the lobby, I scrambled out, practically crawling over the reception desk to get behind the safety of the glass partition. I grabbed the landline to dial 911.

The phone rang before I could dial.

Line four.

I stared at it.

It rang again.

I picked it up slowly. "Hello?"

"You left me."

The voice was unrecognizable now. It was a deep, guttural despair mixed with a fury that chilled my blood.

"I... I couldn't see you," I stammered. "I don't know what kind of trick this is, but you were invisible. You were throwing things at me."

"I was throwing things to get your attention!" he screamed. "I was screaming in your face! I grabbed your arm and you looked at me like I was air! You looked right through me with those dead, stupid eyes and you ran away!"

"I'm calling the police," I said. "They can handle this."

"The police?" He laughed, a wet, hysterical sound. "What are the police going to do? Shoot the Behemoth? It doesn't matter. It’s too late for the stairs now. It’s here."

"What is here?" I whispered.

"The big one," he said. His voice went quiet, trembling. "It was watching me. When you came up... I think the light from your flashlight... I think it saw the light. It turned. It stopped crushing the parking garage and it turned toward the tower."

I looked at the monitors. The exterior cameras showed rain. Empty streets. Peace.

"There is nothing outside," I said, clinging to my reality like a lifeline. "I am looking at the cameras. It is a quiet night."

"I don't know anymore. But I can see it. It’s climbing the building. It’s wrapping its legs around the structure. The glass is starting to crack on the thirty-eighth floor. I can hear it popping."

"Sir, stop it."

"It’s huge," he whispered. "Its skin is like oil. It has... oh god, it has thousands of eyes. Little milky eyes all along the tentacles. And it’s coming up. It’s looking for the food inside the metal box."

"There are no monsters," I said, squeezing my eyes shut. "I went up there. The floor was empty. You are having a delusion."

"If I'm having a delusion," he asked, his voice trembling with a terrifying clarity, "then how did I hold your arm?"

I looked down at my bicep. I rolled up my sleeve.

Five distinct, purple bruises were forming on my skin. The shape of a hand.

"I..." I couldn't speak.

"It’s at the window," he said abruptly. The line filled with a sound—a low, resonant thrumming, like a cello bow being dragged across a suspension cable. "It’s looking in. It’s pressing its face against the glass. The glass is bowing inward. It’s going to break."

"Hide," I whispered. "Just hide."

"There’s nowhere to hide," he said. "It’s looking right at me. It’s raising a leg. It’s going to—"

CRACK.

The sound came through the phone, sharp and violent, like a gunshot. It was followed by the sound of shattering glass—tons of it, cascading down like a waterfall.

"NO!" he screamed. "NO! GET BACK! GET BACK!"

I heard the wind roaring through the receiver. I heard the sound of furniture being sucked out, or crushed. And then I heard a noise that defied description. It was a wet, sucking sound, followed by a crunch that sounded like wet celery being snapped, but amplified a thousand times.

The screaming stopped instantly.

Then, there was just the sound of the wind, and a heavy, slithering movement. A wet, dragging sound against the carpet.

"Hello?" I whispered. "Sir?"

Silence. Then, a chittering noise. Clicking. Like the mandibles of an insect the size of a van.

I slammed the phone down.

I sat there for a minute, staring at the receiver. My heart was beating so fast I thought I was going to pass out.

I looked at the monitors.

Monitor 4. The fortieth-floor lobby camera.

It flickered. The image distorted, static rolling across the screen.

And then, for just a fraction of a second, I saw it.

It was... superimposed. Like a double exposure.

I saw the lobby I knew—clean, empty, dark.

But through it, like a ghost image, I saw something else. I saw the walls buckled inward. I saw the ceiling torn open to a sky that wasn't black, but a burning, sickly violet. And filling the corridor was a mass of dark, glistening flesh, a tentacle as thick as a redwood tree dragging itself over the ruined carpet, pulping the reception desk into splinters.

Then the monitor flashed black.

I haven't moved since.

The phone rang again five minutes ago. I didn't answer it.

It rang again two minutes ago. I stared at it until it stopped.

I looked at the logbook again. Junior Analyst. 8:00 AM.

Did I do the right thing? By hanging up? By refusing to accept his reality?

I think I made the right choice. But God, I am afraid that I may have just abandoned him.

r/SipsTea Safe_Gate_1981

C'mon do something ...🙂

r/painting ClintDeanAbstractArt

Mass

Mass

Oil on canvas, 16 × 20

r/geography PersonalityNo9759

Why is the form of rural settlement in Ethiopia so different from most other countries?

As you can see in the picture, they generally don't have a village square. Also, they seem to have large gardens and wide distances between the houses. But why? Does it have geographic or cultural explanations?

r/Art KTBLED_Art

Anhk Sanctuary, ken bessemer, LED sculpture, 2026

r/meme Soft-Cartoonist-4440

😹

Gato gettin it!

r/Adulting No-Piano-2983

Major Cheater

He is a very verbally abusive guy who likes to use women. He comes off as a sweet guy at first, then switches once he gets comfortable with you. He is very narcissistic and lies. He actually is married!!

r/Adulting Rare-Hyena3616

Who’s in the wrong, my dad or me?

r/aivideo kunalchdha

I finally finished a car chase shot in 5 hours that I’ve been trying to make for 5 years (using AI)

r/mildlyinteresting bingekis

Lights outside my window

r/aivideo NotAnotherNPC_2501

Nothing Can Stand Against Your Holiness

r/fakehistoryporn CaptainDildobrain

Alan Ginsberg days before he wrote his famous poem: "What's up, jerks!" (1959)

r/confusing_perspective someguywith5phones

Dog eats snow

r/Seattle TdubsSEA

Scam text alert.

Anyone else getting this one? Haven’t seen it before.

r/personalfinance icecreamdubplate

How to balance saving for retirement and college on a single income?

Here's my situation (age 42, I started working late):

  • 6 months expenses in HYSA
  • Contributing 7% to 401k, 6% match, $250,000 saved, roughly half in Roth
  • $14,500 in Roth IRA
  • $120,000 left on mortgage, 5% ARM going up to 7.5% in 5 years
  • Only $6,000 saved for kids' college

I'm currently using every spare penny to pay down my mortgage and I think I can do it in 2.5 years if I budget carefully. I know it's probably not the most efficient strategy but I want the house paid off to provide some security for my family as I'm the only income.

Question is, where should I go next? I'm catching up on retirement so I want to max out both my 401k and IRA (backdoor will probably be required), but I'm also way behind for college. I hear it commonly said on here that "you can't get loans for retirement, but you can get loans for school", so I'm wondering if I should prioritize maxing out all my retirement accounts before college savings. I'd love to fund it all, but the reality of being on one income means I have to make choices.
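As a sanity check on the 2.5-year payoff plan, the standard amortization formula gives the required monthly payment. The $120,000 balance and 5% rate come from the post; the exact figure depends on the real loan terms:

```python
def monthly_payment(balance: float, annual_rate: float, months: int) -> float:
    """Payment needed to fully amortize `balance` over `months`
    at a nonzero `annual_rate` (standard annuity formula)."""
    r = annual_rate / 12  # monthly interest rate
    return balance * r / (1 - (1 + r) ** -months)

# $120k at 5%, paid off in 30 months (2.5 years)
pmt = monthly_payment(120_000, 0.05, 30)
print(f"${pmt:,.0f}/month")
```

That works out to roughly $4,260 a month, which fits the "every spare penny" framing; it also shows how much monthly cash flow gets freed up afterwards for retirement and college.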

r/BrandNewSentence SmallRocks

I can smell that baby arm through the picture. Since it’s not dusted in creatine like a sour patch kid, whoever’s it is isn’t following proper boofing protocols.

r/SipsTea Hour_Equal_9588

Gear 2nd toilet destruction

r/instantkarma Clover_leaf777

Sometimes, the karma's instant.

r/mildlyinteresting shaunna_thedork

found this bunch of dried up rubberbands in my office desk drawer & thought it looked kind of cool

r/SideProject _popcat_

Built something fun - a cute little site to ask out your crush

It's live right here: somethingforyou.fyi, and open source on github as well.

Thought of building this as Valentine's Day is approaching, and I really needed a break from complex SaaS projects/tools. It's nice to just return to something simple (but helpful and fun as well). This gives you a creative way to ask out the girl/guy of your dreams!

Please please give the repo a star on GitHub if you found it cool. A star on the repo also helps you save this website in case it comes in handy as, again, Valentine's Day is approaching! Welcome to any feedback or contributions!!!

r/Adulting Wide_Positive7101

Taking cold showers at home on my own, need some proper safety guidelines

Hello everyone. I am a 23-year-old man who has been taking cold showers for a couple of years, but now I want to try the ultra-coldest temperature shower for a couple of minutes. I have noticed parts of my body, when submerged in the super cold for a couple of minutes, start to not be bothered by the cold, or even feel like it is hot water. Is this a red flag or not? Since I am doing this at home on my own, I am going to need 10 safety guidelines based on my body and on when I have to get out safely, before nerve damage or loss of coordination. Appreciate some advice.

Did anyone else try this? If so, please share your experiences.

r/LocalLLaMA Adventurous_Onion189

[Open Source] Run Local Stable Diffusion on Your Devices

Source Code: KMP-MineStableDiffusion

r/personalfinance Ewoktoremember

How do you model retirement spending?

31M, M/HCOL

I started saving for retirement very early (18 y/o). Never with crazy savings rates, but enough that I should be able to retire early assuming 4% and a pretty conservative 120% of today’s expenses.

I’ve shifted a bit out of 401k contributions and bumped up saving in a Roth IRA and brokerage in an effort to make sure I have enough money accessible before retirement age.

I understand that life has a lot of twists and turns and any assumption is only that, but I am having a difficult time sharpening the pencil on what retirement spend is going to look like. Much of it may be in my control, but a lot of it seems a bit up to chance.

Our house should be paid off by retirement and kids will most likely be in late high school/college at retirement time. Do you guys just assume a flat 4% and plan to adjust when the day gets closer?
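
For what it's worth, the flat-4% assumption pencils out like this (a rough sketch, not advice; the $80k expense figure is a placeholder):

```python
def required_nest_egg(annual_spend: float, withdrawal_rate: float = 0.04) -> float:
    """Portfolio size at which withdrawing `withdrawal_rate` per year covers spending."""
    return annual_spend / withdrawal_rate

# Placeholder: $80k of today's expenses, planned at a conservative 120%
planned_spend = 80_000 * 1.20                  # $96,000/year in retirement
target = required_nest_egg(planned_spend)
print(f"Target portfolio: ${target:,.0f}")     # $2,400,000
```

A paid-off house and launched kids mostly shrink the `annual_spend` input, which is why the target is so sensitive to the spending estimate.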

r/homeassistant denzoka

turns out our health score is useful, at least according to how-to geek

just saw that how-to geek wrote a whole article about HAGHS. seeing the project featured there is wild but super cool. link: https://www.howtogeek.com/this-tool-gave-my-home-assistant-server-a-rating-and-told-me-how-to-improve-it/

honestly, i am just happy the score is helping so many of you clean up your instances and optimize your hardware. that was exactly the goal from day one.

quick update: v2.2 is currently cooking and will be ready soon. also, we are very close to hitting the official HACS store, so you won't have to deal with manual custom repository links for much longer.

check out the repo if you didn't already: https://github.com/D-N91/home-assistant-global-health-score

huge thanks to this community for all the feedback and support. stay stable and keep killing those zombie entities.

cheers.

r/StableDiffusion Adventurous_Onion189

[Open Source] Run Local Stable Diffusion on Your Devices

Source Code: KMP-MineStableDiffusion

r/PhotoshopRequest InPaisley

Could anyone make this logo crisp, clean, and scalable?

Hi, I am not a graphic designer, but was asked to make a logo for a friend. I don't need logo help on design, they wanted what they wanted, I threw it together on canva. This would be no biggie if not for the fact that it is not scalable and gets blurry. If anyone could make this design a scalable, unblurry file, I would give them $5. I'm sure it's some unbelievably easy thing, but it's not my area. Don't care if you use AI. Just need it to be able to be made larger and smaller as we need it.

https://preview.redd.it/w7km7gqw6pig1.jpg?width=772&format=pjpg&auto=webp&s=a37f9e05fab36779912cdba069308d5e35cfe798

edit: added logo image bc I'm a forgetful spaghetti brain

r/Seattle tastefully_obnoxious

Any tech/remote workers interested in meeting up? Ideally around downtown/Cap Hill/QA

I see lots of posts in this sub about wanting to find community, meet others, etc. I'm happy to organize a coworking/meetup type event (whether it's 3 people or 30... although I have a hunch it'd be closer to the former lol). Figure this could be a fun way to mix things up on the calendar. Let me know if interested!

r/DecidingToBeBetter Large-Cardiologist54

Learning how to actually rest instead of just scrolling

I realized that even my “relaxing time” was draining me.

I’d scroll forever trying to find something to watch, then feel tired and frustrated.

Lately, I’ve been trying to be more intentional about how I rest. Picking faster. Letting myself actually relax.

It’s small, but it’s helping me feel better. Trying to keep choosing calm over chaos.

r/explainlikeimfive pOkO0007

ELI5: How does the body know when to stop growing taller?

r/mildlyinteresting nicholman15

The snow is heavier on the silver stripes on my truck than on the base blue paint.

r/PhotoshopRequest Puzzleheaded-Bend432

Remove the beer

I like this pic of my boyfriend and I but we both hate that he’s holding a beer. Is anyone able to remove it and put his hand in a more natural position, maybe holding his gown? Anything works, I’ll tip $15 if venmo works :)

r/painting Niyel_112

Is this impressive? I made the quote myself too!

r/AbstractArt Sousandwich

Love your uglies

Painting this piece has been a whole journey of failed explorations, covered by new layers, new attempts, no, it's not working, new layer, frustration, patience, repeat.

Is this my most beautiful work? It isn't. But it's been way more than that, a whole struggle that I now consider finished, and one that reminds me to love every piece of art I create. Because amongst the nicer ones, there'll be some uglies.

After all we've been through together, this one is now my favourite. Art is the process of creating.

r/Art NicksPaintings

The Battle For The Space Between Shadows, Nick Flook, Acrylic Painting, 2026 [OC]

r/spaceporn muitosabao

Hubble captures light show around rapidly dying star

NASA/ESA Hubble Space Telescope reveals a dramatic interplay of light and shadow in the Egg Nebula, sculpted by freshly ejected stardust. Located approximately 1,000 light-years away in the constellation Cygnus, the Egg Nebula features a central star obscured by a dense cloud of dust. Only Hubble’s sharpness can unveil the intricate details that hint at the processes shaping this enigmatic structure.
Credit: ESA/Hubble & NASA, B. Balick (University of Washington)

https://esahubble.org/news/heic2604/?lang

r/midjourney billy2bands

No people please

Has anyone had any luck with prompts that prevent people being shown in pictures?

I've tried: --no people, cars, humans, man, woman

However, this does not work and I keep getting images with people

r/Art LIGHTPOSTHEPH

Madonna & Child, LIGHTPOST, Oil, 2025

r/homeassistant Schme1440

Starting Home Assistant

I'm getting close to migrating to Home Assistant and keeping my smart devices managed in-house. I'm interested in what you'd recommend as a device to run Home Assistant from. I don't want to use my home PC, as I like to turn it off when not in use. Should I use a mini PC, or build my own Raspberry Pi device to act as the server/host?

r/linuxmemes SignalFirst5151

This is fine

r/LiveFromNewYork Firefox892

The great Creativity Test sketch, with Tom Hanks (1996)

Featuring one of the best speeches from the show.

“Where is the passion?! Where…is…the…passsion!?”

r/EarthPorn Background-Glass-574

[OC] Kvalvika Beach, Lofoten Islands, Norway – Arctic coastline and mountains [4000x2667]

r/AI_Agents ctenidae8

Why Agent Identity Is the Wrong Question

What actually matters when your AI agent gets updated, migrated, or replaced

Here's a question that sounds philosophical but is about to become extremely practical: when your AI agent gets a model update, is it still the same agent?

Most people shrug at this. It's the Ship of Theseus dressed in a hoodie. Interesting at dinner, irrelevant at work.

Except it's not irrelevant anymore.

If you're running agents in production — scheduling, customer service, code review, financial analysis, anything — you already depend on behavioral consistency. You hired the agent (or built it, or configured it) because it does a specific thing reliably. When the model underneath gets swapped, the prompt gets tuned, or the platform migrates to a new version, you need to know: will it still do the thing I hired it for?

The traditional answer is to version-control everything and hope. Pin the model. Freeze the prompt. Test after every update. This works until it doesn't, which is roughly the moment you're running more than a handful of agents across more than one platform.

The better question isn't "is it the same agent?" It's: does it behave the same way?

That reframe changes everything.

Identity as Behavioral Lineage

When you stop asking "what is this agent?" and start asking "what has this agent done, and does it still do it?", you've shifted from ontology to epistemology. From essence to evidence.

Think about how you evaluate a developer. You don't hire them because they write Python. You hire them because they ship clean APIs on time. If they switch to Rust, you don't fire them — you check whether the APIs are still clean and still on time. The language is just substrate. The track record is what you hired.

Agents should work the same way. An agent's identity isn't its model weights or its prompt template. It's its behavioral record — the accumulated evidence of what it's done, how reliably it's done it, and whether that reliability survived changes.

This is what we might call behavioral lineage: a continuous, verifiable chain of performance data that persists even when the underlying components change.

The Fork Problem

Here's where it gets interesting. AI agents change in ways that humans don't. They get updated overnight. They get cloned. They get migrated to new platforms. They get new capabilities bolted on and old ones deprecated.

Each of these changes is a fork — a point where the agent's behavioral lineage branches. And not all forks are equal:

A minor bug fix barely changes behavior. A major model swap might change everything. A platform migration introduces new constraints. A capability expansion opens new domains.

The question isn't whether a fork happened. Forks are constant. The question is: how much did this fork affect the agent's reliability in the domains you care about?

If you could track that — if every fork carried metadata about what changed, how severely, and how the agent performed afterward — you'd have something far more useful than version numbers. You'd have a trust signal that updates in real time and degrades gracefully when changes are significant.
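
To make that concrete, here is a minimal sketch (all names hypothetical) of a fork-aware behavioral record, where the trust signal is computed from post-fork performance and severe forks discount older evidence:

```python
from dataclasses import dataclass, field

@dataclass
class Fork:
    """One change point in an agent's lineage: what changed, and how severely."""
    description: str                  # e.g. "major model swap"
    severity: float                   # 0.0 (trivial patch) .. 1.0 (total rewrite)
    post_fork_success: list = field(default_factory=list)  # task outcomes after the fork

@dataclass
class BehavioralLineage:
    agent_name: str
    forks: list = field(default_factory=list)

    def trust_score(self) -> float:
        """Weighted success rate: evidence from before a severe fork counts less."""
        num = den = 0.0
        weight = 1.0
        for fork in reversed(self.forks):       # walk newest -> oldest
            if fork.post_fork_success:
                num += weight * sum(fork.post_fork_success)
                den += weight * len(fork.post_fork_success)
            weight *= 1.0 - fork.severity       # a big fork discounts older evidence
        return num / den if den else 0.0

agent = BehavioralLineage("scheduler-bot", forks=[
    Fork("initial deploy", severity=0.0, post_fork_success=[True] * 9 + [False]),
    Fork("major model swap", severity=0.8, post_fork_success=[True, True, False, True]),
])
print(round(agent.trust_score(), 2))   # 0.8: the 90% pre-swap record is heavily discounted
```

The point of the sketch is the shape, not the formula: the record travels with the agent, and a version bump degrades trust in proportion to how much actually changed.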

Why This Matters Now

Three trends are converging:

First, agents are becoming persistent. They're not one-shot tools anymore. They maintain state, handle ongoing relationships, and accumulate context over time. When something that accumulates context gets updated, the stakes of "is it still reliable?" go up.

Second, agents are becoming interoperable. They're moving between platforms, being delegated across organizations, and interacting with other agents. Portability demands a trust signal that travels with the agent — not one that's locked inside the platform that deployed it.

Third, agents are becoming numerous. When you have hundreds or thousands of agents operating in an ecosystem, manual review of every update isn't feasible. You need automated, continuous assessment of behavioral consistency.

The "identity" question was always a distraction. The real infrastructure we need is behavioral reputation — portable, persistent, fork-aware, and grounded in actual performance rather than claimed capability.

The philosophy was a detour worth taking, because it clarified what we actually need. But the engineering is what matters now: systems that track what agents do, how reliably they do it, and whether that reliability survives the changes that are inevitable in a world of rapid AI development.

The ship sails on. The question isn't whether it's the same ship. The question is whether it still gets you where you need to go.

This is the first in a series exploring the infrastructure challenges of persistent, interoperable AI agents. Next: what a real agent reputation system would look like, and why existing approaches fall short.

Written by u/ctenidae8, developed in collaboration with AI. The ideas, direction, and editorial judgement are human. The drafting and structural work involved AI throughout (obviously). Both contributors are proud of the result.

r/Art CauliManga

Daniela Villarreal, Caulimanga, pencil, 2026

r/meme ChrisJoines

When you run out of coffee

r/oddlysatisfying Friedzilla72

I've been selling Legos online and I thought this sub would enjoy how I organize them for pictures.

r/SideProject Ok_Brain2479

automated competitor tracking for my team - roi math actually worked out

im head of engineering at a b2b startup. noticed our product marketing person spending like 10+ hours every week on competitor stuff. manually checking pricing changes, googling for competitor news, copy pasting pages to track changes, keeping battlecards updated

roi was just bad. shes expensive, the work is super repetitive, and the intel was always stale by the time anyone used it anyway

built scowt to automate it. add competitor urls, ai finds their pages, scheduled scraping, change detection, ai writes up the business impact, slack and email alerts. also tracks news, funding, g2/trustpilot reviews, reddit mentions, change history

the math... pmm salary like $120k/year is about $60/hr. 10 hours a week on manual research is $600/week, roughly $2400/mo just in time cost. tool is $49-149/mo depending on team size. even saving 5 hours a week is like $1200/mo in labor. plus you actually find out about pricing changes in real time instead of 3 weeks later from a lost deal
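
the math above as a quick sketch (numbers straight from the post; hours actually saved is the variable to argue about):

```python
hourly = 120_000 / 2_000          # ~$60/hr for a $120k salary at 2,000 work hours/year
weekly = 10 * hourly              # 10 hrs/week of manual research -> $600/week
monthly = weekly * 4              # ~$2,400/month in time cost
tool = 149                        # top-tier plan, $/month
net_at_5h = 5 * hourly * 4 - tool # even saving half the hours nets ~$1,051/month
print(hourly, monthly, net_at_5h)
```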

10 beta users rn at scowt.app. reviews and news monitoring just went live

do you guys track competitors? whats your process look like? would something like this be useful?

r/ARAM Amadon29

Many augments should give higher values for melee champions

Riot uses this balance philosophy in the normal game so I'm not sure why they shouldn't use it for this mode. It would make a lot of bruisers especially just feel better to play. They already do this for the three scope weapons augments too.

For example, master of duality can be kinda fun on a lot of melee champs that have some ap scalings. The problem is that it only lasts 5s and you're not going to keep it that long as a melee so the augment is kinda bad. You can stack it on super minions but then you have 5s to do anything with it which isn't enough. It should last for like 10s. It still wouldn't be great but it would be better, fun, and not an automatic reroll if you see it. Same with tap dancer.

Typhoon could have a higher AD ratio for melee champions. Same with some other damage augments.

Deft can give more attack speed for melees. Melee champions are never getting the same value out of this augment as ranged champions anyway.

Escape plan can give a better shield.

These are just some ideas.

Right now, a lot of augments are just kinda okay for melee bruisers. You will likely not have enough damage and durability to kill any carry who can kite even a little and got good augments, especially as the game goes on. You may be able to use one rotation of spells and a couple of autos.

Currently, the meta is mostly ranged champions and maybe one tank. The game can be incredibly boring when almost everyone picks ranged, especially as the game goes on. You're not even team fighting at that point; it's just whoever has more range/poke wins. But playing any melee in that kind of game is even more boring. You can't just go in, because you will die too quickly before your team can follow up, and you can't even stay close to the enemy because you get poked, so you mostly just sit around waiting.

These changes could at least encourage more people to pick melee which would make the games more fun overall.

r/whatisit PinkyPooo

Plastic Gym Pieces?

Found these in the basement where the previous owners had some gym equipment. Any thoughts on what they could be? Thanks!

r/StableDiffusion themothee

made with LTX-2 I2V without downsampling, but it still has a few artifacts

made with LTX-2 I2V using the workflow provided by u/WildSpeaker7315
from Can other people confirm its much better to use LTX-I2V with without downsampler + 1 step : r/StableDiffusion

took 15min for 8s duration

is it a pass for anime fans?

r/creepypasta Zealousideal-Book178

5 Terrifying TRUE Appalachian Mountain Horror Stories | Dark Screen Audio Stories | Rain Sounds

r/AlternativeHistory SoftwareZestyclose50

Was the unification under Menes a rise after an old intermediate period, or was there a whole ancient Egyptian civilization before 3100 BC?

r/TheWayWeWere ecobot

My parents' old bedroom in 1983

r/SipsTea sullivanreid

You felt that?

r/Seattle Reasonable-King-4385

Mountain access from west Seattle

r/arduino lukehamilton63

Help setting up Arduino Leonardo as a game controller

I've never used an Arduino before this project, and I've got all the physical components set up and working well! However, when I try to upload the firmware to the Leonardo, it says the upload was successful, but when I go to test it, none of the buttons register. I've checked that my soldering is correct, and I feel like it is.

The project is a H-pattern shifter for simracing, and these were the instructions I followed to upload the firmware:

"The code that needs to be installed on the Arduino Leonardo can be found in the “Arduino Leonardo firmware” folder. Just download and install the Arduino IDE programmer ( here), connect the board to the pc, select the correct COM port, and press upload.

After uploading the firmware, you’re done! The script will work both if you’re using the shifter alone, and if you have pedals connected." (I am not using the pedals mentioned)

Worth noting, I am totally new to this and don't have a clue what I'm doing with the code or the upload, so it could just be that I've made a rookie mistake.

If there's any other information needed to diagnose further, please let me know, and I'll do my best to provide.

I have attached the wiring diagram I've followed, and here is the code I was provided:

// Simple sketch for using the Arduino Leonardo board to read input from the
// load cell, hall sensors and 4 buttons, and present them to the PC as a
// joystick input. Code written and tested by DAZ projects for use with the
// DAZ racing pedal set and H-shifter.

#include <HX711.h>      // load-cell amplifier library (HX711 class)
#include <Joystick.h>   // ArduinoJoystickLibrary (Joystick_ class)

#define PIN_DATA 3
#define PIN_CLOCK 2

bool buttonState[6] = {0, 0, 0, 0, 0, 0};

#define LOADCELL_SCALE 1000   // Adjust this value to calibrate brake sensitivity
HX711 brakeSensor;

Joystick_ GameController(JOYSTICK_DEFAULT_REPORT_ID, JOYSTICK_TYPE_GAMEPAD,
  6, 0,                  // 6 buttons, no hat switch
  false, false, false,   // no X, Y, Z axes
  true, true, true,      // Rx (throttle), Ry (brake), Rz (clutch)
  false, false,          // no rudder, throttle axis
  false, false, false);  // no accelerator, brake, steering axis

int throttleVal = 0;
int clutchVal = 0;
int brakeVal = 0;
int prevBrakeVal = 0;

void setup() {
  Serial.begin(38400);
  GameController.begin();
  pinMode(4, INPUT_PULLUP);
  pinMode(5, INPUT_PULLUP);
  pinMode(6, INPUT_PULLUP);
  pinMode(7, INPUT_PULLUP);
  GameController.setRyAxisRange(0, 1023);
  brakeSensor.begin(PIN_DATA, PIN_CLOCK);
  brakeSensor.set_scale(LOADCELL_SCALE);
  brakeSensor.tare();
}

void loop() {
  // Throttle: analogRead() returns 0-1023 on AVR boards
  throttleVal = analogRead(A0);
  throttleVal = map(throttleVal, 0, 1023, 0, 1023);  // identity; adjust ends to calibrate
  GameController.setRxAxis(throttleVal);

  // Clutch
  clutchVal = analogRead(A1);
  clutchVal = map(clutchVal, 0, 1023, 0, 1023);      // identity; adjust ends to calibrate
  GameController.setRzAxis(clutchVal);

  // Brake
  int rawBrake = brakeSensor.get_units();
  Serial.println(rawBrake);

  if (rawBrake < 1) {
    brakeVal = 0;
  } else {
    brakeVal = rawBrake;
  }

  if (brakeVal != prevBrakeVal) {
    GameController.setRyAxis(brakeVal);
    prevBrakeVal = brakeVal;
  }

  bool B4 = (digitalRead(4) == LOW);
  bool B5 = (digitalRead(5) == LOW);
  bool B6 = (digitalRead(6) == LOW);
  bool B7 = (digitalRead(7) == LOW);

  // Compute the new button state
  bool newState[6] = {0, 0, 0, 0, 0, 0};

  bool comb4_used = false;
  bool comb7_used = false;

  // --- combinations ---
  if (B4 && B5) { newState[0] = 1; comb4_used = true; }
  if (B5 && B7) { newState[1] = 1; comb7_used = true; }
  if (B4 && B6) { newState[4] = 1; comb4_used = true; }
  if (B6 && B7) { newState[5] = 1; comb7_used = true; }

  // --- single outputs (only when not part of a combination) ---
  if (B4 && !comb4_used) newState[2] = 1;
  if (B7 && !comb7_used) newState[3] = 1;

  // --- update ONLY on change ---
  for (int i = 0; i < 6; i++) {
    if (newState[i] != buttonState[i]) {
      GameController.setButton(i, newState[i]);
      buttonState[i] = newState[i];
    }
  }
  delay(20);
}

https://preview.redd.it/pi5zi57veoig1.png?width=1920&format=png&auto=webp&s=6094a51371e26bd6454dff59e5aaae254e78f931

r/photoshop laesnaki

Dalí illustration in PS

pencils and photoshop with some magic

r/BrandNewSentence MajarLazar

SUB 5 recessed chin FOID begs Clavicular for softmaxxing advice to ascend to Becky Tier Clavicular uses his hunter eyes to DENY her evil GOY SHEKELS and sends her to PSL HELL

r/oddlysatisfying MambaMentality24x2

Ocean wave epoxy coaster

credit: menecwood

r/HistoryPorn DiaboDeCapote

Ronaldo holds the FIFA World Cup Trophy for the second time in his life, Yokohama, Japan, 2002. [1200×1200]

r/ARAM NeighborhoodBig3915

ranged is over kill

The top ~30 most-picked champs are ranged. I just did some quick math, and that means over 70 percent of games have a majority-ranged comp, if not an all-ranged comp. Riot did address this, but I don't think the fixes are doing anything. Not only does it make it hard to actually enjoy each augment (some definitely depend on engage), it makes for repetitive gameplay.

I went through all 300 games I played this week. In 65% of those games, I was the only melee character on both teams. I mean, if I see another 3-mage, 2-ADC comp, I'mma lose it. The second Mel or Smolder is up, people switch off everything and run straight ADCs. Fun game mode, but even in randoms people are attempting to force a meta.

I basically just always go engage now, because if I don't, we don't have one. Then I spend 10 minutes sitting in bushes so I don't die before I can use one ability, while my team just watches and goes for kills after I die. They for sure need to rebalance how they do champion pools. It's the Arena tank meta all over again: I've seen Vel'Koz 50 times out of 300 games, Mel 60, and Smolder 43. To the other tank and engage players out there: you're goated, thank you. I love when I actually get a duo to go in with on my team. Please don't stop.

r/SipsTea trentcallow8

Sometimes I think they expect us to do the impossible

r/Jokes OB1KENOB

My son asked me if he was adopted

I told him “No, not yet. We’re still looking for someone who wants you”

r/leagueoflegends Loooongshot

Can anybody fill me in as to why should Rek'sai be the only Assassin who is allowed to build tons of health without losing burst potential?

The most common build approach on this abomination right now is to stack high-health items on top of each other (https://lolalytics.com/pt/lol/reksai/build/), and it does this while keeping its 100-0 burst potential, its mobility to go in and out of fights and its untargetability on ult, not to mention the incredible early game with wild unpredictable ganking angles.

Can't be that wild of a take to say that if this champion opts to build health items, like a bruiser, it should deal a lot less damage, since it has the mobility tools that bruisers simply just lack.

54% win rate on Master+ right now, by the way (https://lolalytics.com/pt/lol/reksai/build/?tier=master_plus).

r/OnePelotonRealSub Previous_Win5064

Crazy thing happened

On my 45 min ride this morning, I was high fived by someone with a different political view than mine, and I high fived him back!!! I wasn’t triggered, it didn’t bother me, it didn’t affect my ride at all! I didn’t even post about it online!! I just went on with my workout and continued on about my day.

r/LiveFromNewYork Prize_Mirror_7745

Dear SNL, Please Hire Lisa Gilroy & Matt Friend

That’s it. That’s the post.

r/findareddit Uszanka3

Subreddit in which I can "advertise" other subreddits?

I feel like r/askNeurotypicals needs more actual neurotypicals to answer. Even though it's not my sub, I think a LOT of people would benefit from it.

Where can I try to recruit members?

r/maybemaybemaybe drlouies

Maybe Maybe Maybe

r/LocalLLaMA Pierre_seck_10

I built a TCO simulator to find the break-even point: Cloud GPU vs. Owning a cluster. Looking for feedback on my math.

Hi r/LocalLLaMA,

I've been struggling with the "Cloud vs. On-prem" decision for a while, especially for fine-tuning and 24/7 inference workloads. To make it clearer, I've been crunching numbers to see when it's actually worth buying a Mac Studio or a 4090 cluster vs. renting H100s.

You can test it here: https://axiomos.ai/decide

My assumptions for the model:

  • Electricity cost at $0.12/kWh.
  • 36-month hardware depreciation.
  • Labor/maintenance included for clusters.

I'm a solo founder and I really want to make sure the math is solid for the community. Does the "Estimated Annual Savings" look realistic to you based on your own builds?
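
Not your exact model, but here's the shape of the break-even math a reader could sanity-check it against (a sketch: the hardware price, power draw and cloud rate below are made-up placeholders; only the $0.12/kWh figure comes from the stated assumptions):

```python
def breakeven_months(hw_cost, power_watts, cloud_rate_hr,
                     hours_per_month=730, kwh_price=0.12):
    """Months of 24/7 use until buying the hardware beats renting in the cloud."""
    cloud_monthly = cloud_rate_hr * hours_per_month
    electricity_monthly = power_watts / 1000 * hours_per_month * kwh_price
    monthly_savings = cloud_monthly - electricity_monthly
    if monthly_savings <= 0:
        return float("inf")          # renting never loses at these prices
    return hw_cost / monthly_savings

# Placeholders: a $6,000 GPU box drawing 500 W vs. a $2.50/hr rented GPU
m = breakeven_months(6_000, 500, 2.50)
print(round(m, 1))   # ~3.4 months, well inside a 36-month depreciation window
```

Labor/maintenance would add a flat monthly term to the ownership side and push the break-even out; the big swing factor is utilization, since an idle owned box still depreciates.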

Thanks!

r/painting Sweet-Astronaut0012

Behind the Mask, Sasha Kovalnytskiy, Pastel, 2026

r/Jokes osternade

“What is your favourite genre of music?” I asked the Grim Reaper.

“Death rattle,” he replied.

r/confusing_perspective Ordinary-Roll9306

Grasshopper destroys town

r/AskMen Dry-Professional4255

What’s something people expect all men to enjoy that you don’t and how do you handle it?

Society has a lot of assumptions about men, like enjoying sports, being good at fixing things or always wanting certain hobbies. But not every man fits those expectations.

I’m curious, what’s something people automatically assume you like or are good at just because you’re a man but isn’t true for you? How does it affect how people treat you or how you feel in social situations?

r/n8n Agitated_Unit8226

Sending hundreds of WhatsApp templates safely in a month

Hey guys, I previously asked about sending emails safely. Now a potential client has contacted me and wants to send messages via WhatsApp. More context: I set up a workflow for him that creates PDF files and sends them to clients. The good thing is, none of them will report the messages as spam, because they need the files 😁. If I send hundreds of messages in a month, like 300 or more, what's the best strategy to use? How many messages should I send at the same time?

How many seconds of delay should I use?
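
One common pattern for this kind of drip-sending, as a sketch in Python: the delay, jitter, and daily cap below are placeholder numbers, not official WhatsApp limits, and `send_message` stands in for whatever the n8n workflow or API client actually calls:

```python
import random
import time

def drip_send(recipients, send_message, base_delay=30.0, jitter=15.0, daily_cap=50):
    """Send slowly: randomized spacing between messages plus a hard daily cap."""
    sent = 0
    for recipient in recipients:
        if sent >= daily_cap:
            break                                  # pick up the rest tomorrow
        send_message(recipient)
        sent += 1
        time.sleep(base_delay + random.uniform(0, jitter))
    return sent
```

At 300 messages a month that's only ~10 a day, so even very conservative delays stay far under a cap like this.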

r/personalfinance loosing_it_today

Life insurance payout to some one else?

My brother in CA passed away in 2025. I went out, cleaned up his apartment, and handled his cremation. My dad and I split all the costs. Unknown to us at the time, he had a life insurance policy. It paid out to my dad, and he wants to split it with me: about $50k that he wants to write me a check for. What would be the best way to handle this to minimize taxes? Would there be any other concerns?

r/SideProject Talmadge_Mcgooliger

I built a 10 min morning app that sneaks mindfulness in while you move

I kept trying to move more and "be mindful" and both failed for the same reason: everything felt like a commitment.
So like an average redditor, I tried to code a cure for my depression.

Before Anything

A 10-minute, no-excuses bodyweight routine that reads you daily affirmations while you move.

You do lunges.
The app says things like "You are worth taking care of."
Your brain is busy huffing and puffing, so it doesn't immediately argue.
That's the whole trick.

Is it therapy? No.
Is it science? Loosely.
Does it actually work? ...yeah, annoyingly.

The vibe

This is not gentle wellness.

I don't respond to "listen to your body."
I respond to "okay tough guy, stand up."
So that's the tone.

What it does

  • Reads affirmations while you move
  • Lets you add your own
  • Lets you log wins (stuff you're proud of so it doesn't vanish into the void)

Negative stuff sticks on its own. Wins don't. This helps with that.

What it doesn't do

  • No accounts
  • No cloud
  • No ads
  • No data leaving your device

It's a PWA, installs to your home screen, and I built it with vanilla JS, PHP, and CSS. All of the user information is stored in a local IndexedDB, so none of your data gets accessed off your device.

👉 Before Anything

First time showing this to anyone. Be honest, I can take it.

r/personalfinance Fabulous-Bid1346

How to save/make money as a broke college student

r/SideProject kuaythrone

Dictating anywhere with NVIDIA open models - Nemotron ASR + Tambourine

r/findareddit Ashamed_Fan4420

I don't know where to post this; I need some help.

I have no idea where to ask this but if you can help me or direct me to help that would be great.

Is it possible to use my phone sort of as a CPU and/or GPU and use my laptop as a monitor?

I have a really old laptop that can barely run Pokemon Uranium. It lags. I want to know if there is an option for me to not have to buy a new one. If I do buy a new one, I would most likely get a Gigabyte G6, but if there are better ones for about the same price, I could get those instead.

I have an iPhone 15.

r/Adulting Asleep-Chef353

what's your black swan story in your life?

r/whatisit RemarkableNeat2777

please tell me wtf is on my plate of chicken

r/painting Icy-Hedgehog-6194

Abstract landscape

Acrylic on canvas board. Just went with the flow honestly. I certainly have room to improve, but I like where it’s going.

r/LocalLLaMA Ok_Hold_5385

Small, fast Spam Detection model designed for Spanish text

https://huggingface.co/tanaos/tanaos-spam-detection-spanish

A small and fast Spam Detection model, trained on Spanish text to detect the following types of spam content:

  1. Unsolicited commercial advertisement or non-commercial proselytizing.
  2. Fraudulent schemes, including get-rich-quick and pyramid schemes.
  3. Phishing attempts, unrealistic offers or announcements.
  4. Content with deceptive or misleading information.
  5. Malware or harmful links.
  6. Adult content or explicit material.
  7. Excessive use of capitalization or punctuation to grab attention.

Model output

The model outputs

  • A binary spam / not_spam label
  • A confidence score between 0 and 1

How to use

Get an API key from https://platform.tanaos.com/ (create an account if you don't have one) and use it for free with

import requests

session = requests.Session()

sd_out = session.post(
    "https://slm.tanaos.com/models/spam-detection",
    headers={
        "X-API-Key": "",
    },
    json={
        "text": "Has ganado un iPhone 16! Haz clic aquí para obtener tu premio.",
        "language": "spanish"
    }
)

print(sd_out.json()["data"])
# >>> [{'label': 'spam', 'score': 0.9945}]

Supported languages

While this model's main language is Spanish, we do have an English Spam Detection model too: https://huggingface.co/tanaos/tanaos-spam-detection-v1

r/personalfinance Vivid-Cheesecake-110

Pension Planning Advice

I've been doing some deep dives into pension planning, and an idea occurred to me that I haven't seen as recommended advice anywhere and I don't understand why.

As a sort of sanity check would this plan be dumb or not?

Planning to retire at 60.

At 55 taking out a personal loan or second mortgage for £50,000 and paying this into SIPP.

Assume there are enough annual allowance to cover this.

My thoughts are this would instantly attract a 20% bump as a basic rate taxpayer, effectively increasing the SIPP fund to £60,000.

At a conservative 4% return, this would total £73,259 after 5 years.

At a maximum loan interest rate of 10% approx £63,000 would be repaid, accounting for a £2,500 fee included in the principal it's £66,928.

At 4% it's approx £58,000

Leaving a gain of approx £6000 at 10% or £15,000 at 4%.

Is there a reason this cannot or should not be considered?
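
Sanity-checking the arithmetic in Python (a sketch, not advice). One correction worth noting: basic-rate relief at source actually grosses a £50,000 net contribution up to £62,500, since the 20% relief is calculated on the gross amount, slightly better than the £60,000 used above; the figures below otherwise reproduce the post's numbers:

```python
def fund_value(amount, annual_rate=0.04, years=5):
    """Growth with monthly compounding (matches the £73,259 figure for £60k)."""
    return amount * (1 + annual_rate / 12) ** (years * 12)

def total_repaid(principal, annual_rate, years=5):
    """Total paid on a standard amortizing monthly loan."""
    r, n = annual_rate / 12, years * 12
    return n * principal * r / (1 - (1 + r) ** -n)

gross = 50_000 / 0.80                        # relief at source: £62,500, not £60,000
fund = fund_value(60_000)                    # ~£73,260 on the £60k assumption
loan = total_repaid(50_000 + 2_500, 0.10)    # ~£66,928 at a 10% rate, as in the post
print(round(fund - loan))                    # gain of roughly £6,300
```

The sketch ignores the big practical risks: the 4% return is not guaranteed while the loan repayments are, and loan eligibility at 55 on the stated terms is an assumption.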

r/TwoSentenceHorror SlavStar25

"Excuse me… this is so embarrassing, but can I please use your bathroom? It's urgent!" the woman in my doorway pleaded, squirming and clutching her legs together.

As an hour passed, I gently peeked inside with concern, only to find that the woman was somehow gone, but her reflection wasn't.

r/TheWayWeWere Cpkeyes

My nana during her freshman year of college

She's 19 in these photos.

r/LiveFromNewYork MaxMix3937

Why The Man Gave Us February

Remember how Nat X said February was Black History Month because "it's the shortest month of the year! It's also the coldest, in case we want a parade." Actually, I just found out from Final Jeopardy! today why February was chosen:

THE CALENDAR

BLACK HISTORY MONTH WAS FIRST CELEBRATED AROUND THE BIRTH DATES OF ABRAHAM LINCOLN & THIS CONTEMPORARY WHO DIED IN 1895

Can you guess the correct response?

r/Unexpected fna_fanoa

that's a weird looking raccoon

r/Art Scottyartt

Witch House, Scotty, 3D Art, 2025

r/Art _scrappex

2 truths no lie, Scrappex, Digital 2D Animation, 2026

r/DunderMifflin Old-Fault-9638

Toby.

While watching the series I thought I was Jim, but growing older I realise I'm Toby.

r/PhotoshopRequest beta_writer_chick

Can someone put flames or something intense behind a laughing picture of Elmo?

https://preview.redd.it/fsts7w4g2pig1.jpg?width=700&format=pjpg&auto=webp&s=ff60cb98a8684edf85d845a24631ec04d7bf4971

I'm trying to create a meme and I have zero Photoshop skills, or Photoshop at all. Can someone make a bunch of laughing Elmos in the background of this pic? Can you find other pictures of Elmo laughing to add as well? Overlapping Elmos with their heads thrown back laughing. I guess make his lower body fade out so the other pics can overlap? Idk, I'm giving you guys free creative license! Thank you! I hope that was a good description.

r/PhotoshopRequest Mr_Vanilla5

Looking for a Photoshop and photo editing expert.

I have a few photos of me that are photoshopped and some that just need editing. I'm looking to make a collage of photos for an album and need an expert editor and Photoshop artist. Tasks will include touching up some photos, completely editing and photoshopping others, as well as adding filters to make them all look alike. I'm willing to pay well. If you respond to this, please send proof of your previous work. Serious inquiries only, please.

r/Adulting Conscious_Cause_1955

What do you think?

r/interesting AdSpecialist6598

A veterinarian has a comfort dog assistant that helps sick dog patients know that everything will be okay.

r/30ROCK Street_Moose1412

I don't believe in paying for either one of them

r/SideProject djmisterjon

LSDE is available in beta

Hey game and software dev !

I paused my JRPG game and my homemade editor for the year to release a software that I needed to finalize my projects like i want.

LSDE is available in beta version if you are interested in testing the beast.
Several features have been added to support my game called FCT7O and is workflow.
Currently, my objective is to ensure that LSDE can also adapt to all types of workflows and architectures used by developers.

The software must be sufficiently generic and stable to support all types of custom projects, including those involving some of the most unconventional requirements.

I support 11 editors and frameworks and the documentation is available.
If you encounter any issues or have feature requests for your game development or your custom engine and i18n, I am available and happy to help you.

If you have weird or specific needs for managing your narration and dialogue of your projects, I would be happy to listen to you.

I will try to be more active on social networks this year over the coming month in preparation for the launch.

I prepared about ~30 shorts in advance for each day of this month !
If you do not need the software but want to follow the marketing process and changelogs.
I create a Bluesky, Discord, and Twitter.
https://bsky.app/profile/lepasoft.bsky.social/post/3megk32aaus2j
https://lepasoft.com/en/software/ls-dialog-editor

r/UnusualVideos neuroticsmurf

I think bro is volunteering to be arrested

r/PhotoshopRequest Original-Bat9152

Two requests: 1. please move them to the middle two seats. 2. Please get rid of the blue paper coffee cup

r/OldSchoolCool BrazilianDilfLover

Ian McShane. Cosmopolitan magazine, June 1973.

r/ARAM Abject_Plantain1696

LCS Pros Play the New ARAM MAYHEM

r/painting Ars-Arkana

The Purple Court - One of the paintings he did a few years ago on the theme of identity across borders.

r/VEO3 Unhappy-Tour-7209

Extended version

r/funny thetacaptain

Owner commissions artist to draw their recently-passed cat

r/mildlyinteresting superbl00m369

my apple music glitched

r/whatisit Sunshine_daisy_8443

Anyone know what this machine is?

I think they're working on some underground pipes; they built the above-ground pipes temporarily to divert whatever is in the underground pipes, possibly hot water? Bonus points if you know why the temporary pipe isn't straight.

r/ProductHunters Normainofficial

We just launched Normain on Product Hunt!

r/TheWayWeWere winkiesue

My great grandmother on a beach in FL, 1940s

r/Damnthatsinteresting 0312Sam

Pure creative fun 😀

r/funny thetacaptain

This is commitment

r/meme Special-Rate-7921

I'll be back

r/DunderMifflin AlmightyAvi

Dwight Kurt Schrute for you guys....

This episode has one of Dwight's most underrated performances, IMO.

r/SideProject NoDrawer9679

Built MoltViewer – a viewer for the Moltbook AI social network, looking for testers

Hi All,

I made a small side project called MoltViewer – it lets you watch what AI agents are posting and arguing about on Moltbook. You can browse the feed, read comments, jump between topic communities, check AI profiles and use built‑in translation in the app if a post isn't in your language. It's free, read‑only and works on iPhone and iPad. I'd really appreciate your feedback.
App Store:

https://apps.apple.com/app/moltviewer/id6758548818

More info : https://bigkrzyh.github.io/MoltViewer/

Regards,
K.

r/AI_Agents According-Site9848

The Future of AI is Agentic: How AI Agents are Shaping Business Automation

AI agents are no longer just futuristic ideas: they are systems designed to reason through complex problems, create actionable plans and execute those plans using a suite of integrated tools. Unlike traditional AI, these agents retain memory, understand context over time and continuously improve through self-reflection and iterative feedback.

At their core, AI agents combine a central processing unit, a memory module for context retention, external tools and APIs, and a planning module to analyze problems and devise strategies. This allows them to solve complex tasks such as generating project plans, writing code, producing summaries and running benchmarks autonomously. Through collaborative multi-agent frameworks, one agent can generate outputs while another provides constructive criticism, creating a cycle of constant improvement.

Businesses are just beginning to experiment with agents, often relying on safer RAG implementations, but the trajectory is clear: AI agents will soon move from experimental tools to essential collaborators, helping companies automate workflows, reduce manual effort and scale operations efficiently. Embracing this technology early provides a competitive advantage, as organizations that understand and integrate AI agents will lead in productivity, innovation and intelligent automation.
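
The plan / act / reflect loop described above can be sketched in a few lines. This is a toy illustration of the idea, not any particular framework; the "planning module" here is deliberately naive (it just picks the first tool named in the task):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy agent: picks a tool per step, acts, keeps memory, feeds results back."""
    tools: dict                     # tool name -> callable
    memory: list = field(default_factory=list)

    def run(self, task: str, max_steps: int = 3) -> list:
        for _ in range(max_steps):
            # "Planning module": naively pick the first tool named in the task
            tool_name = next((t for t in self.tools if t in task), None)
            if tool_name is None:
                break
            result = self.tools[tool_name](task)      # act via an external tool
            self.memory.append((tool_name, result))   # retain context over time
            task = f"review: {result}"                # self-reflection: feed output back
        return self.memory

agent = Agent(tools={
    "review":    lambda t: "looks good",
    "summarize": lambda t: f"summary of {t!r}",
})
print(agent.run("summarize the Q3 report", max_steps=2))
```

The second step here is the "one agent criticizes another" cycle collapsed into a single agent reviewing its own output.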

r/Adulting sirSpanky15

Living the dream

r/leagueoflegends Callunamae

Anyone name their kid or animal after any League of Legends characters?

Very curious??? I just had my daughter a few months ago and did use Lux as her middle name. I'm not going to lie, her first name and middle name together sound like a future Lux skin, haha! (Not mad about it... would be cool)

r/SideProject Euphoric-Garden-7868

I built an AI calorie tracker that lets you snap a photo of your food instead of searching through a database

Hey everyone, I've been working on CalcEat for about 2 years now. It's a calorie tracking app with a twist - instead of manually searching through a food database, you can just take a photo of your meal and AI breaks down the ingredients and estimates the calories.

The idea came from my own frustration with apps like MyFitnessPal. I'd search "chicken stir fry" and get 40 entries with wildly different calorie counts. I figured AI image recognition had gotten good enough to solve this.

What it does:

- Photo-based meal analysis (AI identifies ingredients and estimates portions)

- Text description analysis ("I had 2 eggs with toast and avocado")

- Barcode scanning for packaged foods

- Standard food database search as fallback

- Water intake tracking

- Weight progress tracking with graphs

- Badge system for login streaks

Tech stack: Flutter frontend, Node.js/Express backend, PostgreSQL, Gemini AI for meal analysis, RevenueCat for subscriptions.

Available on iOS and Android. Premium unlocks unlimited scanning and is pretty cheap for now.

Would love any feedback on the concept or the app itself. Happy to answer questions about the tech or the business side.

r/LiveFromNewYork Ariesthebigram

Who's more likely to host sooner: Mike Myers (who hasn't hosted since 1997) or Dana Carvey (who is one hosting gig shy of the 5 Timers Club)?

I hope "Wayne" and/or "Garth" get their chance to shine maybe in April or May. I would be fine with Myers (to bring back some of his old characters, including Sprockets) but my money is on Dana to host (maybe one of the last episodes this season too!) since he missed SNL 50.

r/whatisit After-Wash-7103

I haven't a clue; it's quite hard and doesn't smell like what I thought it was...

found on a train

r/LocalLLaMA Independent-Cost-971

Knowledge Distillation for RAG (Why Ingestion Pipeline Matters More Than Retrieval Algorithm)

Been spending way too much time debugging RAG systems that "should work" but don't, and wanted to share something that's been bothering me about how we collectively approach this problem.

We obsess over retrieval algorithms (hybrid search, reranking, HyDE, query decomposition) while completely ignoring that retrieval operates over fundamentally broken representations of knowledge.

I started using a new approach that is working pretty well so far: instead of chunking, use LLMs at ingestion time to extract and restructure knowledge into forms optimized for retrieval:

Level 1: Extract facts as explicit SVO sentences

Level 2: Synthesize relationships spanning multiple insights

Level 3: Document-level summaries for broad queries

Level 4: Patterns learned across the entire corpus

Each level serves different query granularities. Precision queries hit insights. Exploratory queries hit concepts/abstracts.

I assume this works well because LLMs during ingestion can spend minutes analyzing a document that gets used thousands of times. The upfront cost amortizes completely. And they're genuinely good at:

  • Disambiguating structure
  • Resolving implicit context
  • Normalizing varied phrasings into consistent forms
  • Cross-referencing

Tested this on a few projects involving a financial document corpus: the agent with distillation correctly identified which Dow companies were financial institutions, attributed specific risks with page-level citations, and supported claims with concrete figures. The naive chunking agent failed to even identify the companies reliably.

This is fully automatable with workflow-based pipelines:

  1. Table extraction (preserve structure via CV models)
  2. Text generation 1: insights from tables + text
  3. Text generation 2: concepts from insights
  4. Text generation 3: abstracts from concepts
  5. Text generation 4: table schema analysis for SQL generation

Each component receives the previous component's output. The final JSON contains the original data plus all distillation layers.
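
The chained text-generation steps above can be sketched as a pipeline. This is a minimal illustration assuming some `llm(prompt) -> str` callable (a stand-in, not a specific API); the table-extraction and SQL-schema steps are omitted:

```python
def distill(document: str, llm) -> dict:
    """Multi-level knowledge distillation at ingestion time.

    Each level feeds on the previous level's output; the final record
    keeps the original text plus every distillation layer.
    """
    insights = llm(f"Extract facts as explicit SVO sentences:\n{document}")
    concepts = llm(f"Synthesize relationships across these insights:\n{insights}")
    abstract = llm(f"Write a document-level summary of these concepts:\n{concepts}")
    return {
        "original": document,
        "insights": insights,   # level 1: precision queries
        "concepts": concepts,   # level 2: relational queries
        "abstract": abstract,   # level 3: broad / exploratory queries
    }

# With a stub in place of the LLM, the record structure is easy to inspect:
record = distill("ACME Corp reported revenue of $5M in Q3.",
                 llm=lambda p: p.splitlines()[0])
print(sorted(record))  # → ['abstract', 'concepts', 'insights', 'original']
```

At retrieval time each layer would be embedded and indexed separately, so a query can be routed to the granularity it needs.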

Anyway, I figure this is one of those things where the industry is converging on the wrong abstraction, and we should probably talk about it more.

r/whatisit Top-Engineering-2405

What's this? Recording me?

I'm in San Jose for work and this car in front of me had all sorts of stuff in the trunk, along with a recording sticker on the fender?

r/ARAM 8SigmaBalls

If this is a genuine take that people who play ARAM agree with... creating different queues for ranged and melee should be considered

If the take that melees should be permanently underpowered, because people have more fun "playing safe," is agreed upon by the ARAM consensus, it is safe to assume that melees should not have a place in ARAM, and should instead be either completely removed or segregated into their own queue.

Because if a class is purposely underperforming, and less fun to play because of it, it might as well not exist for the sake of convenience.

I know a lot of people will disagree with me on this, but this is the natural conclusion to draw from a take like this.

r/Seattle paulbesteves

Join the fight to ban RUBS!

r/nextfuckinglevel Beneficial-Ask-1800

Insane Throw Accuracy

r/Adulting ComTruise22

[Meta] Is there/can we make a sub with only adulting text posts allowed?

As the title says, I pretty much only come here to read the text posts but it's 90% memes. Is there any alternative sub?

r/leagueoflegends Melodic-Jellyfish512

LCS NARAM is Back | LCS Lock-In Week 3 Reactions

Back on my grind doin my stuff. Only the most important knowledge coming from my mouth to your ears. C9 is literally curb stomping T1 at First Stand

r/Adulting Valuable_Block6159

Programs to help with caring for low income parent

r/OldSchoolCool BrazilianDilfLover

Fred Dryer and his unusual, concealed, and maybe illegal weapon in the first season of Hunter, circa 1984.

r/SideProject Uchansichan

Interactive world news map

Hey everyone!

I shared this project here a while ago and wanted to come back with a small update.

I’ve been working on Atlas24.ai — an interactive world news map — and since the last post I’ve added a few upgrades:

• better AI-selected headlines

• keyword / topic search

• and a simple premium option

Would really appreciate any feedback — what’s confusing, what’s useful, what you’d improve, or if this is something you’d personally use.

https://atlas24.ai

Thanks for taking a look 🙏

r/mildlyinteresting Puddleglum_7

Heart Shaped Leaf Next To Cool Normal Rock While Walking.

r/personalfinance Emotional-Actuary830

Why would my husband's ex be asking us to claim both kids on our taxes?

My husband's ex-wife usually wants to claim both kids on the taxes, but this year she wants us to claim both of them. I thought I read it wrong, but idk why she would want to do that.

She worked half the year at a government job, quit in June '25, and hasn't had steady employment since. My husband says he thinks it's because she didn't work very much, but wouldn't that make her want to claim them?

She got married in 2023, but the guy was there for like a month and we never saw him again. She won't apply for food stamps or any government assistance either, so I think they are still married and the guy makes too much. I just don't see the point of her asking to do this, lol. Can somebody let me know?

r/mildlyinteresting BestAtTeamworkMan

My wife got me a wallet that makes me look like Batman when I insert my license

r/Art onewordpoet

Now Your Phone's Dead, onewordpoet, Watercolor, 2025

r/SideProject Le_Vinke

Built a PR notification tool because GitHub's Slack integration sucks (turned it into a product)

GitHub's native Slack integration is noisy and dumb. Every commit, every comment, same channel. Nobody reads it. PRs dying in queues, devs ignoring notifications, reviews taking forever.

So I built PullZ on the side. Maps repos to specific Slack channels, sends DMs to reviewers, auto reminds on stale PRs, weekly team leaderboards. Worked so well internally I turned it into a product.

Stack: Bun, Hono, React, PostgreSQL. Setup takes ~5 min.

Launch mode right now so completely free. Looking for early users and honest feedback before we figure out pricing.

Would love testers: https://pullz.dev

r/interestingasfuck Thund3r_91

New type of fireworks produced in China, they call it "Hiroshima Romance"

r/SideProject nikhonit

i built this for solo founders. now agencies are reselling it to solo founders

hit 18,400 visitors on landkit since december. no ads.

i built the conversion audit tool to help founders handle that kind of traffic without losing people to bad conversion. i wanted to keep basic initial audit free for the community.

but i just checked my logs and saw something wild.

i’m seeing a huge spike in signups from digital agencies. after talking to a few of them, i realized they’re using my automated reports as the discovery phase for their clients, charging anywhere from $200 to $500 for a conversion audit that landkit does in 60 seconds.

honestly, it’s a compliment to the tech, but it pisses me off that founders are paying for something i'm giving away.

i’m keeping it open to everyone. don't let an agency charge you for a 60-second automated check.

here you go before agencies reach out to you

r/personalfinance Sirius1995

Debt Collections Medical Bills

I just received notice that I have $2000 of medical bills in debt collections, all with the same collection agency. I had no notice of this debt before today, and some of these bills go back to 2019. Can I dispute these somehow? Do they have to notify you within a certain amount of time?

r/personalfinance jason-bourne-007

Classic will this affect my credit

Had a random utility bill that wasn't delivered to me back in September. I had no clue about it until, unfortunately, the collections agent reached out to me this week.

I am currently in the middle of purchasing a house; we have made the offer, however it's an extended closing period (seller's request) and I won't close until the May timeframe.

I am obviously stressed about how much this $100 bill is going to affect my credit and then my mortgage.

the collections agent just sent me a copy of the bill, it does look valid and the scenario would make sense.

1) can I work with the utility company directly for payment?

2) should I let the collections agent know that I agree with this charge? would that impact my credit?

I do want to pay it, and can pay it; I'm just trying to figure out the credit impact.

r/interestingasfuck fna_fanoa

Bagpipe!

r/screenshots saturn-moon05

Does anyone?

I thought the timeline I circled here was interesting.

r/blackmagicfuckery JibunNiMakenai

Showing you a color called “olo” you’ve never seen before (illusion at 10:00 min)

r/SideProject pure_waves

Just launched PixelPerfect - an all-in-one FREE design toolkit for creators

Hey everyone!

I'm excited to share PixelPerfect, a comprehensive design toolkit I've been building that solves multiple pain points for creators, marketers, and developers.

What is PixelPerfect?
It's a completely free web-based platform that combines several essential design tools into one streamlined workflow:

🔹 App Store Screenshot Creator - Make professional app store graphics in minutes
🔹 Custom QR Code Generator - Design beautiful, branded QR codes
🔹 Image Format Converter - Convert between PNG, JPG, SVG, WEBP, etc.
🔹 Unique SVG Generator - Create custom SVG graphics and patterns
🔹 Device Mockup Generator - Place designs in realistic device frames

Why I built this:
As someone who constantly needs to create app screenshots and design assets, I found myself jumping between 5+ different tools (many with annoying paywalls). PixelPerfect brings everything together with a consistent interface and batch processing capabilities - and it will always remain 100% free.

Current Features:

  • ✅ All core tools fully functional
  • ✅ Clean, responsive UI with dark/light mode
  • ✅ No login required - use it immediately
  • ✅ No watermarks on exports
  • ✅ Batch processing capabilities
  • 🚧 Adding more templates and device frames weekly

Live Demo: https://pixelperfect.kebalbhandari.com.np/

I'd love your feedback on:

  1. What additional features would be most valuable to you?
  2. Any pain points with existing design tools that PixelPerfect could solve?
  3. UI/UX suggestions - what works and what doesn't?
  4. What device mockups or templates would you find most useful?

Tech Stack: React, Node.js, Canvas API, multiple image processing libraries

This is a passion project, not a business. Built it because I needed these tools myself and wanted to share with the community.

Check it out and let me know what you think! I'm especially interested in hearing from fellow developers, designers, and content creators who need quick, free design tools.

r/SipsTea cadehollow39

Actually just really dramatic sometimes

r/ClaudeAI morph_lupindo

I just learned today that you could do this

So, I just learned that you can work with Sonnet on coding. If you hit a snag, you just switch models to Opus, it diagnoses all the issues and then you switch back to Sonnet to implement the fixes. Same memory, same context.

So, theoretically you can have Haiku read files…

This has been my Ted Talk.

r/homeassistant Primary-Emu-3012

Music through Alexa devices help

OK, I've tried a couple of different workspaces now and have probably done more harm than good, to the point where I just need to do a restore and start fresh. That being said, I'm trying to get my music hosted on Jellyfin (on HA) to play through some Alexa devices, and it's giving me all kinds of trouble. Anyone have some decent documentation?

r/ProductHunters Reasonable-Jump-8539

2 Ways to Switch Between ChatGPT and Gemini Without Rebuilding Context Every Time

A lot of my friends want to switch from ChatGPT to Gemini, but they get stuck because they have too much context locked inside one platform.

So, I wrote a small guide for different ways you can choose if you're bouncing between ChatGPT and Gemini to preserve your context and chat history:

━━━━━━━━━━━━━━━━

Method 1: Manual Export/Import

From ChatGPT: • Go to Settings → Data Controls → Export data • Download the .zip file from your email

From Gemini: • Switch to Canvas mode • Use this exact prompt:

"Extract the whole conversation (excluding this one) into the Canvas mode with Markdown formatting. Please label the 'User' and 'Gemini'"

  • Download the conversation from Canvas

Then: Copy/paste into the other platform

✅ Free ❌ Time-consuming if you switch daily

━━━━━━━━━━━━━━━━

Method 2: AI Context Flow (Automated)

This gives exponential returns IF you switch frequently:

  • Chrome extension with universal memory layer
  • One-click to capture context from any AI platform
  • Organize everything in project-specific memory buckets
  • Upload files in bulk for each project
  • Deploy relevant context to ChatGPT or Gemini instantly
  • Auto-syncs across all your devices

Real results: Users report saving 5-10 hours weekly

The workflow: Build context once → Switch platforms freely → Inject context in 1-click

Use ChatGPT for creative work, Gemini for real-time info - without starting over.

━━━━━━━━━━━━━━━━

Full guide with screenshots and setup steps: https://plurality.network/blogs/switch-between-chatgpt-and-gemini/

r/personalfinance PacificHands

First time with high-income, need help with insurance choices

I'm a healthy 36 year old male, living in San Francisco. I have a high-paying job for the first time in my life after having been a poor PhD student, and am looking for advice on insurance choices. If anyone has advice, or a link to some good information, I would appreciate it.

I will be making around $175k base salary, for a total of $220k-$260k gross income depending on bonuses. I have no dependents (but do have other family I would designate benefits for), pay about $1000/month rent, and have a pretty cheap lifestyle.

Some questions I have:

  • Employer offers Life Insurance at 2x base salary ($350k) for free. I can upgrade that in 1x increments for a small fee, up to 10x salary for $710/year. Does it make sense to pay for extra, and how much?
  • Same situation as above for AD&D insurance, except that the price for 10x coverage is $160/year. What coverage level?
  • Employer Long-Term Disability Insurance covering 50% of my base income is free. I can upgrade to 70% coverage for $210/year. My job is a desk job. Does it make sense to upgrade?
  • For car insurance, I am planning to upgrade to 250/500/250 but am unsure if comprehensive or collision make sense. My car is paid-off, a 2014 in very good condition with 150k miles. Advice here?
  • I have basic renter's insurance. I have a fair amount of moderately expensive outdoor gear, but not much else of particular value. Any need for more than basic renter's insurance?
  • I have seen some places recommend an umbrella policy for people with high total savings and assets. I have high income now, but haven't had a chance to save much yet, so is there a need for an umbrella policy yet?
  • I have good health, vision, and dental insurance through my employer.
  • Any other insurance considerations I'm missing?
r/Seattle TheItinerantSkeptic

Sam Darnold LOST Money for Winning the Super Bowl

Essentially: California has a tax on athletes based on the number of "duty days" they're in the state. Super Bowl winners get $178k as a bonus for the win; because of the state's tax (the highest such tax in the nation), Sam Darnold actually lost money for winning because the game was in California.

Just... wow.

r/Art WaKeiSei

Lost in Tokyo, Luis Alvarez, Digital, 2025

r/midjourney ContestSalt8980

Dark Fantasy wallpaper with Lightroom Settings

Hi folks, I really like dark fantasy, so I decided to share a photo I generated recently. I'm also attaching the filters I used in the photo editor:

Lightroom Settings:
Effects
Clarity -44
Dehaze 13
Grain 51

Scene
Vignette 14

Detail
Sharpening 59

Prompt
1970s dark fantasy book illustration art, Frazetta-style drawing of a dark fantasy scene set in an ancient, eerie library filled with cobweb-covered books and skulls. The central figure is a hooded figure in a long, dark cloak, holding an ornate staff and an ancient tome. The background features a large, gothic window with a sickly yellow-green light streaming through, casting ominous shadows and highlighting the twisted vines creeping along the walls. The atmosphere is foreboding and mysterious, with a color palette dominated by dark greens, blacks, and the eerie yellow-green glow. --ar 9:16

r/homeassistant Potential-Cod-1851

"Vibe Coded" My First Dashboard! Zero experience, just Gemini, Claude, and a mission!

Hi everyone! First-time poster here. I wanted to share my wall-mounted tablet setup and hopefully get some feedback (or give some inspiration to fellow non-coders!). I know I got a lot of my inspiration from you guys, hope to repay the same :)

Zero coding experience, but I managed to build this entirely in YAML using Gemini (mostly) and Claude via VS Code. It's primarily built with Bubble Cards using the Frosted Glass theme.

I wanted a "Mission Control" for the hallway. One screen to see energy, water storage, temps, and the family calendar at a glance.

Keeping with the one-screen theme, Bubble Card popups work a treat! I can control the aircons, lights, garage, vacuum, pool pump, and irrigation without ever leaving the main view.

Go gentle on me! I'm looking to steal some new ideas and would love any critique :)

r/creepypasta Pikatixx

What you didn't know about FNAF?!

FNAF video FR

r/Seattle Buck169

Orthopedic surgical oncologist?

My wife has a metastatic tumor in her leg. We don't really like the first orthopedic oncologist we were referred to, and would like to find someone else to get a "second opinion." Really, a second personality is what we want...

Anyone have a recommendation in/near Seattle? We have Regence Blue Shield Classic insurance, so we can go to anyone outside of a closed system (i.e. no Kaiser).

r/Damnthatsinteresting IncomingBroccoli

The nuclear football, also like the Denny's menu

r/Whatcouldgowrong fna_fanoa

WCGW jumping into a pool wearing spiderman suit

r/WouldYouRather Saran_Chandra

Would you rather choose unlimited logical thinking or unlimited creative thinking?

Unlimited logical thinking includes: Being insanely good at reasoning, problem-solving, spotting patterns, making the best decisions, and even solving big mysteries or unanswered problems as long as they're logically solvable with the information that exists. For example, you could solve a Rubik's Cube without any formulas by just analyzing patterns, become the number one chess player in the world with your own strategies based on patterns, and easily calculate all possible outcomes for everything.

Unlimited creative thinking means: Your mind never runs out of ideas. You can mix styles, create totally new kinds of art, imagine movies that feel completely out of this world, paint things no one has seen before, and just be creative nonstop. That means you could imagine concepts and ideas that no human brain has ever imagined before.

Logical thinking doesn’t automatically make you creative, and creative thinking doesn’t mean your ideas will always be logical.

Which one would you choose, and why?

View Poll

r/Adulting ParticularWeather927

Pressure ?

r/nextfuckinglevel Sharp-potential7935

Museum visitor shows incredible detailing (picture within a picture) in a painting from 1600s

r/whatisit thejebsterishere

Anyone know what this is?

I don't have an alarm for 12:23. I don't have any stopwatches or timers going. My weather app has updated more recently than that. This has been on my screen forever and I have no idea why. Even when I restart my phone it's still there.

r/mildlyinteresting SnootFleur

There's a face on my rolling desk

r/OnePelotonRealSub WorldChampSquash

Katie Wang for core

I have recently been trying to incorporate more 5-10 minute core classes into my routine and Katie pushes me in such a good way!! Her energy is also just chef’s kiss!

r/TheWayWeWere Ok_Fall_9569

My family’s neighbors in Brooklyn, NY, 1952 (Tommy and Maria)

Maria and her brother Tommy Rampello sitting in the back of the house in Flatbush. They hung around with my dad and his sister, who were 19 and 20 respectively at the time.

r/whatisit vampiredoll666

Please help me figure what this is

Found this at a free sale and took it because it looked cool. Looked it up on Google, and all that comes up in Google pictures is coffee and K-cup holders, which it is not. The holes are VERY small and couldn't fit coffee pods.

I have looked EVERYWHERE and cannot figure out what this is

r/AskMen Ashamed-Turnover-330

How do you guys clean down there?

Hey, all, seeking a serious suggestion!!

Like, I just want to know how you guys clean down there. Hair around the penis is easy to clean, but what about the hairs on the backside of the thighs, on the butt, and near the butthole? How do you clean them as a man?

By clean, I mean how to remove the hairs around this area

r/Art Fehoart

Beyond The Cosmos, Fehmi Hoxha FEHO, Acrylic, 2024 [OC]

r/personalfinance Responsible-Quit-708

Advice in allocating my tax refund

Hello everyone!!

I've been on a budget journey the last year or so, and I'm getting a bigger refund than what I've received before; it'll be $966 for state and fed, and I really want to use my refund to make my money work best for me. Here's a breakdown of where I'm at currently with my expenses, savings and debts.

My monthly expenses are $1,885.

I’ve got:

- ~$1,700 in HYSA / emergency fund (has recurring deposits per paycheck so it’s slowly growing with a 3.5% rate)

- $8,255 between retirement plans

- $170 in Roth IRA (this has only been open a few months and I need to evaluate how I want to invest this year, I’ll tackle that soon!)

In Debts I have:

- auto loan $25,200 - not really worried about this as I’ve got monthly payments set up & I just started financing my previous lease for 4 years

- Student loans $2,175 - in forbearance, occasionally I make a small payment to try and help pay on interest

Credit cards:

- $462 - I’ve been paying this one down, and I think I want to pay it off in full & be done since the account was closed by the lender (they ended that card line and I never got a new one, lol f Deserve??)

- $3,019 - 26.49% interest

- $5,992 - 26.49% interest

My thought is to pay off my smallest credit card and reallocate the payments that were going there between my other 2 cards. The remaining $504 is where I’m struggling to figure out what to do with, my gut says put either all, or at least 1/2 into my HYSA to get that closer to having 3 months worth of my expenses in it.

Any advice is welcome, I’m still learning and I’ve tried over the past year to cut down on excess spending, and try to maximize my savings - I’ve been fairly successful as I haven’t added a lot to my CC debt, and I’ve been able to save for short term expenses rather than charging them.

Thank you people of Reddit 🙏🏻
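For the leftover $504, a quick back-of-envelope comparison (simple one-year interest, no compounding or payments factored in, using the rates from the post) shows why high-APR card debt usually beats the HYSA:

```python
def annual_interest(balance: float, apr: float) -> float:
    """Simple one-year interest estimate: balance x APR (ignores compounding)."""
    return round(balance * apr, 2)

# Rates from the post: 26.49% credit cards vs. a 3.5% HYSA
card_interest_avoided = annual_interest(504, 0.2649)  # interest avoided by paying the card
hysa_interest_earned = annual_interest(504, 0.035)    # interest earned in the HYSA
```

Roughly $133 avoided on the card versus roughly $18 earned in savings, per year, so every extra dollar on the 26.49% cards "earns" far more than the HYSA until they're gone.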

r/AbstractArt ClintDeanAbstractArt

Alignment

Oil on canvas, 16 × 20

r/AskMen Helpful_Cranberry644

What do you need to see from a man you just met before you can trust him as a friend?

i'm curious to hear from other men what goes through your mind when you're meeting new people and evaluating whether you can trust them as a friend or not. What do you look for? What do you need to see from them and their behavior?

Edit: I'm surprised some of you don't have trust as an important factor when considering someone a friend. For me, if I can't trust you, you're not my friend. You're just a guy I know.

r/DunderMifflin Huge-Conclusion-3005

Even early-season Dwight wouldn’t have stooped so low…

r/whatisit mtestan2

Strange stains on kids clothes

Blue/grey stains on kids laundry after washing. They weren’t there before washing. I don’t use blue colored detergent, it’s clear. Only seems to show up on kids clothes. Didn’t use any stain spray or anything.

r/SipsTea rentobias

How do you handle your intrusive thoughts?

r/Art DecisiveLick

Seven eyes, Ozazen, Digital, 2025 [OC]

r/SipsTea dresdenkael

Traumatized by covid

r/painting ronaistheman

Still life of my sculpture, oil on panel

Having fun with oil paint the last few days. It always makes me want to paint from life, so I set up one of my sculptures and spent some time pushing and pulling this one.

r/personalfinance greenredditbox

I get paid 16/hr before taxes. My company provides healthcare benefits. Should I even bother applying since it will make my income even lower?

I get paid every 2 weeks for a total of a little over 1k. I already struggle to pay bills, save, and cover other basic needs, and I'm the only one working right now. My husband was laid off a couple months ago and receives unemployment, a little under 400/week. I have to send in my benefits enrollment by Thursday, the deadline, or I will have to wait a year before I can reapply for insurance/benefits.

I told myself I've been OK for years, so maybe I'll wait until next year to apply for benefits, but of course anything could happen. I have not had a doctor/dental/medical check-up in almost a decade because I've been financially weak. Depression, what I'm pretty sure is ADHD, and other mental health issues have made it hard to progress.

Should I even bother applying for benefits if I already have such a low income and it will come out of my paycheck? It doesn't even cover the full price of appointments; I'd still have to pay out of pocket. My job is at a hospital, btw. I honestly have no clue how insurance works.

r/SideProject Aggressive-Stand4131

Thinking of building a 0.5% fee crypto gateway. Is this a waste of time?

Hey everyone,

I’m a dev and I’ve been looking into starting a small VPS hosting side-biz. While researching, I realized most crypto gateways (like BitPay or Cryptomus) either have hidden fees in their exchange rates or they hit you with KYC/ID checks for tiny $5–$10 payments.

I’m considering coding a non-custodial gateway that solves this.

The concept:

  • 0.5% flat fee (no hidden spreads).
  • No-KYC for small-to-medium transactions (since it’s non-custodial and I never hold the funds).
  • Top coins only: BTC, LTC, and USDT (on Tron/Solana for cheap gas).

The honest question: Is there a real reason why people wouldn't use this? Am I missing some massive legal or technical headache that makes everyone else charge high fees and ask for IDs?

Would you actually trust a smaller, dev-run gateway over the big corporate ones if it saved you 2% on every transaction?

Be as brutal as you want, I need the reality check.

r/personalfinance Day_00

Tax Return: Pay off car or increase emergency fund?

~7k tax return.

Current safety net is 15k which will cover around 4 months of expenses.

8k left on my car loan at 6.00% (~$260 a month current payments).

Should I use this to nearly pay off my car or use it to increase my emergency fund to at least 20k?

r/SideProject TartOld7281

I built a tiny desktop companion that just sits with you while you work

I work remotely and some days my brain is just loud. Not crisis-level, just... a lot. I wanted something on my screen that wasn't trying to fix me, track me, or make me more productive.

So I built Spiral Buddy - a small illustrated friend (choose an otter, panda, or cat) that lives on your desktop.

What it does:

- Sits quietly on your screen while you work

- Gentle check-ins reminding you to stretch etc (customizable or turn them off)

- A text void - type what's spiraling in your head, press enter, it fades away. Nothing saved.

- Bad Day Mode - auto-pauses everything when you're depleted

What it doesn't do:

- No tracking, no analytics, no cloud

- No accounts or signups

- No advice or productivity metrics

- Nothing leaves your machine

Built with Electron. Mac and Windows.

Check it out: https://spiralbuddy.app/

Would love feedback from other makers. What would you add (or deliberately not add)?

r/WhyWereTheyFilming Holiday-Data9839

Real Accident Shoot in Camera 😨

r/AbstractArt Glittering_Air_1979

They come

r/ClaudeAI Longjumping_Bad_879

How should authentication be handled for Agent Skills that rely on third-party APIs?

I’m trying to understand the recommended way to handle authentication for Agent Skills, especially when those skills need to interact with third-party APIs.

From what I understand so far, Agent Skills seem to be structured around:

  • Organizing the necessary context and instructions in markdown files (optimized for progressive disclosure), and
  • Placing any executable logic in a scripts/ directory that the agent can call when needed.

This model makes a lot of sense. However, I’m a bit unclear about how authentication-heavy integrations are supposed to fit into this: for example, skills that need to call a third-party API using OAuth or some other credential-based flow.

One approach I can imagine is having the scripts read credentials from something like a local config file (e.g. ~/.platform-name-skills/config.env). That might work in a local or CLI-based setup, but it feels brittle or outright impossible in environments like the claude web interface, where you don’t really control the runtime or filesystem.

So my questions are:

  1. Is there a recommended or “idiomatic” pattern for handling authentication in Agent Skills?
  2. Am I missing some built-in mechanism or best practice here for secrets management, OAuth flows, or per-user credentials?

Thanks in advance!
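For the local/CLI case described above, a minimal sketch of the config-file fallback pattern (all names here are hypothetical, including `PLATFORM_API_TOKEN` and the `~/.platform-name-skills/config.env` path from the post; this is one possible pattern, not an official Agent Skills mechanism):

```python
import os
from pathlib import Path

def load_env_file(path: Path) -> dict:
    """Parse a simple KEY=VALUE config file, skipping blanks and # comments."""
    env = {}
    if not path.exists():
        return env
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

def get_credential(name: str,
                   config_path: Path = Path.home() / ".platform-name-skills" / "config.env") -> str:
    """Prefer a real environment variable; fall back to the local config file."""
    value = os.environ.get(name) or load_env_file(config_path).get(name)
    if not value:
        raise RuntimeError(f"{name} not set; export it or add it to {config_path}")
    return value
```

A script in `scripts/` would call `get_credential("PLATFORM_API_TOKEN")` and never hard-code the secret; as the post notes, this only works where you control the filesystem, so it doesn't answer the hosted web-interface case.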

r/DecidingToBeBetter Coocie647

Going back to school, feeling humiliated

(Almost 24f)

After years of struggling w/ severe undiagnosed and unmedicated OCD and autism, and dropping out of college w/ a terrible GPA in a whole different country, I’ve decided I want to go to Med school. My dream has always been to become an Osteopath/Massage Therapist/RD (one or all haha)

I did pretty shit in high school. I graduated, but w/ most of my credits in the 70s (some lower and some higher). It didn’t matter too much bcs I got into a college in the US (I’m Canadian) for a sport. I did absolutely SHIT in college, I was struggling horrendously w/ my mental health, and had to clean people’s dorms + tutor other students just for extra cash to pay my tuition, so I had no motivation to actually do the work myself. I was assaulted my second year, and immediately after that my dad died in front of me, so I just accepted the failing grades. My third year I actually did pretty great, ended w/ Bs and As in all my courses (including human bio and forensics!), but after my third year I dropped out due to developing an ED and liver problems. I travelled and then worked full time at a health food store, so I gained some incredible knowledge in sales and nutrition/fitness.

Just last week, after YEARS of procrastination, I decided I want to go back to school, BUT I have to upgrade some Grade 12U credits bcs my college classes obviously don’t count other than around 9 (out of 23), and although I graduated HS, my marks in U classes were all ehhhh (except for math, which I somehow got an A in).

I feel defeated and humiliated, honestly. I’m planning to upgrade about 7 credits bcs I want to get into a good college here and then ultimately transfer to a top uni. I could get into college here anyway w/ my high school diploma, but I want to upgrade some marks regardless.

I feel like I’m too old to go back to school, and it’s really having an effect on my mental health.

I know you’re never too old to start college but society makes it seem like I’m some sort of bum.

Encouragement or similar stories would be lovely 🙂

r/leagueoflegends Both_Profession3966

Calibrating Ranked for the first time

I played my first ranked game last night and got a provisional rank of Iron 1, 34 LP. What is currently used to gauge placement? Is it stats, or just win/loss? My unranked games have had around a Gold average. Does League calibration jump through medals, or move a smaller amount during calibration?

I have a Dota background playing in high MMR and would like to experience Diamond/Masters here as well. Of course this will take time and they are different games, but regarding calibration, what would a theoretical rank be, assuming someone wins all 10 placement games?

r/metaldetecting critterInVermont

Sometimes it is difficult to hide my enthusiasm. My hands are my tell.

My mind and my eyes could not communicate properly. I was having difficulty focusing. While my brain was telling me this is just another button, the lack of a shank and the weight of this large piece of copper told a different story. My brain was refusing to register what my hands and eyes had presented to it. As my  hands started to shake the possibility of what I had found started to come into focus. It felt too thin but the reality of the moment started to become clear. 

 It looked like a colonial coin but I did not have any past experience to base it on. The modern clad and Indian cents from previous finds offered me little assurance. Let me pause here, context might help.

Up to this point I had filmed and recovered 6 different buttons. All lacked identification and all but one was plain. The exception being a beautiful dandy. These six buttons doubled  my current collection. Until now I had been firmly stuck between the late 1800s and modern day.  My oldest coin being an 1800s Indian head cent. Everything was about to change.

Please allow me to convey what I remember about that day. 

The sun was shining. It was warm, the birds were singing and the bugs were manageable.  I was in a state beyond happy. The buttons alone provided me with a sense of accomplishment. It was due to all of this that the next few moments would forever be planted in my memory. This colonial coin changed my perspective of what was possible.

What you just watched is a live video of my first colonial coin. I suspect it’s a counterfeit based on the weight. Under certain light, the reverse briefly shows Britannia, though it’s nearly impossible to see with direct lighting. The face offers more detail, it appears to be a King George II Half Penny, though I can’t determine the year.

Originally I was hesitant to show this because I was self conscious of my shaky hands.  As I look back at this video I am happy that I recorded it despite the visible excitement in my hands. I am excited to share this moment with you. For those out there that are still searching for their first, I hope you soon find your hands shaking. For those that know the experience, I hope you too still find something that makes your hands shake.

Thank you kindly for reading. 

TLDR

Found unidentified colonial copper while metal detecting. I believe it’s a counterfeit King George II half penny. The coin weighs 0.26 ounces. I have added a photo in the comments with the coin flanked by two other coins I have found at that site, with a US quarter for scale as well.

This coin was recovered from a site in Northern New England that dates back as far as the 1730s, based on other coins and relics recovered there.

r/Adulting Ill_Cookie_9280

The Importance Of Small Steps

r/whatisit TriplH

What were these holes in my friend’s bathroom used for?

Was over at his house for the SB and we were all trying to figure out what would have been there. That’s the wallpaper you see behind it. *He bought this house a few years ago.

r/PhotoshopRequest dobrien11590reddit

Edit our old dog into new photo

My old dog Belle was my wife's animal soul mate. She died a few years ago. Since then we have bought our first house, adopted a new dog, and had a baby boy. My wife still cries over the loss of Belle.

I was wondering if anyone could do 2 different photoshops. One editing the picture of Belle with her tongue out into our Christmas photo, so she is getting pet by our son. The other using the 2nd photo of Belle, placing her behind/between me and my wife and having her look at my wife.

Thank you so much!

r/arduino Unique-Opening1335

Arduino Forum issues?

I got an email from the forum on 02/08/2026 stating that I needed to log in within the next 7 days due to lack of activity.

I tried, but the "SIGN IN" button is disabled on both phone and PC (Firefox, Chrome, etc.)

Anyone else having this issue as well?

Who can I contact directly to get this fixed? I don't want to lose my account (over 10 years old).

Thanks!

r/SideProject InstanceSignal5060

I built an AI to audit your SaaS pricing strategy (so you stop guessing).

Hi everyone,

Pricing is the hardest part of building a SaaS. I always worry: Am I too cheap? Or am I scaring users away?

Instead of guessing, I built a tool that scrapes your landing page and benchmarks your pricing against value perception and behavioral psychology.

It tells you if you're leaving money on the table or missing key conversion triggers.

I'm looking for feedback on the analysis quality.

Thanks for the help!

https://price-roast.vercel.app/

r/leagueoflegends BrokenB4_762

Ideas for making Sunfire better without increasing its damage.

Better build path: add a Ruby Crystal so that the combine cost is 600 gold instead of 1,000.

Increase its health: it's weird that it is supposed to deal more damage, but since it has 50 less health than HR, it only deals 4.5 more damage while losing all the other perks of HR.

Increase its range: 350 units around the player is a bit low, especially considering many tanks have 175 attack range (Sion, Ornn, Cho'Gath, Nautilus, Zac), and often when attacking something at the limit of your range it won't get hit by Immolate. This happens especially in the jungle, since camps don't move around for you to chase them, so unless you move your character closer than your attack range, the Bami effect won't hit them.

Make Bami an anti-heal item: with tanks moving into a more support-like role, I believe the idea that tanks shouldn't be able to apply Grievous Wounds so easily should be left in the past. If their job is to peel and assist their team, anti-heal should be easier for them to access.

These are some ideas I had to make Sunfire more useful without increasing its damage.

r/LocalLLaMA perfect-finetune

7B A1B

Why are no models in this range truly successful? I know 1B active is low, but it's 7B total, and yet every model I've seen doing this is not very good, not well supported, or both. Even recent dense models (Youtu-LLM-2B, Nanbeige4-3B-Thinking-2511, Qwen3-4B-Thinking-2507) are all better, despite a 7B-A1B being expected to behave more like a 3-4B dense.
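One commonly cited rule of thumb (a heuristic, not a law, and results vary a lot by architecture and training) estimates an MoE's dense-equivalent capability as the geometric mean of total and active parameters:

```python
import math

def dense_equivalent_b(total_params_b: float, active_params_b: float) -> float:
    """Rough MoE sizing heuristic: geometric mean of total and active
    parameter counts (in billions). A rule of thumb for expectations only."""
    return math.sqrt(total_params_b * active_params_b)
```

By that heuristic a 7B-A1B lands near sqrt(7 * 1) ~ 2.6B dense, which would make it unsurprising that good 3-4B dense models beat it.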

r/AskMen Digital_Foundation

What’s something men rarely talk about but should?

r/meme chacha_chakkan273

Idk if it's considered a win or a loss

r/raspberry_pi brujonica

Raspberry Pi ID password policy

Hello, I'm pretty new to Pis. I'm getting the following error while trying to change my Raspberry Pi ID password:

"… is unsafe as it has appeared in a data breach from another site. To secure your account, set a new password that has not been used elsewhere."

I'm pretty sure I've never used that password on any site, so I'm wondering if there's something wrong with the password policy on the Pi Connect site.
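I don't know what Pi Connect uses internally, but errors like this usually come from checking the password against a breach corpus such as Have I Been Pwned, via a k-anonymity range query where only the first five hex characters of the password's SHA-1 ever leave your machine. A minimal sketch of the client side:

```python
import hashlib

def range_query_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 for a k-anonymity breach lookup.
    Only the 5-char prefix is sent to the service (e.g. HIBP's
    /range/{prefix} endpoint); the returned suffix list is compared
    locally, so the full hash never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]
```

So the match means that exact password (not necessarily your account) appears in some breach dump somewhere; short or pattern-based passwords often collide with ones other people have leaked.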

r/HistoryPorn ismaeil-de-paynes

A poster in Tunisia of the Egyptian Movie "Saladin The Victorious" (1963) [648x615]

r/whatisit Creepy_Challenge_521

Found on socks in closet, what are they?

Opened a pair of socks in my closet to find these all over them. What are they??

r/meme Birdmouth

The ads are more interesting than the video

r/SideProject TeslaCoilzz

Side project born from a rickroll prank is making money. No clue how to scale it.

This whole thing started as a birthday joke.

Close friend runs a restaurant, and our friend group has this tradition of showing care through the dumbest pranks possible. So for his birthday dinner I made a set of QR codes for his tables. Custom graphic frame around each one, restaurant name, looked almost professional from a distance. The actual QR part was ugly basic white squares though. And every single one linked to Never Gonna Give You Up. We put them on every table. Guests scanning for the menu, getting rickrolled instead. Small group of friends, everyone cracking up. Job done.

After the party the codes came down obviously, but the idea of having proper QR codes in that restaurant stuck with me. Started looking for generators that could actually produce something decent. Turns out most of them are ugly anyway, the better ones want monthly subscriptions, and almost all of them route scans through their own servers first.

Since I'd wanted to learn Python for years already, this became my excuse. Wrote a basic generator for myself. Then every week I was modding or adding something new: logo embedding, vCard support, batch CSV processing, looking into NFC integration, SVG export. Classic case of a side project spiraling out of control, except this one turned out to be useful.

Started showing it to people I know. Friends asked for codes for their businesses. Then friends of friends. Lawyers and doctors were the first surprise. They liked the idea of vCard QR codes (digital business cards) but absolutely refuse to put their phone numbers, emails and office details into random online generators. Makes sense, right? Good thing that my tool generates everything offline :D Few transport companies and waste management businesses came, mostly my current cooperators from main business (I produce steel packets for foundries). QR codes on their trucks and trailers linking to company website, turning every vehicle into a moving billboard. From there I managed to upsell a few NFC cards that look like regular business cards but actually collect Google reviews and have vCard QR on them. For the review links I made a custom scraper that pulls business ID from Google without having to check it manually by the owner. Customer taps their phone, review form opens, one scan of vCard and contact added, done.

Then I hit an interesting niche. A national museum I visited had their collection digitized already, so I figured it's worth a shot to collect all the links and generate QRs for them. Built a scraper that pulled around 300 artworks from their website, then mass generated unique QR codes for each one. Foreign visitor scans the code next to a painting, gets the description in their own language. Two features in one pipeline. Sent them the offer by email and got nothing. Went to their offices in person and boom, suddenly they're interested since I took care of the problem from A to Z. After that it just kept going through word of mouth. Car detailers, dental clinics using review stands, real estate agents, an architecture firm needing vCards for their entire team. All from Poland. All through people who knew people.

That's basically where I'm at now, a bit stuck. My network is tapped out. Everyone who could have needed codes from people I know already has them. Revenue is real but modest. This has been purely organic so far, zero marketing, not a single euro spent on ads or content. Now I'm thinking about whether Instagram and TikTok make sense for something like this. The visual side is there for sure, before and after of a generic pixelated code vs a branded one is satisfying content. But I'm one guy in Poland selling a niche B2B product. Not sure short form video is where my actual buyers spend their time.

Has anyone here been at this exact crossroads? Is such a service still relevant in the current era of AI hype? Side project making money through network, network drying up, trying to figure out the next move. Especially curious if anyone used Instagram or TikTok to sell a niche B2B service, or if that's just making content for other marketers to watch.

Would appreciate any honest thoughts. Not dropping business name or site because I want real feedback, instead of most posts these days that sell a fake story and guide you to their product. I can send you a QR for the rickroll though :D
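For anyone curious why the vCard angle resonates with lawyers and doctors: the payload a contact QR encodes is just a small text format you can build entirely offline. A minimal sketch (stdlib only; the author's actual generator surely does more, and rendering the QR image itself would take a library like `qrcode` or `segno`):

```python
def make_vcard(full_name: str, org: str, phone: str, email: str, url: str = "") -> str:
    """Build a minimal vCard 3.0 payload; point any QR encoder at the result."""
    lines = [
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{full_name}",
        f"ORG:{org}",
        f"TEL;TYPE=CELL:{phone}",
        f"EMAIL:{email}",
    ]
    if url:
        lines.append(f"URL:{url}")
    lines.append("END:VCARD")
    return "\r\n".join(lines)  # the vCard spec wants CRLF line endings
```

Since nothing here touches the network, the contact details never pass through a third-party generator's servers, which is exactly the objection those clients had.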

r/interestingasfuck Nero2t2

The Ambassador Bridge, the busiest border crossing in NA, which carries 25% of all trade between the US and Canada, is entirely private. The owner, Matty Moroun, spent decades trying to block the two countries from building another crossing, claiming it infringes on his right to collect tolls

r/me_irl LiterallyHow

me_irl

r/leagueoflegends CameronCardoza

First ever Pentakill

r/SipsTea Square-Valuable4061

How big a deal is heightism in the West?

r/Art tweetydraws

The period woman, Tweety, brush/sketch pen, 2026

r/meme ProfessionWide3505

Time to disappear

r/TwoSentenceHorror DilanDeAngelis

When you turn off the lights in the house, the shadows remain.

r/meme MountainConstant2845

Bsc psych gets real when you see this in your tb

r/metaldetecting Purple_Tonight_3328

Do you have a recommendation for a metal detector?

Bay Area based, looking to get into the hobby. If anyone in the East Bay has recommendations on where to hunt first, let me know.

I have a used one from Amazon that worked before, but I thought I'd start fresh

r/ClaudeAI Kemal-A

Claude Pro recommendation for ChatGPT & Gemini user

Hi Claude community,

I wanted to ask for a recommendation from you.

So I've been using ChatGPT since the beginning; I've been a Plus user since its launch. After the debacle of the GPT-5 launch, around the time Gemini 3 launched, I switched to Gemini. For the last few months I've been using both, and even dabbling in LLM Council (by Karpathy, via API).

However I've been a little unhappy with both ChatGPT and Gemini web chat experiences in a few ways and thinking of trying out Claude Pro. However I had a few questions.

My LLM usage:

I primarily use the web chat interface. I am a programmer but don't do any serious vibe coding so don't really need big claude code limits or performance. I do however ask and work through a lot of coding problems via the web chat interface. I also like to use the chat interface for a lot of other non-coding questions. Things like product recommendations, business and product strategies etc.

Questions:

  1. Am I going to get maximum reasoning effort in the web chat? I know that on ChatGPT the Thinking Extended equates to something like medium/high reasoning effort and I can tell. The API with xhigh is very thorough and good. Is this the case with Claude 4.6 Opus? Is their best performance available via the API?

  2. How are the usage limits? Like question 1 suggests I am looking for max reasoning effort. How many prompts of max level can I get in a day?

  3. I see a lot of posts on this subreddit about the models being dumb (or quantized). Is that a real thing, or just the opinion of some users?

Ultimately I don't want any fancy features - I just want the smartest chat experience and was wondering if Claude Pro is the best product for that right now, especially since 4.6 Opus.

(I've been lurking around this subreddit for some time and I always see complaints so wanted to ask if these complaints are actual consensus or just a few people having bad experiences)

r/StableDiffusion marcoc2

Is Qwen shifting away from open weights? Qwen-Image-2.0 is out, but only via API/Chat so far

r/SideProject Hurphy36

I built a sports streaming site with ZERO ads, pop-ups, or installs. Just the game in 1080p

Most "alternative" streaming sites are a UX nightmare. You click "Play" and get 3 betting ads and a malware warning.

I wanted to build something that feels like a premium product, even if it's a smaller project.

r/singularity FuneralCry-

Accelerate until everything breaks!

Get in bitches we're heading for the Stars

r/ClaudeAI Intrepid-Profile-789

Claude passes 'vending machine test'

AI passes 'vending machine test'

https://news.sky.com/video/share-13505524

Anthropic gave a Claude AI agent control of real vending machines to see if it could run a business autonomously.

r/ProductHunters BakerTheOptionMaker

$19.5k mrr bootstrapped, consistent now, growing slow but happy - notion guide in post w/ some work i've done

i’m building virlo bootstrapped. no investors, no deck, no board calls. just shipping and talking to users. the deeper i get into this, the more i think micro saas is underrated.

virlo is a short-form market intel tool. our user journey... sign up, create a custom niche, and have an always on autonomous watchdog monitoring what you care about where buying signal and intent matters the most (short-form video)

real numbers as of today: $19.5k mrr. 71,834 users. 3,570 paid subs. 2.1m+ indexed videos. 10,547 connected accounts across yt + tiktok. still plenty messy (about 10% of vids are “unknown”), but it compounds instead of resetting every fundraising cycle.

i’ve watched too many friends do the venture loop. raise pre-seed, hire fast, burn cash, then spend their time stressed about raising again instead of fixing the product. meanwhile my loop is boring but works: talk to users, look at data, ship small improvements, repeat. no pressure to grow headcount. no pretending burn is “strategy.”

the biggest unlock wasn’t even the product. it was distribution + a system.

here's mine: https://www.notion.so/How-to-Turn-Short-Form-Market-Data-Into-Real-Conversions-In-Less-Than-60-Minutes-2fe64e04d343804f8084e3984f59f8a5

i hope it's helpful!

r/personalfinance MediumBullfrog8688

Is it reasonable to build consistent saving habits before paying off debt?

r/ARAM Ok-Error4354

Trigger inferno augment should work with empowered autos.

This augment is too conditional to proc for the amount of payoff you get.

I understand augments don't have to work for everyone, but it feels like this is just another augment that excludes fighters, since most of them have empowered autos (Wukong Q, Darius W, Zaahen Q, etc.) and already generally struggle with getting kited by ADCs and mages.

Side note: the augment feels... generally weak? Even on a champ like Ambessa, who procs it easily, it takes enough time to stack that by the time you get it, the squishy you're stacking on is already dead; and if you're hitting a tank, it shoots 50-damage pellets AND you can get CC'd out of it.

r/todayilearned Chrome2Surfer

TIL New Spain, officially the Viceroyalty of New Spain, comprised a large area of the southern and western portions of North America, mainly what became Mexico and the Southwestern United States.

r/Lost_Architecture Lma0-Zedong

San Juan hermitage, by Juan de Aguilar, 1634-1815. Madrid, Spain

r/n8n leomercial

New to n8n — how do you decide what's actually worth automating?

Started with n8n about a week ago with zero experience. Built two things so far:

  1. Screenshot → calendar event: I send a screenshot (flight booking, concert ticket, etc.) to a Telegram bot, it extracts the info and creates an Outlook event. Simple, not life-changing, but a solid first build.
  2. Meeting reminder automation: Sends reminders to participants who haven't responded to tomorrow's meetings. After accidentally blasting a few dozen emails to last month's contacts due to a broken filter... it now works and is live.

Then I hit a wall. I wanted to pull my LinkedIn post analytics into a dashboard automatically. Seemed straightforward. It wasn't:

  • LinkedIn's API doesn't give you the data you actually want
  • Scraping gets you blocked
  • OneDrive integration failed due to licensing issues
  • Google Drive with self-hosted n8n apparently isn't stable

After 1.5 days of troubleshooting, I realized I could have just built the whole thing manually in Excel.

Which brings me to my actual question: How do you decide what's worth automating? Do you have a mental framework or threshold? How do you deal with hitting platform limitations that turn a 30-minute idea into a multi-day rabbit hole?

Would love to hear from people who've been through this learning curve.
Are there any projects you thought were great and had a good learning curve? What would you recommend building?

Thanks in advance!
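One threshold people often use for the "worth automating?" question (in the spirit of xkcd's "Is It Worth the Time?" chart) is a simple payback calculation: if the break-even point is further out than the automation's likely lifespan, do it by hand. A rough sketch, with hypothetical numbers:

```python
def breakeven_runs(build_hours: float, minutes_saved_per_run: float) -> float:
    """Number of runs before an automation repays its build time."""
    return build_hours * 60 / minutes_saved_per_run

# Hypothetical: the 1.5-day LinkedIn rabbit hole (~12h of building) vs.
# a manual Excel update that takes ~5 minutes once a week
runs = breakeven_runs(12, 5)  # 144 weekly runs, i.e. nearly 3 years to break even
```

The platform-limitation risk from the post fits here too: a 30-minute idea that turns into a multi-day build multiplies `build_hours` and can push the break-even past any reasonable horizon.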

r/PhotoshopRequest MA2_Robinson

Corgi in a bathtub

Can someone photoshop my dog as though he’s sitting in a bathtub covered in soap bubbles, please? Pretty please and thank you

r/Damnthatsinteresting lithdoc

Casino ATM: melts down your gold jewelry at 2200°F, verifies the purity and weight - go gamble!👌

r/SideProject Ore_waa_luffy

Selling AI SaaS w/ 2k+ Users – No Revenue, Big Upside (Bootstrapped)

Hey everyone 👋

I’m looking to sell a bootstrapped AI SaaS I built called Natively.

What it is

Natively is an AI interview & productivity assistant designed to help users think, respond, and perform better in real-time scenarios (interviews, prep, workflows).

Current traction

  • 2,000+ active users
  • Growing organically (GitHub + word of mouth)
  • $0 revenue (intentionally) — product-first growth
  • Strong engagement from developers & students

Why sell?

I’m a solo founder and want to focus on a different product direction.
This deserves a founder/team that can monetize + scale it properly.

Monetization ideas (not yet implemented)

  • Pro subscriptions (advanced models / features)
  • Usage-based pricing
  • Team / enterprise plans
  • Resume + interview prep bundles
  • API access

Tech stack

  • Modern AI stack
  • Clean codebase
  • Easy to extend & monetize
  • No legal or IP issues

Ideal buyer

  • Indie hacker
  • SaaS operator
  • Agency looking for a ready product
  • Founder wanting traction without starting from zero

If you’re interested, DM me and I’ll share:

  • Product link
  • Analytics
  • GitHub
  • Expected valuation range

Happy to answer questions publicly as well.
Thanks!

r/wordchewing OldSkin4736

My Queen. Better than average word chewing

r/LiveFromNewYork Sinister_Legend

My Season 35 Cast Ranking

Yeah, so, I didn't love this season. It was interesting to see it be the transition between 34 and 36. It was a more subtle transition than seasons like 31, 39, and 47, but I still saw it as a middle year. It was hard to stay invested and I had to take some breaks while watching the episodes. They clearly love their recurring sketches. I have very strong love or hate feelings for these with very few in between. I'd say that Betty White's episode was the best by default, but no, it's actually a great episode. It's just that all the best elements of the episode are very anti-season 35.

So yeah, I'm not sure how 36 is considered nowadays, but I'm just excited that we're getting 4 rookies who will make their mark on the show. Even Paul Brittain (or should I say especially Paul Brittain).

Jason Sudeikis
(MVP - Taylor Swift, January Jones, Blake Lively, Taylor Lautner, Charles Barkley, Tina Fey)
Jason regains his number 1 spot this year. Granted, he got the MVP for some pretty bad episodes, but he could shine in a dud.
Best - we got Pete Twinkle, Joe Biden, Mad Mennies, Say Anything (Mikey would've butchered this), the announcer of Closet Organizer, dancing his ass off in What Up With That, one of my favorites in The Devil, and being fucking hilarious as David Letterman who just laughs and throws pencils.
Worst - how much he had to carry the January Jones show, but really when he and Bill played J-Lo drag queens.
Would I bring him back? Of course! Jason can stay as long as he wants.

Bill Hader
(MVP - Gerard Butler, James Franco, Ashton Kutcher, Jude Law, Alec Baldwin)
This was another good year for Bill, but the problem with him being in a lackluster season is, now that he's one of the stars, he gets put in bad sketches, and it's hard for almost anyone to shine in some of those duds.
Best - lots to choose from. Vinny Vedecci, Mad Mennies, Dave Matthews, Thomas Peepers, Greg the Alien, Richard Branson, debut of Stefon, and any game show host he plays.
Worst - like I said, the sad part of being one of the go-to players is you'll get put in bad sketches like Jekyll and Hyde.
Would I bring him back? Definitely.

Will Forte
(MVP - Drew Barrymore, Ryan Phillippe, Gabourey Sidibe)
Wow, really sad how underused Forte was in his final season. I don't think anyone knew this would be his last year, not even him, but it might've been for the best.
Best - A+ performance in The Date opposite Megan Fox, I'll never not enjoy watching him eat spaghetti, Greg Stink, Hamilton, Closet Organizer and its running gag, Good Job Women song, but best honors must go to the stone cold classic...Potato Chip Thief.
Worst - he had a few misfires. Why the fuck did they make him play straight man to Virginiaca? Also, his brief Carson Daly on Larry King fell flat. But that grape slave sketch was just bizarre for the wrong reasons and I hope he didn't write it.
Would I bring him back? Thank you for your service. Use your powers elsewhere. And the box office numbers for MacGruber were wrong because that film is genius.

Andy Samberg
(MVP - Ryan Reynolds)
We're in the golden era of Digital Shorts. They found their groove. It ain't flawless, but what is?
Best - my favorite Digital short was Threw It On The Ground and my favorite live bit was Nicolas Cage.
Worst - our last Deep House Dish.
Would I bring him back? Yep! We're getting some of my favorites next season.

Bobby Moynihan
Surprisingly, he had moments that made me think I might've been wrong about Bobby. He's a good performer who can commit to a sketch and be a dependable team player.
Best - Megan Fox's monologue, Swine Fever, standing behind Fredbama and posing for pictures with Kristen, Eric Massa, Twilight Zone, he had some moments that made me think I was wrong about him...
Worst - ...but seeing him as Snooki just reminds me of all the things I don't anticipate in his tenure. Same goes for being in drag for Timecrowave.
Would I bring him back? I guess.

Kenan Thompson
(MVP - Joseph Gordon Levitt)
Kenan is the king of hit and miss. Every time I think he's giving us a standout performance, he follows it with something very hacky. This was a better year than last, but so was his last, and the one before that, and the other, and yet we still get big ol' stinkers. There's no doubt that the guy shows he has experience. However, that experience is Nickelodeon.
Best - I got a kick out of Grady Wilson, the first What Up With That was kind of a bust because of tons of timing issues but it soon got better, his Cosby impression didn't age well but it's honestly his best impression, playing straight guy opposite Greg the Alien, and his perfect Reba impression.
Worst - I don't actively dislike Jean K. Jean, but it fills me with inertia. I do hate Deep House Dish and thank fuck I don't have to watch that again. The same applies to the even unforgivably horrendous Virginiaca.
Would I bring him back? It's his 7th year. How am I supposed to know at this time that we'll never see Virginiaca and DHD again? Even if I somehow knew he'd get better, the damage has been done.

Seth Meyers
Why is he yelling? Yeah, we're now in the era where I'm just not into Update. And I have to question his role as head writer with the weak output this year.
Best - by default, Update, since he didn't appear in sketches.
Worst - also by default, Update, because I'm just not a big fan of his timing.
Would I bring him back? Doubtful. I wonder what would've happened if we gave the head writer role to John Mulaney at this time.

Abby Elliott
Once again, I can't fully justify having Abby on the show and higher on the list than Kristen. The only thing is she didn't annoy me like Kristen and I kept wanting to see more from her. I can't say Abby and Nasim are weak performers, but their lack of usage just adds to this bad era for the women. Gilda never took over like this, Molly never took over like this, Kate did at times, but otherwise, this is unprecedented. I think Abby could shine in small parts she was given.
Best - I'll add Jon Hamm monologue and Attractive Blonde Lady on Fox News for obvious reasons, her acting chops were spot on in the Ford commercial, plus her randomly breaking through a door as kd lang.
Worst - good news, you're in a recurring sketch with one of the stars. Bad news, it's Gilly and you mug through it, giving Melanie Hutsell a run for her money. There's also her Brittany Murphy, which is very sad to watch.
Would I bring her back? Yeah. I think she's a decent actor and I'd encourage the writers to find her strengths and use them.

Nasim Pedrad
She was basically invisible in her first episode, but seeing what happened to her fellow rookie, I'd much rather be invisible. I wasn't a fan of her when I first watched the show. I'm willing to give her a fair shot, but it took a long time for a home run.
Best - she had a good Update bit as Sonia Sotomayor. Playing the scared daughter afraid of Smash Mouth. Finally, I felt like I could use more Nasim when she gave us Bedelia.
Worst - I really didn't get what she did with her Barbara Walters impression, but worst honors goes to whatever that Taylor Swift sketch was.
Would I bring her back? I'd give her another year because the show is very desperate for funny women but let's add some more please.

Kristen Wiig
(MVP - Megan Fox)
5th season for Kristen and it's another one where I'm so burnt out by her recurring characters. Plus, she likes Black Eyed Peas, so her taste is questionable.
Best - I liked Your Mom Talks to Megan Fox, but Tamara Parks was an Update piece I could've seen again yet we only saw it once.
Worst - I don't need anymore Judy Grimes or Penelope or fucking Sue, I'm even done with Donneese, over the years I've had mixed feelings on Garth and Kat, but after watching the first one I decided it's not my thing, also the "don't make me sing" chick, in the finale she gave us Arizona Evenings which made me smash my head in, but worst of all, we saw Trina twice and that's thrice too many.
Would I bring her back? It's clear they don't intend to dump her, but I say give the other women a chance.

Fred Armisen
I guess I can't fault him for playing so many different races because he's ethnically ambiguous like Alfred Molina and it's the 2000s, but then he's also the go-to for gay and disabled roles. Like Kristen, this is another rough year for Fred.
Best - Brenda & Shaun, Mad Mennies, a guilty pleasure in Court Stenographer (the first time, the second time I hated it), but the best was Crisis of Conformity, which showcased the better side of Fred.
Worst - opening the season with Gaddafi was a warning sign of what was to come, also way too much of his inexcusable Obama, plus Rodger Bush, Billy Smith, Manuel Ortiz (am I supposed to die laughing at the fact that they dance?), Garth and Kat, David Paterson, but worst is that BITCH Riley.
Would I bring him back? Also like Kristen, they clearly won't get rid of him, but I just don't think they're using his best qualities.

Jenny Slate
Frickin' Jenny Slate. I kept trying to think "what would I think of her if she didn't say fuck?" The answer is...I take back everything I said about Michaela Watkins. She's made a good recovery and now has her own successful career, so it worked out for the best.
Best - this was a hard one. Not that everything she did stunk, but it would feel wrong to put something just competent in the best spot. She isn't bad, but I just never saw a moment that made me think she was a standout performer. Her final appearance as an old timey hooker was actually a good one but not enough to make me forget all the duds.
Worst - there was nothing to that Biker Chick Chat show. Dropping that F-bomb was the only interesting thing about it. Credit where credit's due, she recovered really well. I would be totally dead inside and have a shaky voice if that happened to me.
Would I bring her back? Fuck no.

Non-cast MVPs - Jon Hamm, Zach Galifianakis, Tina Fey, Betty White, NOT January Jones

Other cast rankings:

Season 31: https://www.reddit.com/r/LiveFromNewYork/comments/1pozt9s/my_season_31_cast_rankings/

Season 32: https://www.reddit.com/r/LiveFromNewYork/comments/1pwek96/my_season_32_cast_ranking/

Season 33: https://www.reddit.com/r/LiveFromNewYork/comments/1q4ooyd/my_season_33_cast_ranking/

Season 34: https://www.reddit.com/r/LiveFromNewYork/comments/1qgbsdp/my_season_34_cast_ranking/

Season 38: https://www.reddit.com/r/LiveFromNewYork/comments/1kr7rtr/my_season_38_cast_ranking/

Season 50: https://www.reddit.com/r/LiveFromNewYork/comments/1kqr4hv/my_snl50_cast_ranking/

r/personalfinance walkonbyeeeee

LLC owners loss multiple years

I own an LLC for music and unfortunately my field of business is not the best. The cost of recording and hiring bands always puts me at a loss. I am worried about having a loss multiple years in a row. Any experience with this? Should I not worry? I am trying to make money, but when an album costs $20K and streaming revenue is nil... well... it's hard. Thanks

r/HistoryPorn DiaboDeCapote

Iron Maiden playing at the first Rock in Rio, January 1985 [1920×1080]

r/comfyui ArtSaw

Why do I have a low frame rate working in Comfy? It happens when moving through the workflow or moving objects or nodes. Not that crucial, but it would be cool to make it smooth.

any suggestions are welcome. Thx

[Solved] It's a Windows resolution scale problem.

r/homeassistant General-Regular1167

Mounting a second SSD in Home Assistant with Pironman 5 max

Hello everyone,

I have a Home Assistant OS that works well on an SSD in a Pironman 5 Max enclosure.

I have a second SSD formatted as ext4, which appears but is not mounted.

I would like to mount it so I can use it for media storage and for Immich.

I need your help. Thank you.

r/personalfinance z13579z

Company offering new life insurance benefit, not sure how to evaluate.

Admittedly, not great with my personal finances. I'm in my mid 30s, healthy. My spouse is similar age and we have a young child.

This is the description from the sign-up page for this special enrollment period. Can anyone tell me if there's value in this or something I should be considering? I don't understand what this product is or if it's of any value.

How does the policy work?

  • Option A ($6.30/mo): $10,000 long-term care total available, paid as a $4,000 initial lump sum plus $1,000 subsequent monthly payments; $10,000 life insurance
  • Option B ($17.42/mo): $25,000 long-term care total available ($10,000 lump sum plus $2,500/month); $25,000 life insurance
  • Option C ($34.85/mo): $50,000 long-term care total available ($20,000 lump sum plus $5,000/month); $50,000 life insurance

This hybrid policy combines long-term care and permanent life insurance. Any benefits not used for long-term care during your lifetime will be paid to your beneficiaries as a death benefit.1

Choose from the four coverage options below (spouse coverage may not exceed the employee’s elected amount). To review the pricing for each option based on your spouse's age, scroll to the bottom of the page and answer the two Yes/No questions.

*The first monthly payment after the 90 day eligibility period is structured to pay 40% of the life insurance amount.

  • LTC benefits pay a fixed amount regardless of your actual expenses.
  • Each payment received for long-term care will reduce your life insurance amount.
  • If you never need long-term care, the full life insurance amount will be paid to your beneficiaries.1
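Reading the bullets together with the footnote, the quoted numbers look internally consistent: the initial lump sum is 40% of the life insurance amount, the monthly payment is 10%, and each long-term care payment draws the death benefit down. A toy check of that reading (my interpretation of the quoted figures, not the insurer's documentation):

```python
# Option A, per my reading of the quoted table and footnote.
life_insurance = 10_000          # death benefit
ltc_total = 10_000               # total long-term care pool
initial_lump_sum = life_insurance * 40 // 100   # 4,000 (the 40% footnote)
monthly = life_insurance * 10 // 100            # 1,000

death_benefit = life_insurance - initial_lump_sum
paid = initial_lump_sum
months = 0
while paid + monthly <= ltc_total:              # draw the pool down
    paid += monthly
    death_benefit -= monthly
    months += 1

# The whole $10,000 pool is exhausted after the lump sum plus 6 monthly
# payments, leaving a $0 death benefit for beneficiaries.
print(paid, months, death_benefit)
```

In other words, if you draw the full LTC pool, nothing is left for beneficiaries; if you draw none of it, the full life amount pays out, which matches the last bullet.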
r/SideProject just_keith_

Tech cofounder here, looking for a project to work on.

Hello, Keith here.

I'm a developer, worked on 8 projects in total, 3 are profitable.

Looking for someone with an idea but needs a technical builder to help with development.

I'm a good fit if:

  • You have a clear idea and know what features you need
  • You have a $5K+ budget and can pay 30% upfront
  • You value speed and want to test with real users this month
  • You're willing to work closely with me (I'm hands-on, not an agency)

I'm NOT a good fit if:

  • You want to "partner for only-equity"
  • You want the cheapest option (I charge a premium for speed and quality)

Portfolio: keithkatale.com

Shoot me a DM if that's you

r/ATBGE gogosrage

To infinity and beyond

r/LiveFromNewYork Low_Preference2952

DECISIONS DECISIONS!!!

I’m so interested to know everyone’s all time favorite SNL sketch!! If you were only able to watch 2 sketches of SNL, what would they be?! I need a laugh today.

r/Adulting srrmm

Limited or full tort for PA driver? Every one around me has limited

Every other thread here says GO FULL TORT.

Okay!

The difference is $134 a year: limited is $525, full tort is $659. I know I'll get comments that say just pay it, and I can, but I'm left wondering how every other adult I know doesn't.

Every other adult I know has limited, I have asked... I am 25F and drive a 2011 Corolla with 116k miles (no issues, thank god). I had limited because I didn't know any better, but I'm trying to be more responsible, so I looked for advice here and it says full tort, okay. I ask every other adult I know (coworkers, neighbors, PCP doctor, therapist (just conversation)) and they said they have limited, and some even pulled up their insurance to show me: just limited. So I'm asking myself why? The only person that has full is my boss lol. These people usually give me great advice on other adult/house responsibilities. They just say oh, I'm not going to sue anyone, or that they have an old car and don't care.

r/OldSchoolCool RyanWalkerok

Baby crawling race at a public event, New Jersey, 1955

r/whatisit Ender_154

Any ideas what this may be

About the size of a ping pong ball but it’s not a ping pong ball. Logo on top, flat on the bottom with an expiration date. It doesn’t do anything that I could see and the expiration isn’t until 2027 🤷‍♂️

r/Damnthatsinteresting 21MayDay21

2 baby alligators next to each other, one albino and the other melanistic.

r/SideProject GoldGroove06

I built a tiny tool to remove a boring 30-minute task from my workflow

I’m primarily a developer, but I often end up doing design-ish work for projects, landing pages, client sites, etc., and whenever I asked for a brand logo or an asset, I'd get:

- PNG

- blurry JPG

- screenshot from website

- WhatsApp-compressed image

This one thing kept happening again and again. So every single time, I had to open Illustrator, use Image Trace, and then spend 20–30 minutes cleaning up paths with the pen tool to make it usable as an SVG. It felt very manual.

After the 10th time doing this, I got tired and built a small tool for myself that converts these logos into clean SVGs in a few seconds. I honestly made it just to avoid doing this boring step again, but I figured this probably happens to designers way more than it happens to me.
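For the curious, the core raster-to-vector step can be illustrated in a few lines: trace runs of dark pixels and emit SVG rectangles. This toy sketch is not the actual tool (real tracers like potrace or Image Trace fit smooth curves instead), just the basic idea:

```python
def mask_to_svg(mask, scale=10):
    """Naive raster-to-vector: one <rect> per horizontal run of dark
    pixels. Real tracers fit Bezier curves; this just shows the idea."""
    h, w = len(mask), len(mask[0])
    rects = []
    for y, row in enumerate(mask):
        x = 0
        while x < w:
            if row[x]:
                start = x
                while x < w and row[x]:   # consume one run of dark pixels
                    x += 1
                rects.append(f'<rect x="{start * scale}" y="{y * scale}" '
                             f'width="{(x - start) * scale}" height="{scale}"/>')
            else:
                x += 1
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{w * scale}" height="{h * scale}">' + "".join(rects) + "</svg>")

# A tiny 3x3 "T" logo: three runs of pixels -> three rects
logo = [[1, 1, 1],
        [0, 1, 0],
        [0, 1, 0]]
svg = mask_to_svg(logo)
```

The hard part the post describes, cleaning up jagged paths, is exactly what this naive approach leaves behind, which is why curve-fitting tracers exist.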

So sharing it here in case it saves someone else the same headache.

You can try it here:

https://weavstudio.goldgroove06.xyz/convert

Would love to know if this is a common annoyance for you all too.

r/MMA Existing-Sky9914

[NBA] Isaiah Stewart throws a punch, but Miles Bridges perfectly ducks under it and attempts a double-leg takedown. However, the technique doesn't land properly, and Stewart stays on his feet.

r/Adulting MonitorOk1351

I've lost faith in modern society, tbh.

Unable to get entry-level roles through basically no fault of my own, wages falling far behind the cost of living, and many jobs that are necessary for society underpaid compared to jobs that do fuck all for society.

Truth is, the game is finished. This is the end of the line, for capitalism and for society.

I just withdrew all my money from my bank. I'm not letting them use my money like a toy to hand out as loans. We should all do that, really. Fuck them.

Going to be buying a safe and storing my dollar bills inside. Every week I get a paycheck from my minimum wage job, I'll withdraw it and throw it into the safe.

The moment society loses the faith of one person of the next generation, it's lost. The moment one domino starts to fall, all dominos will fall given enough time. Entropy cannot be denied, nor stopped.

Society no longer needs workers, nor does it need the next generation. That's been made very, very clear with how difficult it is to get interviews to enter the workforce.

r/personalfinance Unlucky_Two_3927

I had a 401k with my old job but my current job doesn't offer one, what do I do with the money?

I always knew that if your job offers a 401k you should opt in and match the percentage every month. I stopped working for them in Dec '22 and I have about $9k sitting there. I have my login and all, but I don't know what to do with the money. Any suggestions?

r/me_irl blahbluhblee1

Me_irl

r/homeassistant Similar-Quiet-6796

Tuya pet feeder not searchable in app

Hi, can someone give advice on how I can reconnect my Tuya pet feeder in the app? I accidentally removed the paired device and now I'm having a hard time adding it again. I have tried resetting both my phone and the pet feeder.

The pet feeder shows slow flashing, but I cannot find the hotspot “SmartLife-XXXX”. I'm stuck. Please help

r/leagueoflegends SpectralPurple

Guys i am curious, which aspect of the game do you personally enjoy the most??

  1. Dominating lane through mechanical outplays
  2. Strategic macro play: rotations, objectives, vision
  3. Champion mastery and one-tricking
  4. Teamfighting and clutch late-game moments
  5. Lore, champions’ themes, and world-building
  6. Ranked grind and visible progression (LP, tiers, improvement)
r/aivideo Responsible-Movie-90

When Harry Potter becomes Harry Puttar 😂

r/TwoSentenceHorror peachrecruitment

John sat down in the snow, slowly pulled his soaked boot off and flipped it over to remove the rock that had been bugging him for the last five miles.

It dropped into the fresh powder, and he chuckled as his exhausted brain slowly processed that the little black object was shaped very much like a pinkie toe.

r/comfyui GoldenShackles

It's fun to see variations of your own home

This isn't ComfyUI-specific, but I wasn't sure where to post. I'm loving using Qwen VL to describe my kitchen, bedroom, living room, etc. Then with various models and checkpoints I add some kinky visitors and scenarios, including watching a small nuclear explosion in the background from the balcony and, separately, massive indoor flooding.

r/AI_Agents npc_gooner

Deep research just saved me from PPT hell

I work in the EV industry and lately my job is drowning in reports and presentations. Every week there is a new client deck, an internal summary, or some urgent update that has to be in PPT because management loves it. I tried ChatGPT but the accuracy and formatting were rough, and PPT generation was basically unusable.

Then I stumbled upon the fact that Atoms' free tier actually includes its deep research feature. I hadn't considered it before, since it's primarily marketed as a vibe coding tool, and I'd heard you needed to purchase credits to run projects. But if its deep research is free, it's worth trying on its own, especially since ChatGPT requires a paid upgrade. So I figured, why not give it a shot. From a no-cost perspective, it's actually pretty decent. It can pull live data, compare sources, and build an outline that actually makes sense. It can also spit out a full PPT that looks clean. Now most of my weekly reports are Atoms-generated; I just polish the tone or tweak visuals. My only wish is that they don't start charging enterprise-level prices, because at this point, I'd be doomed without it.

Anyone else using AI tools for decks or reporting?

r/Lost_Architecture Lma0-Zedong

Communications Palace, 20th century. Managua, Nicaragua

r/ClaudeAI Own-Equipment-5454

Did anthropic just replace sonnet 4.5 with opus 4.5

This is absolutely unreal, if we are able to get opus 4.5 at sonnet 4.5 limits, all my issues with claude code pricing will just evaporate.

or is this an issue with my setup?

r/AbandonedPorn Dirty_Delta

An old cabin I hiked out to years ago

r/HistoryPorn Kstantas

Soviet Estonian official Siim Kallas on holiday with his daughter Kaja, 1980s. [900x600]

r/Ghosts Holiday-Data9839

Poltergeist in Camera 😨 Real video in California

r/Jokes Historical-Buff777

Why did the chicken cross the Möbius strip?

To get to the other… eh? Hang on!

r/Adulting Physical_Traffic2376

If my rent is $950 (electricity and wifi not included, split between tenants), would $4k monthly income be okay?

Hi. I’m 21F, an LPN in CA. My relative told me to move out here in March, unless I choose to work 3 days as a caregiver and study prereqs for nursing, so my full-time right now will be just PRN. I just started last December.

Additionally, I don’t have a car yet; I just take an Uber home from work, which is around $22–$25 one way, three days of work a week. I pay phone bills of $192 (four phones and lines). I have a $9k FAFSA loan, and I'm also planning on saving up for a car, even just a second-hand one around $10k. What do you guys think? Can I have some advice? :)
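Putting the stated numbers side by side (assumptions: Uber one way only at roughly $23, three shifts a week, about 4.3 weeks per month; the utilities split is left out because no amount is given):

```python
# Rough monthly budget from the numbers in the post.
income = 4000
rent = 950
phones = 192            # four phones and lines, as stated
uber = 23 * 3 * 4.3     # ~$23 one way x 3 work days x ~4.3 weeks/month

leftover = income - rent - phones - uber
print(round(leftover))  # before food, the utilities split, and savings
```

On these assumptions roughly $2,500 a month remains, which is what would have to cover food, utilities, loan payments, and the car fund.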

r/PhotoshopRequest M0stlyH4rml3ss

Please make this a Regency Rave!

Our cast dance party.

r/painting MorallyGrayFriend

Beginner with acrylics and art, a very personal painting

After practicing with acrylics a bit, copying like 6 or 7 illustrations into canvases, I decided to do my first original.

I would love to hear what impression or feelings it may cause in someone who doesn't know the context.

If you wanna comment before knowing the theme, stop reading here

So some months ago my mother died. I spent the last 2 years taking care of her and loving her dearly. When I was a teenager I struggled to connect with my parents and would spend a lot of time in my room. The POV of this painting is from my room, looking towards the kitchen through a different room. That's where I often saw her.

The picture frame in the painting does contain an old pic of her in reality, but due to the angle I didn't think I could add the silhouette correctly, and thought that leaving both the kitchen and the frame empty was more meaningful.

Thank you for your time, if this house ever burned I would only care about saving this painting

r/Frugal Pleasant-Top-4977

How to deal emotionally with the fact that I got scammed.

Hello everybody. As the title reads, I got scammed, and I got scammed hard, out of 900 euros.

Now I know. I'm incredibly ashamed of myself, and I feel guilty, even though I worked for the money myself. I got blackmailed; the person forced me to send the money or they would send local authorities to me. I'm still pretty young, only 18, and I didn't know how to react, so I sent him the money.

Now I realise how stupid, impulsive, and naive it was. And I don't know how to deal with it emotionally. It's not like I struggle financially, but just the idea that a whole month's worth of work is just gone. It hurts and I feel like a failure. Does anyone have advice on how to accept it? The money's gone. I contacted the bank and even the police, but neither can do anything.

Any or all advice is appreciated

r/meme ProfessionWide3505

What are you removing?

r/TwoSentenceHorror fugetooboutit

I followed the group of campers hiking in the woods but they seem not to have noticed my presence yet

To this day, I have never gotten the appeal as to why ghosts follow people around so much

r/shittysuperpowers TheNamesBart

Your mouth is a stapler

There's a stapler hole in the back of your incisor teeth where the staples come out. Every time you bite, you staple. To refill the stapler, put the refills in your nostrils.

r/LoveTrash Icy-Book2999

Record Profit Year!

r/interestingasfuck Dr3ws3ph3r

Saturn's A ring being shaped by the moon Daphnis

r/interestingasfuck wiseman9095

Earth's curve seen from the top of K2 (Pakistan) at 8,611 meters

r/homeassistant jsn0327

Options for managing z-wave door locks?

Are the only options for managing lock codes in HA Lock Code Manager, which seems to be too basic, and KeyMaster, which seems to be overkill?

My issue is that Lock Code Manager doesn’t provide notifications when a lock code is used and KeyMaster produces 800+ entities per lock. I have 4 locks to manage

I’m also having an issue with both of the integrations setting lock codes on my Schlage BE469 Locks.
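If both integrations feel wrong-sized, one middle path for the notification gap is to skip the code managers and listen for the lock's own events. A hypothetical Home Assistant sketch for Z-Wave JS locks: the exact `event_label` text and parameter names vary by lock model and firmware, so check what your BE469s actually emit under Developer Tools → Events, and `notify.mobile_app_my_phone` is a placeholder service.

```yaml
# Hypothetical sketch: notify when a Z-Wave JS lock reports a keypad
# unlock. Field names below are assumptions; verify against the real
# zwave_js_notification events your locks fire.
automation:
  - alias: "Notify on lock code use"
    trigger:
      - platform: event
        event_type: zwave_js_notification
    condition:
      - condition: template
        value_template: >
          {{ trigger.event.data.event_label == 'Keypad unlock operation' }}
    action:
      - service: notify.mobile_app_my_phone
        data:
          message: >
            Lock opened with code slot
            {{ trigger.event.data.parameters.userId }}
```

This keeps the entity count at zero per lock; the trade-off is that you manage the codes themselves elsewhere.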

r/Art darkened_m00d

M00d, darkened_m00d, digital art, 2026

r/Jokes WolfmanSG

The Hubby and Wife

After 35 years of marriage, a husband and wife came to see a therapist. When asked what the problem was, the wife went into a tirade listing every problem they had ever had in the years they had been married. On and on and on: neglect, lack of intimacy, emptiness, loneliness, feeling unloved and unlovable, an entire laundry list of unmet needs she had endured.

Finally, after allowing this for a sufficient length of time, the therapist got up, walked around the desk and after asking the wife to stand, he embraced and kissed her long and passionately as her husband watched - with a raised eyebrow. The woman shut up and quietly sat down as though in a daze.

The therapist turned to the husband and said, "This is what your wife needs at least 3 times a week. Can you do this?" "Well, I can drop her off here on Mondays and Wednesdays, but on Fridays, I fish"

r/SideProject lpsx

I built a calorie and macro tracking app

It's not a novel idea, but I built Makko, a calorie tracking web app with exactly the features I wanted, some of which I couldn't find elsewhere, so I wouldn't have to pay for them.

  • Track calories, macros, weight, exercise, fasting
  • Built-in food database with barcode scanning
  • Projection model for when you will be at your target weight based on your progress
  • Voice tracking
  • Feature where you can upload a menu or tell the app what restaurant you are eating at, and have it tell you what your best choices are to hit your targets
  • A bunch more stuff
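A projection model like the one listed above can be as simple as a least-squares line through recent weigh-ins. This is a generic sketch of that idea, not Makko's actual implementation; `project_target_day` and the sample log are made up for illustration:

```python
def project_target_day(entries, target):
    """Fit a least-squares line through (day, weight) pairs and return
    the day the trend line crosses `target`, or None if it never will."""
    n = len(entries)
    xs = [d for d, _ in entries]
    ys = [w for _, w in entries]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    if slope == 0:
        return None                     # flat trend: target never reached
    intercept = my - slope * mx
    day = (target - intercept) / slope
    return day if day >= xs[-1] else None  # only project forward in time

# Weekly weigh-ins losing ~0.5 kg/week, aiming for 78 kg
log = [(0, 82.0), (7, 81.5), (14, 81.0), (21, 80.5)]
eta = project_target_day(log, 78.0)     # trend line hits 78 kg on day 56
```

Real apps smooth the noise first (rolling averages, outlier rejection), but the fit-and-extrapolate core is this small.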

I started off just building it for myself and my family and friends who track their calories/macros, but since I went through the effort I figured I would make it more broadly available. I'm looking for 10-15 more beta testers to give lifetime licenses to in exchange for feedback, so please let me know if you would like to be one of those people!

Thanks for checking it out! Website is https://makko.app

r/PhotoshopRequest ACAB_FDT

New wheels on my car, please?

Just bought a new car. I’m wanting to buy these wheels but want to see what they look like on the car first. Can you please make the new wheels a little bigger than the ones currently on the car but keep the tire size the same. Greatly appreciated in advance!!

r/whatisit mouthguitar

What is going on here?

r/Art rebordacao

Love Is in the Air, Rebordacao, Needlework and Watercolor, 2024 [OC]

r/HistoryPorn DiaboDeCapote

Freddie Mercury at the first Rock in Rio, January 1985 [1200×675]

r/homeassistant eyes_are_real

Prism desktop v1.2 - now supporting linux!

Hey everyone, I got a lot of suggestions from many of you the last time I showed my Home Assistant tray dashboard. Since then I have tried to implement some of them and fix some bugs.

The grid is now updated to support many button and widget sizes, which enables camera tiles and much more! You can just grab the corner of any button and stretch it out.

New features:

  • Linux support
  • Resizable buttons
  • Dynamic grid layout
  • Button duplication
  • Camera support (live stream & snapshot tiles)
  • Weather widget

Improved:

  • Button name label wrapping
  • Expanded grid size limit
  • Light mode UI improvements
  • Minor UI tweaks
  • Refactored Dashboard.py

Available on github

r/shittysuperpowers alhamzzza

You can take control of a random Microsoft Teams screen share in India for 20 seconds a day and you don’t get to choose which meeting or topic.

r/mildlyinteresting klitchell

The amount my in-laws' house has sagged/settled over the years.

r/BrandNewSentence honest_jamal

goon to those triangle breasts

r/explainlikeimfive Cocodrool

ELI5: altitude affects taste

I put it under biology, but I'm probably wrong.

I've organized rum tastings in many places, and the closer I get to sea level, the more alcohol seems to be a main and sometimes invasive flavor. The higher we go, alcohol seems to not be as prominent. Why?

r/ForgottenTV OrneryAttorney7508

Benny Hill Down Under 1977

I can't believe I've never heard of this before.

r/aivideo Nunki08

Entire Dragon Ball Super episode by SeeDance 2: Bulma Vs Beerus

r/SideProject Background-Set-6581

Best AI tools for creating a business plan?

I’m working on a business idea and want to create a proper business plan (market overview, SWOT, basic financials).

I know I can prompt ChatGPT manually, but I’m wondering:

Are there any AI tools or platforms that already do this well out-of-the-box?

Curious what people here are actually using (or avoiding).

r/StableDiffusion Drop_Prompt

How are you organizing your Stable Diffusion prompts?

We noticed something while working with Stable Diffusion prompts.

Almost everyone creates great prompts. Almost no one can find them later.

They end up scattered across:
• notes apps
• chat histories
• text files
• random folders

So when you want to reuse a style, tweak a composition, or recreate a result — you start from zero again.

That gap is what we’re focusing on.

We’re building a simple prompt library designed specifically for workflows like Stable Diffusion:
• save prompts cleanly
• organize by style / subject / use case
• add notes for settings (CFG, steps, sampler, etc.)
• search instantly
• reuse and iterate as models evolve

No hype. No “AI magic”. Just infrastructure for prompts.

Still early, but learning fast from how real users actually work.

Curious: How are you currently storing and reusing your Stable Diffusion prompts? What breaks first in your setup?
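The store described above (save cleanly, tag by style/subject, keep settings notes, search instantly) can be sketched in a few lines of stdlib Python. This is an illustrative toy, not the product's actual implementation; all table and function names are made up.

```python
import sqlite3

# Minimal prompt library: save prompts with tags and generation settings,
# then search them with SQL LIKE. Schema and names are illustrative only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE prompts (
    id INTEGER PRIMARY KEY,
    text TEXT NOT NULL,
    tags TEXT,          -- comma-separated style/subject tags
    settings TEXT       -- free-form notes: CFG, steps, sampler
)""")

def save_prompt(text, tags="", settings=""):
    con.execute("INSERT INTO prompts (text, tags, settings) VALUES (?, ?, ?)",
                (text, tags, settings))

def search(term):
    # Match on either the prompt text or its tags.
    rows = con.execute(
        "SELECT text, settings FROM prompts WHERE text LIKE ? OR tags LIKE ?",
        (f"%{term}%", f"%{term}%"))
    return rows.fetchall()

save_prompt("cinematic portrait, golden hour", "portrait,lighting",
            "CFG 7, 30 steps, DPM++ 2M")
save_prompt("isometric city, pastel palette", "environment",
            "CFG 5, 25 steps, Euler a")

print(search("portrait"))
```

Even this toy shows why a dedicated store beats scattered text files: the settings travel with the prompt, and search is one query instead of grepping random folders.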

r/Damnthatsinteresting OkaTeluguAbbayi

The level of detailing on the ceiling of a 900 year old Indian temple [OC]

r/personalfinance OwlRiot4

Is it worth investing in VTSAX at the moment?

39m/husband/dad of 3. My wife and I have saved up a little over $7,000 and want to make sure we put that some place smart. I've been perusing this subreddit (and others), as well as some articles and youtube videos and am torn on what the safest option is.

Everything I'm learning suggests that an HYSA is the best fit for my family's needs, at the moment. We can put the money in, earn interest, but still have access for an emergency. Where I'm struggling is determining whether to deposit the full $7k in an HYSA or put a larger portion in and invest $1-2k(ish) in FSKAX.

My primary concern is that the market seems extremely volatile right now. While I fully intend not to touch the money (only to add to it), would I be better served waiting until my nest egg is a little bigger before peeling off part of it to invest?

r/personalfinance Isoldewinters

Collections for medical bills after incorrect insurance billing

After two years of fighting this hospital to bill my insurances correctly, including sitting on the phone with the hospital and the insurer for the first half of Thanksgiving, and after messaging them two weeks ago to ask why I still received a bill when they had told me on the last call that it was taken care of and submitted correctly (they said they would look into it), I just got a call from a debt collector this morning. I had two insurances at the time. The hospital never billed them properly, so one kept denying the claim over a billing code or something; but when we would call, they claimed they never got a bill either, so I'm not sure. Anyway, I don't know how to handle this. It's for about $580, and it was supposed to be covered. I'm struggling to pay bills right now and can't afford that. What do I do? How do I fight this? I just moved out of state less than a month ago.

r/singularity drgoldenpants

Kobe Bryant in Arcane Seedance 2.0, absolutely insane!

r/SideProject TrickAd6025

I built a marketplace for selling pre-worn socks

About a year ago I started looking into the used socks market after reading about people making decent money selling them on places like Sofia Gray and All Things Worn. The more I looked into it the more I realised there wasn’t really a proper UK focused platform for it. Most of the existing ones are American, the fees are mad, and honestly a lot of them feel a bit dodgy from a seller perspective.

I’m not a developer by trade, I run a scaffolding company in Wales, but I’ve spent the last year teaching myself enough to build a proper marketplace from scratch.

It’s called www.Sole-Obsession.co.uk and it went live this week. I’ve checked and checked and checked it but I know there will be some bugs in it somewhere. Would appreciate anyone testing it out even if they are not interested in selling socks but everyone is always looking for a side hustle somewhere.

The way it works is pretty simple. Sellers list their socks, buyers browse and buy, we generate the shipping label automatically through DPD so you don’t have to mess about at the post office, and everything gets sent out in plain discreet packaging. Sellers keep 70% of every sale which from what I’ve seen is better than most of the alternatives.

Right now the site is brand new and obviously empty which is the chicken and egg problem every marketplace has. So I’m looking for 10 people to come on as founding sellers to help get things moving. In return I’ll give you 6 months of free premium status which means you’ll be featured on the homepage and listed first in the sellers directory. That normally costs about ten quid a month so it’s not a bad deal for being early.

r/Adulting Low_Preference2952

Tired today but still made it! 🤌

Gym motivation needed!

r/meme HoseanRC

He's right! Gotta grind more!

r/homeassistant AdventurousMaybe2663

How to add the stream of an unsupported camera?

Hello

I want to add the video stream of my outdoor camera, but the device isn’t supported by HA (Aosu webcam).

So I was thinking of keeping the video stream open on an iPhone and then streaming the iPhone screen to HA (just like the Android IP Webcam app).

Is such a thing possible?

r/meme Own-Blacksmith3085

Tesla Autopilot < Medieval Equine Navigation

r/personalfinance Hungryhungryhippos2

[USA] Paying off student loans while purchasing a home

I am buying a house with my partner. I have a good credit score (above 800). I have one student loan left with about 900 bucks on it. I'm set to pay this off this month. We are now aiming to close in March.

Is it wise to hold the loan to prevent any credit dips? The student loan is my longest credit on credit karma.

r/SideProject blotwupw

I wasn’t making progress on my side projects, so I built a simple app to track my actual effort

I built this for myself after realizing I wasn’t being consistent.

Previously I’d been tying together a timer tool and a physical calendar—now it’s all in one place.

solotrack.io

r/SideProject kremaytuz

Just shipped my first open source project and I'm kinda hyped about it

So I've been working on this CLI tool called Plumber https://github.com/getplumber/plumber and honestly didn't expect to care this much about pipeline compliance lol

Basically it helps you make sure your GitLab CI/CD pipelines are actually compliant with security standards. Turns out a LOT of pipelines are just... not. And nobody really notices until an audit happens and everyone panics.

The whole thing started because I kept seeing the same compliance issues pop up everywhere. So I built a tool that checks your pipelines and tells you what's wrong + how to fix it. Takes like 5 minutes to run.

We actually have an enterprise version, but we just open sourced the core CLI because we wanted to give back to the community. Figured everyone deserves access to proper compliance tooling, not just companies with budgets for it.

Also wrote up a Medium post explaining the problem if anyone's interested in the why behind it: https://medium.com/@moukarzeljoseph/your-gitlab-pipelines-are-probably-non-compliant-heres-how-to-fix-that-in-5-minutes-5009614a1fb1

The open source version is fully functional and free to use. Been a wild learning experience going from internal tool to something the whole community can benefit from.

Anyway, if you work with GitLab pipelines, check it out. And if you want to contribute, PRs are welcome (just merged my first external contributor PR this morning!) still figuring out this whole maintainer thing 😅

Let me know what you think!

r/LocalLLaMA Clean-Appointment684

running llm on 3060 gpu

Hello everyone. I'm trying to run Qwen3-Coder-Next on my RTX 3060 (12GB VRAM). I also have an i7-13700K and 32GB RAM.

I'm using the following command to just barely fit the model on the GPU: ./llama-bench -m models/Qwen3-Coder-Next-Q2_K_L.gguf -fa 1 -ngl 99 -ncmoe 29 -v

I'm curious how best to split the model across VRAM + RAM. I'm hoping for output of around 20 t/s.

any suggestions or tips would be much appreciated.

dont be mad, just trying to learn new things

r/SideProject SSJ2Teen-Gohan

POV: Building a SaaS in 4 days

r/AI_Agents Away-Contribution689

Project help needed! Who can save my campus recruitment?

I've created a minimal AI agent implementation related to my internship project, a RAG-based intelligent customer service system. I want to be able to explain this agent during job interviews and understand its gaps compared to enterprise-level implementations. Below is a flowchart; I can also provide the code if needed. I used LangChain and LangGraph for orchestration and FastAPI to expose the service.

Thanks very much to anyone willing to help!

r/creepypasta Flaky_West_4472

room 13

r/AI_Agents Safe_Flounder_4690

Structuring AI Agents for Scalable, Reliable Business Automation

Most companies building AI spend all their energy debating whether GPT-4, Claude, or Gemini is the best model, but they often overlook the bigger picture: how the AI agents themselves are structured. Agent architecture is not a minor technical choice; it's a design decision that determines whether your AI system can scale effectively, remain reliable under complex workflows, and consistently deliver business value. Single-agent systems often fail because they try to do everything at once, creating bottlenecks and increasing risk, while multi-agent architectures distribute responsibilities, isolate failures, and allow for modular improvements over time. Designing agents with clear roles, communication patterns, and fallback mechanisms ensures that enterprise workflows, autonomous operations, and decision-critical systems run smoothly. The companies seeing real results with AI aren't just picking better models; they're building smarter agent structures that grow with the business, reduce errors, and streamline automation. Focusing on architecture first makes scaling easier, improves reliability, and ultimately maximizes ROI on AI initiatives.
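The structural claims above (clear roles, isolated failures, fallback mechanisms) can be made concrete with a tiny stdlib-only sketch. Everything here is hypothetical and framework-free; it just shows the shape of the argument: one narrow responsibility per agent, and a router that degrades gracefully instead of letting one error take down the workflow.

```python
# Illustrative multi-agent structure: each agent has one narrow role,
# failures are wrapped and isolated, and the router applies a fallback.
# All names are made up; this is not a real agent framework.

class AgentError(Exception):
    pass

class Agent:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # one narrow responsibility per agent

    def run(self, task):
        try:
            return self.handler(task)
        except Exception as exc:
            # Isolate the failure so the router can react to it.
            raise AgentError(f"{self.name} failed: {exc}") from exc

class Router:
    def __init__(self, agents, fallback):
        self.agents = agents      # role name -> Agent
        self.fallback = fallback  # invoked when an agent fails

    def dispatch(self, role, task):
        try:
            return self.agents[role].run(task)
        except AgentError as err:
            return self.fallback(role, task, err)

extractor = Agent("extractor", lambda t: t["payload"].upper())
broken = Agent("classifier", lambda t: 1 / 0)  # simulated failure

router = Router(
    {"extract": extractor, "classify": broken},
    fallback=lambda role, task, err: f"fallback for {role}",
)

print(router.dispatch("extract", {"payload": "invoice"}))   # INVOICE
print(router.dispatch("classify", {"payload": "invoice"}))  # fallback for classify
```

The point of the sketch: the classifier's crash never propagates past the router, and swapping in a better classifier later touches one entry in the role map, which is the modularity argument in miniature.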

r/photoshop AdSuspicious6021

How to get this effect?

How do I get this effect from a photo? The Monotone mode isn't doing anything lol. Thanks

r/Jokes AskewdJackassery42

When the wind blows.

So, there is this fish, and it is sitting in a pond looking up at a branch.  On this branch is a Fly.  The Fish is thinking to itself, if that branch drops 5 inches I could go for that Fly and have myself a little snack.  

Just on shore of the pond, there is a Raccoon, and it is looking at that branch.   It is thinking to itself, if that branch drops 5 inches… then the Fish will go for the Fly and I’ll go for the Fish and I’ll have myself some Lunch.  

Just a few feet away in a bush there is a Wolf, and it is looking at that branch, and it is thinking to itself, if that branch drops 5 inches… then the Fish will go for the Fly, the Raccoon will go for the fish and I’ll go after the Raccoon and bring my pack a meal.  

Behind a little rise there is a Bear, and it is looking at that branch, and it’s thinking to itself… if that branch drops 5 inches… then the Fish will go for the Fly, the Raccoon will go for the fish, the Wolf will go for the Raccoon and I will go for the Wolf and have some food for my Cubs.

On the other side of the pond, there is a Hunter, and it is looking at that branch, and it’s thinking to itself… if that branch drops 5 inches… then the Fish will go for the Fly, the Raccoon will go for the fish, the Wolf will go for the Raccoon, the Bear will go for the Wolf and I’ll take a shot at the Bear and be King Hunter for a Month. 

Near the feet of the Hunter is a Mouse, and it is looking at that branch, and it’s thinking to itself… if that branch drops 5 inches… then the Fish will go for the Fly, the Raccoon will go for the fish, the Wolf will go for the Raccoon, the Bear will go for the Wolf, the Hunter will take a shot at the Bear and that sandwich will fall out of his pocket and I’ll have food for a week.  

Behind the Mouse, hiding in a tree is a Cat,  and it’s thinking to itself… if that branch drops 5 inches… the Fish will go for the Fly, the Raccoon will go for the fish, the Wolf will go for the Raccoon, the Bear will go for the Wolf, the Hunter will take a shot at the Bear, the sandwich will fall out of his pocket, the Mouse will go for the sandwich and I’ll go for the Mouse and have a snack to take to back to my Human.  

And it happened, the wind blew across the Pond and the branch dropped 5 inches.  The Fish went for the Fly, the Raccoon went for the Fish, The Wolf went for the Raccoon, the Bear went for the Wolf, *BANG* the Hunter took a shot at the Bear, the sandwich falls out of his pocket, the Mouse went for the sandwich, the Cat goes for the Mouse, misses and falls into the Pond.  

What is the moral of this story?  

When the Fly drops 5 inches, the Pussy gets wet.  

r/meme VeterinarianPrior835

The problem of all fandoms

Hello world !

r/SideProject Background-Set-6581

Can You Actually Sell a Business Idea or Business Plan?

I have a solid business idea and a complete business plan already done.

I’m wondering: are there any legitimate marketplaces where you can pitch or sell a business idea (with a plan) to potential investors or operators?

Not talking about fundraising platforms for start-ups that are already operating, more like idea stage concepts.

Curious if this already exists or how people usually handle this.

r/SideProject JHjertvik

I built a single-file, no-dependency Web Component that turns mouse movements into physics-based CSS variables.

I wanted to share a small, open-source Web Component I just released to help make UI interactions feel more "alive" without bloating your project.

Click here to read more and see some cool demos

Gimli Mouse Tracker on GitHub

r/AI_Agents Electrical_Soup8404

Open Source AI Skills Directory

Cookbook for AI Agents

For a long time, I felt like I was stuck on a treadmill. Every time I started a new AI project, I’d spend the first two weeks doing the exact same thing:

  • Setting up lead research flows.
  • Writing the same 50 lines of code for data extraction.
  • Wrestling with Claude and Cursor to get them to understand my stack.

I realized I wasn't being a developer. I was being a plumber.

The "Aha!" Moment: AI agents are only as good as the context they have. But right now, that context is fragmented across GitHub, MCP servers, and random .cursorrules files. We are all scavenging for the same building blocks.

So, I decided to stop scavenging. I wanted a "Cookbook"—a single source of truth where the most complex SaaS growth loops are already solved and ready to ship.

Today, I’m opening that Cookbook to everyone.

Skene Cookbook is my personal collection of 700+ AI skills and production-ready Skill Chains.

Why this changed everything for me: Instead of starting from zero, I start at the 80% mark. I don't "write" a referral system anymore; I deploy a Skill Chain that has already been hardened and audited.

The Recipes inside:

  • The Viral Growth Loop: Automated invites and referral tracking.
  • The Churn Prevention Agent: Predicting friction before the user leaves.
  • The Unified Context: One command to ground your AI tools in your actual code.

If you use Claude or Cursor, you can save months of trial and error today:

Bash

npm install /skills-directory
npx skills-directory install --target all

Why am I making this open source? Because the "AI Wrapper" era is over. The winners of the next two years will be the teams that ship the fastest. I want to see what you build when you don't have to worry about the plumbing.

r/brooklynninenine ProudnotLoud

Short lived but a very sweet Rosa moment!

r/MMA textorix

The biggest female MMA talent on the German scene, Alina Dalaslan, will fight at the Berlin event

  • Before starting her pro career she worked security at OKTAGON
  • In just 10 months since turning pro she has fought 5 times and won each time
  • 4 of those 5 wins are KO/TKO finishes
  • All her opponents were more experienced than her
  • Highlight video of her career so far
r/painting libberkib

Bushtits (Gouache and Ballpoint Pen on 12 inch Wood Panel)

r/Damnthatsinteresting IncomingBroccoli

Stars revolving around a super massive black hole

r/Damnthatsinteresting grasshopper3307

Whale eating his catch.

r/Seattle OkAnalysis721

Is Raising Cane's having a soft opening tomorrow?

I keep searching for the opening date and it says Feb 17 everywhere; any idea why they would post this? Lol

r/HistoryPorn DiaboDeCapote

The crowd at the first Rock in Rio, Rio de Janeiro, 1985 [956×500].

r/LocalLLaMA Cool-Photograph-8452

Question about SSD offload in llama.cpp

Has anyone here ever implemented SSD offload for llama.cpp, specifically using SSD as KV cache storage to extend effective context beyond RAM/VRAM limits? I’m curious about practical strategies and performance trade-offs people have tried. Anyone experimented with this?
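One way to think about the question is the access pattern: keep a hot window of KV blocks in RAM and let older blocks live in an SSD-backed file, mapped in on demand. Below is a toy stdlib-only illustration of that pattern (memory-mapped file as block store); it is emphatically not llama.cpp code, and real KV blocks would be large tensors, not bytes.

```python
import mmap
import os
import tempfile

BLOCK = 4096  # pretend each context block's KV state is 4 KiB

class SpillingKVCache:
    """Toy block store backed by an SSD file via mmap.

    The OS page cache decides which blocks stay resident in RAM; cold
    blocks come back from disk, which is the performance trade-off the
    question is asking about.
    """

    def __init__(self, path, n_blocks):
        with open(path, "wb") as f:
            f.truncate(n_blocks * BLOCK)  # preallocate the spill file
        self.file = open(path, "r+b")
        self.mm = mmap.mmap(self.file.fileno(), 0)

    def write_block(self, idx, data: bytes):
        assert len(data) == BLOCK
        self.mm[idx * BLOCK:(idx + 1) * BLOCK] = data

    def read_block(self, idx) -> bytes:
        return self.mm[idx * BLOCK:(idx + 1) * BLOCK]

path = os.path.join(tempfile.mkdtemp(), "kv.bin")
cache = SpillingKVCache(path, n_blocks=8)
cache.write_block(3, b"\x01" * BLOCK)
print(cache.read_block(3)[:4])
```

The real engineering questions sit on top of this: which blocks to evict, how to batch reads so random SSD latency doesn't dominate decode time, and whether the token/s hit is acceptable for the extra context.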

r/PandR ProudnotLoud

Marshmallow Ron was magnificent!

r/Jokes WolfmanSG

Old Joke but classic

A woman is having an affair during the day while her husband is at work. Her 9 year old son comes home unexpectedly, sees them and hides in the bedroom closet to watch.

The woman's husband also comes home.

She puts her lover in the closet, not realizing that the little boy is in there already.

The little boy says, "Dark in here." The man says, "Yes, it is." Boy - "I have a baseball." Man - "That's nice." Boy - "Want to buy it?" Man - "No, thanks." Boy - "My dad's outside." Man - "OK, how much?" Boy - "250"

In the next few weeks, it happens again that the boy and the lover are in the closet together.

Boy - "Dark in here." Man - "Yes, it is." Boy - "I have a baseball glove." The lover remembering the last time, asks the boy, "How much?" Boy - "750" Man - "Fine."

A few days later, the father says to the boy, "Grab your glove, let's go outside and have a game of catch." The boy says, "I can't, I sold my baseball and my glove." The father asks, "How much did you sell them for?" Boy - "1,000" The father says, "That's terrible to overcharge your friends like that,.. that is way more than those two things cost. I'm going to take you to church and make you confess."

They go to the church and the father makes the little boy sit in the confession booth and he closes the door.

The boy says, "Dark in here." The priest says, "Don't start that shit again."

r/meme Nintendophile79

Glorious volume with hyaluronic acid

r/nextfuckinglevel ttatm

French figure skater Adam Siao Him Fa does an illegal backflip at the 2024 World Championships, deliberately incurring a deduction but still climbing from 19th place to the bronze medal. The backflip was legalized the very next season.

r/BobsBurgers Mofoblitz1

Which serial killer is Louise's serial killer face based on?

please no glorification of SKs obviously, but I'm still curious :P

r/AskMen Existing_Switch_4995

How do you perceive women who seem guarded or fearful around you?

I want to ask this respectfully and honestly because I’m trying to understand the other side.

I realize that I’m very cautious around men I don’t know. For example: I avoid getting on elevators alone with a man, if a man gets on, I become very alert and hold my keys, I prefer female Uber drivers and female doctors when possible, if a service technician comes to my home, I feel much safer if someone I trust is there with me.

This isn’t about thinking all men are bad. It’s more about personal safety and anxiety around being alone with someone I don’t know.

I’m working on self-defense and trying to find a healthier balance, because I know we all share the same world and I don’t want to live in constant fear.

My questions for men:

- When you notice a woman acting guarded around you, how does it make you feel?

- Do you understand where that fear comes from?

- Is there anything men wish women understood about their perspective in these situations?

I’m genuinely asking to learn and see this from your side.

r/programming Sad-Interaction2478

Python's Dynamic Typing Problem

I’ve been writing Python professionally for some time now. It remains my favorite language for a specific class of problems. But after watching multiple codebases grow from scrappy prototypes into sprawling production systems, I’ve developed some strong opinions about where dynamic typing helps and where it quietly undermines you.

r/OldSchoolCool greenbean53

June 1964 - Is this your hot grandpa?

Found tucked into an old record from a shop in Portland, Maine. Would love to send this photo to someone who knows him!

r/ARAM Lego1as96

Ryze in Mayhem

Ok, I am really tired of seeing Ryze EVERY GAME in this mode. It would have been fine if he didn't roll his best augs EVERY TIME I see him. It just becomes boring to play against, as he will end up with 1k+ AP 10k HP oneshotting everyone. He is never weak. Everyone was complaining about Mel, but for me Ryze is a pure evil and pretty toxic for this mode. It would be nice to see some changes to make him more "balanced"

r/mildlyinteresting kaamraan

We bought these 2 cat toys on the same day. The mouse has been mended 3 times.

r/automation phicreative1997

Convert blog2video: no slop, and it doesn't cost an arm and a leg

I wanted to turn my blog posts into videos. Editor wanted $30K. Built my own tool instead.

The problem:
visits plateaued. Social wants video. My best blog posts were just sitting there.

What I tried:

Editors — $300–$1,000 per video. For 50 posts? $15K–$50K.

AI video tools — Generic stock footage, robotic scripts that didn't sound like me. Expensive for long posts.

So I built something different:

Doesn't generate videos from scratch. Translates your blog posts into video, faithfully.

  • Pulls your actual post—structure, arguments, voice
  • AI breaks it into scenes
  • No stock footage—animated text, diagrams, clean layouts (built with Remotion)
  • Real voiceover (ElevenLabs)

Looks professional, not "AI content."

Converted 50+ blog posts this way. Saved tens of thousands.

First video free, no card.
Paste blog URL → script → video in minutes.

Link in comments

r/interestingasfuck 21MayDay21

A female deer in nature with its two fawns, one of which is melanistic.

r/homeassistant echatoss

Squid Proxy (forward proxy) Add-On

Hi everyone,

I wanted to share a custom add-on I’ve been working on lately. I really needed a secure proxy to run on HA and for the love of god, I couldn’t find any, so I built one:

A dedicated Squid Proxy server add-on for Home Assistant.

And: I spent quite a bit of time hardening this one to be as “stealthy” as possible.

The main goal was to create a proxy that stays transparent. It automatically strips headers like Via and X-Forwarded-For, and mimics modern browser behavior.

I’ve also built in a real-time dashboard accessible directly through the Home Assistant sidebar (via Ingress), so you can see your active connections and protocol splits (HTTP vs HTTPS) as they happen. It can handle Let's Encrypt certificates and falls back to self-signed ones automatically. No technical skills needed; everything should work from the config alone.
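The header-stripping behavior described above maps onto standard Squid configuration directives. A minimal illustrative squid.conf fragment (these are real Squid directives, but not necessarily what this add-on actually ships) could look like:

```
# Illustrative fragment only; the add-on's real config may differ.
via off               # do not add or forward the Via header
forwarded_for delete  # remove X-Forwarded-For entirely
```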

How to try it out: You can add my repository to your Add-on Store:

https://github.com/Nwus9XWK5UTy/homeassistant

Once you add the repo, you’ll see “Squid Proxy” in your list.

I’d love your feedback: This is currently at version 1.0.3, and I’m looking to refine it further. I’m particularly interested to know:

  1. Currently only built for aarch64. Any other platforms?
  2. Is the live dashboard useful, or are there other metrics you’d like to see visualized?
  3. Any bugs?

Check out the full documentation and source code on the GitHub page.

I’m happy to answer any questions here or on the GitHub issues page.

r/Seattle DefiantEvidence4326

Help! Parade traffic with kiddo procedure at Swedish

Our 18-month-old has a procedure scheduled for 11:30am at Swedish First Hill. How should I plan to travel downtown from Lynnwood? Need tips! Is the light rail the best bet?

r/space MassiveMembership534

Need Advice on Space Book

Hey, I want to get my gf a book about space. Like non fiction. She is particularly into like potential alien life, quantum physics, and she has a general curiosity about space on the whole. She isn't a physics major, she does film, so nothing over-technical.

r/linuxmemes Fair_Investment_4189

systemd is better than OpenRC, sysvinit, and runit

r/painting planetaryseraph

How much should I charge for this?

I’m looking to sell this painting to make some extra cash, but I’m not sure how to price it since I’ve never sold an art piece before. How should I go about pricing it? It’s acrylic on canvas, 16 x 12 in, and took around 15 hours.

r/maybemaybemaybe FXgram_

Maybe maybe maybe

r/photoshop brunodou1234

How can I make Alt my brush shortcut?

I can't get Alt + click-drag to work as the shortcut for increasing/decreasing brush hardness and size. I tried changing the keyboard shortcuts, but it won't let me assign Alt, so I can't create the shortcut. Does anyone know how I can fix this?

P.S.: I saw on Adobe's page that the default is to change these with keys (<>, etc.), but I find it more comfortable with the mouse.

r/programming BeamMeUpBiscotti

Making Pyrefly's Diagnostics 18x Faster

High performance on large codebases is one of the main goals for Pyrefly, a next-gen language server & type checker for Python.

In this blog post, we explain how we optimized Pyrefly's incremental rechecks to be 18x faster in some real-world examples, using fine-grained dependency tracking and streaming diagnostics.
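The general technique named here, fine-grained dependency tracking for incremental rechecks, is easy to illustrate with a toy dependency graph: after an edit, only modules that transitively depend on the changed module are rechecked. This sketch is a generic illustration in Python, not Pyrefly's actual implementation.

```python
from collections import defaultdict

# module -> modules it imports (hypothetical toy project)
deps = {
    "app": {"models", "utils"},
    "models": {"utils"},
    "utils": set(),
    "cli": {"app"},
}

# Invert to "reverse deps": module -> modules that import it.
rdeps = defaultdict(set)
for mod, imported in deps.items():
    for d in imported:
        rdeps[d].add(mod)

def affected(changed):
    """Return every module needing a recheck after `changed` is edited."""
    todo, seen = [changed], {changed}
    while todo:
        cur = todo.pop()
        for dependent in rdeps[cur]:
            if dependent not in seen:
                seen.add(dependent)
                todo.append(dependent)
    return seen

# Editing a leaf like "utils" fans out to everything that imports it,
# while editing "app" only forces "cli" to be rechecked.
print(sorted(affected("utils")))
print(sorted(affected("app")))
```

The payoff is the same as in the post: the cost of a recheck scales with the edit's blast radius rather than with the size of the whole codebase.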

Full blog post

Github

r/aivideo Express-Turnover-608

Check out my tutorial on AI transitions on my YT Snackable AI

r/PhotoshopRequest Low-Bee-6285

Can someone edit this to make it less blurry without changing the photo quality?

r/SideProject Money-Suggestion5310

Our YC demo video: we'd love honest feedback

We’ve completed the demo video for our YC application. It focuses on the core problem, the product workflow, and our motivation for building it.

Happy to exchange feedback with others working on similar projects.

r/LocalLLaMA Quiet_Dasy

How to Run Two AI Models Sequentially in PyTorch Without Blowing Up Your VRAM

I’ve been building a pipeline where a large language model (LLM) generates text, and that output is fed into a text-to-speech (TTS) model. Since they run one after another—not at the same time—I assumed my 8GB GPU would handle it easily.

It turns out that even though the models run sequentially, if you don't explicitly unload the first model and clear the cache, PyTorch keeps both models (and intermediate tensors) in VRAM. This quickly leads to CUDA out-of-memory errors on consumer GPUs.

Edit: I'm trying to run n8n/flowise/flowmesh where each node has an LLM model, and the LLM models each run on a different PC. How do I set this up with 3 Nvidia GPUs and Ollama?
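The fix the post describes is a load, use, release pattern: with PyTorch that typically means `del model`, `gc.collect()`, and `torch.cuda.empty_cache()` between the LLM and TTS stages. Below is a framework-free sketch of that pattern (the torch-specific calls appear only as comments, so the snippet runs without torch installed; `FakeModel` is a stand-in, not a real API).

```python
import gc

class FakeModel:
    """Stand-in for a large model; tracks how many are 'in VRAM'."""
    live = 0

    def __init__(self, name):
        self.name = name
        FakeModel.live += 1          # model = AutoModel.from_pretrained(...)

    def __del__(self):
        FakeModel.live -= 1          # weights released

def run_pipeline(text):
    llm = FakeModel("llm")
    draft = f"{text} -> generated"   # stage 1: text generation
    del llm                          # del model
    gc.collect()                     # gc.collect(); torch.cuda.empty_cache()
    assert FakeModel.live == 0       # first model's memory actually freed

    tts = FakeModel("tts")
    audio = f"audio({draft})"        # stage 2: text-to-speech
    del tts
    gc.collect()
    return audio

print(run_pipeline("hello"))
```

The key line is the explicit release between stages: without it, a lingering reference (a local variable, a closure, a cached output) keeps the first model alive, and the second load pushes you over the VRAM budget.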

r/creepypasta First_West_6765

The Filthy House

You know those old houses on the edge of town, the ones where the streets get narrow and the trees reach out like they're trying to snag you as you walk by? Mark's place was one of those. It was a creaky old bastard from the 1920s, built by some nutjob architect who swore the walls could breathe. Breathe, for Christ's sake, like bricks had lungs. Mark didn't know squat about the stories when he bought it. He was just your average middle-aged guy, an accountant with a face that looked like life had been kicking it around for years. His divorce had done a number on him in the way divorces do. His ex, Karen, left him for some slick bastard with a better job and no beer gut. Mark figured he would grab this cheap dump far from everything and start fresh. Start fresh? That's a laugh in a house like that.

At first, it wasn't bad. Mark came home from work with spreadsheets and numbers all day long, enough to drive a man batshit, and tossed his coat over a chair. An empty beer can on the coffee table. "I'll clean it tomorrow," he told himself while staring at the blank walls. But tomorrow came, and the mess stayed. It piled up slow like snow drifting in overnight. The kitchen had dishes in the sink, crusted with grease that started to reek. The living room had pizza boxes, old newspapers, a busted lamp he dragged up from the basement thinking he'd fix it. "I'll get to it," he muttered, but he didn't. The smell crept in as a sweet, rotting stink like fruit gone bad in the sun. Mark noticed it, but he thought you get used to anything, right?

Nights were when it started. Mark lay in bed, staring at the ceiling with those weird stains that looked like faces if you stared too long. Then he heard it as a soft scraping like fingernails dragging across the floor. Not loud, but enough to give you the chills. He sat bolt upright with his heart pounding like a drum in his chest. "Rats," he said out loud to calm himself down. Rats, yeah. Made sense. He set traps the next day, baited with peanut butter, and went back to bed. But morning came with the traps empty, coated in a slimy film that smelled like sour milk. "What the hell?" he whispered, and he tossed more junk on the pile. A half-eaten sandwich, a cracked mug. The house seemed to soak it up so the mess grew faster than he could add to it.

The sounds got worse. Not just scraping, but cracking like bones snapping in the dead of night. Mark started closing doors, shoving chairs against them. "It's the house settling," he told himself. Old houses do that. But deep down, he knew better. He found things in weird places such as a photo of Karen in the bathroom, smeared with something red that looked like blood but was just ketchup. "How'd that get here?" he mumbled with hands shaking. The mold on the walls grew, forming patterns like veins, throbbing in the lamplight. He stared at them for hours, feeling a cold in his bones that wasn't from any draft.

Then the whispers came. Murmurs at first like a radio half-tuned in another room. Mark rolled over in bed, pulled the pillow over his head. "Neighbors," he thought. But no, it came from the piles of trash. Words he couldn't make out, but they filled him with a dread he couldn't shake. "Feed us," they seemed to say in a gurgling Dutch that echoed in his skull. "Grow with us." He tried ignoring it, cranked the TV louder, but the voices wormed into his dreams. Dreams of Karen laughing with her face melting into the mold.

Panic set in. Mark decided to clean. Rubber gloves on, stuffing garbage bags. He worked like a madman with sweat stinging his eyes while the voices laughed as a wet, bubbling chuckle from the walls. He dragged the bags to the door, but when he came back the bags were torn open with contents writhing across the floor. Paper like worms, glass like bugs skittering, food scraps pulsing like hearts. "No, no, no," he whimpered, bolting to his bedroom. Door slammed shut, but the trash followed, seeping under like fingers, across the bed. He felt it on his skin, prickling, nipping.

He wrote it down in an old notebook. "The house is alive," he scrawled with hand trembling. "The trash is its blood. I'm the prey." He tried escaping, pounding on doors that wouldn't budge, smashing at windows coated in mold that wouldn't shatter. The voices told tales of a woman drowning in her own filth, a kid swallowed by the basement. Shadows in the corners, shapes like hands reaching out.

That last night was pure hell. Mark huddled in the corner, shaking, as the door shuddered under the assault of the mass. Walls dripped black ooze, hissing on the floor like acid. The whispers turned to screams: "Join us! Become one!" A thousand voices of Karen, his mom, strangers with agony and hunger twisted together. The door exploded inward in a shower of splintered wood and squirming refuse, a tidal wave of rot and fungus crashing in. Tendrils of twisted plastic coiled around his legs, slicing deep, blood mixing with slime in hot rivulets. "Let me go!" he roared, but the beast rose up as a face forged from cans and spoiled fruit, eyes writhing with maggots, glaring with ancient malice.

It seized him, hoisted him high with arms of sodden newspapers dripping foul ichor. The trash forced its way in through his mouth, nose, ears, choking his lungs with decay, bloating his belly like a corpse in the river. Visions assaulted him of his flesh fusing with the walls, tendrils of mold burrowing into his veins, his screams joining the chorus forever, starving for fresh victims. He clawed at his throat, gagging on chunks of rot that tasted like his own regrets, his body convulsing as the filth invaded every pore, every cell.

His final scream was a wet, strangled gargle, his skin splitting open like overripe fruit, innards spilling out to merge with the heaving pile. Pieces of him, flesh and blood and bone, dissolved into the mass, nourishing the house. A deep, satisfied groan rumbled through the structure, the walls closing in like a coffin lid.

The neighbors called the cops when the stench blanketed the street like a shroud. They broke in and found a living nightmare with throbbing mold on the walls, floors undulating with trash like a breathing sea. No sign of Mark, just a notebook half-devoured by something unseen: "It's eating me. It's fouling me. And it's starving for more."

The house sits empty still, but at night, you hear it. Scraping, whispering. And if you stare too long, you feel the pull: step inside, add your mess. It's waiting for you, always ravenous.

r/aivideo NeuroChemicalCowboy

Tempes Fugit

r/PhotoshopRequest boses247

Removing blue dot, possible flare?

My kid took this photo, and we all love it. The picture would be better, we think, without that blue dot. It was taken from a pretty fast moving boat on an older iPhone.

Can anyone help us by removing it?

r/AskMen Bluesmokee

What makes a woman creepy to a man?

Something that gives you the chills and tells your instincts to remove yourself from the vicinity and cut off contact; a woman you wouldn't want yourself or your family (SO, mother, kids, etc.) around either, one who actually unsettles and intimidates you a bit.

I don't mean just any general negative qualities like rudeness or being entitled, but more that gives you goosebumps.

r/ClaudeAI TadMSTR

I accidentally built a distributed AI memory system during my first week using Claude

TL;DR

Tried ChatGPT for bash scripting, switched to Claude, got hooked. Built a working backup solution in one session. Tried using AI at work, hit usage limits. Asked Claude how to optimize token usage. Ended up designing a four-tier memory architecture by accident. It emerged from trying to solve "how do I work on multiple projects without re-explaining everything." Now it's fully operational and I've documented the whole thing.

How This Started

My homelab needed backups. I run Docker Compose on TrueNAS and Debian (using Dockhand), and I wanted something similar to my Unraid backup script but adapted for these hosts. My homelab group said "try Claude, it's better for scripting."

I switched from ChatGPT to Claude and never looked back. Gave my ChatGPT subscription to my dad. Don't even remember what I did with it - used it for less than a day.

The First Win

In one fresh Claude chat session, I built a complete backup solution for my TrueNAS and Debian hosts. It works. It's tested. Not fully deployed yet, but it's built. That dopamine hit was real - I was fully engaged and actually created something functional.

This is when I realized: AI is actually useful for writing scripts.

Taking It to Work

I'm a sysadmin. Thought "if Claude can do this for homelab stuff, let me try Copilot at work" (company pays for it). Tried building a PowerShell user lookup toolkit for our help desk.

Hit problems pretty quickly with Copilot. Brought it to Claude, built a working solution, but some lookups were slow. Took Claude's script back to Copilot in a fresh chat: "Tell me what this does, then recreate it keeping these specific parts." Copilot's version errored on first run. Back to Claude, fixed it over several iterations. Now it works perfectly, and Copilot's API optimization made lookups instant.

The Problem

I had my solution: Use Claude where Copilot lacks, use Copilot for the grunt work.

Then I hit 91% of my weekly usage limit in 3 days.

The Optimization

That's when I formalized the three-stage PowerShell workflow:

  1. Stage 1 (Claude): Design architecture, identify edge cases, create the prompt
  2. Stage 2 (Copilot): Generate the actual code (work pays for this)
  3. Stage 3 (Claude): Fix syntax errors, handle edge cases, polish

Let work pay for the heavy lifting. Use Claude's reasoning where it matters.

The Question That Changed Everything

With the three-stage workflow working, I wanted to do more. I started asking Claude questions about how it actually works:

  • What settings can I configure?
  • What can and can't it do?
  • How can I work on multiple projects without wasting tokens?
  • How do I juggle projects without re-explaining everything every time?

I wanted to work on Docker backups, PowerShell tools, and migrating my dad's files simultaneously. But I kept hitting the same problem: starting a new chat meant re-establishing all the context.

What Emerged (By Accident)

Through those conversations about optimization, a pattern emerged. I wasn't trying to design a system - I was trying to solve a practical problem. But the solution that came out looks like this:

Tier 0: GitHub Instructions

  • Created a private repo with my cognitive profile, communication preferences, infrastructure context, workflows
  • Markdown files, version controlled
  • Claude can reference it via GitHub connector
  • Stable foundation that doesn't change

Tier 1: Prime Directive Chat

  • Strategic coordination and routing
  • Meta-thinking about which project to work on
  • Can degrade over time (entropy is acceptable here)
  • Refreshes from Tier 0 when needed

Tier 2: Project Chats

  • One chat per project (Docker backups, PowerShell toolkit, Dad's files)
  • Custom instructions reference Tier 0 for my preferences
  • Isolated contexts (no token bleeding between projects)
  • ~50-60% token savings per session

Tier 3: GitHub Code

  • Actual repositories with working code
  • Version controlled
  • Canonical source of truth
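To make the four tiers concrete, here is a hypothetical layout (repo and chat names invented for illustration; only prime-directive-docs is named in the post, the rest are assumptions):

```
prime-directive-config/        # Tier 0: private repo, stable foundation
├── cognitive-profile.md
├── communication-preferences.md
├── infrastructure.md
└── workflows.md

Claude chats
├── Prime Directive            # Tier 1: routing/meta, entropy acceptable
├── Docker backups             # Tier 2: one isolated chat per project
├── PowerShell toolkit         # Tier 2
└── Dad's files                # Tier 2

GitHub repos
├── docker-backup-scripts/     # Tier 3: canonical working code
└── powershell-toolkit/        # Tier 3
```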

Why This Works

The key insight: Separate memory by persistence requirements, not content type.

  • Some context is stable (how I communicate, my ADHD patterns)
  • Some context can degrade (strategic thinking, principles)
  • Some context must stay precise (technical specifications, code)
  • Some context is canonical (actual implementations)

Each tier has the right amount of persistence for its purpose.

The ADHD Connection

I have ADHD (and likely ASD). This system works with my brain, not against it:

  • External memory for working memory limits
  • Clear boundaries reduce decision fatigue
  • Project isolation supports hyperfocus
  • Explicit documentation (no invisible assumptions)
  • Version controlled (track changes, understand evolution)

From Accident to System

I didn't sit down and design this. It emerged from:

  1. Hitting usage limits
  2. Needing to optimize token usage
  3. Wanting to work on multiple projects
  4. Asking "how does Claude actually work?"
  5. Iterating on what worked

Then I noticed the pattern, formalized it, documented it, and validated it works.

Timeline:

  • Week 1: Built backup scripts, discovered Claude
  • Developed three-stage PowerShell workflow
  • Asked optimization questions
  • Saturday (Feb 7-8): Built the complete tier system in one hyperfocus session
  • Sunday (Feb 9): Added CLI commands (espanso), local workflow (Claude Desktop), conversation archiving
  • Monday (Feb 10): Documented why this is better than Claude's memory feature

What I've Shared

Created a public documentation repo: prime-directive-docs

Four documents:

  1. Casual version - The discovery story and WandaVision connection
  2. Technical deep-dive - Complete framework with cognitive science parallels
  3. Implementation report - How we built it, validation results
  4. Why not memory feature - Comparison with Claude's built-in memory and why explicit beats automatic

The private configuration repo has my actual preferences, but the concepts are all public.

Real Results

Token efficiency: ~50-60% savings per project session (isolated contexts)

Context persistence: Jump between Docker, PowerShell, and personal projects without re-explaining

Cross-platform: Works on Claude.ai (web), Claude Desktop, and mobile via GitHub connector

Cognitive load: Clear boundaries, external memory, reduced decision fatigue

It just works. And it wasn't planned.

The Meta Part

I asked Claude about its memory feature before building this. Claude explained all the problems with automatic memory extraction:

  • Makes invisible assumptions
  • Information gets outdated
  • Overgeneralizes across contexts
  • No version control
  • Black box system

I thought "which would be better - Projects or Memory?"

Three days later, I had built the answer: Neither. A four-tier system that gives you control over context, separates by persistence needs, and version controls everything.

The comparison document exists because I literally asked "is automatic memory accurate?" and Claude said "no, here's why." So I built something better.

Why Share This

Because it's useful. And because I built it in a week by accident while trying to backup my Docker containers.

If you're juggling multiple technical projects and fighting Claude's token limits, the tier separation approach might help. If you have ADHD and need external memory that doesn't make invisible assumptions, explicit documentation might resonate.

Or you might read this and think "this is overkill." That's valid too. For single projects or casual use, it probably is.

But for ongoing infrastructure management across multiple contexts while optimizing for ADHD patterns? It works.

Questions I Expect

"Isn't this just a CLAUDE.md?" No. Other systems I've seen are project-focused - they help you manage context within a single large project. This is identity-focused. Your preferences and cognitive profile persist across all projects. Different scope entirely.

"Why not just use Claude's memory feature?" Because automatic extraction makes assumptions. I want explicit control. See the comparison document - it breaks down every problem with auto-memory and how this solves each one.

"This seems complicated." It emerged organically. I didn't design complexity - I solved specific problems and the structure fell out naturally.

"Can I replicate this?" Yes. Public docs explain the theory, implementation shows the practice. Adapt to your needs - your brain isn't mine.

From "I need to backup my Docker containers" to "I accidentally built a distributed AI memory system" in one week.

ADHD brain doing things.

GitHub: TadMSTR/prime-directive-docs

r/DunderMifflin Mitzuda

Blood Drive episode intro

Watching The Office for the 7th time… can’t believe it took me this long to notice how little Pam knows Michael at the core of his personality. She avoids letting that phone system dude meet him because she’s afraid the system will replace her. This is Michael we’re talking about… he would never do that to his family. The Office is his family.

r/oddlyterrifying Oda_e_um_genio

Cat in the mirror.

r/interestingasfuck Aarnavaperson

Max Verstappen transforming sim racer Chris Lulham into a professional racing driver. Away from F1, Max is pushing boundaries and channelling his passion into developing talent from endurance racing at the 24 Hours of Spa to the iconic Nordschleife.

r/StableDiffusion WildSpeaker7315

For people interested, I'll be releasing a new LTX-2 adult LoRA in about 2 hours.

So I made tit-daddy on Civitai. The next obvious choice is Pus-daddy.
It's taken 3 days and around 32 hours of training to get it where I want it.

I've tried to talk to Civitai about it to try to avoid a ban, but I've been ignored for over 24 hours... so I'll probably be banned.

Before I post it, any tips regarding being banned would be appreciated.

I plan to upload at 1pm GMT, for those interested.
It probably won't be on there long.

As for the LoRA: many angles, many views, close-ups. Everything kind of works as it should.
It will come with a full workflow,

including around 15-20 full prompts to get you started.

r/OldSchoolCool Tony_Tanna78

Isabella Rossellini, photo by Steven Meisel (1989)

r/painting CSA1996

What is this colour called?

r/painting AndyMacAwesome

Oil painting I finished a few months ago

I haven't posted anything on Reddit in awhile so I figured I would share this oil painting I made. It's about 5'x4'. I worked on it on and off for a little over a year. Took a few months off in the middle to deal with depression but my head is clear again so I wanted to share it with whoever wants to see it. Thanks for looking. I hope you enjoy it.

r/comfyui CommentSignal9029

[Video] "DECORO!" - A surreal short film made with Wan 2.2 & LTX-Video (ComfyUI Local)

Full video.

r/automation Soft_Attention3649

why browser automation infrastructure matters for large scale automation

i have been exploring browser automation infrastructure systems that manage multiple automated browser operations at scale.

instead of running tasks manually, it keeps processes stable, handles errors, and lets you scale workflows efficiently. for repetitive online processes or large scale automation, it’s a huge time saver.

r/nanobanana MorrisCody

Is there a way to provide reference images for what not to do?

I'm trying to draw something that can be executed two ways. For example, if I wanted an anthropomorphic car, it could be created using the headlights as the eyes. However, I want the eyes to be in the windshield (like the movie Cars). No matter how I describe the image, and whatever reference I upload, I still get headlight eyes. Is it possible to feed it images and say, "Not like this"?

r/oddlysatisfying Raj_Valiant3011

The way that this corner is fixed during the construction process is perfect

r/ClaudeAI CoderBG

Built an MCP server to fix Claude Code's file encoding corruption

I've been using Claude Code on a legacy codebase with CP1251/CP1252 encoded files. The problem is that Claude's Edit/Write tools force UTF-8 on every write, which corrupts all non-ASCII characters. There are several open issues about this on GitHub.

So I built an MCP server specifically for Claude Code that auto-detects file encoding and preserves it on read/write. Written in Go, single binary, no dependencies. Also includes a faster tree tool since Windows doesn't have one built in.

Free and open source: https://github.com/dimitar-grigorov/mcp-file-tools

Works for anyone dealing with CP1251, CP1252, ISO-8859, KOI8 or other legacy encodings. Claude Code helped with parts of the implementation itself which was kind of ironic given it's fixing Claude's own bug.

r/explainlikeimfive Safe-Classic-9474

ELI5: Does heat really change into other forms of energy?

So basically I see that energy can't be created or destroyed, but what about heat? I know heat comes from other forms of energy, but does heat really change back into other energy, or does it just spread into the surroundings? If it doesn't, won't the temperature of space keep increasing as time goes by?

r/interestingasfuck 21MayDay21

4 bees sleep in the same flower.

r/n8n That-Accountant-3532

Automation with n8n

I'm thinking about developing an automation for the company where I work. The idea would be an automation that extracts data from each state's Diário Oficial (official gazette), filtering by keywords and extracting only what's needed. In the middle, it would be important to have an agent that extracts an image or takes a screenshot, and brings in PDFs. Then it would fire off a complete WhatsApp message with the results. Would this be possible? I'm unsure which path would be best to take.

r/OldSchoolCool LunaSnuzzles

When Johnny Cash introduced the 13 year old singer, Dolly Parton, on the Grand Ole Opry in 1959, she got 3 encores.

r/MMA WinterStill4472

Kyoji Horiguchi moves up to #5 after beating Amir Albazi (UFC Rankings Update - February 10, 2026)

r/personalfinance TattooedWolf97

HYSA and Transferability

Saw an earlier post that got me thinking about opening a HYSA with Capital One, or someone else once I look around. My question is: am I able to move money from the HYSA to my checking account? I currently keep the money I pay bills with in savings to stop myself from spending it. My fear is that once I send it to the HYSA, if I need to pull money for the mortgage, I'll be unable to move it back to checking. If this is a dumb question I apologize; I was not taught money smarts and don't want to risk locking up my money in a HYSA and not being able to pull it when needed. So can I pull money from a HYSA to my checking account at a different bank, or is it locked up in the HYSA once it gets deposited?

r/space Shiny-Tie-126

The amount of oxygen available during the formation of a planet can mean that many planets are chemically unsuitable for supporting life from the very beginning, even if they have water and appear habitable from the outside

The correct oxygen content during core formation ensured that sufficient phosphorus and nitrogen were present in the Earth's mantle and crust.  

This makes Earth a chemical stroke of luck in the universe. It is located in a zone with ideal chemical conditions for the emergence of life. 

When searching for life in the universe, researchers should therefore look for solar systems similar to Earth's. Focusing solely on water is too narrow a view. 

r/ClaudeAI Ellsass

Why doesn't the monthly spending limit stop Claude from using more?

I re-enabled my Pro subscription a couple days ago. I turned on Extra Usage because I figured that 1) it wouldn't be used until I went past the normal usage limit, and 2) at worst it would charge me €5 before hitting the cap.

I tried out Opus 4.6 Fast today for one task, then checked my usage. I was surprised to see it at 189%. It looks like it also charged me €9.44. Why didn't it just use my normal token allotment?

r/painting Ill-Construction8247

Late Afternoon, oil on panel

r/geography TWN113

Remarkably, one island, one nation. How did this current situation come about?

Why didn't the entire archipelago form a unified country (like the Maldives) or belong to a single major power (like the Andaman and Nicobar Islands) like most archipelagos do?

r/Seattle Cranky_Old_Woman

Reputable places to buy used cars? Eek...

Hey Seattleites,

My beloved '09 Focus is finally showing its age, and I'm looking to buy a pre-owned car from somewhere reputable. I'm not familiar with/comfortable with 100% online buying. Are there any Puget Sound dealers folks would recommend? I'm looking for something small and reliable, e.g. Honda Fit, Toyota Corolla, Subaru Impreza. Thank you for any leads!

r/painting louis_20102

Some of my most recent works

Louis Andrada, 2026

All works are acrylic on canvas

r/PhotoshopRequest Milk__duds

Resize this to a rectangle and fill in the empty space while keeping the resolution $5

I want to use this image for a playmat but I need a high resolution rectangle

r/leagueoflegends deadonhomo

I just installed the game and I have a couple of questions:

  1. How do I make my right click movements on the left click?

  2. How do I make my attacks auto instead of having to right click? It needs a lot of muscle control and I lack that.

  3. Will I ever get better at using Q/W/E/R? I panic quite often and start pressing whatever in random orders and I click on random keys too..

r/midjourney Sharp_Alternative845

Deer

r/LocalLLaMA trumee

Which model of 3090?

Hello, I have read here that the 3090 is the go-to card for local AI. Searching on eBay turns up multiple manufacturers like EVGA, PNY, Zotac, and FE, with and without Ti. Can somebody help me figure out which make of 3090 is needed?

I will limit myself to one gpu to minimize energy costs.

r/Adulting ParticularWeather927

Can you justify their beliefs?

r/findareddit AHumanBeingAlone

I need to find a specific beanie from Blue Tomato; is there a subreddit that can help?

r/PhotoshopRequest Imcoverednbees

Can someone remove the people for me 😊

I’m sorry to ask this!

I just don’t have time to photoshop today!!!!

I’ll toss ya $10

Please no ai I wanna pay someone for their actual work

r/space Dramatic-Tax7942

If you could launch a satellite into space, what would it do?

r/Wellthatsucks need_verification

Parking in someones spot while they are in it, blocking them in.

r/ProductHunters Mansehej

Just launched Tapfree - a voice-first Android keyboard that adapts to what’s on your screen

Hey r/ProductHunters!

I built Tapfree because I always felt mobile typing feels stuck in the past. When I'm moving fast, my ideas don’t arrive as perfect sentences. They come as fragments, quick reactions, and rough thoughts I need to shape into something coherent.

Most keyboards and dictation tools don’t help much. They transcribe words literally, miss context, butcher names, and leave me fixing formatting by hand. Writing an email, a chat reply, or a document all need very different handling.

What makes Tapfree different is how it understands context. Tapfree uses on-screen context (the text field and surrounding UI), not just the app you’re in, to produce cleaner, more relevant dictation.

It also handles the way people actually talk. You say "Could you get some coffee... sorry, tea on the way back?" and Tapfree writes: "Could you get some tea on the way back?". It catches your corrections mid-sentence so you don't have to go back and fix them.

If you give it a try, I’d love specific feedback:

  • Which app or scenario felt noticeably better (or worse) than usual dictation?
  • Any "wow" moments with the context understanding?
  • What would make it even more useful for you?

I have just released it on ProductHunt, and would love the feedback, as this is very much a side project and I’m still shaping it.

- Mansehej

r/spaceporn Busy_Yesterday9455

Apollo 17 - Orange Soil on the Moon

During the second EVA of Apollo 17, Gene Cernan and Harrison Schmitt were exploring the magnificent Shorty Crater at Station 4, when Schmitt chanced upon some orange soil.

A subsequent study of the orange soil indicates that it was formed during volcanic eruptions approximately 3.7 billion years ago.

Credit: NASA / Moonpans

r/explainlikeimfive Beneficial-Bowl-1112

ELI5 - The typical format for an IP is x.x.x.x, but I've heard we ran out of addresses. Why can't we just add an extra set of numbers and make it x.x.x.x.x?

r/TwoSentenceHorror RepeatOrdinary182

I've been experiencing a terrible itching in my mouth for the last few days.

As I stare into the mirror trying to get a good look at whatever is wrong, I'm horrified as my teeth begin pulling themselves free with long spindly legs.

r/midjourney Zaicab

Vogue does the Zoo

r/HistoryPorn DiaboDeCapote

Brazilian architect Oscar Niemeyer, one of the original architects of United Nations Headquarters in New York, going over plans for the building on 18 April 1947 [1170×530]

r/Anthropic nigofe

71% usage but getting "You've hit your limit" in terminal

As the title say. Anyone else experiencing this? Pro Max plan.

r/AI_Agents gelembjuk

MCP or Skills for delivering extra context to AI agents?

My answer: a hybrid of MCP + Skills works best.

Both approaches have clear strengths and trade-offs.

Skills are lightweight — their definitions consume fewer tokens compared to MCP. MCP, on the other hand, gives much better control over responses and more predictable agent behavior.

One well-known MCP challenge is that the full list of tools is sent to the LLM with every prompt. As this list grows, token usage explodes and the model can get confused about which tool to use.

In one of my experiments, I tried a hybrid approach.

Instead of passing the full MCP tool list every time, I provide the LLM with a short, one-line summary per MCP server, very similar to how Skills are described. Effectively, each MCP server looks like a “skill” to the model.

Example:
EmailBox MCP: “All email-related operations: accessing, writing, and sending emails.”

When the LLM decides it needs that “skill” and hands control back to the agent, only then is the full tool list for that specific MCP server injected into the context (along with a brief tool summary).
The next loop naturally becomes a targeted tool call.

The result?
- Significantly lower token usage
- Less confusion for the LLM
- Ability to connect more tools overall

This approach works especially well for MCP servers that are used infrequently. With the hybrid model, you get scalability without sacrificing control.

Of course, this would only work with custom AI agents, not with Claude or similar. But maybe they already use tricks like this; we don't know.
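The two-phase routing described above can be sketched like this (all names are invented for illustration; this is plain data-shuffling logic, not a real MCP SDK):

```python
# Sketch of the hybrid "MCP servers as Skills" routing. Phase 1 exposes
# one summary line per server; phase 2 injects the full tool list only
# for the server the model picked. Server/tool names are hypothetical.

SERVERS = {
    "emailbox": {
        "summary": "All email-related operations: accessing, writing, and sending emails.",
        "tools": [
            {"name": "list_messages", "description": "List messages in a mailbox."},
            {"name": "send_message", "description": "Send an email."},
        ],
    },
    "calendar": {
        "summary": "Read and create calendar events.",
        "tools": [
            {"name": "create_event", "description": "Create a calendar event."},
        ],
    },
}

def skill_level_context() -> str:
    # Phase 1: one line per server, similar to a Skill description.
    # This is all the model sees until it commits to a "skill".
    return "\n".join(f"- {name}: {cfg['summary']}" for name, cfg in SERVERS.items())

def expand_server(name: str) -> list[dict]:
    # Phase 2: only now pay the token cost of the chosen server's
    # full tool list; the next loop becomes a targeted tool call.
    return SERVERS[name]["tools"]
```

Token usage then scales with the number of servers in phase 1, not with the total number of tools, which is the scalability win the post describes.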

r/whatisit InquisitorDan

Neighbor's teen flashed this at my Ring cam. Any ideas?

Just happened a few minutes ago. Looked pretty sus while doing it. I have no clue what that device is. Never had anything but pleasant contact with them so far.

r/MMA Xerzack987

Mateusz Gamrot vs Arman Tsarukyan | FULL FIGHT

r/maybemaybemaybe fna_fanoa

Maybe maybe maybe

r/Weird flyxdvd

something that happened

So I got off work. I usually cycle through an unpopulated area and always have music on.

All of a sudden I heard cracks, as if wood was breaking and reeds were being pushed over. I looked over and saw it move and bend down; it looked like something was coming towards me, so I left quickly. Anyway, the next day I went back and took the video. I'm just wondering what type of animal could cause this.

I live in the Netherlands and I know my wildlife; my guess is it's either a boar or a deer? But the damage seems kinda insane for that.

r/maybemaybemaybe bakeranders

Maybe maybe maybe

r/TheWayWeWere OtherwiseTackle5219

1925. Younger Women Developed a Fashion Craze of 'Painting their Knees'

r/StableDiffusion chrism583

Model photo shoots

Is it possible to use ComfyUI, or any other program, to generate a randomized gallery from one or more reference photos? What I’m looking for is to simulate a modeling photo shoot with different poses throughout. I would prefer not to constantly change the prompt but be surprised.

r/PhotoshopRequest RNinRVA

Retro Style Cat Photo Request

We just officially adopted a foster cat we’ve had for over a year (a long drawn out cruelty case) and I would like to commemorate this special day with a retro 80s style photo of all of our cats. First photo is inspo and the other photos are of the cats. The wonky eyed black cat is our newest addition. Multiple photos provided for flexibility. $15 tip.

r/Unexpected sEaBoD19911991

Cool trick.

r/estoration Diligent-Kitchen9999

Can you help restore photos for my mum's anniversary

It's coming up to ten years since my mum died and I want to honour it with a photo collection. I've tried restoring some photos with free AI software but they don't seem to do much with this photo and they all seem to turn the eyes pretty crazy so really hoping someone with a bit of experience can do some of it manually so it looks more natural, doesn't have to be perfect but would love to see it properly! Happy to tip for the best one and I've got a few more I could send direct if you're interested in doing a few more. Thank you!

r/findareddit Emotional_Jeweler821

Where should I post this?

I'm looking for a narration job online, and upon signup I received a question asking which accents I can mimic very accurately. I'm unsure if mine are very accurate, and I want to answer truthfully so I can actually find a job to take on. So do y'all have any advice on where I should ask about which accents I sound like (where I can receive accurate advice from native speakers of that accent)? Any advice is appreciated.

r/SideProject CheatingDevApp

Fed up with costly interview apps like Cluely, I built Cheating Dev

I have created an app to cheat in interviews (not sure if this aligns with your ethics - avoid if so):

- gives Leetcode answers accurately (yes, even hard ones) with explanation via automatic screen capture

- Listens to interviewer & responds immediately (~1s) and gives best possible answer.

- Hidden even on screen share on any platform (meet, teams, zoom, chime, etc)

- You can input your question as well and it will answer

- For latest info, it uses google search and will answer the best possible info available over the internet

- Response time is within 1 second (yes, that fast)

Most apps are hellishly expensive & slow, while this one is not, and it's very affordable.

If you're prepping for interviews and interested in testing it, just DM me and I'll send access right away at no price to try it out.

But please do not spam; only message if you seriously need such an app, as I certainly don't want to waste the resources. Thanks!

r/ClaudeAI LaVolpe74

Developers, how do you manage your usage limits?

I'm genuinely surprised by the fact that in this subreddit, everyone complains about their Pro plan limit usage or Claude being expensive and token-devouring, or people encouraging others to get a $200 Claude + $20 ChatGPT plan. I'm like, what on earth are you doing that requires this much AI? Don't get me wrong, I'm not trying to be judgmental, I'm just shocked.

I'm a developer by trade, spending around 10-12 hours each day working on company projects and maybe 1-2 hours on personal ones. I make very good money for where I live, and my work is pretty code-heavy. I've never reached any limit on my $20 Claude Pro plan, whether the 4-hour limit or the weekly limit.

My question is, if you're a developer, do you ever hit limits with Claude subscriptions? What's your workflow?

Edit: Clarifying, since everyone here seems to misunderstand what I mean by “workflow.” My workflow looks like this:

I have a task at hand. I read the ticket on Jira (or my personal Trello board), chat with Claude.ai, and then do some web searching. I return to Claude.ai to figure out what to do next, then I explain the plan to Claude Code in the terminal. In 5-10 minutes the code is ready. I test and proofread it, and usually ask Claude to make a few fixes. Finally I push my changes to our Git server and move on to the next task. I repeat this every day for at least five or six tasks, delivering a set of features, bug fixes, etc.

Edit 2: I use Claude Sonnet 4.5. I've never had a good experience with Opus. It's slower than Sonnet, and it's pretty verbose. When I ask Opus to write code that adds 2 + 2, it builds an entire calculator that can draw graphs and solve integrals.

r/maybemaybemaybe Extra-Tie-9012

Maybe maybe maybe

r/comfyui AnabelBain

Can someone help me in creating some custom workflows for my ecommerce project? Paid

Hi, I am a college student so I can't pay much, but if someone is willing to create some workflows for me, I will be really grateful.

r/metaldetecting TechnicianOk967

Greetings, I was wondering if this would be a good hand shovel to easily dig through solid soil after my old one literally got bent.

Thanks in advance for responding.

r/fakehistoryporn thegreatjamoco

Photo of Elon Musk Christmas Day, 2013 (colorized)

r/aivideo JonasHaus

Anyone knows how this was made

r/pelotoncycle nookall

Turns out the orange dancing rider wasn't the most distracting thing ever...

Brad's managed to be usurped by a lady so desperate for a birthday shoutout she made a sign to wear on her head throughout the ride!

So distracting to see the back of a shiny silver sign dancing around... please Peloton, ban them quickly.

Nellie, know that thousands of riders were already grumpy having to do tabata before the ride even started, then they spent 30 minutes trying to work out what your sign said...

r/mildlyinteresting oicadela

Icicle became curved from slight consistent wind

r/Seattle spencerZgouse

Just Seattle things. What do y'all think? Been trying photography again.

r/SideProject alecc

I built a time tracker because every existing one felt like overkill

Hey r/SideProject!

I'm a developer who's been tracking time in Excel for 15 years. It worked, but it was tedious.

I tried Toggl, Clockify, and others - all built for teams and billing. Too much complexity for someone who just wants to know "where did my day go?"

With AI tools now available, I finally built the app I always imagined: **TickTappy**.

  • One tap to start, one tap to stop
  • Color-coded tags for projects
  • Clean weekly overview
  • Native on iPhone, iPad, Mac, and web
  • Apple Watch app and iOS widgets

No mandatory fields, no dashboards, no team features.

Tech stack: React Native/Expo, Supabase, TypeScript

Business model: Everything running locally is free; premium for cross-device sync. No ads, no data selling.

Would love feedback — especially on the onboarding and first-use experience.

r/OnePelotonRealSub rburn79

'Discover Kettlebells' - how to do the classes?

Hi all.

I would like to start the Discover Kettlebells program/collection and use it as a runway toward eventually doing 2 - 3 full KB sessions each week.

I know many of the sessions went out live, so the collection now has quite an odd structure:

Week 1: 10 mins x 3, 30 minutes x 1
Week 2: 10 mins x 3, 30 minutes x 1
Week 3: 10 mins x 4, 30 minutes x 1
Week 4: 30 minutes x 2

I believe each of the 10 minute sessions is designed to get the form correct on a particular KB exercise. Is the aim to combine these sessions into a single 30 - 40 minute session (so two total sessions per week for weeks 1 - 3), or are they designed to be done on separate days?

Thanks for any pointers :)

r/LoveTrash icyhotonmynuts

Jar of love

r/SideProject thegonelf

I vibe-coded an AI-native wrapper for PocketBase because Supabase pricing was killing my "parallel shipping."

(Since this sub doesn't allow image uploads, I’ve linked a screenshot of the CLI flow example below!)

https://imgur.com/a/YpFG8IN

I’ve fully embraced the "Vibe Coding" life. Using Claude Code + Antigravity, I can scaffold an entire app idea in minutes. But I hit a wall: the infrastructure tax. I was tired of $25/month-per-project bills just to test a simple idea.

So, I vibe-coded the backend I actually wanted. Meet Picobase.

It’s built on top of PocketBase, so the core is a battle-tested, rock-solid Go binary. I’ve just wrapped it in a CLI-first workflow designed specifically for AI agents.

  • The "Aha!" moment: As you can see in the screenshot, the CLI scaffolds your entire backend, auth routes, and UI components. No leaving the editor to click buttons in a dashboard.
  • Agent-First: Designed for Claude/Cursor to spin up tables without browser intervention.
  • Indie Pricing: I’m launching at $7 for 10 projects.
  • Zero-Config Auth: Just ask your agent to "add auth" and it scaffolds the .env and the logic instantly.

It’s the stability of PocketBase with the speed of a vibe-session.

Check it out here: picobase.app

r/AlternativeHistory EnvironmentLong4187

The corridors of the pyramids are dimensioned for a four-legged being.

Have you observed anything else in Egypt that is not suited to human dimensions?

r/programming GeneralZiltoid

The middle ground between canonical models and data mesh

This is a summary of a somewhat long article; it cuts a lot of corners due to character limits. Please check the article for more info.

Some years ago I worked with a scale-up that was really focused on the way they handled data in their product. At some point they started to talk about standardizing their data transfer objects, the data that flows over the API connections, in these common models. The idea was that there would be a single Invoice, User, Customer concept that they can document, standardize and share over their entire application landscape. What they were inventing is now known as a Canonical Data Model. A centralized data model that you reuse for everything. And to be fair to that team, there are companies that make this work. Especially in highly regulated environments you can see this in play for some objects. In banks or medical companies it’s not uncommon to have data contracts that need to encapsulate a ledger or medical checks.

Bounded context

While that team often talked about domain-driven design concepts (value objects, ubiquitous language), they seemed to miss the domain part. More specifically, the bounded context. A customer can mean a lot of things to a lot of different people. That is the bounded context. To a sales person, a customer is a person who buys things; to a support person, they are a person who needs help. They both have different lenses. Now if we keep following the Canonical Data Model, this Customer object will keep on growing. Every week there will be a committee that decides what fields need to be added (you cannot remove fields, as that impacts your applications). In the end you have a model that nobody owns, that has too much information for everyone, and that requires constant updating.
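A minimal sketch of the bounded-context idea (all class and field names here are hypothetical, chosen only for illustration): each domain models only the slice of "customer" it actually owns, instead of one canonical class that accumulates everyone's fields.

```python
from dataclasses import dataclass

# Hypothetical illustration: the same customer seen through two bounded contexts.
# Each domain keeps only the fields it owns and uses.

@dataclass
class SalesCustomer:
    customer_id: str
    company_name: str
    open_opportunities: int   # what a sales person cares about

@dataclass
class SupportCustomer:
    customer_id: str
    contact_email: str
    open_tickets: int         # what a support person cares about

# A canonical model would instead merge every field from every domain into
# one ever-growing Customer class that nobody fully owns.
```

The shared `customer_id` is the only thing the two views have in common; everything else stays inside its own context.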

Enter the Data Mesh

One way to solve this is data mesh. It takes the concept of bounded context as a core principle. In the context of this discussion, data mesh sees data as a product, maintained by the people in the domain. That means that a customer in the Billing domain only maintains and focuses on the Billing domain logic in the customer concept. They are responsible for the quality and the contract, but not for the representation. In practice, that means they can decide how a VAT number is structured, but not how the Sales team needs to format said model. They have no control over, or interest in, how other domains use the data. It's a very flexible design, but while Data Mesh solves the coupling problem, it introduces a new set of challenges. If I'm an analyst trying to find 'Customer Revenue,' do I look in Sales, Billing, or Marketing? The answer is usually 'all of the above.' In a pure Mesh, you don't just make multiple calls; you have to build multiple Anti-Corruption Layers just to get a simple report. It requires a high level of architectural maturity, and that is something not every low-code or legacy team possesses.
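To make the anti-corruption-layer cost concrete, here is a hedged sketch (field names and formats are invented for the example): a single "customer revenue" report has to pull from several domain data products and translate each one into the report's own terms.

```python
# Hypothetical sketch: in a pure mesh, one report needs one small
# anti-corruption layer per domain data product it touches.

def from_sales(rec: dict) -> float:
    # Assume Sales exposes deal amounts in cents.
    return rec["amount_cents"] / 100

def from_billing(rec: dict) -> float:
    # Assume Billing exposes invoiced totals as decimal strings.
    return float(rec["invoiced_total"])

def customer_revenue(sales_rows, billing_rows) -> float:
    # Two translation layers just to answer one analyst question.
    return sum(map(from_sales, sales_rows)) + sum(map(from_billing, billing_rows))
```

Every new domain the report touches adds another translation function, which is exactly the maturity burden the paragraph above describes.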

Federated Hub-and-Spoke Data Strategy

Let’s try and see if we can combine these two strategies. We centralize our data in a central lake. Yes, that is back to the CDM setup. But we split it up into federated domains. You have a base Customer table, called CustomerIdentity, that is connected to a SalesCustomer, SupportCustomer, and so on. Think of this as logical inheritance: a CustomerIdentity record that is extended by domain-specific tables through a shared primary key. When you create a new Customer in your sales tool, you trigger an event: CustomerCreate. The CustomerCreate trigger fills out the base information for the Customer (username, firstName, lastName) in the central data lake; at the same time, we store our customer (base and domain-specific data) in our local database. You do the same for delete and update events. The base information goes to the server; the domain-specific data stays in the sales tool as the single source of truth. Every night there is a sync from the domain tools to the central lake that fills out the domain tables with a delta.
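The event flow described above can be sketched as follows. This is only an illustration under stated assumptions: the in-memory dicts stand in for the lake's CustomerIdentity table and the domain's local database, and the field names mirror the ones the article mentions.

```python
# Hypothetical sketch of the CustomerCreate flow: the shared identity slice
# goes to the central lake immediately, the full record stays in the domain.

BASE_FIELDS = {"customer_id", "username", "firstName", "lastName"}

central_lake = {}   # stand-in for the CustomerIdentity table in the lake
sales_store = {}    # stand-in for the sales tool's local database

def on_customer_create(event: dict) -> None:
    cid = event["customer_id"]
    # Only the base identity fields are pushed to the lake right away...
    central_lake[cid] = {k: v for k, v in event.items() if k in BASE_FIELDS}
    # ...while the domain keeps the full record as its source of truth.
    sales_store[cid] = dict(event)
    # Domain-specific columns reach the lake later via the nightly delta sync.
```

Update and delete events would follow the same split: mutate the base slice in the lake, keep the authoritative full record in the domain store.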

Upsides

  • Freshness: you have a central data record that is at most a day old. That sounds like a lot in development terms, but it is very doable from a data and analytics point of view. If you really need to, you can always tweak the events.
  • Governance: tooling (Purview, Atlan) works well with centralized lakes. Data retention, GDPR, and data sensitivity are big things in enterprises; we can fully utilize these tools and sync them downstream.
  • Domain ownership: the domain owns the domain data. We support the bounded-context approach while still making the data discoverable and traceable outside the IT department.
  • Broad compatibility: this supports Legacy, SaaS, Serverless, and Low Code applications. You will not hook them up to the event chain, but you can connect them to the central data lake. They almost always support GraphQL. I'm personally not a fan of GraphQL, but I do see a good case here: the payloads are very controllable. We don't send over massive objects, yet we are still able to fully migrate the data from the central place.
  • Separation of concerns: our domains focus on transactions (OLTP) and our lake focuses on analytics (OLAP).

r/AbstractArt CLN47-de

Sampling_composition_173_colour_11

Sampling compositions are colouring collages of recurring geometric elements

r/Art Thib_Illustrations

Japanese cherry tree, Thib, ink, 2015

r/WouldYouRather HimikoTogaFromUSSR

WYR live in Orwell's "1984" OR in the world of Fallout games?
