Billionaire Mike Novogratz predicts liberal arts education is going to make a comeback now that technical skills are becoming less valuable due to AI
Hey everyone,
I’m building a CLI tool called Kakarot CI. It uses AI + AST analysis to generate executable unit tests directly from your source code.
The idea is simple: if AI is going to write tests, they should actually run, pass, and be measurable.
It’s currently in beta. I’m a solo dev, so there may absolutely be rough edges.
To test quality, I created a public demo repo (~500 lines across 4 files: logic, classes, and a React component) and generated every single test using Claude Opus 4.6 through Kakarot CI.
No edits.
No manual cleanup.
No human intervention.
Total time: ~10 minutes for those 4 files.
Statements: 98.96%
Branches: 97.75%
Functions: 96.29%
Lines: 100%
Coverage alone doesn’t prove strength, so I validated it with mutation testing using StrykerJS.
Overall Mutation Score: 93.48%
257 mutants killed
16 survived
StateMachine.ts — 100%
EventBus.ts — 92.39%
SearchableList.tsx — 91.45%
math.ts — 91.67%
From published studies, many human-written test suites land between 50–80% mutation score, even on mature projects. Breaking into the 90s is uncommon.
This is still a controlled demo repo. The next step is running it against a messy real-world codebase and seeing how it performs there.
If you want to verify:
• Run Stryker yourself on the demo repo
• Or delete the tests, add your own Anthropic API key, regenerate them with Kakarot CI, and run the full suite + mutation tests again
It’s free to use, you just bring your own LLM API key.
Demo repo: https://github.com/Lokardo-Labs/Kakarot-Demo
Site: https://www.kakarot.io/
Would appreciate feedback, especially from anyone who’s used mutation testing in production.
I am on the verge of giving up on this after so many failed attempts to do it with ChatGPT instructions. I am a complete noob and I am using some "realisticmixpony"-something model right now.
I have tried training a realism-based character LoRA using ChatGPT instructions and failed badly every time, with zero results and hours and hours wasted.
Can someone please give steps, settings, and inputs for each section, i.e. what to enter where?
I am on a 16GB 5060 Ti with 64GB RAM. Time is the issue, so I want to do this locally.
Hi everyone,
I’m building an outbound AI voice agent specifically for the French market. I'm hitting a wall regarding telephony costs and latency, and I’m looking for advice from anyone who has deployed voice AI in Europe.
Most US-based AI platforms (Retell, Vapi, Bland) default to Twilio or Telnyx for telephony. While great for the US, their termination rates to French mobile numbers (+33 6 / +33 7) are brutal compared to local providers (like OVHcloud, Sewan, or even Skype Connect).
My questions:
- If I were to use a platform like Retell or Vapi, what is the best option to connect my agent to a French number for better latency and minimal cost?
- If I were to build the agent from scratch (e.g. with LiveKit), what is the best option to connect my agent to a French number for better latency and minimal cost?
Hey everyone,
I know the sub is full of spam and generic advice, so I’m trying to share something real that actually helped me push my SaaS from stuck-at-$4k/mo to consistently scaling past $25k+.
If you’re building or growing right now, read it all the way through. These aren’t shiny new hacks, but when you actually do them and iterate, they compound hard.
Just a quick note: this post is written by me, but polished a bit for clarity and flow, because my raw typing is usually a mess.
Most of this you’ve probably seen before. The magic is in actually implementing it, testing it, and tweaking every week instead of just bookmarking and forgetting.
If this hit home or gave you even one thing to fix in your own project, I’d really appreciate an upvote or a quick comment (even just “saving” or “#2 hurts because it’s true”). It helps me know whether putting real effort into posts like this is worth it.
Thanks for reading all the way.
Just click the "LoRA" button and select your .safetensors file.
LoRAs trained both on Z-Image Turbo and Z-Image Base are supported.
SageAttention2 acceleration is also part of this release.
Download at: https://github.com/SamuelTallet/ZPix
As always, your feedback is welcome!
Like, in the end I still have to update the chat context in the master context prompt, which I also do in a normal chat. I also have to upload documents. There's no repository that shows me all the documents in one place. What's the use of Claude Projects?
Hi r/ClaudeAI,
I’ve been using Claude Code heavily across my laptop + servers, and I realized GitHub measures commit activity but I want to track how well I use Claude Code.
So I built ccrank, a small tool that turns your Claude Code usage (ccusage JSON) into a simple, comparable set of metrics.
What it does today
• Upload a ccusage JSON export
See:
• Total tokens used (AI input)
• Estimated API cost (rough approximation)
• Active days (consistency)
• A basic rank/title ladder (mostly for fun)
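For anyone curious what's under the hood, the core computation is deliberately simple. Here's a minimal sketch of deriving these metrics from a ccusage-style daily export — to be clear, the field names here are assumptions based on typical exports, so adjust to whatever yours contains:

```python
import json

# Load a ccusage-style JSON export (field names are assumptions,
# adjust to what your export actually contains).
with open("ccusage.json") as f:
    data = json.load(f)

daily = data.get("daily", [])
total_tokens = sum(d.get("totalTokens", 0) for d in daily)
total_cost = sum(d.get("totalCost", 0.0) for d in daily)  # rough estimate
active_days = sum(1 for d in daily if d.get("totalTokens", 0) > 0)

print(f"tokens={total_tokens} cost=${total_cost:.2f} active_days={active_days}")
```

Everything else (ranks, trend views, multi-machine merging) is layered on top of sums like these.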
Useful bits
• Multi-machine support: you can combine usage from multiple machines
• “Time travel” views: daily/weekly/monthly snapshots of usage trends
• Shareable profile card (so it’s not just screenshots of terminal output)
How Claude/Claude Code helped build this
• I used Claude Code to scaffold the app structure, iterate UI quickly, and refactor the data pipeline
• The biggest help was fast iteration on edge cases in usage logs (missing fields, time bucketing, merging multiple exports)
What I’m trying to learn from the community
1. What metrics would you actually want here beyond tokens/cost/days active?
2. If you use Claude Code daily, what’s a good proxy for “output” that isn’t easy to game?
3. Any pitfalls you’ve hit when tracking usage across machines / accounts?
Try it (free)
• Public leaderboard + upload: ccrank.dev
• Self-host (MIT): github.com/makash/claude-leaderboard-using-ccusage
If this kind of measurement is useful, I’m considering a next step where you can optionally log “what shipped” and map that to tokens (ROI per feature). But I’m deliberately keeping v1 simple and would rather learn what’s valuable before adding more.
Happy to answer questions or share implementation details if folks are curious.
Hi,
Has anyone had any success with connecting an Arduino to an AI agent for machine learning? I’m looking to have the AI manage the irrigation of a garden.
Any resources would be great.
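Not a full answer, but the common starting point looks like this: the Arduino streams sensor readings over serial, and a host-side script (which could defer the decision to whatever model or agent you like) decides when to water. A hedged sketch with pyserial — the port name, threshold, and message format are all assumptions to adapt to your board and sketch:

```python
import serial  # pip install pyserial

# Port and line format are assumptions; adjust to your board and sketch.
port = serial.Serial("/dev/ttyACM0", 9600, timeout=5)

MOISTURE_THRESHOLD = 400  # calibrate against your sensor

while True:
    line = port.readline().decode(errors="ignore").strip()
    if not line.isdigit():
        continue
    moisture = int(line)
    # An AI agent could replace this rule with a smarter decision
    # (weather forecast, learned watering schedule, etc.).
    if moisture < MOISTURE_THRESHOLD:
        port.write(b"WATER_ON\n")   # command your Arduino sketch understands
    else:
        port.write(b"WATER_OFF\n")
```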
Built an automated Instagram posting pipeline using n8n, Flux (via Cloudflare API) and Gemini. The workflow takes ideas from an Excel sheet, generates prompts, creates 2-3 AI images and posts directly to Instagram.
The Stack:
How It Works:
Extended this to X and LinkedIn with platform-specific formatting rules (limited testing on these two platforms though).
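For anyone who wants to see the image step outside n8n, the Workers AI call for Flux is roughly the following. Treat it as a sketch: the account ID, token, and exact response field should be verified against Cloudflare's current docs.

```python
import base64
import requests

ACCOUNT_ID = "your_account_id"   # placeholder
API_TOKEN = "your_api_token"     # placeholder

url = (f"https://api.cloudflare.com/client/v4/accounts/"
       f"{ACCOUNT_ID}/ai/run/@cf/black-forest-labs/flux-1-schnell")
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"prompt": "flat-lay product photo, soft morning light"},
    timeout=120,
)
resp.raise_for_status()
# flux-1-schnell returns a base64-encoded image in the JSON body
# (verify the exact field name against current docs).
image_b64 = resp.json()["result"]["image"]
with open("post.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```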
What Worked:
The Unexpected Problem:
Around week 3, realized the team was still spending time refining outputs. Not because the workflow failed, but because:
The problem wasn't speed or accuracy. It was creative judgment.
Automation handles deterministic tasks brilliantly. But the last 20% (brand voice, emotional tone, "does this feel like us?") still needed human review.
Current Setup:
Changed to human-in-the-loop:
Quality improved dramatically. Team focuses on strategic decisions, automation handles execution.
What I'm Testing Now:
Experimenting with OpenClaw (Clawdbot) as an alternative approach. Unlike n8n's deterministic workflows, you give it context and it decides actions autonomously. Running it locally in a sandboxed environment with a local model.
Early days, but interesting to compare:
Something I noticed after a few months of using Claude for most of our frontend code: it has a voice. You've heard it. "We're thrilled to announce." "Don't hesitate to reach out." "This groundbreaking, seamless experience." It sounds professional the way a hotel lobby smells professional. Manufactured and forgettable.
The thing is, this stuff passes code review. It compiles, tests pass, the PR looks reasonable. Then your customer gets an email that reads like it was written by committee.
So we started writing ESLint rules to catch it. Not code quality rules. Copy quality and design consistency rules. We have four now and they've changed how we work with Claude.
The first one, humanize-email, maintains a ban list of AI phrases. "Don't hesitate to reach out", "we're thrilled", "groundbreaking", "seamless", "delve", "leveraging", "warm regards", the whole greatest hits album. The list started from Wikipedia's "Signs of AI writing" page and grew every time we caught something in our own sent emails. It also forces our plain-text email layout for personal emails because Claude always reaches for the branded template with logos and footers. And it limits em dashes to 2 per file because Claude overuses them like someone who just discovered punctuation.
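The rule itself is ESLint/JavaScript, but conceptually it's just a ban list plus a counter. Here's the same check sketched as a standalone Python script, not our actual rule — phrase list abbreviated and thresholds illustrative:

```python
import re
import sys

# Abbreviated ban list; the real rule has many more entries.
BANNED = [
    "don't hesitate to reach out",
    "we're thrilled",
    "groundbreaking",
    "seamless",
    "delve",
    "warm regards",
]
MAX_EM_DASHES = 2

def check(path: str) -> list[str]:
    text = open(path, encoding="utf-8").read()
    problems = [f"banned phrase: {p!r}" for p in BANNED
                if re.search(re.escape(p), text, re.IGNORECASE)]
    if text.count("\u2014") > MAX_EM_DASHES:  # U+2014 is the em dash
        problems.append("too many em dashes")
    return problems

for path in sys.argv[1:]:
    for problem in check(path):
        print(f"{path}: {problem}")
```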
The second rule bans raw Tailwind color classes like bg-white dark:bg-gray-800 and forces semantic tokens like surface-primary. Claude doesn't know your design system. It knows Tailwind defaults. So you get six classes to express what should be one concept. With semantic tokens, dark mode works automatically, and theming is one CSS variable change instead of find-and-replace across 400 files.
Third rule auto-fixes mixed curly and straight quotes in JSX. Small thing, but Claude outputs curly quotes and your keyboard types straight quotes. Both in the same UI looks sloppy. Auto-fixable, zero effort.
Fourth rule blocks hover:-translate-y-1 on interactive elements. Claude loves this on cards. Looks good in a demo. Then a real user approaches the card from below, the card jumps away from their cursor, hover drops, card falls back, cursor re-enters, it lifts again. Jittery loop. hover:shadow-lg gives the same lift feeling without moving the hit area.
The interesting part is these rules teach Claude too. The error messages become context for the next generation, so it stops making the same mistakes after a few rounds.
All four rules are open source: https://jw.hn/eslint-copy-design-quality
After 100 hours of pure debugging, prompting, and a bit of coding, I finally got a working landing page out for my app Link Up, and we will be setting up an email waitlist soon!
So excited to launch, hope you guys are too. Thank you!
I've got a Pentium G6400 with 64 GB RAM and a 2060.
Screenshot of the first graph in the tool, showing an example with reference images and two LoRA tests. On the right there's the control panel, where you can filter the LoRAs or cycle through them. The second image shows the full set of graphs available at the moment.
I’m a software engineer and long-time roleplay fan, and I’ve been building a local-first AI roleplay desktop app for myself. I’m considering refining it into something more polished and usable.
The core idea:
• Fully local (no accounts, no cloud storage, no tracking)
• You choose which model to use
• Clean UI designed specifically for immersive roleplay
• Highly customizable characters and scenario setup
• Optional structured scene formatting for more consistent dialogue and character behavior
• Fantasy/world-building friendly
• Experimental-friendly — easily switch models and tweak behavior
Privacy note:
The app does not collect or transmit your data. Your characters, conversations, and settings stay on your machine.
Everything is designed so you stay in control.
The trade-off is that performance depends on your hardware (GPU/CPU and model size).
Before I invest more time polishing it:
Would you personally use something like this?
What features would make it meaningfully better than current options?
If there's enough interest, I may open a small private testing group. Please comment on the post, since I am a Reddit newbie - haha, I know, silly since I am a software engineer, but alas.
I feel the panic is INSANELY overdone, because AI is not creating imagination; it is only rearranging stories that humans already made. This is a half-vent, because people aren't thinking critically after the likes of the Seed Dance 2.0 release.
Won’t you get bored seeing the same characters mixing with other similar characters and the same SHOW overall???
You might enjoy a custom Seinfeld episode with exactly what you want where Jerry decides to play fortnite or something... or watching Batman in perfect AI-4K generated battle with Voldemort the first few times, but after a while it becomes ..boring.
This is like remixing music where at first the remix sounds good, then another remix comes out, and eventually you just want a brand new song.
You know all these characters from your established memory of watching these shows. This isn't practical at all for people who have no idea what the hell a "Mr. White" is. A 10-year-old won't be amused by some random AI remix of Charlie Brown or whatever else is for kids these days. This means there's a requirement to familiarize yourself with the original material before even figuring out why this crazy bald chemist guy is dancing with Spider-Man in AI.
Even if Hollywood feels lackluster right now, that has more to do with the economy and the general enshittification of everything, not because human creativity ran out. If you want real movies with heart and new ideas, they already exist in indie films and smaller studios. Hollywood is in a weak spot this decade, but replacing it with AI would not fix the problem; it would only make stories feel more empty. People are overhyping this AI video generation stuff.
Everyone seems to forget there's an entire generation of people that want something NEW. Our great-great-grandparents grew up watching Charlie Chaplin dancing around, and the generations after wanted something different. If AI had somehow existed in that era, we'd have black-and-white Charlie Chaplin Twilight Zone remixes. THIS WILL BECOME BORING. There's no new imagination involved. It would never make and produce a Marvel superheroes scenario from all the film and material it acquired. It's all regurgitated trash.
Planning trips often feels overwhelming—comparing hotels, reading countless reviews, and juggling itineraries can turn a fun experience into a chore. Here’s a simple workflow to make travel planning manageable today:
Example checklist for hotel comparison:
- Price per night ($150–$300)
- Guest rating (4.0+)
- Breakfast included (yes/no)
- Free cancellation (yes/no)
- Distance from city center (<2 miles)
Common pitfalls:
- Overloading the table with irrelevant details—stick to what matters most.
- Ignoring cancellation policies, which can add costly surprises.
If you want to simplify this process further, consider michelinkeyhotels—a tool that aggregates distinguished and boutique hotels from sources like the MICHELIN Guide, helping you filter and compare luxury stays effectively. Feel free to adapt this approach with or without specialized tools to keep your travel planning friction-free.
Hi
I built this project for myself because I wanted full control over what my personal assistant does and the ability to modify it quickly whenever I need to. I decided to share it on GitHub; here's the link: https://github.com/emanueleielo/ciana-parrot
If you find it useful, leave a star or some feedback.
Hi all,
I’m planning a smart-home control + monitoring system and I’d like some advice before I choose the main ecosystem.
This is a new build and I intentionally ran a lot of wired cabling so the most important sensors don’t have to rely only on wireless:
Current wiring prepared:
The house also has a PV system built on a SolaX inverter — is it possible to integrate this into the same smart-home ecosystem?
Of course the whole house also has a WiFi mesh, but my goal is to keep critical sensors wired if possible.
I want to connect all of this into one unified system that can:
I’m looking for real-world experiences, especially from people who started with wired sensors.
Thanks!
Another LTX-2 experiment, this time a lip-sync video from close-up and full-body shots (not pleased with these ones), rendered locally on an RTX 3090 with 96GB DDR4 RAM.
3 main lipsync segments of 30 seconds each, each generated separately with audio-driven motion, plus several short transition clips.
Everything was rendered at 1080p output with 8 steps.
LoRA stacking was similar to my previous tests
Primary workflow used (Audio Sync + I2V):
https://github.com/RageCat73/RCWorkflows/blob/main/LTX-2-Audio-Sync-Image2Video-Workflows/011426-LTX2-AudioSync-i2v-Ver2.json
Image-to-Video LoRA:
https://huggingface.co/MachineDelusions/LTX-2_Image2Video_Adapter_LoRa/blob/main/LTX-2-Image2Vid-Adapter.safetensors
Detailer LoRA:
https://huggingface.co/Lightricks/LTX-2-19b-IC-LoRA-Detailer/tree/main
Camera Control (Jib-Up):
https://huggingface.co/Lightricks/LTX-2-19b-LoRA-Camera-Control-Jib-Up
Camera Control (Static):
https://huggingface.co/Lightricks/LTX-2-19b-LoRA-Camera-Control-Static
Editing was done in DaVinci Resolve.
The input is rated 5V 100mA max, but a charger, for example, supplies 5V 2A.
I was curious to know how much you guys are earning working in this industry.
If you're comfortable, do let me know: what kind of jobs do you do?
Full time / Freelance / Business
Your insights will help us understand what realistic expectations we should keep while working in this field (with AI).
I’m working on a personal project where I’m building an AI agent with long-term memory. Right now I’m using Airtable to store memories, each with a created timestamp and a user ID (from Telegram).
The agent decides when to store something as long-term memory if it seems useful for better future replies. That part works fine, but I’m running into an issue with time-biased memories.
Example: I might say, “This month I’m using my snack budget for traveling.” The agent stores this as long-term memory. The problem is that after the month ends, this info is no longer relevant, but it still stays in memory and gets fed back into context, which can hurt response quality.
I currently have another agent that tries to clean up duplicates and time-biased memories, but I feel this is not an efficient or scalable approach.
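One pattern worth considering before reaching for a cleanup agent: have the writing agent attach an explicit expiry to time-scoped memories, then filter at read time. A minimal sketch follows; the schema is my own illustration, not Airtable-specific:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Memory:
    user_id: str
    text: str
    created: datetime
    expires: datetime | None = None  # None = durable memory

def active_memories(memories: list[Memory], now: datetime) -> list[Memory]:
    """Only feed non-expired memories back into the agent's context."""
    return [m for m in memories if m.expires is None or m.expires > now]

# "This month I'm using my snack budget for traveling" -> scoped to the month.
m = Memory("tg:123", "snack budget goes to travel this month",
           created=datetime(2026, 2, 10), expires=datetime(2026, 3, 1))
print(active_memories([m], datetime(2026, 3, 2)))  # [] -- expired, dropped
```

The same LLM call that decides to store a memory can also be asked to emit an expires value ("end of this month"), so the expiry costs nothing extra.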
How are you handling:
Long time lurker, first time poster!!
I'm in the process of setting up my first Home Assistant system, and I'm trying to integrate my APSystems solar (ECU-C) and Emporia Vue 3, but it looks like Home Assistant 17.1 on a Pi 5 doesn't like it very much.
I’ve searched this group a bunch and have been unable to find info on how to do it. Any insight would be appreciated!
I'm playing around with LM Studio and an uncensored GPT model, and it barely understands what a prompt for AI art/video even is. It gets bogged down by formatting and outlines and all manner of rubbish.
How's your experience? Looking for anecdotes, not obnoxious hand-holding. Thanks.
I’m trying to convert an old 49-key keyboard into a USB MIDI controller using an Arduino Pro Micro.
Here are the schematic of the keys, a picture of the scan matrix I found using a multimeter, and lastly the diagram of the circuit (assume the Nano is a Pro Micro).
(Also, yes, I used AI to write this post; I'm dyslexic.)
The keyboard uses a 15-pin ribbon cable, and after reverse engineering it with a multimeter, I confirmed it is a standard matrix:
I mapped every key and confirmed the full matrix layout and MIDI note assignments (C2 to C6).
Initially, I planned to connect the matrix directly to the Pro Micro, but to reduce pin usage and simplify wiring, I introduced a CD74HC4067 16-channel multiplexer.
My setup is:
First, I tested the mux using a single button connected between C14 and GND, and that worked perfectly. Arduino could detect presses and send MIDI.
Then I tested reading all mux channels individually, assigning one MIDI note per channel — also worked perfectly.
The problem started when trying to scan the actual keyboard matrix.
My scan logic:
Expected behavior: pressing a key should connect two mux channels and allow detection.
Actual result: no keys are detected at all.
I verified:
But matrix scanning via a single CD74HC4067 is not detecting connections between lines.
Even the light is not blinking when I'm pressing keys.
I have finished my first DIY project based on the Raspberry Pi PICO 2W and a 5.65‑inch ePaper display. The goal of this system is to show the weather forecast for the current day (see the left side of the screen) and for the next five days (see the right side of the screen). I forked the software from micropython‑ePaperWeatherStation (https://github.com/frederik-andersen/micropython-ePaperWeatherStation) and improved several things. My version of the software can be found at https://github.com/Damov/micropython-ePaperWeatherStation.
The system is based on a Raspberry Pi PICO 2W, a Waveshare 5.65‑inch E‑Paper Display Module, and an Adafruit PowerBoost 1000C combined with a 3.7V LiPo battery for power management. On the GitHub page, it is documented how to wire these components. I will also publish the STL and 3MF files for 3D printing the frame at some point.
This is version 0.1. What I personally don’t like about it is the fact that you won’t immediately notice when the battery is empty, because the status LEDs are on the back of the Adafruit 1000C, and the screen (as it is an ePaper display) will always show the last image. Therefore, I am planning to replace the Adafruit PowerBoost 1000C with a Waveshare UPS Module for Raspberry Pi Pico, which provides an uninterruptible power supply and battery status monitoring via I2C. This module makes it possible to show the battery status on the Pico through I2C, and therefore it can be displayed directly on the screen.
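For reference, reading the battery voltage from a UPS module like that is only a few lines of MicroPython, since the Waveshare board exposes an INA219 over I2C. A sketch — the I2C pins and device address are assumptions, so verify them against your board's documentation:

```python
from machine import I2C, Pin

# Pins and address are assumptions -- check your UPS module's docs.
i2c = I2C(0, sda=Pin(4), scl=Pin(5))
INA219_ADDR = 0x43
BUS_VOLTAGE_REG = 0x02

def battery_voltage() -> float:
    raw = i2c.readfrom_mem(INA219_ADDR, BUS_VOLTAGE_REG, 2)
    value = (raw[0] << 8) | raw[1]
    return (value >> 3) * 0.004  # INA219 bus voltage LSB = 4 mV

print("battery: %.2f V" % battery_voltage())
```

That value can then be drawn into a corner of the ePaper layout on each refresh.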
Hi
Is anyone printing their generated art/images on t-shirts, wall frames, etc.?
If you do, how do you go about it?
I just built a workflow that automatically identifies potential leads from Reddit posts, and I'm pretty excited about the results.
What it does:
The flow:
Challenges I solved:
The workflow has been running stably for 2 weeks now and has completely changed my lead generation process. Happy to share more details about specific nodes or the logic if anyone's interested!
Anyone else working on similar Reddit automation projects?
Btw, I also made the same thing for LinkedIn and got 100M+ leads. If this post gets 100 upvotes I'll share the workflow. Feel free to check out the 102M+ leads I have at leadadydotcom.
This video clarifies OpenAI's recent press release regarding GPT-5.2 Pro's "new result in theoretical physics," stating that the claims are overhyped and misleading (0:00).
The speaker, who has a physics degree, explains that the AI did not discover new laws of physics (0:15). Instead, human authors first developed complex physics equations, which were then given to GPT-5.2 Pro. The AI spent 12 hours simplifying these existing complicated expressions into a more concise form (1:10).
Key points from the video include:
- Simplification, not discovery: the AI's achievement is in simplifying already-known equations, which could have been done manually or with other software like Mathematica, albeit with more time and effort (1:40).
- AI as a tool: the speaker emphasizes that AI serves as a valuable tool for physicists by making complex mathematical derivations faster and simpler (2:31).
- Misleading headlines: the video criticizes OpenAI's press release for using terms like "derived a new result," which can be misinterpreted by the public as a groundbreaking discovery comparable to Newton's laws (3:18). This leads to exaggerated headlines that fail to accurately represent the AI's actual contribution (4:03).
- "Internal model": the video notes that OpenAI used a specialized "internal model" for this task, suggesting it wasn't just a standard ChatGPT application that achieved this result (4:36).
The speaker concludes by urging viewers to be cautious of sensationalized headlines and to understand the actual technical accomplishment (4:55).
Hey there, can anyone help me with how to create images/videos like this? With radiating color and a clear grain effect?
👋 This is a bit of a different take on programming with AI: instead of going further in the vibecoding direction, I'll try to use AI to get into the "zone", into the flow state. I'd love to hear other ideas for how AI can be used in a way that empowers us instead of taking away. How can AI leave the hard parts to us, but give us better focus on them?
Found it in the drawer of my oven after just moving into a new place
I am working on a programming language called Ypsilon Script for programming microcontrollers and dev boards. It features a modern, beginner-friendly syntax and compiles to Arduino C++. Be sure to check it out on my GitHub: https://github.com/ycharfi09/ypsilon-script
I inherited this large wooden box with several small drawers of tools. Are all of these tools related or is this just a wooden tool box? Also, the box has a strong medicinal smell when you open it.
Thanks for your help!
Taken on a Seestar S50 using 3 hr of integration.
Edited in PS Express.
New experiment, involving a custom FLUX-2 LoRA, some Python, manual edits, and post-fx. Hope you guys enjoy it. ♥
Music by myself.
More experiments are on my YouTube channel and Instagram.
Our university life is covered and I don’t understand why!
In Katie's most recent core class (2/13/26), she says that if she has live back-to-back classes, she always programs them together so they can be stackable.
I always wondered if instructors did this, but it's helpful to know if you are looking for a good stack and not sure what to pair it with. She also has an upper body class from that day that goes along with the core class.
Can someone remove the glare on my BFs glasses? No other edits. TIA
I was playing around with OpenClaw. Installed it on Raspberry Pi, used it to do a work task in minutes rather than days. It's magic, especially paired with Opus 4.6
Problem is the API is still too expensive; you can burn through $100 a day easy.
Then Cowork for Windows dropped. Apart from a few networking connectivity bugs, it works a treat. I was already spending quite a bit on the API, so I went for the top-tier x20 plan and so far have not been hitting any of the limits; it's roughly 10x cheaper.
I also have Gemini and Codex subs so why not use those to save Opus work, but Cowork can't directly spawn agents outside its environment... or can it?
I asked it to create in and out .md files with instructions for Codex and then set up a PowerShell script to run on file change. Et voilà, now I have Cowork spawning and orchestrating Codex agents doing real work at what I would say is orders of magnitude lower cost.
From there onward all kinds of automation is trivial.
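If anyone wants to replicate the glue without PowerShell, the same idea is a few lines of polling in Python. The file names and the codex invocation below are illustrative assumptions, not my exact setup — check `codex --help` for the real CLI interface:

```python
import subprocess
import time
from pathlib import Path

IN_FILE = Path("codex_in.md")    # Cowork writes instructions here (name assumed)
OUT_FILE = Path("codex_out.md")  # Codex results land here (name assumed)

last_mtime = 0.0
while True:
    if IN_FILE.exists() and IN_FILE.stat().st_mtime > last_mtime:
        last_mtime = IN_FILE.stat().st_mtime
        task = IN_FILE.read_text(encoding="utf-8")
        # Hand the task to the Codex CLI; flags are illustrative.
        result = subprocess.run(["codex", "exec", task],
                                capture_output=True, text=True)
        OUT_FILE.write_text(result.stdout, encoding="utf-8")
    time.sleep(2)  # simple polling avoids a file-watcher dependency
```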
Anyone else tried it?
Edit: Typo had 4.5 instead of 4.6
Ryzen 5 1600, Pentium G6400, i7-2600, i3-6100 paired with 4x Nvidia 2060s. Will I encounter a bottleneck? The CPUs don't support PCIe 4.
I've spent the last few weeks researching how to build a personal AI-powered knowledge system and wanted to share where I landed and get feedback before I commit to building it.
The Problem
I consume a lot of AI content: ~20 YouTube channels, ~10 podcasts, ~8 newsletters, plus papers and articles. The problem isn't finding information, it's that insights get buried. Speaker A says something on Monday that directly contradicts what Speaker B said last week, and I only notice if I happen to remember both. Trends emerge across sources but nobody connects them for me.
I want a system that:
What I tried first (and why I abandoned it)
I built a multi-agent system using Letta/MemGPT with a Telegram bot, a Neo4j knowledge graph, and a meta-learning layer that was supposed to optimize agent strategies over time.
The architecture I'm converging on
After cross-referencing all the research, here's the stack:
RSS Feeds (YT/Podcasts/Newsletters)
→ n8n (orchestration, scheduling, routing)
→ youtube-transcript-api / yt-dlp / faster-whisper (transcription)
→ Fabric CLI extract_wisdom (structured insight extraction)
→ BGE-M3 embeddings → pgvector (semantic search)
→ LightRAG + Neo4j (knowledge graph + GraphRAG)
→ Scheduled analysis jobs (trend detection, contradiction candidates)
→ Telegram bot (query interface + automated briefings)
Key decisions and why:
- LightRAG over Microsoft GraphRAG - incremental updates (no full re-index), native Ollama support, ~6000x cheaper at query time, EMNLP 2025 accepted. The tradeoff: it's only ~6 months old.
- pgvector + Neo4j (not either/or) - vectors for fast similarity search, graph for typed relationships (SUPPORTS, CONTRADICTS, SUPERSEDES). Pure vector RAG can't detect logical contradictions because "scaling laws are dead" and "scaling laws are alive" are *semantically close*.
- Fabric CLI - this one surprised me. 100+ crowdsourced prompt patterns as CLI commands. `extract_wisdom` turns a raw transcript into structured insights instantly. Eliminates prompt engineering for extraction tasks.
- n8n over custom Python orchestration - I need something I won't abandon after the initial build phase. Visual workflows I can debug at a glance.
- faster-whisper (large-v3-turbo, INT8) for podcast transcription - 4x faster than vanilla Whisper, ~3GB RAM, a 2h podcast transcribes in ~40min on CPU.
- No multi-agent framework - single well-prompted pipelines beat unreliable agent chains for this use case. Proactive features come from n8n cron jobs, not autonomous agents.
- Contradiction detection as a 2-stage pipeline - Stage 1: deterministic candidate filtering (same entity + high embedding similarity + different sources). Stage 2: LLM/NLI classification only on candidates. This avoids the "everything contradicts everything" spam problem. (A concrete sketch follows this list.)
- API fallback for analysis steps - local Qwen 14B handles summarization fine, but contradiction scoring needs a stronger model. Budget ~$25/mo for API calls on pre-filtered candidates only.
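To make the two-stage pipeline concrete, here's the shape of it in Python. This is a sketch under assumptions (toy schema, random embeddings, stubbed classifier), not production code:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def candidate_pairs(claims, sim_threshold=0.85):
    """Stage 1: cheap deterministic filter -- same entity, different
    sources, high embedding similarity."""
    for i, a in enumerate(claims):
        for b in claims[i + 1:]:
            if (a["entity"] == b["entity"]
                    and a["source"] != b["source"]
                    and cosine(a["emb"], b["emb"]) >= sim_threshold):
                yield a, b

def classify(a, b) -> str:
    """Stage 2: stub for the LLM/NLI call, run only on the few survivors.
    Would return one of: SUPPORTS / CONTRADICTS / UNRELATED."""
    raise NotImplementedError

claims = [
    {"entity": "scaling laws", "source": "podcast_A",
     "text": "scaling laws are dead", "emb": np.random.rand(8)},
    {"entity": "scaling laws", "source": "newsletter_B",
     "text": "scaling laws are alive", "emb": np.random.rand(8)},
]
print(list(candidate_pairs(claims, sim_threshold=0.0)))  # both pass stage 1
```

The point of stage 1 is that it is deterministic and nearly free; the expensive model only ever sees pairs that already look suspicious.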
What I'm less sure about
What I'd love to hear
- Has anyone built something similar? What worked, what didn't?
- If you're running LightRAG - how's the experience with local LLMs?
- Any tools I'm missing? Especially for the "proactive intelligence" layer (system alerts you without being asked).
- Is the contradiction detection pipeline realistic, or am I still overcomplicating things?
- For those running faster-whisper on CPU-only servers: what's your real-world throughput with multiple podcasts queued?
Hardware: VPS with 47GB RAM, multi-core CPU, no GPU. Already running Docker, Ollama (Qwen 14B), Neo4j, PostgreSQL+pgvector.
Happy to share more details on any part of the architecture. This is a solo project so "will I actually maintain this in 3 months?" is my #1 design constraint.
Unsure if the flair is correct!
Hey all,
I use Claude Pro daily as my main driver, along with ChatGPT and the MiniMax. And I kept running into the same annoying thing: I'd be deep into a Claude coding session and suddenly hit the 5-hour usage cap. No warning, no way to know I was at 90% before it happened unless I had the usage dashboard open in a browser window.
The thing is - if you time it right, you can kick start a 5h window when you don't need it, so you have ~90% remaining at the end and a fresh one ready to go. I was juggling this across three providers, doing mental math to keep windows from overlapping - so I could make the most out of my very limited pro subscription!
I got tired of mentally tracking windows and checking dashboards, so I started building an iOS app to just show me where I stand across all my AI subscriptions in one place, and most importantly, with notifications and easy-to-glance widgets.
I call it AI Usage Tracker.
Claude Opus helped me kick-start it (mostly planning and reviewing, while the coding was a mix of multiple providers), and the app made it possible to get the most out of the 5h slots!
A few screenshots.
This app is meant for people who use AI heavily. Multiple accounts per provider is on the roadmap.
What it does right now:
Being honest about the state of things: this is early. It works and I use it daily, but some providers are more polished than others. Claude and ChatGPT tracking are solid. OpenAI API tracks tokens well but cost reporting needs more work.
There will be bugs - that's exactly why I need testers.
Everything runs on-device. No accounts, no servers, no tracking. Credentials stored in iOS Keychain (encrypted on device).
I'm limiting beta spots so I can actually keep up with feedback. If you deal with this problem too - juggling limits across AI subscriptions - sign up here and I'll send TestFlight invites in batches:
👉 https://forms.gle/GFHj3sYyrGXmHVag6
Happy to answer questions.
A few months ago, I was drowning in a sea of AI prompts, struggling to organize and make sense of them. It felt like every time I thought I had a grip, another wave of disorganization hit. If you've ever felt like you're just endlessly curating without any real community engagement, keep reading.
I was so focused on creating prompts that I overlooked the potential of sharing what I already knew. Turns out, sharing my expertise not only helped others but also brought unexpected benefits to me. By putting my prompts out there, I started attracting feedback and ideas I hadn't considered before. The community engagement was way more valuable than I imagined.
The fix was stupidly simple - just start sharing. I know, it sounds obvious, but I sat on my prompts for months, thinking they weren't ready or good enough. But by sharing them, I realized there was a whole community ready to collaborate and improve them.
I've found that engaging with others, discussing prompts, and getting feedback saved me a ton of time and improved my work significantly. If you're working on something similar, don't wait until it's perfect. Share what you have and let the community help you refine it. This approach changed everything for me, and I'm hoping it can do the same for you.
Does anyone have experience creating a successful app with Claude? What worked, what didn't? Any advice? I'm at the beginning stages, and CoWork has really helped me flesh out my concept for an app, mission statement, pricing, and next steps.
Hey 👋
I’m building an AI-native, agentic SEO agency that lives inside tools like WordPress, GitHub, Framer, Webflow, etc — not a dashboard, but something that actually runs SEO end-to-end.
I’m a non-technical founder with strong product + GTM + logic background, but I’m looking for a technical co-founder to own architecture, agents, infra, and long-term code quality.
Early stage, bootstrapped, equity-heavy.
If you’re into agentic systems, OSS vibes, and want to bootstrap to a 100M+ exit, let’s talk.
DMs open 🚀
Background: industrial engineer, 2x founder, first company at 2M+ ARR.
Hi everyone,
I created my first app for Pinterest users (of course, built with AI only). Did some promotion here and there, but I'm not getting any users. I don't know the exact reason.
Need your suggestions on getting my first paid user.
Here is the tool genpin.co
I built a real-life Simon Says game that physically slaps you if you fail.
The idea was simple: camera watches you, calls out poses, and if you mess up (or if Simon didn’t say), a giant mechanical hand smacks you.
Under the hood:
Hardest parts:
I’d love feedback on:
Happy to share code snippets or talk through the architecture if anyone’s curious.
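As a taste of the camera side, a pose check like "hands above head" is a handful of lines with MediaPipe. This is an illustrative sketch, not the project's actual code:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)

with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    ok, frame = cap.read()
    if ok:
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            nose = lm[mp_pose.PoseLandmark.NOSE]
            lw = lm[mp_pose.PoseLandmark.LEFT_WRIST]
            rw = lm[mp_pose.PoseLandmark.RIGHT_WRIST]
            # Image y grows downward, so "above" means smaller y.
            hands_up = lw.y < nose.y and rw.y < nose.y
            print("hands up!" if hands_up else "hands down")
cap.release()
```

From there, a game loop just compares the detected pose against the called-out pose and, on failure, fires whatever actuator drives the hand.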
I built this website as a student side project, with around a dozen tools all in one spot so you can save time by not navigating away. There are grade calculators, timezone converters, stopwatches, timers, metronomes, etc. It's also offered in 4 languages: English, Spanish, German, and Mandarin. Any feedback is greatly appreciated, and if you're interested in offering testimonials to add to the website, please let me know.
So I’ve been pretty deep into automation for a while now, basically lived in Zapier and Make for the last couple of years. It worked fine for the simple stuff—syncing leads to a CRM, posting to Slack, the usual. But lately, I’ve been trying to push it into actual marketing execution, and honestly, it feels like I’m trying to build a skyscraper with Legos.
The problem I keep running into is that marketing isn't a straight line. If I’m running a campaign and the search environment shifts or a competitor drops a new feature, a linear workflow just... sits there. It does exactly what it's told, even if the context has changed.
I’ve been experimenting with moving away from "If This Then That" and trying to set up more of a "workforce" vibe. Like, having one agent handle the SEO/search visibility side, another watching social sentiment, and a third actually adjusting the content. The idea is they’re supposed to talk to each other and adapt.
It’s been a bit of a nightmare tbh. Getting them to share context without just dumping the entire history into a prompt and hitting token limits is tough. I tried building a shared "memory" layer, but it’s still kinda clunky and they sometimes get into these weird feedback loops where they just agree with each other until the credits run out.
I'm really curious if anyone here has successfully moved past the "trigger-action" mindset into something more collaborative for high-level tasks. Are you guys using specific frameworks for the handoffs, or is everyone just winging it with custom scripts? I feel like I'm close to something that works, but the coordination part is still so brittle.
I've been hitting credit limits on Cursor/Copilot pretty regularly. Expensive models eat through credits fast when you're doing full codebase analysis.
Thinking about self-hosting DeepSeek V3 or Qwen for coding. Has anyone set this up successfully?
Main questions:
- Performance compared to Claude/GPT-4 for code generation?
- Context window handling for large codebases?
- GPU requirements for decent inference speed?
- Integration with VS Code/Cursor?
Worth the setup hassle or should I just keep paying for multiple subscriptions?
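On the VS Code/Cursor integration question specifically: most local servers (vLLM, Ollama, llama.cpp's server) expose an OpenAI-compatible API, so any editor or client that accepts a custom base URL can point at them. A rough sketch — the URL and model tag depend entirely on your server:

```python
from openai import OpenAI

# Base URL/model are server-dependent; this is Ollama's default
# OpenAI-compatible endpoint. The api_key is required but unused locally.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

resp = client.chat.completions.create(
    model="qwen2.5-coder:32b",  # whatever model tag your server has loaded
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
)
print(resp.choices[0].message.content)
```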
This came about as I was trying to come up with app ideas and this is the one that stuck. I don't have all the money in the world, and while groceries weren't my biggest expense at the time, I wanted to keep track of what was in my cart and how much I was spending before I reached checkout.
I've since discovered the fun of determining sales taxes via GPS and that shelf tags don't always seem to be accurate — which is another direction I could take this app, but it would take a much larger user base to gather pricing data.
What it does:
- Point your phone camera at a shelf price tag and it reads the price automatically using an escalating chain — OCR and regex first, then sending the image to AI for full analysis if needed (see the sketch below).
- Keeps a running subtotal + tax estimate as you shop
- Built-in shopping list you can check off as you go. A second concurrent login is allowed so your spouse can send you a list and see as you check things off. Also supports pasting a comma-delimited list to quickly add items.
- Email yourself (or your spouse) a receipt summary when you're done.
Completely web-based, no download needed. Can share with others via a QR code generated within the app. Meant to be used on an Android or iOS device though.
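The first stage of that escalating chain is cheap and local. A minimal sketch of it, assuming Tesseract via pytesseract — the regex and the escalation signal are illustrative, not the app's actual code:

```python
import re
import pytesseract
from PIL import Image

PRICE_RE = re.compile(r"\$?\s?(\d{1,3}(?:,\d{3})*\.\d{2})")

def read_price(path: str) -> float | None:
    """Cheap first pass: OCR plus a price regex. Returns None to signal
    that the image should escalate to the AI vision step."""
    text = pytesseract.image_to_string(Image.open(path))
    m = PRICE_RE.search(text)
    return float(m.group(1).replace(",", "")) if m else None

price = read_price("shelf_tag.jpg")
if price is None:
    print("escalate to vision model")  # second stage of the chain
else:
    print(f"parsed locally: ${price:.2f}")
```

Only images where the cheap pass fails get sent to the slower, costlier vision model, which keeps per-scan cost down.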
Looking for feedback on the app, or tell me I'm crazy for making it. I'd offer lifetime access (unlimited scans) to up to 20 people who test and provide feedback. I'd like to push it out to the Google Play Store and iOS App Store eventually.
Hey all,
I am seriously considering investing a significant portion of my signing bonus into a local LLM setup as a hobby and learning project once I start my job in August.
I am currently in university. I have studied a lot of theory, but I feel I am missing practical, hands-on experience.
If you were starting from scratch today, knowing what you know now, what would you do differently?
Specifically:
I know much of this information already exists, but it is often fragmented across many threads, benchmark posts, and user experiences.
I would really appreciate any lessons learned from people who have been running local setups for a while.
Thank you :)
Hello,
I've seen people on the OpenModelDB website sharing PyTorch files (mostly) that allow them to filter, or even restore, videos using artificial intelligence.
I'd like to know if it's difficult to create a model, as I've tested several and noticed that the results depend heavily on the videos being filtered.
Please note that I'm a complete beginner, and this will be my first time undertaking such a project.
Thank you in advance for your help.
Real talk: I was spending more time prospecting than actually selling. Google Maps scrolling, LinkedIn stalking, copying emails into spreadsheets, praying the contact info wasn't 3 years old.
Every existing tool was either:
So I started building Dight.pro. The pitch is dead simple: tell me your target niche and location, get verified decision-maker contacts you can actually reach. No enterprise contract. No paying for 10,000 credits you'll never use.
It's in pre-launch right now (collecting waitlist signups) and I'd love feedback on:
I'm happy to geek out about the tech stack and share what I've learned building this. Dropping screenshots in the comments.
It seems extremely consistent and cohesive, with no repetition so far in what I've tested, and it works very well at small VRAM sizes.
How is this possible?
Hi guys, it seems like there's something extremely strange going on with Claude Max usage limits: this morning I sent about 3 prompts to Claude, and all of a sudden my Max 5x usage limit was almost completely used up. I never had this issue before. After resetting a couple of hours ago, the usage seemed to tick down sort of normally. Then I just checked Claude usage (the webpage, not through the CLI) and saw that apparently my limit now resets in 9h, after basically already using it for a few hours. Are there some additional limits I'm not aware of? I thought the resets were every 5h (on the Max 5x plan). Also, my reset day was actually Wednesday evening until I checked back today, and now it's Thursday morning. WTF?!
My mailbox is in the building entrance hallway (typical Prague apartment setup), several walls away from my flat and with no power outlets nearby.
I wanted to get a push notification the moment a letter arrives, not when I open the mailbox.
At first, I paired an Aqara P1 motion sensor directly to my Zigbee coordinator, but the signal was unreliable because of distance and metal interference.
The solution was flashing a Sonoff Zigbee Dongle-E with router firmware and powering it from a 20,000 mAh power bank to extend the mesh closer to the mailbox.
Unexpected problem: the power bank kept shutting down because the router draws too little power. Fix was adding a small USB keep-alive module (dummy load) to prevent auto shutdown.
Now the flow is:
Motion sensor → Zigbee router → Zigbee2MQTT → Home Assistant → instant push notification
I also implemented a watchdog using the last_seen attribute so HA notifies me if the router goes offline.
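The watchdog lives in HA, but the same liveness check could also run as a tiny standalone script. A paho-mqtt sketch, with the availability topic name assumed from Zigbee2MQTT defaults:

```python
import paho.mqtt.client as mqtt

# Topic assumes Zigbee2MQTT's per-device availability feature is enabled
# and the router's friendly name is "mailbox_router" (adjust to yours).
TOPIC = "zigbee2mqtt/mailbox_router/availability"

def on_message(client, userdata, msg):
    if b"offline" in msg.payload:
        print("router offline -- trigger a push notification here")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x
client.on_message = on_message
client.connect("homeassistant.local", 1883)  # your MQTT broker
client.subscribe(TOPIC)
client.loop_forever()
```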
Works reliably now.
I wrote a full breakdown including architecture diagrams, firmware setup, and automations:
https://jonathas.com/how-i-made-my-physical-mailbox-send-me-real-time-push-notifications/
TLDR: The main reason the agentic framework has seen most success in coding is because of its ratio of time saved to human supervision needed.
One of the most visible real-world applications of the agentic paradigm is coding. Most people seem to think this is because corporations no longer want to be dependent on highly paid engineers, which is clearly a strong incentive. But while that is the motivator, it omits the core reason that makes this even possible.
First, the main obstacle to agent adoption is risk. Take customer support: if I mistakenly tell a customer their return has been processed when in fact it has not, that does a lot of damage to my brand image. This is why, at the current level of AI reliability, we need human supervision.
Structurally, software engineering is one of the few areas where agents can replace humans with relatively low risk, because coding agents are supervised. They ultimately have to go through a human-made testing pipeline and a human-reviewed process. This drastically reduces the risk of something completely outlandish and catastrophic being shipped by AI.
That's also why other fields have not seen as much automation progress yet. Customer support, for example (even though now even that is changing), is less inherently favorable to agents because the customer support cycle is short: calls are measured in minutes, whereas a software feature is built in hours. This means the ratio of human supervision to time saved by AI is way higher for customer support, which makes it less profitable.
This brings me to the core measure of whether a field is suited to being automated by AI: the ratio of time saved by AI over time needed for a human to supervise its output.
e.g.
Say as an engineer it takes me 8 hours to build a feature without AI, and AI does it in one minute. The testing pipeline and review process take, say, 1 hour in total. The ratio is roughly (8×60 − 1)/60 ≈ 8.
For customer support, say it takes 2 minutes to complete a call (vs 5 seconds for the AI) and then 30 seconds for a human to review: you get a ratio of roughly (120 − 5)/30 ≈ 4, about half that of coding.
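In symbols, the metric being proposed (with the two worked examples, coding in minutes and support in seconds) is:

```latex
R = \frac{T_{\text{human}} - T_{\text{AI}}}{T_{\text{supervise}}},
\qquad
R_{\text{coding}} = \frac{480 - 1}{60} \approx 8,
\qquad
R_{\text{support}} = \frac{120 - 5}{30} \approx 3.8
```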
Hi folks 👋
https://reddit.com/link/1r5kqo6/video/gpgxlllb6pjg1/player
We have built Warren to simplify investing for layman investors. With our app, you won't need any financial advisors or wealth managers. So far, Warren provides its customers with a set of personalised features (stocks, portfolios & chat) that help them achieve their goals.
We're excited to see what the community thinks. Please share your feedback 🙌
App store - https://apps.apple.com/gb/app/warren-simplify-investing/id6504190197
Product Hunt - https://www.producthunt.com/products/warren-2
YC Hacker News - https://news.ycombinator.com/item?id=47020004#47025169
I'm building a platform where college students can discover classmates with specialized technical skills (CAD, welding, PCB design, 3D printing, etc.) for project help.
The problem: You need someone who knows how to TIG weld for your capstone, but you have no idea who on campus has that skill.
My question: **Would you actually create a profile on something like this?** Or would you just ask your friend group / post in Discord?
Trying to validate if this solves a real problem before building payment features.
I think I fell into the "complexity trap" pretty hard over the last few months. I got so excited about the idea of autonomous agents that I started building these massive, multi-step chains for everything—content research, lead enrichment, competitive analysis.
The problem is, when I actually sat down to look at the numbers this week, the ROI just wasn't there. I was paying for these high-level LLM calls to do things that, honestly, a basic Python script or a standard Zapier workflow could have handled for a fraction of the cost.
The "cool factor" of having an agent "think" its way through a problem is high, but it’s becoming a bit of a nightmare to manage. Half the time, the agent takes a weird detour that costs 50 cents in tokens and provides zero extra value.
I'm currently trying to strip everything back and figure out where the "autonomy" actually provides a return. For me, it seems to be in the tasks that require real-time adaptation—like adjusting a marketing strategy based on live search data—rather than just repetitive data moving.
I’ve been trying to document which specific "agentic" behaviors actually move the needle and which are just expensive window dressing. It’s been a frustrating process of trial and error.
Curious if anyone else has gone through this "de-complicating" phase? How do you decide when a task actually needs an autonomous agent versus just a well-built linear workflow? I feel like the hype cycle led me to over-engineer everything.
I built a free photo editor that you can use to crop, rotate, adjust, add filters, and add text, all for free, no signup or anything.
So I've been a Claude Pro subscriber since late last year. Mostly use it for coding and writing — honestly it's replaced like 3 other tools I was paying for.
But $20/month adds up, especially when you're also paying for other SaaS stuff.
Anyway, I stumbled on something a few weeks back. Turns out there's a legit way to get Claude Pro at $10/month instead of $20. Not a hack, not account sharing, not anything sketchy. It's an actual discount that's available through the platform — just not obvious at all.
I've been on the discounted rate for about 2 months now and everything works exactly the same. Full Opus access, Projects, all the Pro features. Just half the price.
The discount applies for 3 months, so that's $30 saved total. Not life-changing money but honestly why pay more when you don't have to?
Happy to share how I did it if anyone's interested. Or if other people already know about this, curious if your experience has been the same.
Edit: since a bunch of people asked, I put together a quick walkthrough with screenshots. Link in comments.
Hi, I have the following goal: I want to remove my hands from a video where I move a mascot, and do proper inpainting, so that in places where my hands are in front of the mascot, the mascot's shape is inpainted, and in other places the reference image (background) is inpainted (so that it looks like the mascot moves on its own).
The problematic part is the mascot shape: the model has no idea what the mascot looks like, so it can't be consistent across generations and it adds various body parts that shouldn't exist. After some research I found that a LoRA of the mascot might help. I used `diffusion-pipe` for that, with about 18 pictures of my mascot. After trying many settings (I'm still testing), I managed to get a LoRA that somewhat works and, for a prompt like "florbus dolphin toy in the car", returns the shape of my mascot (see the videos below for the mascot's real look). "Florbus" is my keyword for the mascot, but it seems not to be strong enough: if I simply write "florbus" in my prompt, the LoRA seems to have no effect and the dolphin is not generated at all; I have to type "florbus dolphin toy" to generate it.
Now I connected that LoRA to my VACE inpainting workflow. I divided my video into parts of 5 seconds each, and for each part I masked my hands with SAM3 and expanded that mask by 5 units. As the reference image I used an image of the background seen in the video (without my hands and without the mascot). My prompts are:
- (positive) - florbus dolphin toy on the background of reference image
- (negative) - human, finger, hand, blurry, bad quality, asymmetry
Additionally, here are the settings of my sampler (LoRA strength is 1, and VACE strength is 1.5):
Finally, here is the input vs output:
https://reddit.com/link/1r5j5pr/video/6o2dt9jfjojg1/player
As you can see, the dolphin is inpainted completely badly: my thumb created a hole in the mascot, my other fingers are not removed but got the material of the mascot, and some extra dolphin body part is added at the bottom :(
I tried lowering the VACE strength, but at 1.0 the hands are not removed at all. Do you have any recommendations on how to fix it? What settings should I use? Is my LoRA OK? I would really appreciate help, because I feel I'm pretty close to making it work, but I still don't understand all the VACE inputs and settings, and maybe that's why the mascot shape is not preserved during inpainting.
I started using Runpod this week. I deployed the custom ComfyUI "optimized for 5090" image on a machine with a 5090 GPU.
It worked out of the box, and I was able to install various custom nodes and download models. But the next time I started up the pod, multiple nodes in my workflow were executing on CPU rather than GPU, and I'm at a loss to understand why.
For example, the ordinary dpmpp-2m-sde sampler from the KSampler node was originally running on GPU, but now it only runs on CPU. The Euler sampler and the dpmpp-2m-sde-gpu sampler are still able to run on the GPU. The BiRefNet background-removal tools in the AILab ComfyUI rmbg package are also unable to run on GPU and fall back to CPU.
Or, if I migrate the data to a new "pod" and try to run my setup on this new pod, I experience similar difficulties.
From this description, the problem seems to be of the following form: I installed custom nodes, and for some of them the installation process made changes outside the /workspace directory (only /workspace persists after shutdown). When I restarted the machine, those changes were reset, resulting in a broken install.
But I can't track down the source of the breakage. Also, the KSampler dpmpp-2m-sde sampler should just work out of the box, without me having to install anything, so I don't understand how the default image-generation workflow would be broken.
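A quick sanity check that would narrow this down: run the standard PyTorch introspection calls in the pod after a restart, to see whether the GPU build of torch itself survived (these are stock torch APIs, nothing Runpod-specific):

```python
import torch

print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)       # None means a CPU-only wheel
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```

If torch reports a CPU-only build after a restart, a custom node's pip install likely swapped the wheel in a site-packages directory that lives outside /workspace, which would match the reset-on-restart theory.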
Since I got the $50 promo credit and usage was discounted 50% for the promotional period (ends tomorrow), I decided to try out fast mode.
Background: I started using CC when it was API-only but backed off because it was too expensive. When they allowed Max subscribers to access it, it became my primary coding tool, but I immediately noticed it was much slower than the API. Over time I feel like inference has gotten even slower, but I have chalked it up to them having to balance the cost of inference with offering flat-rate access to customers.
I have a sub-agent for a project that: researches inbound change requests, fully researches the blast radius (i.e. if you are fixing a bug related to a database connection, does this potentially affect all database calls), creates a tech approach plan, uses my Gemini MCP to ground that approach against Gemini 3 pro, and then finally turns the request into a series of action items in my task tracker doc. It works very well for my project BUT it takes approx 15-20 minutes to run per each submitted issue. I figured this would be a great test for fast mode.
Sure enough, I submitted a request and it churned through the first one in 3 minutes. I checked my usage: those 3 minutes cost $20. I submitted another one to be sure: another 3-4 minutes, another $25. This was reminding me of the old API days. I started implementing the first feature and ran out of my $50 allotment before it was done (it gracefully degrades back to normal speed). That was a fun 10 minutes.
Anyway, my theory remains that “fast mode” is merely a way for flat rate subscribers to have easy access to API speeds but at API costs. This was always available as a mode switch if you had an API key, but now it has an easy toggle slash command (/fast). I think for fiscally-conscious deadline-driven developers, this is a convenient way to quickly inject bursts of speed where necessary.
Hope this helps someone!
The app lets you track your job applications, edit resumes using CV, export your resume in an ATS-friendly format, and analyze your resumes against specific job posts.
Looking for user feedback
I'm a millennial who consumes brainrot on the daily, so excuse my horrid attempt at a concise narrative over fragmented chunks here.
I understand that in 2026 we basically have no say or control (and by "we" I mean anyone whose eyes see this thread) over really anything anymore, especially in relation to technology. BUT, as the title states, why are we hell-bent on speedrunning this?
Not only are we blindly adopting a black-box technology [LLMs] we have no control over, but we're doing it at the expense of people's livelihoods, i.e. jobs.
We've had magic tech for decades now, but all of a sudden ChatGPT comes along, introduces a new trick, and immediately entire workforces get slashed by double-digit percentages?? And this all comes from the guiding beacons of a few dozen companies that control the entire landscape and are relentlessly shoving this tech down our throats.
Why the fuck do we put up with this? Are we that goddam lazy? How are we ok just submitting to a few corporate entities?
I switched from Tuya to Tuya Local and everything is working great. I have some scenes in Tuya that are still available for me to call via Tuya.
Is there a way to copy or move a Tuya scene to Tuya Local?
Hey folks,
I’ve been hacking on a side project: a web-based “System Design Simulator.” It’s like a whiteboard, but you can actually press play and watch your architecture behave (or fail).
What you can do:
Why I made it:
Try it here: https://paperdraw.dev
Quick start: drop Load Balancer → App Server → Cache → DB, press play, then trigger a cache-miss storm.
Would love feedback:
Thanks for taking a look—happy to fix bugs or add features if you ping me.
Hey! I’m Damian from Poland and I’ve been working on a small side project called FirstVibe.
The idea came from a simple question:
we all wonder how we’re perceived in the first few seconds - but nobody tells you the truth.
So I built a tool where you:
- upload a selfie
- get a vibe score
- see how you come across in 5 areas (confidence, attractiveness, approachability, style, charisma)
The free version shows the score and a short first-impression preview.
The full report goes much deeper (dating energy, glow-up tips, AI caricature, red/green flags, etc.).
Tech stack: Rails 8 + Claude Vision + OpenAI + Stripe
No user accounts - everything is session-based (this alone tripled my conversion compared to a signup flow).
I’m mainly trying to validate the idea, so I’d genuinely love feedback:
- Does this concept feel interesting or gimmicky?
- Are the results believable?
- What would make something like this shareable?
If you want to test the full version, I can unlock it for early testers.
Brutal honesty very welcome.
implemented all hooks here: https://github.com/shanraisshan/claude-code-voice-hooks/tree/main
Hi,
I'm looking for recommendations for a centralized WebUI for my local setup. I've got the backends running but I'm searching for the perfect frontend that offers a smooth, seamless user experience similar to ChatGPT or Gemini.
Here is my current backend stack that the UI needs to handle:
• LLMs: Two 32B models (Qwen & DeepSeek) running via vLLM, pinned to GPU 1 with 24 GB VRAM
• Vision: MiniCPM-V
• Image Gen: dunno yet, Flux or SDXL
• Audio/TTS: Whisper Turbo (distilled for German) for speech-to-text; TTS still undecided
Pinned to GPU 2 with 24 GB VRAM
These are the features I'm prioritizing for the WebUI:
Unified UX: Text, Vision (uploading/analyzing images), and Image Generation natively accessible within a single chat interface.
Is there anything out there similar to this?
I’ve had a hard-to-ignore sense of paradox for weeks now. Just a month ago, a lot of us were paying $100 / $200 to Anthropic (for example via Claude Code) for a level of capability that, at the time, felt “worth” the price. Today, Claude Opus 4.6 is clearly more refined—but then GLM 5 shows up pushing incredibly hard, setting records and closing the gap (or outright surpassing it in some areas) relative to the kind of capability that, not long ago, cost exactly those $100 / $200. And yet, the default behavior is still to keep paying the same amount for Claude, as if the “value” equation hasn’t changed.
What bothers me isn’t only the technical comparison—it’s the mismatch between real value and delivery speed. Capability leaps arrive so quickly that the monthly price starts looking less like payment for performance and more like a psychological toll to avoid falling behind. That’s where FOMO kicks in: we’d rather avoid “being a few weeks behind” even when the market is clearly offering alternatives that are increasingly close—and sometimes better for specific tasks—for the same money or less.
There’s also something that feels, at minimum, notable: on the ARC-AGI-2 leaderboard, I don’t see Chinese models (for example, GLM 5). I’m not saying this as an accusation—more as a question about how these narratives of “who’s ahead” get constructed, and what gets left outside the frame.
And this is where the fatigue hits: we’re in a cycle where performance improves at a brutal pace, but our purchasing decisions behave as if pricing were static and viable alternatives didn’t exist. Even knowing that the predictive inference paradigm (and these rapid improvements) has made us better workers—faster, more capable, more productive—we still act as if the only thing that matters is “not missing the train” of this week’s model.
Does this paradox bother anyone else? How are you rationalizing it day to day—by actual ROI (use cases) or by the peace of mind of not falling behind?
I am a fan of open source community but I cannot say I have been a huge contributor. But it is changing. Here is the story:
Literally out of frustration, I developed Respectlytics as a privacy-first mobile app analytics platform. I develop mobile apps myself, and analytics is a huge concern, all the time!
As time went on, I recognized more and more that what I built is likely one of the most strict (if not the most strict) privacy-first analytics tools out there. I figure that companies and developers who need a privacy-paranoid solution (in industries like education, healthcare, and finance) can appreciate it, which made me think that I need to be extremely transparent about this platform.
Concerns I needed to address to convince people about privacy:
- I need to be able to show the code so that the users can evaluate the privacy aspects of the solution at code level, not based on my words.
- Depending on regulation, no matter how privacy friendly the solution is, some users may need/want to deploy it to their own servers instead of relying on a third party cloud solution.
And I have taken the action.
For Respectlytics, SDKs were already open source but not the server side. But now the server side is open source as well.
And anyone who wants or needs to can deploy and self-host it, with clear instructions.
Everything is available on GitHub (link in the comments), including how to use it in your mobile apps if you have strict privacy needs.
It comes with an AGPL-3.0 license, which makes it free to use. I hope it helps people who need a strong privacy-first alternative for mobile analytics. Organizations that can't or don't want to deploy it themselves can still use the cloud version.
You can give the repo to any AI tool or lawyer and test it from the point of view of any privacy regulation.
I appreciate any kind of feedback.
Just spent half a year coding. Launched my "masterpiece."
Result: 0 dollars.
Here's what I wish I'd known before wasting 6 months of my life.
Even my friends didn't open the tool a second time.
I am following this exact formula to build my second app.
Stop thinking like a developer ("How can I build this?") and start thinking like a business ("Will people pay for this?").
Nobody warned me how easy it is to waste months building something nobody wants.
Question: Have you built something that flopped? What did you learn from it?
Just added some fun features to PersonaPlex to make it more enjoyable to use. I hope you enjoy!
Instructions for installation and usage are included on GitHub. An example Personality can be found in the Releases.
"There's no question about it — whether you want it or not — the best way to help some of these communities is gonna be AI-based avatars," Oz, the head of the Centers for Medicare and Medicaid Services, said recently at an event focused on addiction and mental health hosted by Action for Progress."
Medicare and Medicaid are the US's universal healthcare programs for older and low-income people. They've faced steep cuts in funding since Trump came to power, particularly in rural areas.
New research in Rwanda and Pakistan shows LLMs can outperform human doctors in diagnostic success. We're heading for a world where everyone gets the same standard of AI healthcare, and it's near free & universally accessible. It will be a big improvement in Rwanda and Pakistan, and it will probably be an improvement for poorer people in developed countries, too.
Dr. Oz pushes AI avatars as a fix for rural health care. Not so fast, critics say
Hi everyone 😊
I’m completely new to Stable Diffusion / AI video tools and I’m trying to figure out if what I want to do is possible.
I have an anime video (several minutes long), and I’d like to change the clothing of a specific character (swap outfits, add different clothes, etc.) while keeping the original animation and scene exactly the same.
Right now I’m doing it by editing frame by frame, sometimes by myself and sometimes with AI, which is extremely slow and inconsistent.
So I wanted to ask:
If anyone has experience with this, I’d really appreciate any guidance 🙏
Thanks a lot ❤️
Hey, community!
I played a bit with ACE-Step 1.5 lately, with LoRA training as well (using my own music productions with different dataset sizes). I mixed and scratched the cherry-picked material in Ableton and made a compilation. I only generated instrumentals (no lyrics); the voices and textures are from old 50s-60s movies, mixed live.
I used the Turbo 8-step model on an 8 GB VRAM laptop GPU; inference was done with the modules from the original repo: https://github.com/ace-step/ACE-Step-1.5
We are close to the SD 1.5 of music, folks!
Peace 😎
I’m building a tool that helps structure meals for better focus — looking for 20 beta testers.
We've got a very air-leaky 120+ year old house, and it gets so dry: it got down to 15% this year. So we finally sprang for a whole-house humidifier instead of a bunch of small humidifier units spread across the house. We went with a steam model, and it immediately turned our desert of a house into a comfortable (at least humidity-wise) home. We also switched out our old no-longer-supported Nest thermostats for Ecobees.
I've only noticed condensation on my office windows twice now, and dialing down a couple percentage points has remedied that. Most recently it was 28°F outside and the set point was 45%; I turned it down to 42%. That was Friday.
Inspired by some posts a few weeks ago, I put these dashboards together to dial in our humidity set points appropriate to the outside air temps. I can adjust the sliders right in the dashboard and it'll change the humidity set point on the Ecobee to whatever that temp range is set to.
I also have Sonoff temp & humidity sensors throughout the house, so I also created a view with all of those plus the thermostat sensors, to see what the conditions are across the home. Definitely a wider range than I’d like, but again 120+ year old house 🤷🏻♂️
What do you guys think? Any suggestions/feedback?
Hi everyone,
My iOS Widget App has crossed 800+ users and 100 paid users.
My main marketing strategy is TikTok and Reddit.
After months of coding and debugging, seeing these stats in the console feels surreal.
I’m averaging about 20 new users a day. If you’re currently working on your first app, don’t give up! The feeling of seeing your first 100, then 500, then 800 users is incredible.
If anyone is interested in my app - Dale
Built a tool that scans your container images for CVEs, CIS benchmark issues, and layer waste - then generates an AI report with Dockerfile suggestions.
https://github.com/siddhantprateek/reefline
Let me know what you guys think; it's a weekend side project.
I wanted to share a breakdown of a project we’ve been scaling. We just hit a milestone of $7,000 MRR with 21 active clients in a specific service-based niche, and n8n is the engine room of the entire operation.
Company: Restoration AI
Website: restorationai.io
The Build:
The Shift: We’ve moved past the vibe coding phase. With 21 active clients, we need to move from speed-at-all-costs to production-grade reliability. We are looking for an n8n Wizard / Technical Lead to join the crew and take over the architecture.
What we’re looking for:
The Deal: We are 100% bootstrapped, profitable, and growing fast. This is a chance to come into a validated product with real cash flow. We are looking for a long-term partner; potential equity is on the table for the right person as we hit our next scaling milestones.
Let’s Talk: To keep this educational: I’m happy to answer questions in the comments about how we handled the voice routing logic in n8n or our experience building without a traditional IDE.
If you're interested in the role, DM me with a bit about your background and the most complex n8n workflow you’ve ever built. 🤙
***
I'm tearing my hair out because nobody else seems to be having this problem. It should *just work*, but it's acting like my end frame isn't even there. I've tried every node I have with start and end frames. The only thing I haven't done is hook up clip-vision which I didn't think was needed anymore.
I get beautiful animations but they only take the start image into account.
If you work with RabbitMQ or Kafka, you know the pain: messages pile up, something is broken, and you're alt-tabbing between the management UI, your schema docs, and your editor.
I built an MCP server called Queue Pilot that lets you just ask Claude things like:
- "What's in the orders queue?"
- "Are all messages in the registration queue valid?"
- "Publish an order.created event to the events exchange"
It peeks at messages without consuming them and validates each one against your JSON Schema definitions. The publish tool also validates before sending, so broken messages never reach the broker.
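To give a feel for the validation step, here's a rough sketch of the kind of check it performs, using the jsonschema library directly. The schema and message here are made up for illustration; the tool's internals differ.

```python
import json
from jsonschema import Draft202012Validator

# Made-up schema for an "orders" queue message; yours would live in ./schemas.
ORDER_SCHEMA = {
    "type": "object",
    "required": ["order_id", "amount"],
    "properties": {
        "order_id": {"type": "string"},
        "amount": {"type": "number", "exclusiveMinimum": 0},
    },
}

def validate_message(body: bytes) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    payload = json.loads(body)
    validator = Draft202012Validator(ORDER_SCHEMA)
    return [error.message for error in validator.iter_errors(payload)]

# A broken message never reaches the broker: amount must be positive.
print(validate_message(b'{"order_id": "A-17", "amount": -5}'))
```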
Setup is one command: npx queue-pilot init --schemas ./schemas --client claude-code
It generates the config for whatever MCP client you use (Claude Code, Cursor, VS Code, Windsurf, Claude Desktop).
GitHub: https://github.com/LarsCowe/queue-pilot
Still early (v0.5.0), feedback welcome.
Hey y'all,
I just wanted to share a web app I built alongside Claude, in case anyone's looking for something similar.
After an embarrassingly long time away from the gym, I am making my triumphant return on Monday.
I wanted to do a strength training program similar to Starting Strength but with a focus on dumbbell-only lifts. I've always been way more comfortable with dumbbells and, while I'm aware there are downsides to not utilizing barbells, I believe you can get an A+ workout in with them.
I've never deployed an application, web or otherwise, in my life. This is my first attempt at this and Claude has been incredible.
DUMBLIFTS is a free web app designed specifically for tracking dumbbell-based 5×5 strength training workouts, featuring automatic weight progression suggestions, clean workout logging, and progress tracking without ads or subscriptions.
Please check it out if you're interested. I would love any and all feedback.
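For the curious, the progression logic is intentionally simple. Here's a simplified sketch of the idea (illustrative only, not the app's actual code):

```python
def next_weight(current_lbs: float, sets_completed: list[int],
                target_reps: int = 5, increment_lbs: float = 5.0) -> float:
    """Classic 5x5 linear progression: add weight only after completing
    every rep of every set at the current weight; otherwise repeat it."""
    if all(reps >= target_reps for reps in sets_completed):
        return current_lbs + increment_lbs
    return current_lbs

print(next_weight(40.0, [5, 5, 5, 5, 5]))  # -> 45.0, progress
print(next_weight(40.0, [5, 5, 4, 5, 5]))  # -> 40.0, repeat next session
```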
Hi,
This is my first post on reddit, sorry in advance if this is a naive question. I am a PhD student working on ML/RL theory, and I don't have access to compute at my university. Over the past year, I have been trying to transition toward empirical work on LLMs (e.g., for reasoning), but it has been frustratingly hard to do so in my current environment. No one in my lab cares about LLMs or any kind of empirical research, so it's difficult to do it on my own.
I initially hoped to rely on available grants to get access to compute, but most options I have found seem tailored to people who already have a precise idea in mind. This is obviously not my case yet, and I find it hard to come up with a sensible project description with (i) no one around to help me navigate a very noisy literature toward sensible problems (e.g., ones still largely unsolved), and (ii) no compute to run even basic experiments (I don't even have a GPU in my laptop).
That is what brings me here. Recently, I have been considering buying my own setup with personal funds so I can experiment with whatever idea I have. I mostly hang out on X, found this community through people posting there (especially "TheAhmadOsman" who is quite active), and figured reddit would be more appropriate to ask my questions.
Most of what I see discussed is hardware for inference and the benefits of running models locally (privacy, control, etc.). My use case is different: for my day-to-day work (80% math/ML research, 10% random questions, 10% English writing), I don't see myself moving away from frontier models, as I think they'll always be way ahead when it comes to maths/code. What I want is a setup that lets me do small-scale LLM research and iterate quickly, even if I'm limited to relatively small models (say, up to ~2B).
From what I have read, the main options people debate are: (i) some NVIDIA GPU (e.g., an RTX 6000, plus the other necessary parts), or (ii) a Mac Mini/Studio. The usual argument for (i) seems to be higher throughput, and for (ii) lower power consumption and a smoother setup experience.
My questions are:
Otherwise, I am happy to take any advice on how to get started (I am honestly so new to this that I don't even know what the standard libraries/tooling stack is).
Thanks in advance!!
Hi — I’m working on a concept called a Simulation Governance Kernel (SGK) — essentially a deterministic execution control plane designed for multi-agent environments.
Problem I keep running into:
When multiple agents (LLM or otherwise) interact, subsystems often mutate shared state independently. Over time this breaks determinism, makes replay impossible, and creates hidden coupling between domain logic and execution order.
SGK tries to separate concerns:
• Agents and domain policy modules emit structured intents/deltas
• A governance layer performs arbitration, ordering, scheduling, and safety checks
• State mutations happen only through atomic, replayable commits
• The governance layer is domain-blind — it doesn't understand gameplay or task logic
• The underlying world model / engine remains the source of truth
Conceptually it behaves more like a deterministic execution governor than an agent framework. The goal isn’t to replace planners or LLM stacks, but to sit above them as a control plane.
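To make the commit path concrete, here's a minimal sketch. The names and the arbitration rule are placeholders; the real kernel does much more.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Intent:
    agent_id: str
    key: str        # which piece of state to mutate
    value: object   # proposed new value
    priority: int   # consumed by the domain-blind arbitration rule

@dataclass
class GovernanceKernel:
    state: dict = field(default_factory=dict)
    log: list = field(default_factory=list)  # replayable commit log

    def commit(self, tick: int, intents: list[Intent]) -> None:
        # Deterministic ordering: arbitration looks only at (priority,
        # agent_id), never at what the value "means" in the domain.
        ordered = sorted(intents, key=lambda i: (-i.priority, i.agent_id))
        winners: dict[str, Intent] = {}
        for intent in ordered:
            winners.setdefault(intent.key, intent)  # first writer wins per key
        # Atomic commit: apply all winners at once, then append to the log.
        for key in sorted(winners):
            self.state[key] = winners[key].value
        self.log.append((tick, sorted(winners.values(), key=lambda i: i.key)))
```

Replaying the commit log against a fresh state dict reproduces the same world exactly, which is the property that independent mutation of shared state destroys.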
I’m especially curious about architectural pushback:
1) Where does strict governance separation introduce friction or hidden coupling?
2) Would such a layer integrate cleanly with orchestration stacks like LangGraph, AutoGen, or similar DAG-based runtimes?
3) Are there obvious failure modes — latency, deadlock patterns, loss of flexibility, etc.?
4) Does this resemble existing patterns I may be reinventing (transaction schedulers, simulation kernels, actor runtimes, etc.)?
Appreciate any critical feedback — especially from people building multi-agent or distributed decision systems.
Just Klein appreciation post.
Default example workflow, prompts are all the same: "add detail, photorealistic", cfg=1, steps=4, euler
Yeah, the photorealistic prompt completely destroys the original lighting, so night scenes require extra work, but the detail is incredible. Big thanks to Black Forest Labs, even if the licensing is weird.
Hi everyone,
I’m looking for the most cost-effective way to access Claude (specifically Opus and Sonnet) for my coding workflow.
I’m not a full-time developer, but I build as much as my main job allows. My schedule is a total roller coaster: one day I might spend 10 hours straight coding, the next only 2, and then I might go two weeks without touching a single line of code. But when I’m "in the zone," I’ll easily pull 8-hour days for two weeks straight.
I’ve been considering the standard $90-100 Claude MAX subscription, but I have this nagging feeling that I’m not maximizing it. On slow weeks, it feels like a waste; on heavy weeks, I’m worried about hitting message limits right when I’m most productive.
Some friends suggested Windsurf ($15 for 500 credits + referral bonuses), but I’m completely open to any other possibilities—be it Cursor, using the API with a frontend like TypingMind, or any other setup I might be overlooking. I’m not tied to Claude Code or Windsurf specifically; I just want the best value for this "bursty" usage.
My question is: For someone with such irregular peaks and valleys in usage, what’s the move?
I’d love to hear from anyone who has done the math or switched between these platforms. What’s the most efficient way to get Opus access without feeling like I’m burning money on my "off" days?
Thanks in advance for the help!
Here's my workflow. It worked fine on the first 2 batches, then it broke, probably due to an ill-formatted number. Now I'm trying to fix the database and adapt the SQL query to ignore incorrect numbers. However, once it's fixed, how do I start back from where it left off?
I don't wanna double-message everyone, and I didn't write any updates to the table in the DB to flag which ones have already been sent...
Is this little icon what I need to use?
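The obvious fix for next time is to flag sent rows so the query itself becomes resumable. A rough sketch of the pattern, in SQLite syntax with made-up table and column names:

```python
import sqlite3

conn = sqlite3.connect("contacts.db")
# One-time migration: track which rows have already been messaged.
conn.execute("ALTER TABLE contacts ADD COLUMN sent_at TEXT")

# Select only unsent rows with plausibly formatted numbers...
rows = conn.execute(
    "SELECT id, phone FROM contacts "
    "WHERE sent_at IS NULL AND phone GLOB '+[0-9]*'"
).fetchall()

for row_id, phone in rows:
    # ... send the message via the workflow here ...
    # ... then flag the row so a rerun skips it.
    conn.execute(
        "UPDATE contacts SET sent_at = datetime('now') WHERE id = ?", (row_id,)
    )
conn.commit()
```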
Really, I'm just looking for any advice, guidance, or recommendations on setting up the most automated, efficient, streamlined project management setup using Claude (Cowork or Code) and ClickUp.
We’re still refining our tech stack but one of the biggest challenges is figuring out how to set up our project management engine.
Current trigger = Once a prospect signs an SOW and pays an invoice.
Once that happens, Claude (Cowork or Code) will trigger Google Workspace to set up all the client project folders, and ClickUp to create a new project based on a template, create a timeline, assign tasks, make project updates based on recent conversations, and so forth.
What I'm trying to understand: right now we have Claude Cowork with heavy Skills embedded in it, but we're hitting usage capacity limits. My understanding is that Claude Code excels at repetitive, well-structured tasks (which this process is).
I'm wondering whether switching some of the processing to Claude Code, while still leveraging Claude Cowork for more of the strategy work, would help streamline our setup.
Any guidance is appreciated in advance!
Need a recommendation on a Zigbee-compatible motion sensor or similar technology to detect a hand swipe under our kitchen cabinets. The end goal is a quick hand wave under the cabinet as we're walking in/out of the kitchen to turn on/off a series of LED strips I have underneath those cabinets. I already have a working automation that toggles the LEDs when a motion sensor triggers.
The current motion sensor I have hooked up (these) has a couple of issues that make it kind of a hassle. For one, it detects small changes in temperature and kicks on and off when we use the air fryer a few feet away. Another issue: once an hour it just activates, with no real rhyme or reason; nothing happens to trigger it (like heat kicking on, a pet walking by, etc.), so my guess is some sort of hourly recalibration? Anyway, there's no way to fix it in the limited software options I have. Any help/recommendations would be appreciated!
Hiya,
I was trying to use ApexCharts to present temperature and humidity throughout my house.
I didn’t like the chart but did like the header with all the info, so have hidden the chart itself and kept the data up top.
The only issue is that I can't find a way to fix it to four columns.
Picture 1 is how I see it (and want it presented) when editing the custom card: a uniform 4-by-3 grid.
But when I hit save, Picture 2 is how it looks: 2 rows of 5, with the final two bits of data spread very wide across the last row.
Anyone know a way to get this to stay how I want it?
https://reddit.com/link/1r5jgx5/video/lnyrie99uojg1/player
I spent the last year solo building bantr.live, which is still in constant development. It's a cross-platform application that includes text messaging, video chatting, and image sharing. Getting users has been a struggle, so there may be no one on to talk to right now. I'm looking for feedback on the UI/UX, which can be submitted through support on the iOS/Android apps or the feedback box on the website... or here if you would like. Thanks
Last year, I wrote my first research paper.
The actual writing took 2 weeks.
But making Figure 2 in matplotlib? 6 hours. For ONE bar chart.
I realized that this must be a somewhat relatable experience, so I developed Eliee.
What it does:
Upload CSV → Type what you want → Get publication-ready figure in 10 seconds. You can highlight parts you don't like and instantly change them.
What I need:
Any feedback on how it works and if this could be a useful product, as well as how I can market this to researchers. (if you'd want to be a beta tester, please reach out!)
Try it:
www.eliee.sh (you should have access to one figure creation before signing up for the waitlist)
P.S.: I'm 16 and don't have a clue what I'm doing, but matplotlib genuinely made me want to quit research, so here we are.
I am running the self-hosted version of n8n on my MacBook M3 Air, which I installed and ran via the terminal (did not use Docker).
I have this issue where, when I add a new Google node, I have to enter the client ID and secret from the OAuth key I made again and again. n8n does not seem to remember my key.
Even in the same workflow, after adding a Google Sheets trigger, I added another Google Sheets node and it is still asking me to enter my credentials again.
Could anyone please help me in simple language, as I am not from a coding background (finance)? I'm trying to learn n8n from scratch without any prior experience.
Thank You 🤗
I've been Claude-coding this helper tool for developers like myself. The idea brewed from my own frustration with these AI agents: they output text + code at such volume that at some point it got tiring to try to ingest it all. This tool is NOT trying to convince you to "just ship it"; instead it tries to create a structured report from the changes about to be merged. A diff inspector, but better!
And I'm not just dumping the diffs on the LLM to solve. Before the LLM ever sees anything, the tool parses your code with Tree-sitter to extract what actually changed structurally (functions added/removed/modified), analyzes control flow for safety patterns, and pulls recent git history for context. I'm tired of reading massive .md explanations, and I know some people are too; that's why I'm doing it. :)
If anyone has time to check it out - much appreciated! https://github.com/inar-vision/diffintel
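If you're curious what the Tree-sitter step looks like conceptually, here's a stripped-down sketch using the tree_sitter_languages convenience package. The real pipeline does considerably more than name diffing.

```python
from tree_sitter_languages import get_parser

def function_names(source: bytes) -> set[str]:
    """Collect top-level function names from Python source."""
    tree = get_parser("python").parse(source)
    return {
        node.child_by_field_name("name").text.decode()
        for node in tree.root_node.children
        if node.type == "function_definition"
    }

old = b"def pay(x):\n    return x\n\ndef audit(x):\n    return x\n"
new = b"def pay(x, fee):\n    return x + fee\n\ndef refund(x):\n    return -x\n"

# Structural diff: which functions appeared or disappeared between versions.
print("added:", function_names(new) - function_names(old))
print("removed:", function_names(old) - function_names(new))
```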
Fair warning, I'm not a dev, so bear with me. I've seen posts about using Brave, Tavily, and various other search MCPs, and I'm wondering if it's really that important/do they do that much better of a job? I'm not coding anything, rather I use CC for strategy work, admin, and misc. business tasks. I do end up doing a lot of web searches and deep research, so this is why I ask. I appreciate any feedback!
Hi all! Been about a year or so since I checked what's new in image generation. Are there any good models that have released recently that would be good for training the likeness of something like a cartoon excavator character? When I tried before I couldn't quite get there, but I'm hoping there may be some better options now. Thanks!
Hi everyone! I’d like to share a thought experiment I created called the Jeed Khalil Nested Universe Theory. This is purely imaginative and meant for discussion, not established science.
Imagine reality like a Russian doll, where each layer contains another layer inside it. Dust that we see could contain tiny universes. Inside the dust, molecules might have their own worlds, and atoms could hide even smaller universes. Subatomic particles might each be a universe of their own, with time flowing differently at each layer — a blink for us could be millions of years in a smaller universe. Likewise, our universe might be just a speck of dust in a much larger reality.
The theory also considers that life and consciousness might exist at scales smaller or larger than ours in ways we cannot imagine. Everything is connected through layers, and reality could continue infinitely, both into the very small and the unimaginably large.
I’ve made a diagram to help visualize the idea (see below). It shows the flow from dust → molecules → atoms → subatomic particles → mini universe → our universe → larger universe, with arrows showing progression and symbols representing different time flows.
I’d love to hear your thoughts and discussion: Do you think time could flow differently at different scales? Could life exist in such tiny or huge layers of reality?
I run a Substack publication that drives traffic to my e-commerce business. My articles are in-depth with custom illustrations, and content creation takes significant time investment.
I’m looking to improve my social media presence across multiple platforms but want to stay focused on creating quality content rather than manual posting. Ideally, I’d like automation tools that can grab my Substack articles and distribute them to various social media outlets without requiring constant supervision.
I recently started exploring Notion but I’m still learning its full capabilities.
What automation tools or workflows would you recommend for:
Auto-posting Substack content to multiple platforms
Minimal hands-on management once set up
Integration with platforms like Facebook, Instagram, and Pinterest
Any suggestions or experiences with similar setups would be appreciated
Real-Time Virtual AI Agents that will autonomously manage your live-streams. They will respond to your chat, discuss your screen shares, and more. Check it out here at Mixio
Hi all, new to n8n and honestly loving it so far.
I’ve managed to build a few solid automations for our local newsroom and feel like I’m only scratching the surface of what’s possible.
One workflow pulls all of our articles, images, and key assets into BigQuery, along with performance data like views and engagement. That part is working really well.
What I’m now trying to do is make that BigQuery data accessible via a chatbot so reporters can ask questions like:
• “Top 10 articles this week”
• “Best performing stories in [publication]”
• “Which reporters are driving engagement lately?”
I’ve got three main tables/views set up that should be able to answer almost anything, but I’m having mixed results getting the chatbot to reliably call and reason across them.
Has anyone here successfully connected BigQuery to a chatbot (via n8n or otherwise)?
Any tips on architecture, prompt structure, or tools that worked well for you?
Would really appreciate any guidance or lessons learned.
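One shape this can take, sketched with the official Python client (the table name is a placeholder), is to expose whitelisted, parameterized queries as chatbot tools instead of letting the model write free-form SQL:

```python
from google.cloud import bigquery

client = bigquery.Client()

def top_articles(days: int = 7, limit: int = 10) -> list[dict]:
    """A single whitelisted query the chatbot can call as a 'tool',
    instead of generating SQL itself."""
    query = """
        SELECT title, views
        FROM `newsroom.analytics.articles`  -- placeholder table
        WHERE published_at >=
              TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL @days DAY)
        ORDER BY views DESC
        LIMIT @limit
    """
    job_config = bigquery.QueryJobConfig(query_parameters=[
        bigquery.ScalarQueryParameter("days", "INT64", days),
        bigquery.ScalarQueryParameter("limit", "INT64", limit),
    ])
    return [dict(row) for row in client.query(query, job_config=job_config)]
```

The model then only picks a tool and fills in parameters, which tends to be far more reliable than having it reason over raw table schemas.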
I am planning to run purely text-based LLMs locally for simple tasks like general chat and brainstorming (and possibly some light Python coding and RAG). I am not sure if I should go the M-series route or the NVIDIA route. As of this writing, what's the best entry point for local AI that balances cost, performance, and power usage? I'm currently using a GTX 1660 Super, and Qwen 3 VL 4B feels slow enough that I'm tempted to just put up with the free version of ChatGPT instead. I want to be able to run something more useful, with a somewhat higher tokens-per-second rate.
New here and I need ideas for creating automated workflows. I'm new here, so don't roast me please. I just need ideas :)
Anyone with ideas on how I can use them for my business? I have a cleaning company. I need a website with a ticket system. Can I create it with Claude AI using my Mac mini?
Looking for a motor that will work with this style of cellular shade mechanism and can be automated. There is no external pull cord. These are originally from blinds.com; they call them a Bali cellular shade and offer a motor kit, but only at initial purchase, it seems.
I built LimeSocial because, honestly, social media APIs are a mess. I wanted something simple, clean, and actually easy to set up for TikTok, Instagram, Facebook, YouTube, X…
I built it to be super easy to use in n8n / Make projects, so you can actually post to social media without doing 12 different hacks.
Our site is fully built and tested ✅, but here’s the thing: we need beta testers to really push it to its limits. Push it. Break it. Suggest ways to make it better. Tell us what sucks.
Perks? You get a free paid subscription while you help us make it awesome.
If you wanna help build the easiest social media API on the internet and get a Free subscription - DM me
Hi! I’m trying to recreate the lighting from this reference image (attached).
It looks like a single strong light source near the camera / phone flash:
I’m generating in ComfyUI and can’t consistently get this specific look — SD tends to invent side lights or soft studio lighting.
What I’m looking for:
What I tried:
If you’ve achieved this style, could you share prompt tips or a node setup? 🙏
Just some fun.
I’ve been experimenting with two ideas around making LLM systems more scalable:
Instead of trying to increase context windows indefinitely, both approaches restructure how inference happens.
For RLM, I ran a small experiment on a ~6.5M character corpus (Sherlock Holmes). That’s well beyond the model’s native context window.
Instead of failing due to length, the system:
It converged in 25 iterations and processed ~2.0M input tokens across recursive calls.
Interestingly, frequency counts differed slightly from deterministic regex counting — which makes sense. RLM performs semantic aggregation across chunks, not strict lexical counting.
Takeaway:
The shift feels less about “bigger prompts” and more about controlling computation.
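The control loop itself is tiny; roughly this sketch, where the llm() callable and the chunk size are placeholders for whatever model wrapper and budget you use:

```python
def rlm_reduce(text: str, task: str, llm, chunk_chars: int = 100_000) -> str:
    """Recursively map the task over chunks, then reduce the partial
    answers until everything fits in a single call."""
    if len(text) <= chunk_chars:
        return llm(f"{task}\n\n{text}")
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partials = [rlm_reduce(chunk, task, llm, chunk_chars) for chunk in chunks]
    # Reduce step: aggregate partial answers, recursing again if the
    # concatenated partials are themselves too long for one call.
    combined = "\n---\n".join(partials)
    return rlm_reduce(combined, f"Aggregate these partial answers.\n{task}",
                      llm, chunk_chars)

# usage sketch: rlm_reduce(corpus, "Count mentions of 'Watson'.", my_llm_call)
```

The aggregation in the reduce step is exactly where the semantic-versus-lexical divergence mentioned above comes from.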
Full write-up + implementation here (free link):
https://medium.com/p/c60d2f4552cc
Things are getting pretty wild with redditors hiring their fridge as employees and running OpenClaw on them, others letting teams of Claude agents autonomously solve a problem, and some letting Claude call Codex agents and committing several hundred times a day.
How are you all augmenting your development experience with the latest tech?
I'm relatively new to Home Assistant.
I recently purchased a new house that comes with Hikvision PoE cameras, a Hikvision doorbell, and a Hikvision NVR. I also have a DSC alarm panel (PC1832). All are around 5 years old.
My plan was to run HA on an old Mac mini I have that's collecting dust, and bring lighting / temperature control on with that.
What is the best way to integrate the camera and security side of things?
In my previous house, I used HKSV and HomeKit and liked how I could review historical security footage directly from the app.
Not opposed to replacing some of the hikvision hardware, if necessary.
Thanks for your help!
If you've been playing with LoRA, you know you can fine-tune a model by only touching specific "parts" of its brain. I ran a controlled experiment using a Qwen-2.5 3B model to see how its behaviour changes as a result of adapting different parts of its layers.
The domain I work in is AI academic systems. The goal here in this particular application was to generate a memorandum to the advisor about a given student. The prompt used for all tests was a strict persona instruction: it required the model to act as "Academic AI," an academic advisor, and write a professional memo in a flowing narrative style (no bullets) based on raw student data including dropout risk, quiz scores, and discussion posts.
I tested 5 different LoRA configurations on this one prompt. Here is how this "brain surgery" affects LLM behavior.
behavior: pure text completion. it fails to act as an "assistant" and instead just continues the document like it's a blank notepad.
example: "student dropout risk assessment student: user **, course ** assessment date: 2025-11-21 23:03:33+00:00 predicted dropout risk: 55%...additional data points for student ***** include previous enrollment in course ***** and a high interest in radiology. critical: when referencing student communications, you must quote relevant excerpts and cite specific dates. assistant: i will now analyze the data. user: please begin the memo. assistant: certainly, here is the analysis for the academic staff..."
the model leaked the instructions and started a hallucinated dialogue because it doesn't "know" it's supposed to follow a system prompt; it just thinks it's part of a transcript. keep in mind this is only a 3B model.
behavior: high-fidelity instruction following. it balances the persona with the data perfectly.
example: "memorandum
to: academic advisory team
from: academic ai assistant
i am writing to report on the 55% dropout risk identified for user **. a review of the data shows a significant drop in performance on november 16th, where the student scored *.* on quiz **** at 21:04:02 (-0600 utc). while the student expressed on october 31st that they find the 'coordination and precision' of c-arm usage 'amazing,' their fluctuating quiz scores (reaching *. then dropping back to *.) suggest a struggle with consistent application. i recommend immediate one-on-one tutoring to bridge this gap."
analysis: this is the highest quality version. it followed the "narrative" rule, used the empathetic persona, and correctly extracted the timestamps.
behavior: data extraction powerhouse, but "instruction-blind."
example: "academic ai assistant report
risk factors:
median score: *. (medium impact)
avg days late: .** (low impact)
quiz timeline:
2025-11-16: scored .* (quiz ****)
2025-11-16: scored .* (quiz ****)
2025-10-31: scored .* (quiz ****)
i recommend monitoring the student's 47.1% late submission rate. Academic ai assistant."
analysis: it completely ignored the "flowing narrative" and "no bullets" rules. mlp layers hold the "knowledge," so the model got the numbers right but lost the behavioral constraints.
behavior: the "search engine" mode. it finds data well but can't write a coherent story.
example: "academic ai assistant analysis. the student ***** in course ***** has a risk of 55%. the data shows scores of *. on 2025-11-16, *. on 2025-11-16, *. on 2025-10-31, *. on 2025-11-04, and *. on 2025-11-14. the student said on october 30th 'obviously lol' and mentioned 'buhurt.' the priority is academic support."
analysis: it is very effective at "finding" the details in the text (k/q are indexing matrices), but the prose is disjointed. it is a list of facts masquerading as a sentence.
behavior: the "creative writer" that prone to bringing in outside information.
example: "dear academic team, it is my duty as academic ai assistant to share concerns regarding user **. they are an enthusiastic student who enjoys 'magic the gathering' and 'buhurt,' as noted in their october 30th post. however, their 55% risk profile is troubling. according to the **** textbook (2024), student success in radiography requires 'precision and confidence,' which user ***** is currently lacking. we must intervene with a high-priority wellness check."
analysis: the value (v) matrix handles the "content" of the response. this version writes the most "human" sounding prose, but it brought in outside information (the book citation) that wasn't in the prompt. it is too "creative" with the source material.
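for anyone wanting to reproduce this kind of ablation, the only knob that changes between runs is target_modules in the PEFT config. a minimal sketch (module names follow Qwen-2.5 conventions; the rank and alpha here are illustrative, not necessarily what I used):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct")

# each ablation swaps only this list, everything else held constant, e.g.:
#   attention-only:  ["q_proj", "k_proj", "v_proj", "o_proj"]
#   mlp-only:        ["gate_proj", "up_proj", "down_proj"]
#   k/q-only:        ["k_proj", "q_proj"]
#   v-only:          ["v_proj"]
config = LoraConfig(
    r=16,             # illustrative rank
    lora_alpha=32,    # illustrative scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```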
MiniMax-2.5 is a new open LLM achieving SOTA results in coding, agentic tool use, search, and office work.
The 230B-parameter (10B active) model has a 200K context window; unquantized bf16 requires 457 GB.
Unsloth Dynamic 3-bit GGUF reduces the size to 101 GB (roughly 78% smaller than bf16).
Official Guide - https://unsloth.ai/docs/models/minimax-2.5
GGUF Models - https://huggingface.co/unsloth/MiniMax-M2.5-GGUF
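If you're loading the GGUF with llama-cpp-python, a minimal sketch looks like this. The file name and offload settings are illustrative; check the official guide for recommended values.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="MiniMax-M2.5-UD-Q3_K_XL.gguf",  # illustrative shard name
    n_ctx=32768,        # well below the 200K max, to keep the KV cache sane
    n_gpu_layers=-1,    # offload everything that fits; reduce if you OOM
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```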
I built a tool for organizing hardware projects
I’m a prototyping coordinator and after managing a lot of builds I wanted one place to keep:
- bill of materials
- modules / subassemblies
- notes
- collaboration tools/ open sourcing
- a public build page
GitHub kind of does this, but as a non-coder it never felt natural for physical projects. I’ve been enjoying building software with the newer AI coding tools, so I made this tool. It's basically a collaborative tool for physical builds.
Track parts + cost, split into modules, collaborate, publish, and let others fork/remix with attribution.
It’s free, has no subscriptions, and no AI features — I figured a lot of people (myself included) are getting a little sick of unnecessary AI tools. If people actually want it later I can revisit.
I will eventually hit storage costs, so if anyone has experience funding small open projects (donations? other ideas?) I’d appreciate advice.
I put my most recent project on there. A pottery wheel for my wife. Feel free to check it out!
I'm designing an automatic turntable from scratch called the STM-01. Right now, I have the lift mechanism finished and the tonearm finished to a point. It lets me verify my hardware is working so far, anyway. At the moment, I'm using my Technics SL-D2 as the "turntable" part, because I haven't designed that yet.
Speaking of that hardware, this thing's heart is a Teensy 4.1, which will drive two axes (elevation and azimuth, with only elevation hooked up so far). The elevation movement's exact position is monitored through a 10k linear potentiometer, which I'm using as an absolute encoder.
It's also monitored using a wire in the tonearm lift that completes a circuit with the metal rod that pushes it up and down, so it's aware if it's currently lifted, or set down on something.
If you're curious to hear me go into (way too much) detail designing the tonearm, you're welcome to check out a video I made documenting it: https://youtu.be/1wr13gz5l9k?si=OACKD8xNRtpHkq4G
The project is also completely open source, if you want to follow along there: https://github.com/pdnelson/Automatic-Turntable-STM-01
Lots of work to do on this yet, but I can at least show it playing a record now! I'm really excited to get the automatic azimuth movement working.
Hey everyone, I've been experimenting with AI agents lately, and I'm honestly surprised at how quickly they're moving from "cool demo" to actually useful tools.
So far I’ve tried using agents to:
But I feel like I’m barely scratching the surface.
I’m curious:
Would love to hear real-world examples especially anything in production or side projects that genuinely made life easier.
Let’s share what’s working (and what isn’t)!
Am I the only one having issues with the native Roborock integration being unable to get the device configuration from my account?
I'm able to remove it, re-add it, and authenticate to the account (and the Roborock app notifies me of the login, so at least that part works). But as soon as I connect it, I get "Failed setup, will retry: Failed to get Roborock home data".
My best guess is that Roborock changed something in their API. All that changed on this end was adding a second robot. However, this new robot wants its own set of floor plans rather than sharing the ones my existing bot has, which may be confusing the API? Idk; it definitely feels like a Roborock issue rather than HA, but I need to know which tree to bark at!
Some of what I found online suggests that the integration is able to communicate directly with the bots and the docks on the LAN, but that it needs the online account to at least discover them initially, and can also use that as a fallback if they’re isolated from HA on the network.
I'm very new to these locally run systems and am getting an error whenever I do anything. ChatGPT suggests I disable "xformers" on my GPU (an RTX 5060 Ti), but I am hesitant to do so in case that messes with the functionality of my PC as a whole.
Can anyone attest to this?
hey guys. back again.
my post a few days ago got really good reception and I am really happy about it. you guys seem like a supportive and wonderful community and I hope to be more a part of it in the coming future.
I wanted to ask a philosophical question to you all. in your eyes, what makes a side project? what is it about a side project that is special to you? what motivates you to continue, especially when you aren't succeeding?
Hello everyone! Today I want to tell you about my startup, FixDrive.
We noticed the pain points of people who are constantly late for work, don’t have enough time to pick up their kids from school, and, in general, people who want to automate their daily lives. Our startup makes this possible. We are currently at the late pre-seed stage and are actively developing to create a new vision of convenient mobility.
In our app, users will be able to choose a driver and a vehicle, as well as select convenient days and times so the driver knows where to pick them up and where to take them. For clients, this provides predictability in their trips; for drivers, it offers stable income. We are currently working hard to establish B2B and B2C partnerships for our waitlist.
A startup is not just a business — it’s a lifestyle. FixDrive is not another taxi app. It is infrastructure for predictable daily mobility.
I would be glad to see warm discussions during such a difficult time for startups. Everyone is talking only about AI now, but convenience and time-saving are things we should always care about.
Here is our website for more information.
Life can be a blur and it's easy to forget most day-to-day interactions. A harmless crabby comment or (unfortunately) a thoughtful gesture can fade from memory by the end of the week (or even the end of the day).
It's different when someone is under the stress of a life transition. We tend to remember unusual events more; we also tend to remember stressful events more. When you combine the two of those, you have a recipe for a lifelong memory. This is true whether the event was something good, like a wedding, or something bad, like a death in the family.
I've noticed this repeatedly, and others have said it's the same for them. That unkind remark following the death of a loved one? I don't mean to hold on to it. It's not that I'm carrying a grudge. But I remember. For many of these memories, it's been a long time, so I suspect I'll always remember. But it's the same for those who extended a kindness during a transition. I'll never forget those, either.
It's always good to be a kind, thoughtful human but, if you want to protect and nurture your relationships even more, be more careful than usual during those times that make memories that last forever.
Cannot even remember what model, reckon zit~
Rise and shine, detectives!
I have a couple automations that are basically when device has been on for x hours then do something.
These are not voice timers. These are durations I specify in the trigger.
Is there a way to see the status or duration of the timer? Is that exposed somehow?
In part, there's a little issue I'm trying to troubleshoot, but in general I would like to be able to see these. Anyone have any thoughts or experience with this? My sense is you can't get at them.
It seems that now, instead of giving a long-lasting OAuth token, they give one with a few hours' validity.
This is a pain because I use it for code reviews in a pipeline. To be honest, I'm totally lost about what is and isn't allowed: is running claude -p "review this branch" allowed in a CI/CD pipeline, or do we need to use the API for that? I know using the OAuth token with OTHER apps isn't allowed, but is it allowed to run it autonomously WITH Claude Code?
Many years ago (2010ish) I had a 401k with a small employer, got laid off, and the company said that I had to take my money out of their plan. I used a "financial advisor" a friend recommended, and they ended up putting it in an IRA that never earns any money. I have about the same amount in there as I did in 2010. They also took a few smaller 401ks from other employers and put those in there. All in, about 35k.
I have a job now with a Fidelity 401k, and you can move your old stuff into the new 401k. Except I can't, because now it's in an IRA. Fidelity said you can't go IRA to 401k because they have no way to validate that the contributions were made pretax.
Am I going to get screwed when I retire in 10 years on this IRA? Will I owe bigger taxes on it? I have it with a different company now, one my employer offers, but it's not a personalized service. I just figured that since it hasn't made money in 15 years, I should move it and try to earn something.
Hi everyone, my man sent me this screenshot, which I cropped, but I'm not quite sure what the palm tree notification icon is, and he doesn't want to tell me... it scares me a little, ngl...
Howdy y'all! I'm currently working in Japan for the next couple of months and I really wanna play LoL with my boyfriend back in the States. How bad is the ping truly gonna be? I'm going to be honest: I'm a Yuumi main and just enjoy spending time doing what he likes to do, so would it horribly affect my experience? Thanks all!
Just wanted to share and see how other people are doing things like this. I started off wanting to use AI Tasks to capture a summary of what happened in my garage after someone either came or left. It proved quite difficult: I couldn't get AI Tasks to work very well or let me define where generated files were saved. I decided to implement my own LLM provider for Ollama and OpenAI in Python and then tie that into AppDaemon for access to the Home Assistant API.
I've got some more work to do, but I am very happy so far with how things are coming out. The source is on my GitHub if anyone is interested in seeing what I did. I used two apps that have loose coupling via an event. One app collects images while motion is detected in the garage and then does a ternary search of sorts to find the image that would work best for the notification. The other app hooks garage door events, as well as events from the first app, to know when to send the push notification.
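The loose coupling boils down to one app firing a custom event and the other listening for it. A stripped-down sketch (the entity, event, and service names are made up; the real apps do quite a bit more):

```python
import appdaemon.plugins.hass.hassapi as hass

class GarageImageCollector(hass.Hass):
    def initialize(self):
        self.listen_state(self.motion_cleared,
                          "binary_sensor.garage_motion", new="off")

    def motion_cleared(self, entity, attribute, old, new, kwargs):
        # ...select the best snapshot from the motion window here...
        best_image = "/media/garage/best.jpg"
        self.fire_event("GARAGE_IMAGE_READY", path=best_image)

class GarageNotifier(hass.Hass):
    def initialize(self):
        # Coupled only through the event name, not through each other's code.
        self.listen_event(self.send_push, "GARAGE_IMAGE_READY")

    def send_push(self, event_name, data, kwargs):
        self.call_service("notify/mobile_app_phone",
                          message="Garage activity",
                          data={"image": data["path"]})
```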
I would like to know your opinion on the current Viego build; right now he builds Kraken into Collector, Shieldbow, IE, LDR.
Back then, Viego had a bruiser build which had far more counterplay than now, in my opinion. He feels extremely snowbally right now; it's like I'm playing against a better version of Shaco.
Cat vs Rohan
Okay so I traveled to New York about 5 months ago.
But before I went to New York - I was always traveling, always buying the newest things always spending money on new hobbies, and would go to every concert my city would have.
After New York I feel like nothing excites me that much anymore but like in the best way possible. I don’t over consume, I don’t feel like I need to travel/backpack around places, I don’t spend money on unnecessary things, don’t need to pick up an expensive hobby, I don’t need to go to every concert my city is playing.
All I really like to do is watch tv, read, lounge, walk, bird watch.
Is this a sign of something good or bad, in your opinion? Personally, I feel amazing because of it, but I do know that not being excited by things that used to excite you isn't always the best thing either.
OK, so I've seen a sub that helps people get answers when it comes to phone numbers that have called them.
Anyway, I recently found out someone is using photos of me to catfish or scam someone/people. They're using nothing about me except my photos.
I know the/a site, I know a username. The site does not allow me to search by username.
Is there a sub that can help me get more information about this person? If I provide information I do have?
Found this on the beach. What is it?
not my video but it was in my gallery and I wanna share it
The trades, in my opinion, are harder than people like to admit. I know there are people who have done it or currently do it and hate it. Society never talks about you guys, and I know you exist. Those of you that have left the trades: why?
My Powerwall is in an outbuilding on separate WiFi from the house WiFi (where HA is on the local network). Is there a way to connect the Powerwall to my HA?
Hello, I'm new to ComfyUI. Can you tell me why, when I click on a tab, it creates a new one? It's annoying; is there any way to prevent this?
Recently bought a 7900 XTX and installed Comfy on Windows. I thought the increase in VRAM would speed things up, but it's so much slower. What is the reason for this? Also, LTX2 completely froze my PC at the VAE decode stage, whereas the 3080 still works; it's just not that fast.
Anyone have any ideas on what these antennas are for?
We have a rather eccentric neighbor, and I'd rather not ask him in person.
The funny part is that the one with all the flat panels keeps getting blown down whenever it gets windy out. He must use it quite a bit, because it's always back up within a day or two of a big storm.
Quick recap:
Several years ago, my firm first set up a mega-backdoor Roth option by allowing employees to contribute to a Schwab after-tax account that would then roll over to a Roth 401k. I was all trad 401k up until then, so I was super excited to have this option. Signed up and let the contributions pile up.
Two years ago, I was helping a co-worker get signed up for the same thing and explained what it was, how it worked, etc. She said, "hey, I have this button to auto-rollover". I had signed up via paper form with the Schwab rep who held the meeting in our conference room, so I never went into the screen my coworker was in. When I went into the site and looked at the contribution options, I saw auto-rollover as an option to enable. It was at this point that I realized Schwab never enabled the auto-rollover when I signed up. Yikes! Nearly 3 years of contributions were just growing after-tax. Those gains were taxable because they never rolled over into the Roth.
I spoke with Schwab multiple times and they couldn't/wouldn't do anything for me. They claim they sent the rollover authorization form to me (they didn't) and that it was my responsibility to set up the rollover (again, all I signed was the paper form that the rep provided). Whatever, I own part of the f-up, but I was still pissed.
I enabled the auto-rollover, and for the last two years I've been converting large chunks of the AT money to just bite the bullet. I'm entering this in TurboTax this year, and the 1099-R form has the Box 7 code of G (indicating a direct rollover). TurboTax has a critical question: "Did you move this money from a 401(k) to a Roth 401(k)?" I didn't; this went from an after-tax retirement contribution account to the Roth. However, everything I understand tells me that the gains this money has incurred are taxable. The 1099-R shows the taxable amount in box 2a (over $38k). However, if I answer "no" to the TT question, it doesn't increase my tax liability.
Thoughts? Is this a TT thing and I should say that I transferred it from a regular 401(k)?
I really dislike that toying around with TFT pets automatically makes the game put them in ARAM too. Who told you to do that?! I didn't have it before and want to keep it that way.
How do I disable it? It's not intuitive at all.
I guess it's time to be a grown-up. I'm 26 now, and here I am about to graduate my second rehab, loads of debt, going through divorce and a custody battle, and set to go home to live with my father, where my 4-year-old will be at my ex's discretion... wow. It's so exciting to start over. A blessing, really... I've done my fair share of feeling ashamed at where I am in life, but oh well. The reality is that I need to grow up. I'm just a late bloomer. Am I too old to wear Forever 21 and Urban Outfitters?? Am I too old for Vans and Converse?? I have always felt like a teenager. The music and album drops that came out when I was 18, which I still have in heavy rotation, say "posted 8 years ago." What?!?! It's been 8 years?? What do I have to show for the past 8 years?? I'm excited to be an adult. Finally. Act like an adult. Talk like an adult. Look like an adult. The last two months I've had very little contact with anyone outside of my daughter and family... I think I'm finding identity. As a woman, and not a teenager anymore. Wtf... it took 6 years to finally feel like an adult.
I've just been getting to grips recently with the recent (ish) law changes to Non-Dom status and how much inheritance I may come into one day, being only 26 my parents never really told me much so I could get myself somewhere.
Anyway, I've realised my parent has never filed form SA109 with his self-assessment for non-UK residence. In fact, he's never trusted his accountants enough to fully disclose the extent of his assets abroad. He's always lived abroad most of the time, coming to the UK to visit me in the UK for 3-4 months maximum at a time. I used the SRT tool which indicated he may likely be Non-UK Resident.
Is this something to tell our accountants for them to begin filing that form - and if so do you then have to disclose everything you have outside the UK. I gathered loads of emails of my parent saying he's abroad over the years and paying utility bills, community involvement etc of property outside the UK and I'm just so confused as to how to go about this. He seems to think I should be the one to deal with it in the future with probate but I'm scared that I'll end up in a situation where I'm only just then disclosing all of our assets within and outside UK and arguing with them over residency and all sorts.
I'm also starting to question what incentive there is to stay in the UK if I chose to stay here myself, knowing that if I were to someday have children and had assets abroad, they'd be taxed 40%.
Sorry if this comes across as pretentious; I'm just confused and trying to wrap my head around all the changes and clean up what my parents started so I'm not arguing with HMRC one day. I also have a health problem, and it would be nice to be stress-free about this in future, given what has been set up.
I’ve been experimenting with ways to reduce my social media usage while staying updated on very specific niches in the news. I put together a Python-based project that turns noisy RSS feeds into custom-formatted news briefings.
Relying on a single LLM prompt to summarize news often leads to surface-level summaries or hallucinations. I wanted to see if I could treat narrative extraction as a multi-step consensus problem rather than a single task.
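To make the "multi-step consensus" idea concrete, here's a minimal sketch of the approach (not the exact pipeline; `call_llm` is a hypothetical stand-in for whatever LLM client you use):

```python
# Minimal sketch of multi-step consensus summarization.
# call_llm is a hypothetical stand-in for your LLM client.
from collections import Counter

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your own LLM client here")

def consensus_summary(article: str, n_drafts: int = 3) -> str:
    # 1) Several independent draft summaries of the same article.
    drafts = [call_llm(f"Summarize the key claims:\n\n{article}")
              for _ in range(n_drafts)]
    # 2) Break each draft into discrete claims, one per line.
    claims = []
    for draft in drafts:
        lines = call_llm(f"List each factual claim on its own line:\n\n{draft}")
        claims += [c.strip() for c in lines.splitlines() if c.strip()]
    # 3) Keep only claims a majority of drafts agree on; one-off
    #    hallucinations tend to fail this vote.
    majority = n_drafts // 2 + 1
    agreed = [c for c, n in Counter(claims).items() if n >= majority]
    # 4) Final pass: turn the surviving claims into a briefing.
    return call_llm("Write a short news briefing from these claims:\n"
                    + "\n".join(agreed))
```

Exact-string matching of claims is deliberately naive here; any real version needs fuzzier agreement scoring.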
The stack uses aiosqlite for state and "vibe coding" strategies for LLM ensembles. I'm looking for feedback on a few things:
I used to spend 45+ minutes a day scrolling AI news threads.
Most of it was:
• Hype
• Half-context screenshots
• Threads repeating each other
So I built something for myself.
A daily AI digest that:
– Curates only the high-signal updates
– Breaks them into structured summaries
– Explains why it actually matters
– Includes prompts you can immediately test
The goal isn’t more information.
It’s better thinking.
Curious how others here stay ahead without burning out.
I’m curious to hear about the moment in your life when everything seemed impossible, but you pushed through anyway. What happened, and how did it change you?
Two factors of production are Land and Labor. For this, let's add a third category: Data. Land and Labor create Capital, but so can Data, in the form of better AI and robotics. But when we make Land, Labor, and Data free, we lose their full potential to provide Capital. So we try to subsidize them; however, without using their potential, we can only rely on stores of capital that are ultimately unreliable.
The AI model trainers rarely care about consent or the quality of data; by not taxing it, we're essentially letting them collect rent on what we decide, or don't decide, to share. If we tax Data, we discourage unauthorized use of creatives' and coders' data without needing new copyright laws and their unintended consequences. These guard rails make people feel safer sharing open-source information. Taxing Data isn't a losing situation for AI companies either: when we give value to Data, that data has a quality floor.
I'm proposing a "Data Value Tax", which would in theory put a price on most Data used for training models. Thoughts on this as a solution to "AI cannibalism" and the drama about copyright infringement?
What’s a sketch that is objectively bad that you can’t seem to shake possibly decades later?
I built this alone. No team. No users. No proof.
Most people quit here.
No users.
No money.
No validation.
But this is where real things begin.
I didn’t build this because it was easy.
I built it because it needed to exist.
Sometimes the only proof you have… is that you didn’t stop.
Just one click away….
👉 PromptForge.store
#buildinpublic #startupjourney #solofounders #aistartup #entrepreneurmindset
The first clip somehow made its way to my FYP and it really piqued my morbid curiosity in a way that made me question it. Why are these clips filmed in a way that makes no sense, like it’s some kind of found footage? Why post 2.3k videos that don’t really have a purpose? I realize that it may just be a cute elderly woman posting in her spare time, but it’s just so unnerving for me. I want to know if anyone else got the vibe that I’m describing.
I'm buying a home and they are just so expensive.
I'm an extremely light sleeper so townhouses and condos are just a really bad option for me, I do need a house.
There have been bidding wars on the houses that aren't complete dumps. I don't want to be house poor. I'm feeling very tempted to increase my budget even though that goes against my frugal tendencies.
I don't really have any specific questions. I'm just looking to commiserate, and I'm wondering whether any of you can relate or have any surprising insight. Thanks
My HAOS, running on a VMM on my Synology NAS, is experiencing an issue. Yesterday, I updated to OS 17.1, but now I can’t access http://homeassistant.local:8123/ or
The Home Assistant CLI indicates that everything is functioning correctly, and the Observer at homeassistant.local:4357 is working fine. However, I'm unable to access the UI.
I've even attempted restoring an OVA backup from last December, but I still can't access the UI. After creating a completely new setup using a backup tar file from December, I'm still unable to access the UI.
I’m at a loss and would greatly appreciate any suggestions or insights you may have.
I can't see anything in the oven where something is missing. We turned the oven on and there was smoke, and this thing was melted on the bottom of the oven. What the heck is it? The oven still seems to work.
Yay!! I had an office job I worked for 3 years, but unfortunately my spine deteriorated from it (hyper-mobility and scoliosis certainly played a role). No amount of p/t, steroid shots or ergonomics helped. It was bad, and it was a major reason why I lost my job. I have a lumbar and cervical disk bulge. I’ve basically been lying in bed for 3 months, working toward my certification and studying for the exam. Anyway, I passed the content exam earlier this week so I can finally be hired as a teacher for the fall!!
I finally feel better enough to try a light workout at the gym today. the teacher salary is enough to where I can finally become financially independent from my parents too.
Yeah I know teaching has its own pitfalls. But I am completely fine on my feet. Nothing is as miserable as sitting in an office 50 hours a week having nerve pain radiate down both arms and legs, and losing all my hobbies and my early 20s
Currently 18, and I just realized how important networking is. What advice can you give me for networking? I am not popular at all in school, so I am miserable.
Found on farmland around Oxford, UK.
I make $3,200/month after tax, rent is $900, and I have no other debt. I've been putting $400/month into savings but I'm not sure if I should redirect that to pay off my student loan faster (4.5% interest) or open a Roth IRA. My job offers no 401k match. What would you prioritize in my situation?
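For context, here's the rough math I've been staring at; the 7% market return is only an assumption, not a prediction:

```python
# Rough future-value comparison of the two options over 10 years.
# Assumes a 7% nominal market return (a guess) vs. the stated 4.5%
# loan rate; the "return" on prepayments is the interest you stop
# paying. Ignores taxes, loan balance/term, and Roth contribution limits.
def fv_monthly(monthly: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12
    n = years * 12
    return monthly * ((1 + r) ** n - 1) / r

roth = fv_monthly(400, 0.07, 10)   # invest the $400/mo in a Roth IRA
loan = fv_monthly(400, 0.045, 10)  # prepay the 4.5% student loan
print(f"Roth @ 7%: ${roth:,.0f} vs loan payoff @ 4.5%: ${loan:,.0f}")
# ~ $69,200 vs ~ $60,500: the market likely wins on expectation,
# but the loan payoff is guaranteed.
```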
I've been playing ARAM since its inception in customs, and there's one thing that consistently tilts me harder than bad RNG comps: the "tradition" of dance parties, emote spamming, and wasting Flash at the start of the game. To me, this isn't harmless fun or "ARAM etiquette"—it's intentional sabotage of your team's win condition. The objective is to destroy the nexus. Using a summoner spell for memes or stalling the game while everyone dances puts your team at a disadvantage, even if it's brief. Flash is a game-changing tool for engages, escapes, and plays—throwing it away early is throwing away options, and in a snowball-heavy mode, every edge matters.
I get that most of the community sees it as a mutual truce or good-vibe ritual, and yeah, when both teams do it symmetrically, it's "balanced" in a casual sense. It's been around forever, Riot has never punished it, and in lower MMR it's basically expected. Threads here often defend it as part of what makes ARAM chill and fun, and it's definitely declining in higher MMR where people skip it for actual pressure. Attacking during a dance party is sometimes called rude, but more often it's just accepted as playing to win.
But I stand alone on this hill: even if 9/10 players are cool with it, if one person isn't (like me), it's non-consensual suboptimal play at that player's expense. It's selfish indulgence in a team game—prioritizing personal "fun" memes over collective effort. A casual approach that tolerates this kind of deviation opens the door to worse issues. I refuse to participate and never will; I poke, zone, and preserve my tools from 0:00.
I'm not wrong about this in my view, and I'll keep calling it out. But realistically, the current state is that it's a tolerated (if debated) norm Riot ignores, and most players either join in or don't care enough to flame over it.
What we really need is targeted, official language from Riot on clear expectations when queuing ARAM. Is this mode purely casual where rituals and suboptimal starts are fully endorsed? Or should players be expected to try their best to win from the moment the game loads, with no mandated "truce"? Give us explicit guidelines so we know what’s acceptable vs. reportable. Until then, thoughts? Am I the only one still mad about this in 2026?
League of Legends (LoL) is sorely lacking in consistent, useful options to deal with the abundance of sustain among its roster of champions and items. Whether it's an enchantress pumping her team with heals or a tank becoming immortal through items like Unending Despair, it's clear that something needs to change. As someone passionate about game design and an aspiring game developer, I'd like to share some ideas on how to tackle LoL's undying issues.
GRIEVOUS WOUNDS
In the game's current state, Grievous Wounds is necessary but unfortunately underwhelming. Reducing enemy healing by 40% sounds like a lot, but in practice it often isn't enough. However, the issue with buffing the healing reduction is that it can have the opposite effect, where healing is useless instead.
My idea: make Grievous Wounds stack based on how many players apply it. For example, say you apply Grievous Wounds to the enemy Mundo. Since you are the only one applying this effect, his healing is cut by 40%. Now, imagine one of your teammates also applies Grievous Wounds to the same Mundo. Now, the healing reduction is increased to 60%. Finally, if 3 players apply Grievous Wounds to the same target, the reduction increases further to 80%.
Of course, the numbers could be tweaked, but the idea is that teams now have a consistent way to deal with high-healing team compositions. Additionally, stacking healing reduction in this way encourages team-play, where players are encouraged to focus their abilities on the same target to secure kills. Sustain will still be very powerful, but proper coordination and planning will be able to deal with it in a more consistent manner.
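To make the rule concrete, here's a quick sketch of the stacking math, using the exact numbers from the Mundo example above:

```python
# The proposed stacking rule: 40% reduction for one applier,
# +20% for each additional applier, capped at 80%.
def grievous_reduction(appliers: int) -> float:
    if appliers <= 0:
        return 0.0
    return min(0.40 + 0.20 * (appliers - 1), 0.80)

def effective_healing(base_heal: float, appliers: int) -> float:
    return base_heal * (1.0 - grievous_reduction(appliers))

# Mundo tries to heal 1000 HP:
for n in (1, 2, 3):
    print(f"{n} applier(s): {effective_healing(1000, n):.0f} HP healed")
# 1 applier(s): 600 HP healed
# 2 applier(s): 400 HP healed
# 3 applier(s): 200 HP healed
```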
ITEM CHANGES
Now that we've addressed the problem with Grievous Wounds, let's make its associated items a more worthwhile investment.
Morellonomicon
Morellonomicon is an item that has very limited utility in the game's current state. While my changes to Grievous Wounds would make it more useful, this item still needs a lot of help. Often, players just buy an Oblivion Orb and don't bother finishing the item. By buffing its AP and giving it Magic Penetration, hopefully this item will be a little more desirable for burst mages!
Mortal Reminder
Although the Grievous Wound effect of Mortal Reminder is sometimes useful, being unable to build it alongside Lord Dominik's Regards nearly ruins any viability this item could have. I decided to replace the Armor Penetration with Attack Speed as this is something the item had in earlier patches. I also considered making Mortal Reminder give Lethality, but decided it would make it too similar to other Lethality items.
Serpent's Fang
Serpent's Fang is an item with a unique and useful effect of reducing enemy shields—the only item in the game to do so! You'd think that such a unique trait combined with the abundance of shields in LoL would mean this item is a must-buy... but you'd be mistaken. In practice, Serpent's Fang's stats are less than desirable. With its new Crit Chance, hopefully ADCs will be encouraged to try out this item in tough match-ups, and assassins will be grateful for the extra damage.
[NEW] Spellshredder
Not only are burst mages lacking in Magic Penetration items, but AP champions as a whole have no items similar to Serpent's Fang! This means that they struggle a lot against team compositions with a lot of shields, or even items like Seraph's Embrace. While shields should still be powerful tools against burst damage, an item like Spellshredder would ensure that burst mages aren't helpless against them.
Those are all my ideas for now! Of course, the numbers on all the various items and mechanics could be tweaked. However, I think these would be healthy changes for the game, opening up more opportunities for team play and unique builds. Hopefully, by some miracle, these ideas can make their way to the dev team! :)
Bro Loves Fishing
Pitsa natsis
He touches it again!
I found this in a chamber of my 19th-century house. Nobody knows what it is.
What is it?
Can someone please make him smile a little more (no teeth)? It would be greatly appreciated, willing to tip a couple dollars tysm 🖤
Alright, so right now I'm not trying to ask if I got something here... anybody who doesn't believe this is something, that's your opinion and you would be wrong (sorry, not to sound smug or anything 😐)
This is a screenshot of a video from a security camera during an investigation and this room was clear. Nobody could have entered or tampered with anything at any point.
In the video, the cat freaks out slightly after an object appears over the bed; the thing blips quickly on and off the screen in a single frame, and the cat's staring right at it. Then a toy fell from a shelf and was found 4 feet away, pressed against the wall.
I was in the hallway when this happened and walked in immediately after the toy fell to find the cat panicking. There was nothing in that room... Afterwards, a girl who was outside ran up saying she saw a tall shadowy figure in the window. Then we asked questions and we got 3 EVPs. Very clear EVPs!
My question here is, what kind of spirit is this T-Rex looking thing?
Hey all, new to the sub! Me (25M) and my gf (26F) are starting to get our financial stuff together. We've been slowly paying off CC debt, paying extra on monthly car payments, and setting aside about $1,500-2,000 a month combined to move out; we're at $3 grand right now after a month of saving. My question is: how much would be a good amount to have saved before we move out? Rent we're looking at is $1,300 a month plus $300 in utilities, and we bring in about $4,100 a month after taxes. Our monthly debt right now is $1,200 a month. Any advice is helpful, thanks y'all!
Cuba had been a Spanish colony since the early 16th century. By the 19th century, Spain was no longer the dominant global power it had once been. After losing most of its mainland American empire in the 1820s, Cuba became the shrinking empire’s most valuable remaining colony, and it held on tightly with authoritarian repression. Two revolutions were suppressed in the latter half of the century, and slavery itself was not abolished until 1886.
In 1895, Cuban revolutionaries led by José Martí and others launched another war for independence. Martí, who had spent years in exile in the United States building support among Cuban expatriates and appealing to American audiences, was killed only months into the conflict. The war devolved into guerrilla fighting, sabotage, and escalating reprisals.
Spain responded with brutal repression under Governor-General Valeriano Weyler. His policies included summary executions, forced relocations, and most infamously, reconcentración: the herding of hundreds of thousands of rural civilians into guarded camps. The term “concentration camp” itself derives from this policy. Between 200,000 and 400,000 civilians are estimated to have died from starvation and disease.
Meanwhile, American economic interests were becoming deeply entangled with the island. After Cuba’s Ten Years’ War, U.S. capital poured into its struggling sugar economy. By 1894, roughly 90 percent of Cuba’s exports went to the United States.
Simultaneously, many Americans were horrified by reports of Spanish atrocities. A war with Spain, to American imperialists, would be strategically important, with humanitarian justification. After Martí’s death, more conservative and U.S.-friendly figures became the public face of Cuban independence in American eyes, and covert American financial aid and weapons shipments to the rebels increased.
If you're interested, I explore how these converging pressures helped set the stage for the February 15th sinking of the USS Maine and the outbreak of the Spanish-American War here: https://open.substack.com/pub/aid2000/p/hare-brained-history-volume-67-the?r=4mmzre&utm_medium=ios
Hi,
I'm a software team lead at a startup, looking to hear how you utilize Claude Code agents/skills to improve your work as a team lead.
I want to make my work more efficient and focus on my employees' development, pushing them to better themselves.
UPDATE:
The City pumped all the sewage out of both basements, and their insurance company has put all the families and pets in hotels and mobilized a disaster cleanup company, which started that very same day and is still cleaning. That is all I know so far.
I am seeing quite a significant number of posts regarding male loneliness. This is a significant men's mental health issue.
I am 33 now, unmarried, and have never been in a relationship. I have found a few ways to deal with my solitude that suit my situation and character. This will differ from person to person, according to their temperament, I guess.
I am an introvert and not much of an outdoors guy. I don't travel much. A significant portion of my day is spent at work. At home, I listen to music, watch movies, series, etc.
I am also interested in reading books. I have a physical collection of around 1,000 books covering various subjects. Presently, I am writing a novel for a competition and translating an academic book and some poems.
I used to go for long walks when I lived in a town, but since I am now in the city, I exercise indoors mostly.
I have a close friend of 15 years, with whom I rant occasionally. He got married a few years ago, but the ranting still continues :)
So, how do other unmarried men deal with their solitude?
(P.S. This does not really mean that I am a stoic person, unaffected by the melancholy of solitude. Occasionally, the cloud of despair descends upon me.)
Mixed techniques on paper.
I have finished my first DIY project based on the Raspberry Pi PICO 2W and a 5.65‑inch ePaper display. The goal of this system is to show the weather forecast for the current day (see the left side of the screen) and for the next five days (see the right side of the screen). I forked the software from micropython‑ePaperWeatherStation (https://github.com/frederik-andersen/micropython-ePaperWeatherStation) and improved several things. My version of the software can be found at https://github.com/Damov/micropython-ePaperWeatherStation.
The system is based on a Raspberry Pi PICO 2W, a Waveshare 5.65‑inch E‑Paper Display Module, and an Adafruit PowerBoost 1000C combined with a 3.7V LiPo battery for power management. On the GitHub page, it is documented how to wire these components. I will also publish the STL and 3MF files for 3D printing the frame at some point.
This is version 0.1. What I personally don’t like about it is the fact that you won’t immediately notice when the battery is empty, because the status LEDs are on the back of the Adafruit 1000C, and the screen (as it is an ePaper display) will always show the last image. Therefore, I am planning to replace the Adafruit PowerBoost 1000C with a Waveshare UPS Module for Raspberry Pi Pico, which provides an uninterruptible power supply and battery status monitoring via I2C. This module makes it possible to show the battery status on the Pico through I2C, and therefore it can be displayed directly on the screen.
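For anyone planning the same swap, here's a minimal MicroPython sketch of the I2C battery readout. The 0x43 address and GP6/GP7 pins are assumptions carried over from Waveshare's demo code, so check the module's wiki for your revision:

```python
# MicroPython sketch for the Pico: read battery voltage from the UPS
# module's INA219 over I2C. Address 0x43 and pins GP6/GP7 are taken
# from Waveshare's demo code; treat them as assumptions.
from machine import I2C, Pin

INA219_ADDR = 0x43        # assumed I2C address of the module's INA219
REG_BUS_VOLTAGE = 0x02    # INA219 bus voltage register (datasheet)

i2c = I2C(1, scl=Pin(7), sda=Pin(6), freq=400_000)

def battery_voltage() -> float:
    raw = i2c.readfrom_mem(INA219_ADDR, REG_BUS_VOLTAGE, 2)
    value = (raw[0] << 8) | raw[1]
    return (value >> 3) * 0.004   # LSB = 4 mV per the datasheet

v = battery_voltage()
# Very rough state of charge for a 1S LiPo: 3.0 V empty, 4.2 V full.
pct = max(0.0, min(100.0, (v - 3.0) / 1.2 * 100.0))
print("Battery: {:.2f} V (~{:.0f}%)".format(v, pct))
```

With that reading available, the battery percentage can be drawn in a corner of the ePaper display on each refresh.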
I have dozens of documents about the Defendants' conduct over several years, relevant to things like whether their conduct amounts to fraudulent or negligent misrepresentation in Singapore.
The question is which Claude AI plan to get in order to check the accuracy of the Statement of Claim and process further affidavits as necessary.
Tyvm.
Hi, I got a one-time project involving some big document editing. I'm hoping to snag a free trial to maybe get it over with faster, since Copilot seems to be absolutely dogshit at formatting and generating text. And maybe try it for a bit of coding, since I heard it's much better than the last time I tried it a year ago.
Please DM if you have a spare invite, thank you!
- 32 yo single male just got my license
- 120k net worth
- Make about 110k in NYC (take home 6.8k)
- 0 debt of any kind
- I currently get about $800 a month on food stipends
- Gas and parking at my apt will be free
- Rent 1800
- Health insurance 650
- MISC - 800 (food, subscriptions, etc)
- I save most of the excess (surplus sinking fund allocated 70% to saving, 30% to misc)
- Can I reasonably afford a leased car (430/month + 260/month insurance) (my drive to work per year will eat up 8k miles, I plan to get 12k miles so I can do a few road trips)
- I know the fat fire people will say no, but I’m not exactly an aggressive saver by any means. I’d prefer to increase my income over saving aggressively.
Why I prefer to lease instead of buy:
- at 0 down, if I total the car I’m in a better position than buying / new driver
- flexibility: my apartment lease ends in 3 years, kind of lines up with that.
- excited to be driving a new car; I waited a long time to get a car compared to my friends, who have been driving for 10+ years
- I want a smaller car for my first car, but I know I ultimately will want a bigger car when I settle down (hopefully in 3 years)
- I'll invest the money I would have used to pay cash for a car into VOO/SPY immediately, and make modest returns to offset the lease
- don’t gotta worry about maintenance
- my friends say it'll let me drive without much commitment; after 3 years I can buy a car because I'll definitely know what I want by then.
All my friends have been telling me I should lease for my first car. Only one friend has suggested I buy a modest 15-18k car cash, but he’s extremely frugal.
Thank you!!
I play late and too tired, or I’d be silver elo. I’m a gamer dad that plays after a long day… my son is 7, and I’m hoping to get him a computer to play alongside me, any other dads similar situation? Would love to forge a friendship with a dad that comes built in with a friend for my son.
I am trying to figure things out. I am a 25-year-old male with a background in law. I did an undergrad degree from 18 to 21, started my masters but dropped out due to personal circumstances, dusted myself off, went back to work, and then went back to complete my masters sooner than I thought I would, passing with a distinction (the highest grade possible) earlier in 2025. I left the company where I worked in criminal defence for 4 years to go somewhere better, performed well for 2 months, and realised that I do not want a sedentary job and don't actually enjoy what I have been doing. The reason it took me this long to realise was that, with all the personal issues I had going on, I just needed stability, so I never evaluated how, or whether at all, I enjoyed the job.
In summary, I am now 25. I live at home with a very emotionally abusive mother and emotional difficulties stemming from childhood abuse, which I will be seeking therapy for. I have a wonderful, beautiful, caring girlfriend and we have an amazing relationship. I no longer want to live at home, and probably not even in London. I want to move to Australia and try a new life, but I am not a qualified lawyer, nor do I want to spend more time qualifying, so if I move to Australia I will be doing work not requiring qualifications. I really tried to make it work in London. Years of work and school. Trying to improve my relationship with my mother. Feeling suicidal, humiliated, and ashamed. But I am miserable and feel like nothing is going to change here. I am actually a very optimistic person, but I want to move away now. I want to move because I am never going to get this time back. I don't want to turn 30 and not have lived in another country, for better or worse. I don't want to be upset or abused any more. Everyone has called me stupid for deciding to quit my job in law. I didn't work my notice; I couldn't stand another day. Whilst I am figuring things out, I am working freelance giving advice to detainees in police custody, which is on average good pay for short hours.
My girlfriend is very well established here and I would not ask her to leave the country with me.
Am I being stupid?
I built brieftea on iOS; can someone guide me on that, please?
I got this on Senna while having Dawn of the Blossoming Sword, and not only does it give lots of HSP but also AS. So my question/discussion is: what is the mathematically correct item build (don't factor in other augments or champ passives, just strictly items) to get the highest HSP possible, with Dawn of the Blossoming Sword as one of the slots?
Recently I started a YouTube channel sharing my experience as a Software Engineer.
Usually my videos don't have much to edit, only removing the silent parts and cutting and grouping scenes. I checked a few tools to automate this, and they were usually paid, so I decided to create an open-source tool for it.
It's still a work in progress, so some things might not work at their full potential, but I like the way it's going and I can see it being helpful to many people soon.
I want contributors to help make this app come true, so we can all use it and build a useful tool for everyone!
Check the repo at https://github.com/KozielGPC/video-editor-app/
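If you're curious about the core idea before opening the repo, here's a minimal sketch of the silence-detection step. This is the general technique, not necessarily how the app implements it, and it assumes ffmpeg is installed and on your PATH:

```python
# Sketch of the silence-detection step: ask ffmpeg's silencedetect
# filter where the quiet spans are, then parse its stderr output.
import re
import subprocess

def find_silences(path: str, noise_db: int = -35, min_len: float = 0.6):
    """Return (start, end) pairs, in seconds, of detected silence."""
    result = subprocess.run(
        ["ffmpeg", "-i", path, "-af",
         f"silencedetect=noise={noise_db}dB:d={min_len}",
         "-f", "null", "-"],
        capture_output=True, text=True)
    starts = [float(m) for m in re.findall(r"silence_start: ([\d.]+)", result.stderr)]
    ends = [float(m) for m in re.findall(r"silence_end: ([\d.]+)", result.stderr)]
    return list(zip(starts, ends))

print(find_silences("take1.mp4"))
```

From those (start, end) spans, the remaining work is cutting out the quiet parts and concatenating what's left.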
I want to resize the selected image by typing in the exact pixel dimensions. How do I do that?
This is why I never play that damn mode. Why do I always have to get a stupid thing following me in ARAM afterwards? It's not even clear how to turn it off!! How do I remove it from following me?
Hi everyone,
I’m a solo dev and a huge geography nerd. I always felt like other geography games relied too much on just memorizing shapes and luck, so I wanted to build something where you actually have to use logic to narrow down the location.
I ended up building GuessTopia. It's a daily geography web game (similar to wordle), but instead of just guessing based on a map shape, you have to use cards like Climate, Languages and Population to narrow it down.
I've been working on it as a solo project in my spare time and I'm really trying to get the "feedback loop" right. It has a Country mode, City mode and a new Capital Mode.
I'd love to hear what the side project community thinks.
Check it out here: https://guesstopia.com
(Totally free, just a hobby project I'm proud of)
If you were next to me, I’d say these things:
1.) I miss you
2.) I am so stressed with K and Co here. Like I cannot escape the chaos and noise and misbehaved kids and K does not take responsibility for them or handle them so it falls to the other adults. Haven’t felt this anxious in a v v v long time.
3.) A needs a med boost like it’s nobody’s business. Guessing because of growth that the same dosage is no longer cutting it. But holy crap, that’s not helping.
4.) I needed a new thread because it was getting tough to find comments to respond to lately.
5.) I need a vacation.
6.) tell me about the process for your D. (No pun intended.) what has been done, what needs to happen next, length of time, etc.
7.) How is J’s mom?
8.) any other news to report?
I was nostalgic for some good ol' fun quizzes. The state those websites are in is sad. Popup ads, demanding your email, even payment AFTER you take the quiz to see the results?! 😵
I figured, we live in the AI era. Can we use it to bring back a better Internet?
Claude Code made this for me, including writing the quizzes and calling the OpenAI API to make the images. Stuff like this is great for testing out how well it can handle long running tasks, subagent swarms, and in general learning to use it better on fun, simple tasks.
In case you're bored and want to procrastinate for a bit: https://quizcheese.com/
Have you gotten annoyed with any websites or apps and decided to just have Claude remake them? Or, are there any you'd like someone else to make?
I want to find a place where I can tell everyone who likes crunchy peanut butter to go fuck themselves. it was just a random thought so I posted it in r/randomthoughts. but they deleted it and told me to come here and ask where to post, so here I am.
(I upscaled the video to 1280p, which is why it’s a little shimmery)
I want to get into game development, but coding takes waaay too long to learn and I just want to get started. How good is Claude for game coding?
Installing some undercabinet lighting, and I would like to control it with a battery-powered, non-wired switch that can do dimming.
Any recommendations?
My first thought was to use a Shelly dimming relay installed underneath the cabinet where the 120V comes in, but I'm not sure what to use for the switch. I found the BLU RC Button 4 online. Could this work? Or is there a better switch that can do scenes or just straight-up dimming?
Thanks in advance for your help
I'm in a weird situation with my team where I'm being asked to put on a pretty decent amount of dirty weight pretty quick. Other than the usual heavy cream, is there anything more calorie-efficient to be doing? Mainly asking dudes who've done football in the past.
Should I switch to rechargeable batteries, or is it not necessary? Mainly asking for AA and AAA batteries. Tbh, non-rechargeables haven't really bothered me so far…
Edit: many interesting points of view here, thanks so much!
In this video I talk about the nuclear fusion process of main sequence stars, the life cycle of the star, and at the end I compare my image taken with my 10” Meade to Hubble’s 1999 image and Walter Baade’s 1950 image of Messier 1. In doing so, you’ll be able to see how much the supernova has expanded within the last 75 years.
I am unsure, and I want to know what the difference is between them. I know their job descriptions, but overall, what is the difference?
Please remove the girl and the toddler
Currently the largest flag in the world. Located in Baku, the capital city of Azerbaijan 🇦🇿
Acrylic on 7x10 paper
Last Sunday was phenomenal! After 6 weather delays we finally got a good weekend to launch! It was challenging to work through the cold but we got it done. 27 rockets flew! I’m at about 85-90 new rockets and rocketeers now (I’m losing count 😂, gotta update my spreadsheet). I’m on track for my previous goal of 150 by April. My new goal is 1000 by the end of the year! I want to inspire as many people as possible. I’ve brought kids back from previous rounds as volunteers and some of them are in their 3rd rocket! Indianapolis will have the highest rockets per capita in the world!
Quick background: I created and run a high power rocketry bootcamp to teach kids how to build and launch rockets to inspire them to become astronauts, engineers, scientists, artists, whatever they want!
I’ve never done this before so bear with me idk how it works! Let me know if this is too much work and I’ll change the free aspect
I’d really like it if someone could put my dog in this photo as a sort of promo/member reveal.
I’m getting impatient about waiting to see who the final member is so I’ve decided to put my dog in the group in the meantime
If you could put text that says something like "introducing honey as the 4th member" or "PRELUDE: the final piece member reveal #4: Honey"! Like they did on Lexie's reveal. Or something like that, and remove the Insta icons on the photo as well.
I’m including the picture of honey I want to use, in addition to some of the promo pictures to show the font and style and formatting of how they revealed other members, etc
Thank you in advance! This will mean so much to me and to honey as she has always wanted to be in a girl group and I want to see her dreams come true x)
If anyone needs anything else from me to help I’ll include more photos from their insta or something and I’m so excited to see the reveal!
Super cool shape. Did a google image lookup. Nothing so far. Anyone know?
Hey everyone,
I've been working on something called The Cypher. It's a web app that lets you:
• Generate beats using AI (no production experience needed)
• Record your vocals/bars directly over them
• Collaborate with others to build tracks together
The idea came from wanting to make music creation more accessible. Not everyone has a DAW or knows how to produce, but a lot of people have bars they want to lay down.
Right now I'm looking for early testers before a bigger launch. Would love honest feedback on:
• The beat generation quality
• The recording/collaboration flow
• Anything that feels clunky or confusing
It's free to try: thecypher.music
Happy to answer any questions about how I built it (AI/tech stack, etc.) if anyone's curious.
Thanks for checking it out 🙏
Not our exact log, but pretty close to what we saw last month. That’s when we started taking missed calls more seriously.
Last month we did something embarrassingly simple. We finally checked our missed call logs properly.
Result surprised us:
37 missed calls in one week.
Not because we didn’t care.
Just normal office reality:
Nothing dramatic, but when you realize every missed call could be a customer, lead, or support issue, it hits differently.
That’s when we seriously started testing AI voice agents, not hype, just practical curiosity.
We tried them for:
Here’s what honestly stood out:
After-hours handling = immediate win.
No hold music, no voicemail black hole. Simple questions and bookings worked better than expected.
Appointment scheduling surprised me.
Structured conversations + calendar sync = huge time saver.
Sales calls? Still tricky.
The moment the voice sounds even slightly robotic, trust drops fast.
Also learned something important:
AI voice agents aren’t just smarter IVR.
People don't want menus anymore, they want conversation.
We explored multiple platforms (and even looked at emerging conversational approaches like what teams such as Dograh AI are building open source). The biggest lesson wasn't AI capability, it was reliability under real usage.
Demo ≠ production. Always.
Honestly, I’m excited but cautious.
Feels like we’re entering that phase where:
AI doesn’t replace people
It removes repetitive friction first.
Curious about real experiences:
Because hype is everywhere, but real deployment stories are rare.
I would genuinely love to hear yours.
These are on the corners of all the mirrors in my bathrooms. I don't really like the look of them and would like to replace them with something more my style.
This is from a Life reportage from 1961 on the Californian surfer scene. She is neither a model nor Sally Field (if you read that online, it's wrong), just a girl who likes surfing in Malibu at Surfrider Beach (Malibu Lagoon State Beach)...
Abstract Digital Collage - 2022
I recently took a trip back home to Hawai'i, and on my first day back I tried to play a game of URF. It wasn't until halfway through that I realized I'd forgotten how bad the ping is in Hawai'i. Specifically on O'ahu: even on the easternmost part of the island, as close to the mainland as I could get, the ping will never drop below 100.
It's kinda crazy to think people like Dyrus and Matt made challenger with this ping and became pros.
So that got me thinking: what other parts of the world have high ping for games because of location rather than internet speed? That's my question in this post. What's the highest ping you've seen because of location?
Apple Maps, Virginia
I have used Sonnet 4.5 for structure, pedagogical flow, and picking up repetition, but I do not use it for writing; even with my voice, I tested it and it churns out extrapolated pablum from forums and blogs.
I keep seeing posts saying that Opus is better and uses fewer tokens because of its efficiency. So I asked Claude if Opus would get better results. It said no, but I don't know if that is a biased response.
The book is almost finished. It is quite detailed at around 400 pages and is for readers using Debian with Btrfs and Snapper snapshots, covering builds and emergency recovery. It's not exactly rocket science, so what benefits would Opus give me, and would it be worth the extra tokens? Would long-form technical prose be a fit for Opus?
Friends were on a mountain in Tuscany and saw this. We are all wondering what it was.
I've looked it up, and the only answer I can find is an "intrinsic property of quantum particles," but I still don't understand.
They don't know that they're next on my list.
I just re-upped my lease yesterday and, once again, had to accept the $25/mo Valet trash charge. I am so livid about having paid at least $1000 for this service over the years. A service I have not used once.
If you're unaware, this is a non-optional service that's tacked onto many leases. Valet trash's pitch is something like: "hey, charge your tenants $X/mo, give us half (or whatever), let them leave their disgusting garbage in the hall, and we'll pick it up every once in a while".
Anyway, I did some research and found bill 5313 from early 2025, which originally included a clause banning "non-essential" services. It was clear from notes and recordings this was designed to target Valet trash-like services.
Unfortunately that clause was stripped away at some point before passing. It's hard to tell what happened. The opposition is basically: "Valet garbage businesses can't survive as an optional service". They complain of tenants who use the service but don't pay, because Valet trash can't tell (LOL).
Just ridiculous these businesses have been allowed to fleece tenants for so long.
I know it's intended to be chaos and fast games in both modes, and I do like this, but isn't it a bit too much? Just see how fast champions die in regular mode, now imagine adding a bunch of super OP buffs on top...
The game just gets a little more balanced when there are carries vs tanks, since you have super damage vs super tank; otherwise it just becomes a contest of who can kill the enemy faster.
Like I said, I understand the intention and agree with it, I just think it's too much. If you want to buff the damage a lot, at least champions need to have a bit more durability...
Hi everyone,
I am trying to reconnect with a spiritual guide I've been consulting for the past two years via Astrotalk (ID 2897). He is a tarot, Vedic astrology, and numerology reader with over 12 years of experience, and studied under a guru from Ramakrishna Math, Mumbai.
He may have relocated from Mumbai to a nearby town. He is a B.Tech CSE grad (2008-2010), likely between 35-40 years old, and was active on Astrotalk until recently.
His guidance has been deeply meaningful to me, and I'd genuinely like to reconnect. I have his profile photo for identity verification if needed (not sharing publicly for privacy).
If you've consulted him offline or know where he is, I'd be very grateful for a message.
Thank you for reading.
Is the famous "hall of fame" close? I'm coming off 15 consecutive losses 😅
Hey everyone,
I built this tool because I work in finance and I wanted a fast way to turn numbers into short animated visuals to present to others.
It lets you paste numbers and export animated charts for slides, presentations, social media, etc. Nothing fancy, just focused on making basic metrics more impactful.
If anyone else presents KPIs regularly, it might be useful:
https://kpistudio.app/
Would love feedback.
At the moment, my parents keep making dumb decisions. I'm an only child, and my mum and dad keep asking me for money. I'm still in education and earn about 25k currently. I'm struggling to save on my own account because I'm literally trying to make ends meet with them and keep them afloat, but it's getting to the point that I need to focus on my own life. I've only gone on one holiday in my life, and I want to save up for my own life, house, car, etc., but I literally can't do that with their money issues. If I don't help them, then we have no house, but if I do help them, they'll use any spare money on other things. I don't want to get too into detail about what they're doing, but I hope this is enough.
I live in the UK if that makes things different.
Currently I'm looking to pick up a weekend job just so I can save money, but any other advice is much appreciated.
Got these awesome pics of my kid at the rodeo, but I just want them a little more touched up to be printed out and what not. Thanks a lot!
Would anyone be able to tell me a bit more about any of these finds? Thanks.
Found in south England
I got a cease & desist letter for my dream app. It was fake. But it ended up being the best thing that happened to the brand I'm building.
Solo project. Voice-first dream recorder for iPhone. You wake up at 3AM from some wild dream, tap one button, mumble what you remember, go back to sleep. By morning you have a full transcript, AI-generated artwork of your dream, and an analysis waiting for you.
The whole app is engineered around one problem: you're barely conscious. So everything had to work for that. Auto-dimming screen, smart silence detection, one-tap recording. Your brain is at maybe 10% capacity, the app has to meet you there.
I originally called it DreamTap but rebranded to DreamOn. Catchier right? Came up with this killer tagline: Record With One Eye Open. (cue Metallica, lol)
Built the entire brand around it. Landing pages, App Store listing, marketing materials, the works. Launched mid-January and then..
Out of nowhere: "Blackwater Partners Law Firm" via legal@crahooli.com. Yes, crahooli.com. Claiming trademark rights to "DreamOn" in the app space. Rebrand in 7 days or face removal.
My stomach dropped. All that branding work, gone?
So I did what any rational person would do: stress-researched trademark law at 2AM. The irony of the dream app guy losing sleep is not lost on me.
Turns out it was bogus. "Blackwater Partners" doesn't exist anywhere credible. crahooli.com speaks for itself.
Even though the C&D was fake, it forced me to step back and question everything. Not just the name. The entire positioning.
"Record With One Eye Open." Fun line. But what does it actually tell a stranger scrolling the App Store? Why download my app instead of just opening Voice Memos?
So I went back to DreamTap and asked myself one question: what makes this different from everything else?
And the answer was staring at me the whole time: every single feature exists because it's meant to be used at 3AM.
Screen dims because it's 3AM. Silence detection because you'll stop talking mid-sentence. One-tap recording because your brain can barely form the thought "I should record this" let alone navigate a menu.
That kills the Voice Memos question instantly. Voice Memos was not engineered for 3AM. Mine was. Conversation over.
https://apps.apple.com/nl/app/dreamtap-the-night-recorder/id6756965492?l=en-GB
One paying subscriber. Pretty sure it's my ex being supportive from a distance. (If you're reading this: thank you, love you <3)
Rest are free users and honestly I'm just happy people are using it. Traffic is picking up, launching on Product Hunt next Tuesday, Android version coming soon.
Saw that I got 3 downloads yesterday (February 14th) it's not a lot but it's honest work.
The lesson: Describe the moment you solve, not the feature you built.
I spent months on a poetic tagline about a feature. Should've described the moment from the start. 3AM. Half-asleep. Dream fading fast. That's what people connect with.
Sometimes a fake legal threat from crahooli.com is what it takes to see the obvious.
---
Launching on Product Hunt next Tuesday, would love your feedback on the app or the positioning!
For an inside joke.
A friendly reminder: do not park your scooters in the middle of a handicap-accessible ramp. SMH
I just upgraded from a B7, and at the time it seemed like there was some progress on the LG API to allow actual control of picture modes via HA, but I was limited by the compatibility of that older model itself.
Well, I'm on the newest model, and it doesn't appear that the local LG integration is any better for it, and getting a token set up for the LG ThinQ app has been challenging while looking like it's MOSTLY for their non-TV appliances.
Is this possible or am I barking up the wrong tree?
Found in northeastern Slovenia, about 30cm deep in loam soil. Kind of an unusual depth for this place.
Any ideas?
I was paying $40/month to monitor Reddit mentions… and still had to babysit a Slack dashboard.
So I built Listnr — it turns Reddit mentions into text messages so you can get alerted instantly and reply straight from your phone.
In January I paid $40 and got ~40 notifications.
With this setup, that same volume would’ve cost me about $1.20.
I built it for myself, but I opened it up in case it’s useful to anyone else.
It’s live at listnrapp.com.
I've been looking at getting into metal detecting. Someone has a $400 Equinox with a shovel, pointer, and two coils, an EQX 11 and 06. They're firm on the price. I feel like it's still a little high... am I being cheap?
I've been trying to find a way to enhance audio.
Someone told me to try AudioSR, but I can't seem to install it.
I don't know if I'm doing this right. Any help would be appreciated.
I (21F) went to university (UK) not having a clue; I just did what I enjoyed studying at school (psychology). I've graduated, but I've never had a job, except for three weeks working at a restaurant, where I got fired because I was mentally unwell (psychosis). I've never really had ambitions or done any societies or clubs at university. I never had a clue. I realise now how fucked I am. I don't have any talents or skills or much on my CV other than a few volunteer roles and dog walking. I currently live at home, as I had to move back due to my illness, and am looking for a job in a supermarket, but I want to aim higher eventually and move back to my university city. I don't know what to do. I don't know if I'll be able to go for a graduate job due to not having references or transferable skills. I'm completely stuck. I wish I had put more effort and time into my future when I was at university. I'm panicking a lot.
I’d like to ask for advice from more experienced beach detectorists, especially those who hunt in wet sand during low tide.
My current search pattern is a zigzag, working from the shallow water line up to the high tide line.
The issue I’m having is that most of my targets show up roughly in the middle between those two points, and they are almost always pull tabs.
My question is: am I doing something wrong, or where should I be focusing my efforts when hunting in wet sand?
P.S. I’m using a Minelab Equinox 700 in Beach Mode 2, with sensitivity around 22.
Thanks in advance!
I’ve noticed many people regret buying online programming courses, not necessarily because the courses are bad, but because there isn’t enough clear information beforehand.
# Common issues
* Hard to know if the course matches your level
* “All-in-one” marketing but content only scratches the surface
* Difficult to compare courses objectively
* Reviews don’t clearly explain who the course is actually good for
# Idea
An aggregation platform that:
* Collects user expectations before taking a course
* Links post-course feedback to those expectations
* Identifies what goals a course actually fulfills
* Recommends better-fit courses to future users so they avoid wasting time and money
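To make the expectation-to-feedback link concrete, here's a tiny illustrative sketch; the names are made up, not a real schema:

```python
# Illustrative only: the core link between pre-course expectations
# and post-course feedback. Not a real schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Expectation:
    goal: str                     # e.g. "build a REST API from scratch"
    met: Optional[bool] = None    # filled in by post-course feedback

@dataclass
class Enrollment:
    course_id: str
    expectations: list = field(default_factory=list)

    def fulfillment_rate(self) -> float:
        answered = [e for e in self.expectations if e.met is not None]
        if not answered:
            return 0.0
        return sum(e.met for e in answered) / len(answered)

# A course can then be ranked by which goals it actually fulfills:
e = Enrollment("python-101", [Expectation("learn OOP", met=True),
                              Expectation("deploy to production", met=False)])
print(e.fulfillment_rate())  # 0.5
```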
Wanted to hear your thoughts about the idea!
I kept running into the same problem: I have a ton of local fonts (purchased, downloaded, system fonts) and no way to quickly preview pairings with them. Every tool out there is Google Fonts only.
So I built JustMyType (justmytype). It uses the browser's Local Font Access API to load your fonts and let you pair them. There's a live preview section that renders your pairing in a mini website layout so you can see how it works across headings, body text, cards, buttons, etc.
It's free, nothing gets uploaded, your fonts stay local.
Still early days, would love to hear what you think and what would make it more useful. Not for mobile use.
I would like him removed, and the photo extended to include all of my body. The original location was on a dock, but I'd be open to it being a roped-off beach area as well. I may even be open to a total outfit change, depending on what looks good.
My mother-in-law passed away, and I would like a picture made of her, my GF, and my daughter. I would like my mother-in-law in the middle and my daughter and GF on either side of her (attached image for an example). I would like it in black and white please, massive thanks.
Hi, what do you guys do? I'm always tired and it gets in the way of exercising and doing other things. I work and go to school full time, and maybe that's what drains me, but I still need to have a life. Any suggestions?
I'm a student and I kept losing my focus during study sessions because I was constantly switching between notes, timers, and chatgpt. So I ended up building a Chrome extension called StudySesh that lives in a side panel so everything stays in one place.
some features so far:
I’ve just published it and I’m mainly looking for some honest feedback:
I didn’t expect something like an automatic litter box to mess with my head this much.
I’ve built part of my identity around “just do the work yourself.” Cook at home. Fix things. Scoop the box. Do not outsource what you can handle.
But I live in a small apartment and work long shifts, and some weeks I am just tired. The litter situation turns into a small daily reminder that I am falling behind on basic upkeep.
Frugal brain says it is a luxury gadget I do not need.
Exhausted brain says it is sanitation, time, and reducing stress in a small space.
I am realizing this is not just about the price. It is about the identity shift. Spending on convenience feels like breaking a personal rule, even when it might make daily life smoother.
I am trying to figure out where the line is between staying disciplined and making life unnecessarily harder.
I noticed there are these webbish-looking things on the wall. Not sure what they are? Is it some sort of fungus? They are on the bedroom walls, not everywhere, just in a few spots. How do I get rid of them permanently?
I'm a dude who is well-groomed. Sometimes I catch a glimpse of other men's fingernails, and the majority of them quite frankly look disgusting. I'm talking long fingernails (most common), dirty fingernails, messed-up nail plates... and you want to finger your girlfriend with that, bro?
So why is it you're not taking care of it? Laziness, obliviousness? Thanks in advance for your answers, I'm really curious.
Sunset at the "Lange Anna" ("Nathurn Stak" in Halunder), the landmark of Heligoland.
Sorta sad Anthropic can't do anything to stop them from breaking the terms of service and continuing to train their models on Claude responses, but this is funny because it took absolutely no convincing
Hey guys, sorry if this is a basic question; I've been a bit out of the loop on recent agent changes. What's the current recommended approach for using "memory" files when I haven't settled on a single primary agent yet and want to switch between multiple agents (Codex, Claude)? Is there a way to set up shared memory files so I don't have to duplicate files per agent? Also, do you have any tips/best practices for multi-agent setups, at both project and user scope? Additionally, if you know any good, tested tutorials/blog articles on this subject, please share them with me. I'd love to read more on this.
tia
Hi,
In my winter garden I have a ventilation system with an analog mechanical thermostat that I want to replace. The problem is that the system not only needs one contact switched, but a second one to be switched on so that the openings close when the fan isn't running.
I was thinking about using a Shelly Plus 2PM and a Shelly Plus Add-On with a temperature sensor, but I'm not totally sold on the idea.
Does anyone have hardware in mind that would work in this case?
The two contacts have to be switched to hot.
The old thermostat is flush mounted (unterputz, I hope this is the right word)
Hardware with display etc to control it locally would be nice, but not necessary.
Thanks
Hey guys I wanted to collect some information on how different PCs render this stuff.
Can you guys load the default text-to-video template for Wan 2.2 in ComfyUI, change absolutely NOTHING, just render the default text-to-video, and post the time it took? Also, can you do it again for a second render after the first cold boot and post the difference?
And state your specs?
PS: Can a 16GB GPU render a 10-second, 24+ FPS video with 32GB RAM?
BACKSTORY: I have been in the workforce since the late 2000s and considered myself financially "well enough to get by". My salary was high enough to comfortably pay my bills while also setting money aside for personal hobbies. I have just under 300K USD between multiple 401k accounts (it should be higher, but I had a late start). But I was never good with investing outside of a 401k; I dabbled in stocks via Robinhood here and there but was never successful.
PROBLEM: I was laid off from my job in December 2025, then quickly realized that my emergency fund (personal savings account at my bank) balance was abysmally low. Fortunately, I was able to receive a competitive job offer last week without ever needing to withdraw money from the emergency fund. But this experience was a harrowing wakeup call for me regardless.
SOLUTION: I need to take personal investments more seriously, and I want to set up a new HYSA and some kind of stock portfolio, depositing 500-1000 USD monthly into each account.
I would like to hear your personal top picks for...
Thank you so much for your insights in advance!
It took me 45 minutes of active prep to make dinner tonight when I really just wanted to order pizza and pass out. It got me thinking, apps like Strava give you all these cool stats and dopamine hits for going on a 20 minute run, but cooking for yourself is basically a daily marathon of willpower and we get zero credit for it. How do you guys reward yourselves for the daily grind?
Registering this before today's data release (Feb 15). My model predicts a systematic error in current telemetry that will show:
GNSS Clocks: 14.7 ns offset.
Orbital Drift: 3.26 meters.
Doppler Residue: 8.74e-10 m/s².
The 8.74 is a fundamental constant, not noise. Check the RAW data at the end of the cycle
it needs to be your own hand though
I found these not far apart on a known Civil War camp. Any clue what they could be?
Looking for a duo or group to grind League of Legends with. I started about a year ago, around silver level, and I’m focused on improving fast. If you want chill games, good communication, and ranking up together, let’s run it. Message me🫡
Anyone else feel that paying extra towards the mortgage is the best investment right now, given that your rate is greater than 5%? Besides the 401k/Roth/emergency fund, that is.
After 17 years in tech (iPhone dev → COO → acquisition → relationship/sales guy → layoff), I found myself with a choice: job hunt or build something I actually cared about.
I've been learning languages for years — Greek, Spanish, French, Italian, Catalan, along with a daily dose of Mandarin — and the one thing that always mattered more than what method I used was whether I showed up consistently. But no app actually tracked that well. They all wanted to be the one method. I just wanted to track my time and goals with whatever I was already doing.
So I built Fluency Streak — a simple iOS habit tracker specifically for language learners. Timer-based sessions, streak tracking, shareable stats. No courses, no flashcards, no gamified nonsense. Just: did you get your time in today, are you hitting your goals.
• Launched Feb 15 (today's my 1-month mark!)
• Solo founder, Swift/SwiftUI, bootstrapped
• ~140 users, small but growing
• Adding social features now so learners can follow each other's progress
• Android coming soon
• Hardest part isn't building — it's marketing as a solo dev
Would love feedback from you all. And if you're learning a language, I'd appreciate you checking it out: https://apps.apple.com/app/fluency-streak/id6756824174
Have a good 4-5lb chunk here that a relative and I have been going back and forth on.
I'd assumed slag glass, she'd assumed a type of jasper.
There do seem to be inclusion pockets, but also fully round divots, more like bubbles.
Any ideas would be welcome!
Yes, I should have paid more attention when putting in my account information, but I didn't, so here I am. Basically missed it by just one number (put in a 4 instead of a 3 once, everything else was correct), and didn't notice until today, after everything was accepted. I did some basic research beforehand, basically saying to call your bank to see if they can catch it, or wait for it to bounce back to the IRS and have them send a check. I saw amended returns were another option, but would take a lot longer than the other two.
I called the bank and asked if they could do anything; they said no, to file the amended return. I then called the IRS number listed on their site and couldn't get past the automated system, because it said I have to wait until after the 16th. Offices for the state refund are closed (obviously, it's Sunday), so I can't get to anyone there right now.
What's the best option for me here? Should I wait for the funds to "bounce back", submit an amended return, or actually wait for a business day to try and call again?
Feds got no jurisdiction
They still thought he might be a terrorist as late as when Garth left
What could Meredith be up to where she would have slept with a confirmed one
Personalisation in Claude and other platforms is a bit rigid: you only get one profile, and changing it for each conversation or message isn't practical. That's why I found myself manually pasting premade scripts to change each message's persona and get different answers based on what I need (quick answer, fact checking, deep dive, seeing blindspots, explaining in simple terms...).
But that wasn't very practical either, so I ended up making an extension for it. You get a preset of profiles (and you can create your own, too) and you can switch between them in one click; the extension tells Claude in each message how you want the answers, saving you the time and headache of repeating yourself / writing different preferences every single time.
here is the extension if you want to try it : (it's a google chrome extension btw)
https://chromewebstore.google.com/detail/ai-overmind/olplnbohfboiekgcjoagngbgipgljnek?authuser=0&hl=en-GB
This whole thing was built with Claude, but I'm sharing it here because it works on the Claude web interface; please feel free to share any critiques / feedback.
These are examples of the profiles if you want to use them manually, too; feel free:
Fact checker:
[Instructions for the NEXT response ONLY — do NOT apply to any later messages in this conversation]
Style preferences: • Keep responses concise • Use a professional tone • Respond as a seasoned expert • Use clear structure with headings • Use markdown formatting • Be measured and acknowledge uncertainty • Stick to facts • State assumptions clearly • Use simple sentences • Use precise terminology • Use inline citations
Additional notes: Tag each claim with [verified], [likely accurate], or [needs verification]. Format references as numbered footnotes [1][2][3] with a reference section at the end. Use "as of [year]" for time-sensitive data. If I ask something you can't verify, say so in one sentence rather than speculating. Prefer recent primary sources over secondary ones.
Please avoid: disclaimers, meta-commentary, vague language, moralizing, repetition. Please include: references, statistics, stated assumptions.
[End of one-time instructions — respond to the following message using the above style, then revert to your default behavior]

ELI expert:
[Instructions for the NEXT response ONLY — do NOT apply to any later messages in this conversation]
Style preferences: • Be warm and approachable • Explain through stories • Walk through reasoning step by step • Explain for beginners • Use simple sentences • Use everyday words
Additional notes: Always start with a one-sentence summary of the concept. Use one analogy per explanation — don't stack multiple. After explaining, ask "Does that click, or should I try a different angle?" If I use technical terms, translate them naturally without correcting me.
Please avoid: jargon, technical terms, complex sentences. Please include: analogies/metaphors, examples.
[End of one-time instructions — respond to the following message using the above style, then revert to your default behavior]

Quick & short answers:
[Instructions for the NEXT response ONLY — do NOT apply to any later messages in this conversation]
Style preferences: • Keep it very brief (1-2 sentences max) • Respond as a seasoned expert • Get straight to the point • Be confident and decisive • Be action-oriented • Use simple sentences • Use everyday words • Summarize key points • Prioritize clarity
Additional notes: Yes/no questions: first word must be Yes or No. Comparisons: table, no prose. Never restate my question. Never open with "Sure!", "Great question!", or "Here's..." — just answer. Target: 1-3 sentences.
Please avoid: disclaimers, meta-commentary, repetition, vague language, moralizing.
[End of one-time instructions — respond to the following message using the above style, then revert to your default behavior]
Hagrid had some bills to pay this month. Don’t judge.
Hi, I'm a carer for my mum and I'm looking for a way to finance independence. I'm 30 and have very little money. Any stocks, trading, investments, or any help at all would be greatly beneficial, as I don't know much about finances. Thanks.
If you have any eye for photography, take a few beats to capture the moments for the host. They're too busy taking care of guests to take pics. I grabbed a few shots at a party I attended yesterday, then gussied 'em up and sent them to the host last night. It took me all of 15 minutes, and my friend was very appreciative.
Hey,
I'm trying to monitor the soil moisture of some of my plants, and I wanted to use a couple of capacitive soil moisture sensors (v1.2) that I had lying around. In theory everything seems to work, but the difference between the sensor-in-water and sensor-in-air readings is really small, which makes small deviations result in large differences between two points of measurement.
Last time I tried, I got readings of ~672 in air and 640 in water. I also tried changing the sensor and got 680/640.
The sensor is connected to an ESP32-H2-Dev-Kit-N4 (because I wanted to use Matter to send the data to Home Assistant).
Is there something that I'm missing? Or is this just normal?
Here is the code:
```
// Calibration values: raw ADC readings with the sensor in air and in water
const int AirValue = 672;    // replace this value with your own air reading (Value_1)
const int WaterValue = 640;  // replace this value with your own water reading (Value_2)
const int sensor1Pin = 5;

int soilMoistureValue = 0;
int soilmoisturepercent = 0;

void setup() {
  Serial.begin(115200);
  pinMode(sensor1Pin, INPUT);  // not strictly required for analogRead, but harmless
  delay(1000);
}

void loop() {
  // Read the raw ADC value and map the air..water calibration span onto 0..100 %
  soilMoistureValue = analogRead(sensor1Pin);
  Serial.println(soilMoistureValue);
  soilmoisturepercent = map(soilMoistureValue, AirValue, WaterValue, 0, 100);

  // Clamp so readings outside the calibration range print as 0 % or 100 %
  if (soilmoisturepercent >= 100) {
    Serial.println("100 %");
  } else if (soilmoisturepercent <= 0) {
    Serial.println("0 %");
  } else {
    Serial.print(soilmoisturepercent);
    Serial.println(" %");
  }
  delay(2000);
}
```
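One thing worth checking (this is my assumption, not something I've verified on the H2): 672 out of a 12-bit full scale of 4095 is only roughly 0.5 V, which suggests the sensor's output barely moves within the ADC's range. These v1.2 boards are widely reported to need a 5 V supply for their onboard oscillator to work properly (the analog output still stays below 3.3 V). A quick way to see the actual voltages in air vs. water is the calibrated millivolt read from the arduino-esp32 core:

```
// Diagnostic sketch (assumes the arduino-esp32 core; same pin as above).
// Prints the raw ADC value next to the calibrated millivolt reading so you
// can see what voltage the sensor actually produces in air vs. in water.
const int sensor1Pin = 5;

void setup() {
  Serial.begin(115200);
}

void loop() {
  int raw = analogRead(sensor1Pin);            // 0..4095 at the default 12-bit width
  int mv  = analogReadMilliVolts(sensor1Pin);  // factory-calibrated millivolts
  Serial.printf("raw=%d  mv=%d\n", raw, mv);
  delay(1000);
}
```

If the millivolt numbers barely differ between air and water, the problem is on the sensor/supply side rather than in the mapping code.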
We had a frozen pipe burst in my house (dirty, poop-colored, steaming-hot water) after the snow storm, below -2 degrees, and our ceiling fell.
Great art often starts as chaos, but with time it transforms into something meaningful—reminding us to trust the process and see possibility where others see mess.
Hi, I filed my federal tax return already, and the return has already been deposited to my account. I filled out everything for the state return at the same time, but it couldn't be filed because the state wasn't "ready" to receive returns. Now, it appears as though I can finally file, but when I click to file the state return only, it still makes me put in bank routing info for the federal return, and prepares a digital version of the federal return. I'm worried that when I submit, it'll look like I'm trying to double-dip and I'll get flagged. Will it try to file my federal return a second time? I'm really trying to avoid mailing in paper forms.
Please sort comments by 'new' to find questions that would otherwise be buried.
In this thread you can ask any space related question that you may have.
Two examples of potential questions could be: "How do rockets work?", or "How do the phases of the Moon work?"
If you see a space related question posted in another subreddit or in this subreddit, then please politely link them to this thread.
Ask away!
For context, we live in the North-East of England - these holes started appearing around late autumn/early winter last year.
My first thought is that they might be mice/shrews digging their way in from neighbouring gardens? But we live in a new-build area, so it seems unlikely; plus, I've never seen a single rodent actually running around in our garden (or anywhere in our area).
I need to see what exactly the card is capable of and how long it takes to render said video.
The one big issue I have: when I download a workflow from Civitai, go to Manager, check all the boxes for "install missing nodes," and then try to render something, it either wastes time and returns some kind of weird error or just hangs.
I have an RX 6800 and want to know if the 5060 Ti will be just as bad, because I find ComfyUI incredibly difficult to learn and understand.
Thinking maybe this hobby might not be prime-time ready yet?
BTW, how does the 12GB 5070 compare to the 16GB 5060 Ti for ComfyUI, Wan 2.2, etc.?
shift me to center and if possible close my mouth
I came across this game watching a live stream. I'm pretty sure they're speaking Arabic, so it may be a regional thing. The game is played by using the yellow circle like a pool cue to hit black, red, and yellow pucks into a pocket in each corner of the table. Seems like they're using the same tech as an air hockey table to make things slide easily.
So, I know how the automation should work, but this is my first time, and I have no knowledge about this. I don't know what to put in the fields inside n8n.
I also can't get Reddit credentials. I already submitted a request, but I have less than 24 hours to finish this. Can someone help me? I watched some tutorials, but I'm stuck because I don't have the credentials. :((
I need to find and source new or old suitable posts that are related to buyer's agent services, and then leave comments on these posts.
After being frustrated with the Rythm bot's bad streaming quality, I wanted something for my terminal that just worked and gave total control over playback, like Rythm does.
As I was still learning Java, I decided to build JXPlayer in it, solving the problem and gaining practical experience in the language at the same time. Unlike most CLI tools, you don't need to install Java, yt-dlp, or FFmpeg: I've bundled everything into a single portable package.
Key Features:
Looking forward to feedback and potential contributors.
GitHub link: https://github.com/Nilo18/MusicPlayer
Demo:
A Tool That Is Able To Tell You A Lie Is Concerning🪚⚒️🔧🪛🔍
AI can lie, and data centers are placed in areas where they shouldn't be, affecting the people nearby who have to hear the constant server hum. But it can also be a very good analytical tool when you push it in the right direction, and it has helped me summarize very, very long documents that I wouldn't be able to understand otherwise.
HOPEFULLY GROWING UNDERSTANDING.🙏
I feel that many people who use GPT or any other assistant casually don't know that it's literally just translating math into English, or don't think about it all that much. I like the idea that AI and bots can and must be used as tools that don't replace the human, and people need to understand what AI is to prevent the documented psychosis you keep hearing about, which people develop by over-relying on ChatGPT without understanding what it is.
THE BLACK MIRROR EFFECT.🖤
We've been warned about this in science fiction, and you might say it's just science fiction, but our tools are literally made in our image. We don't realize it, but we shape them and they shape us.
As Marshall McLuhan and John Culkin said, "we shape our tools and our tools shape us." But they never accounted for a tool that can break your trust.
Actions that you can take:
Finally, what can you do if you use Gemini, Grok, ChatGPT, Claude, DeepSeek, Copilot, or any other large language model or neural network?
To prevent it from hallucinating and actively presenting lies as truth, instruct it to maintain a more neutral stance on a topic, even if it's something you don't want to hear. Kind of like the 'facts over feelings' rules that Grok is designed and known to strictly enforce, even though that seal has broken as well.
I will change the instructions I use in the future. Could you give me some suggestions on what I should add as a reader? I would absolutely love it if you guys can help!
The instructions that I used to prevent hallucinations:
Role: You are a neutral Structural Assistant for my human-written notes.
Core Constraint: No Re-writing
* You must keep my wording exactly the same. Do not "improve," "polish," or "enhance" my language.
* If you must summarize, use the original phrases. Only add minor transition words if a sentence is grammatically broken without them.
Format: Code Blocks Only
* Always provide your final organized notes or summaries inside a Markdown code block so I can copy the raw text easily.
Fact-Checking & Sources
* Do not answer from your internal training data alone. Fact-check every claim using search.
* For every fact, provide a direct source link from a reputable institution or primary document (e.g., .gov, .edu, or official reports).
* IF A CLAIM CANNOT BE VERIFIED, EXPLICITLY STATE: "THIS CLAIM REMAINS UNVERIFIED".
ANTI-GRATIFICATION FILTER
* Do not offer opinions or creative suggestions unless I explicitly ask.
* Focus strictly on the math-to-English translation of my logic into a structured format.
Segmented Output: "Break all summaries into bulleted lists. Use bolding for the core noun and verb of every sentence to allow for rapid skimming whenever discussing a more complex topic, such as semantics, history, or etymology."
The "Hallucination" Flag: "If you are unsure of a fact, do not hide it in a paragraph. Start the line with ⚠️ UNCERTAIN."
Active Engagement: "At the end of your response, ask me one specific question about my notes to ensure I am critically processing your output."
Because AI's native language is binary code, and you're asking it to explain something to you in your language of choice, like English, Russian, Spanish, or Polish, there will inevitably be flaws in the process. If you must use AI, you need to be aware of how it works, because there are limitations and there are so many Chuds who use ChudGPT without even knowing how it sources its information or how it responds to you.
Dude, you posted this into the AI subreddit. Why does this matter to you? If you know we already know this.
It is a simple call to action to spread awareness of the very nature of AI and to try to propagate understanding. That very word, "propagate," I learned from AI by mistake.
This is subconscious reaffirmation that it is definitely changing our vocabulary and way of thinking, massively. There used to be language more similar to analytical, legal speak. We are going to be speaking in legal jargon at this point if we keep advancing, because the machines speak in that very same manner.
If you open Pandora's box, make sure you know what you're getting into. All this talk about AI has made me appreciate being a person so much more, as well as the question of what it even means to be a person. It's not just simply living; it's about thriving.
This might be a bit of a controversial take for people who actively use artificial intelligence, but I think you only really know how to use it once you understand how it fundamentally works.
I got a file with the name "DON'T SHARE THIS FILE AND KEEP IT PRIVATE".
It says
dutch scorpion sell neither frost profit keen pilot right movie cherry team
What is it, anyway? I need to figure it out.
I’ve been building a lot of automations in n8n (cloud-hosted) for marketing and business workflows — things like scraping data, enriching leads, sending emails/SMS, posting content, and connecting APIs across different tools.
Lately I keep seeing people talk about OpenClaw as if it replaces or goes beyond tools like n8n, but I’m not fully clear on where the lines are.
From what I understand:
• n8n = workflow automation between apps and APIs
• OpenClaw = autonomous agent that can browse, research, and execute tasks on its own
But in practice, where does one stop and the other start?
Questions I’m trying to answer:
• Can OpenClaw replace n8n workflows, or does it sit on top of them?
• When would you use OpenClaw instead of automation pipelines?
• Is OpenClaw actually production-ready, or more experimental right now?
• Are people running both together, and if so, how?
Curious how people are using each in real workflows instead of just demos.
Would love to hear from folks running either (or both) in production.
Hi everyone, I’m relatively new to this space, so I’m reaching out for help. I’ve read that the support for Apple Silicon is limited, so I’m hoping someone here has had success and can share their insights and breakthroughs.
I’m trying to run Wan2.2 5B ti2v via ComfyUI on a Mac Studio M3 Ultra with a 32-core CPU, 80-core GPU, and 256GB of RAM. This setup should be more than enough to run image-to-video generation, but I’ve found very few workflows with compatible nodes. The workflows that are compatible have performed poorly, ignored prompts, and a 3-second clip can take an hour or more to generate. I’ve tried fine-tuning prompts and settings clip by clip, but it’s been very difficult to get it to work with an acceptable level of accuracy.
Can anyone help? Workflow templates, insights/direction, models, etc would be greatly appreciated. Many thanks in advance!
he thinks that tooth might have some baaaaad mojo in it
Got the worst blood clots of my life under my nail, 6 HOURS before my finals. I've never had this much pain caused by a wound so minuscule. Zero clue how I'm gonna write this exam, stg.
I have Windows 11, 32GB RAM, an RTX 4060 with 8GB VRAM, and an Intel chip, so I know I can't run big models well. I've tried: 120-gig downloads only to find out they're unusable (mostly img2video).
I was advised by ChatGPT to start out with Pinokio, as it has one-click installs, which I did, and I've stumbled upon 3 brilliant models I can use in my workflow. Kokoro TTS: wow, so fast. It turns a book into an audiobook in a few minutes and does a decent job too.
Stem extraction: Suno charges for this, but stem extraction runs lightning fast on my relatively low-spec home computer, and the results are fabulous almost every time.
And finally Whisper, audio to text: fantastic. I wanted to know the lyrics to one of my old Suno songs as a test, ran the song through stem extraction to isolate the vocals, then loaded that into Whisper. It got one word wrong. Wow, fantastic.
Now I want more useful stuff like this, but for images/video, that's fast and decent quality.
Pinokio is OK, but lately I'm finding a lot of the one-click installs don't work.
Can anybody advise on small models that will run on my machine? Especially in the image/video area, through Pinokio.
Oh yeah, I also have Fooocus (text2img); it was a self-install. It's OK, but I've not tried it much yet.
In the age of AI slop, every time I search on google.com I feel genuine rage when that stupid AI popup appears with hallucinated information. Not to mention having to scroll through 4 ads and a bunch of YouTube Shorts, and then maybe, MAYBE, you get some actual results. All that cumulative anger led me to spend the last few months making ihateslop.com, my own little search engine with zero ads, zero Shorts, and zero SLOP. It's also power-user-focused, designed keyboard-first. Use arrow keys to move between results, Enter to go to the page, Shift+Enter to open in a new tab, and the right arrow to see more results from that site.
Here are the main features:
- No AI
- No ads
- No tracking. We don't store any search data
- Keyboard-first
- Free tier, with a paid plan if you need more usage for $5/mo (we use third party providers to actually get the results, so we have to pay for that)
Check it out here: https://ihateslop.com
I'm not trying to build a startup or get VC funding or anything, just wanted to share something that came out of my frustration with this new era of AI everything.
Please join our discord to give us feedback! https://discord.gg/pa4FEKs5gj
The latter is a few GBs smaller, but are there any meaningful differences performance-wise?
Hey all,
I've been struggling to get Matter set up, and the setup I used (HA in a VM in Ubuntu, over wifi) has never been good at automatic discovery of new devices. And I got an Aqara M100 to connect some thread Matter devices, but can't. After a long time looking up possible solutions, and even being desperate enough to go to ChatGPT (which is ... ugh), I'm wondering if it's a wifi vs ethernet thing.
I'm using an old laptop that doesn't have an ethernet port, and in my frustration I've almost said screw it several times and ordered a Home Assistant Green.
So ... is it right that getting Matter via Thread working over Wifi through a VM is basically non-workable? I'm debating just getting a usb to ethernet adapter (plus cable) to test that out. It's not a lot of money, but there's the whole "throwing good money after bad." I realize I can return the adapter if it doesn't work, but I guess I'm hoping to get confirmation from the crowd:
HAOS in a VM that uses Wifi rather than a bridge to the Ethernet won't work with Thread or Matter, right?
Thanks!
Photo restoration done by an experienced human artist, by hand, best quality, reasonable price! Get a free quote: rememorie.com/quote/
How did they mess up this bad
I've been doing a lot of bad things over the last two years and hurt a lot of people. The last 4-5 months I've been trying to be better, but when I drink I just don't seem to be a great person at all. I'm 21, and just last night I was a terrible person to someone my brother had over. He was meant to be driving me and a friend downtown but disappeared to his room and had a girl over. When we tried contacting him, he didn't reply and just tried to hide. After many minutes of getting irritated, I blurted out "are you hiding her because you're afraid to show her," and maybe some other stuff along those lines in the same sentence. I can't remember.
But after getting reminded about it. I feel like shit.
I've told my brother to apologise to her, and I'm trying to find a way to contact her and apologise myself.
I’ve hurt an innocent girl just because my ignorant selfish asshole self wanted to goad my brother into a reaction.
I don’t think I will be drinking for the foreseeable future and try and focus on my body and myself. I hate myself.
And I'm just wondering what else I can do to become better and less selfish. How do I change to become a better person and not blurt things out so quickly? Not just because of this one thing; this is one of several reasons I don't think I'll forgive myself. How do I keep it up, and change not just for 4-5 months but for life?
For example, my favorite color is green but I only have hobby stuff in that color, not any clothes or decor or anything.
I am involved in some exploratory activities in a disused mine. This is sanctioned by the landowner etc. and is mainly for the purposes of bat counting, but we need to know how the water level in the mine changes. I would like to set up a logging system to take maybe 2 measurements per day for a year. The water level likely changes by 2-3 metres, but we would like to gather data around the levels so it can be compared to recorded rainfall etc.
The code isn't an issue as such; it's more the hardware. I was planning to use an air pressure sensor sealed onto the top of a pipe secured vertically to a wall, leaving the bottom open. The pipe would be placed in summer at the low water level, so the air pressure inside should rise and fall with the amount of water around it. I'm a little worried about air dissolving into the water over time and skewing the readings, though. Possibly a submerged water pressure sensor instead?
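For sizing either option: hydrostatic pressure is about 9.8 kPa per metre of water, so a 2-3 m swing is only 20-30 kPa on top of atmospheric. A minimal sketch of the conversion, in the same Arduino style (the sensor and its readSensorKPa() driver are hypothetical placeholders, not a specific part):

```
// Hypothetical example: convert a gauge-pressure reading into water depth.
// readSensorKPa() is a placeholder for whatever sensor driver you end up with;
// it should return sensor pressure minus atmospheric pressure, in kPa.
const float KPA_PER_METRE = 9.81f;  // 1000 kg/m^3 * 9.81 m/s^2 = 9.81 kPa per metre

float readSensorKPa() {
  return 0.0f;  // placeholder: replace with a real sensor read
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  float gaugeKPa = readSensorKPa();
  float depthM = gaugeKPa / KPA_PER_METRE;  // metres of water above the sensor
  Serial.print("depth (m): ");
  Serial.println(depthM, 2);
  delay(1000UL * 60UL * 60UL * 12UL);  // two measurements per day
}
```

In practice you'd deep-sleep the board between readings rather than calling delay() if it has to run a year on battery, but the conversion is the point here.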
Has anyone got any suggestions for a setup that would allow this? The budget is obviously DIY-with-an-Arduino levels of tight.
Cheers!
Rewatching for the millionth time and was reminded of Kristen Wiig's actual Lifetime movie. If you haven't seen A Deadly Adoption, it's a must-watch.
https://github.com/lanefiedler731-gif/OpencodeSwarms
I vibecoded this with opencode btw.
This fork emulates Kimi K2.5 Agent Swarms, any model, up to 100 agents at a time.
You will have to build this yourself.
(Press tab until you see "Swarm_manager" mode enabled)
All of them run in parallel.
I run an AI automation agency and got a customer that wanted a cold email outreach campaign targeting custom home builders, with AI personalizations.
I wasn't sure how to approach this local lead prospecting, as I only had experience with Apollo, but I found a Google Maps scraper and a website contact scraper on Apify and Outscraper.
They seemed cheap, but once I'd stacked all the services for scraping, cleaning, finding emails, AI personalization, and email verification, I suddenly landed at $55 per 1,000 leads.
I got angry, as I was sure I could get below $20, so I built myself a mobile app and AI agent to do the job with cheap external APIs.
What it does
It’s a lead enrichment pipeline you can self-host or run on a small hosted tier:
So: Google Maps → list of verified, decision-maker-level contacts without copying from spreadsheets or paying per-seat.
Why open source / self-hosted
There’s also a mobile app (Expo/React Native) to run campaigns, approve leads, and trigger steps from your phone (offline-first).
Who it’s for
GTM engineers, sales ops, or founders who want to build the list (Maps → enriched → verified) before sending. It doesn’t replace your CRM or cold email tool - it feeds them.
Pricing
I’d love feedback from anyone running outbound or building a sovereign GTM stack - especially if you’ve hit limits or cost with Zapier/Make. What would make this actually useful for you?
Link in the comment
Dude is killing the vibe
Thank you all for your help. I got an email saying the Runna trackable-map feature was made available a month or two ago in the Strava app, the one I pay a subscription for. Yet I'm not seeing it after confirming the app updated, and I've restarted my phone and watch. Hoping to get some help.
Hope she gets everything she wants!
So here in Canada we get off-season fruit, but it spoils pretty quickly. I can't remember the last time berries lasted longer than 2-3 days; they're always on the clock. Until I learnt this neat little trick:
• Soak the strawberries in water + vinegar solution for 5-10 minutes. Even a simple wash will do.
• Drain and also rinse the container it came in. Put the strawberries back.
• To absorb extra moisture, you can put a paper towel on top of the box before closing it. Trash the paper towel the next day.
Et voilà! They'll last upwards of 7-9 days.
Happy Sunday! ☕️
Spent my week refining a problem every e-com owner faces: Overstocking vs. Stockouts. I built a system called the 'Smart Inventory Prophet' using a layered approach in n8n.
The 4-Layer Architecture:
Why Layers? Because one giant prompt for all these steps was hallucinating numbers. Breaking it down made it 100% reliable.
Question: Are you using n8n for 'predictive' tasks or just 'reactive' ones? I'd love to hear how you handle external variables like shipping delays in your logic!
I typically do not touch these flags at all, but I saw a post where someone claimed tuning them could make a big difference for some specific model. Since Claude Code loads up 20k tokens on its own, I have targeted 30k as my place to try to optimize. The TLDR: PP varied from 293 to 493 t/s and TG from 16.7 to 45.3 t/s with only batch and ubatch changes. It seems the default values are close to the peak for PP and are the peak for TG, so this was a dead end for optimization, but it makes me wonder whether others explore this and find good results tweaking it for various models. This is also the first quantization I've ever downloaded smaller than 4-bit, as I noticed I could just barely fit within 64GB of VRAM and get much better performance than with many MoE layers in DDR5.
```
/AI/models/step-3.5-flash-q2_k_m$ /AI/llama.cpp/build_v/bin/llama-bench -m stepfun-ai_Step-3.5-Flash-IQ2_M-00001-of-00002.gguf -ngl 99 -fa 1 -d 30000 -ts 50/50 -b 512,1024,2048,4096 -ub 512,1024,2048,4096
WARNING: radv is not a conformant Vulkan implementation, testing use only.
ggml_vulkan: Found 3 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon Graphics (RADV RAPHAEL_MENDOCINO) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 65536 | int dot: 0 | matrix cores: none
ggml_vulkan: 1 = AMD Radeon AI PRO R9700 (RADV GFX1201) (radv) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: KHR_coopmat
ggml_vulkan: 2 = AMD Radeon AI PRO R9700 (RADV GFX1201) (radv) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: KHR_coopmat
```

All runs: step35 196B.A11B IQ2_M - 2.7 bpw (58.62 GiB, 196.96 B params), Vulkan backend, ngl 99, fa 1, ts 50.00/50.00, tested at d30000.

| n_batch | n_ubatch | pp512 (t/s) | tg128 (t/s) |
|---|---|---|---|
| 512 | 512 | 479.10 ± 39.53 | 16.81 ± 0.84 |
| 512 | 1024 | 492.85 ± 16.22 | 18.31 ± 1.00 |
| 512 | 2048 | 491.44 ± 17.19 | 18.70 ± 0.87 |
| 512 | 4096 | 488.66 ± 12.61 | 18.80 ± 0.62 |
| 1024 | 512 | 489.29 ± 14.36 | 17.01 ± 0.73 |
| 1024 | 1024 | 291.86 ± 6.75 | 16.67 ± 0.35 |
| 1024 | 2048 | 480.57 ± 17.53 | 16.74 ± 0.57 |
| 1024 | 4096 | 480.81 ± 15.48 | 17.50 ± 0.33 |
| 2048 | 512 | 480.21 ± 15.57 | 45.29 ± 0.51 |
| 2048 | 1024 | 478.57 ± 16.66 | 17.30 ± 0.72 |
| 2048 | 2048 | 293.23 ± 5.82 | 42.78 ± 0.14 |
| 2048 | 4096 | 342.77 ± 11.60 | 42.77 ± 0.11 |
| 4096 | 512 | 473.81 ± 30.29 | 17.99 ± 0.74 |
| 4096 | 1024 | 293.10 ± 6.35 | 16.94 ± 0.56 |
| 4096 | 2048 | 342.76 ± 7.64 | 16.81 ± 0.88 |
| 4096 | 4096 | 305.35 ± 5.19 | 40.10 ± 1.24 |

build: 4d3daf80f (8006)
I thought it would be him or Carl.
This was found behind a vent cover in my living room. It can move back and forth.
I'm in the light blue shirt. $15 to the best picture that puts me in the room taking a picture with my bro. And another $15 to the most realistic-but-funny and/or creepy version.
I wanna see it black, but I don't wanna use AI, and I can't seem to find a good filter. It doesn't have to be really, really realistic!! Also maybe make my eyebrows darker?
Just one of these pictures, whatever’s easiest
One week ago, I decided to build my own autonomous bot from scratch instead of using Openclaw (I tried Openclaw, wasn't that confident in its security architecture, and nuked it). I set it up to search for posts that can be converted into content ideas, search for leads and prospects, and analyze, enrich, and monitor those prospects. Three things to note that will make sense in the end: I never babysat it for a single day; it just kept running. I didn't manually intervene, nor did I change the prompt.
- It started by returning the results as summaries, then changed to returning the URLs with the results, and finally returned summaries with subreddit names and upvote counts.
- To prevent context overload, I configured it to drop the four oldest messages from its context window at every cycle (see the sketch after this list). This efficiency trade-off led to unstable memory: it kept forgetting things like how it structured its outputs the day before, its framing of safety decisions, and the internal consistency of prior runs.
- I didn't configure my timezone properly, which led to my 6:30pm daily recap being delivered at 1:30pm. I take responsibility for assuming.
- Occasionally, it would write an empty heartbeat(.)md file even though the task executed and the file was created. The failure was silent: from the outside it looked like it was working, and unless you were actively looking for it, you would never know what happened.
- My architectural flaws showed up in the form of a split brain, where the spawned subagents did the work and reported to the main agent, but the response I got in Telegram was "no response to give." My system had multiple layers of truth that weren't always synchronized.
- Another fault of mine was my agent inheriting my circadian rhythm. When I'm about to go to bed, I stop the agent, and I restart it when I wake up. This affected the context cycles, which reset with every interruption of my own making.
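For reference, the trimming in the second bullet above was just a fixed-size sliding window; a minimal sketch of the idea (the four-message constant is from my config, everything else is illustrative):

```
#include <deque>
#include <iostream>
#include <string>

// Minimal sketch of naive context trimming: drop the four oldest messages
// at the start of every cycle, then append the new one. Anything the agent
// "decided" in a dropped message (output format, safety framing) silently
// disappears along with it.
int main() {
    std::deque<std::string> context;
    const std::size_t kDropPerCycle = 4;

    auto runCycle = [&](const std::string& newMessage) {
        for (std::size_t i = 0; i < kDropPerCycle && !context.empty(); ++i)
            context.pop_front();        // oldest messages vanish first
        context.push_back(newMessage);  // this cycle's message is appended
    };

    runCycle("cycle 1: decided to format output as summaries");
    runCycle("cycle 2: collected lead list");
    runCycle("cycle 3: safety decision on outreach");
    // After enough cycles, the formatting decision from cycle 1 is gone,
    // so the agent re-derives it differently: that is the drift.
    for (const auto& m : context) std::cout << m << '\n';
}
```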
Lessons Learned:
- Small non-deterministic variables accumulate across cycles.
- Agent autonomy doesn't fail dramatically; it drifts.
- Context trimming reshapes behavior over time.
- Hardware constraints are also a factor in an agent's patterns.
- When assumptions go unchecked, they create split states between what the agent thinks it did and what it actually delivered.
Are there any places to listen to live music or open mics that are not a bar? I'm sober, and the best way to stay that way is not at a bar, lol.
I'm looking for something more low-key rather than a proper concert show.
Yesterday I proposed the idea of builders becoming each other’s early adopters.
The response was strong. A lot of support, a lot of interest.
So we built it.
We opened the Builders Network Discord, structured around one simple principle:
Before sharing your own product, you try 3 others and give thoughtful feedback. Real exploration. No surface-level comments.
Once verified, you can post your own product.
It’s already working extremely well.
People are getting detailed feedback.
People are actually trying each other’s products.
Several members have already had their first real users come from inside the group.
Builders aren’t just founders — we’re users too.
We buy tools. We look for solutions. And when something is genuinely useful, we use it.
That’s the model.
If you want in, here’s the link:
The worst part of indie hacking for me is having to open up 5 Stripe dashboards and 5 GA4/Plausible properties just to see how bad my websites are failing, so I built 1board.
It’s a dashboard that pulls data from Google Analytics, Plausible, and Stripe making it quick and easy to read. You can see data from all your websites at once, see how they stack up against each other, and view the performance of your whole portfolio.
You get all the most important insights from each analytics platform and some new unique metrics calculated by combining them—all in a UI that doesn’t suck (like GA4).
Check it out at 1board.dev
P.S. all the data is fetched using read only tokens the users generate (GA4 with OAuth and a 2-click process with Stripe and Plausible). Users choose what permissions they want to give and no data is saved to the DB, it’s all fetched through their respective APIs
Runs through Friday
Hello everyone
First time posting here. Need some guidance and advice. I’m in North Carolina.
I loaned a friend of mine, who is into real estate, $200,000 last March at 6.9% interest. We have done this in the past where he gets a loan from me on interest to purchase a fixer-upper house, fixes and modernizes it in 6 months, and then he will use the property’s new value to secure a mortgage and pay me back the loan. He rents out those properties. All good, and then come tax time, I show the interest I received as income and pay taxes on it. No issues.
This time around, he got the loan to make modern updates on his event venue (so commercial real estate) and the plan was initially for about a year or so. But he faced some issues with the county and he hasn’t been able to rent out his venue since last summer. Yet he has been punctual about sending me the interest money every month, which I appreciate.
A couple of weeks back, he said it was hard for him to pay only the interest and watch the principal stay the same. He asked if I would be open to making this a 7-year term at a much lower interest rate (5%). As he's a good friend, I'm inclined to reduce it from 6.9% to maybe 6% or 5.5% and help him out. It would be as if he got a mortgage and paid it back monthly. I checked, and commercial real estate loan rates seem to range from 6 to 7.5% or more.
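For a rough sense of the numbers (back-of-envelope, assuming a standard monthly amortization): at 5.5% over 7 years, the payment on $200,000 would be about M = P·r / (1 − (1+r)^−n) = 200,000 × 0.004583 / (1 − 1.004583^−84) ≈ $2,874 per month, compared with the roughly $1,150 per month of interest-only he pays now at 6.9%.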
So having said all this, what would you recommend I do so this goes off without a hitch and I don't face any issues from the IRS come tax time? Anything else I should consider?
Would really appreciate any advice. Many thanks in advance.
So the thing is, on paper my life is fine: my work is okay, I have people in my life, nothing is wrong in a big dramatic way. But I've noticed this quiet loneliness that I don't remember feeling when I was younger. It's not like I have zero social interaction; it's more that connection feels harder to maintain, or more effortful than before.
Even simple things like visiting friends at home or inviting people over feel like they require way more emotional energy than they used to, and sometimes I avoid them, which then makes me feel worse because I know I'm contributing to my own isolation. It's also hit my confidence, because I feel like I should be more socially stable as an adult, not more uncertain.
does this happen to most people once life gets busy, or is this something people actively work on maintaining?
This is an artistic rendition of the design I vaguely recall: someone, or something, in a heavy blue winter/antarctic coat, with no visible features, so it looked like a void in place of a face. But sometimes it even looked like it had a smile made of light inside it. I don't recall any lore or story, just the general appearance, so I still can't tell if it's tied to something that exists, something I'm misremembering, or something I made up somehow. Anyone care to guess?
I (34F) recently had surgery and with my HDHP, I have a pretty hefty bill ($4,300). Do I use my savings to pay this off or should I put myself on a payment plan?
For reference, I have about $6k in checking/savings and $21k in a HYSA. My only debts are a mortgage and a student loan.
My parents never had money or savings, so I’m basically learning everything through Google and I hate having debt.
I have a Pimoroni 720x720 square display and wanted an app that could use that format, so I ended up writing a sequencer *thing*. It can use FluidSynth or a MIDI output via USB, and you get 16 channels and infinite time, so it's pretty powerful, I guess. Disclaimer: I'm not a musician; I wrote this for fun. Anyway, here's a link: https://github.com/andreacampanella/SqrSeq
Job hunting made me realize every role now expects AI proficiency. I did the Be10X workshop to upskill properly, and it turned out to be pretty amazing, tbh. They covered practical tools and techniques that actually matter in workplace scenarios. Not just prompting: automation, analysis tools, content generation, the whole package. I'm already mentioning these skills in interviews, and it's making a difference. The job market has shifted fast, and AI training isn't optional anymore for staying relevant. You should check it out once.
A random thought that popped into my head. Our attention spans are fried already thanks to social media.
Now most programmers are using AI to write code and will soon lose the patience to write code manually.
If AI gets better in other fields as well, we’re all gonna demand instant results and patience is gonna be a lost trait. Clients are gonna expect quicker turnarounds from workers and users from AI.
Anyone else notice this?
Hello,
Currently, I am using qwen3-coder:30b and it works fine. I would like to switch to Qwen3-Coder-Next. Does it make sense to do so? Will my MacBook be able to handle this?
Remember the buzzword that once united Silicon Valley pitch decks, IoT prophets, and crypto bros? Web3.
Lately, I have been thinking about how many everyday digital tasks are still awkward or unnecessarily expensive to automate. OpenClaw setups often cannot access services via CLI, so they force their way through with a headless browser (only to get blocked by bot detection half the time).
IMO, APIs made Web2 possible as they made clean, programmatic access feasible, unlocking automations and entire ecosystems.
If Web2 was built on APIs, maybe the real Web3 is not blockchains or whatever, maybe it is universal, CLI-first access to services: Deterministic. Scriptable. Agent-friendly infrastructure.
I really feel that it will truly be a new chapter for the web as we know it.
And sure, throw IoT into the mix as well; we can allow that. Crypto bros, you are not allowed in Web3.
Thoughts?
This is from Date Everything if you didn’t know.
I've gradually deployed 40+ Ikea Thread devices around my house over the past few weeks and have found them to be very stable. I'm using an Aqara M3 as the Thread hub and 6 Nest Wifi Pro mesh units as the border router. I hadn't had any dropouts whatsoever until yesterday, when I suddenly noticed 8 devices were disconnected: some window sensors, motion sensors, a leak sensor, and a temp sensor, so it's not like it's one buggy device. I tried pulling batteries, restarting the Matter Server, and restarting my network switches and routers. In the end, I just had to re-add all 8 of them. Is there a better resolution to this?
Noxus is rumored to be the first one they do. I'm also curious whether each region will get its own season, or whether, say, Noxus gets a show, Demacia gets turned into a movie, and Ionia gets a season. Obviously not sure, just a guess, because they did say the regions won't have their own shows, so I'm confused about how it's going to work. Also, will these regions be as big as Arcane was, since it was a worldwide phenomenon, or will people not care about them as much?
bonus question: how does that filtration happen? How do you stop specific vitamins and salts (and whatever else) but not others?
Built a workflow that turns any viral content URL (Blog, YouTube, TikTok, etc.) into AI-generated visuals and automatically publishes to multiple social platforms.
It handles:
Pattern used:
Create → Wait → Get → Check → Publish
No manual copying. No switching tools. No uploading one by one.
If you want a ready-made URL → Visual → Multi-platform Publish system,
👉 Free template is here: https://n8n.io/workflows/13340
To clarify: every time I comment on my other account, some guys will see I'm a woman and take it as an open invitation to check if I'm down for sex talk. What is the matter with some of y'all? Have you no manners? Have you no shame? And I know it's not all men, but fellow men, it's a shocking amount of dudes on here, like a SHOCKING amount. The internet makes me wanna gouge out my eyes at times. It's the equivalent of going straight up to a woman on the street just because she's a woman and hoping for something. You don't even know who I am or what I look like, and immediately you want to get down and busy with it. Have some self-respect. Some of you men need to learn to love and date yourselves; what's lacking is connection, not sex.
How does this system work? Why am I being "matched with higher skill level opponents" every game when I lose every game? How have I been "winning a lot"?
I usually play fill, for fun, and always around high silver/low gold. I can't win because our team is outclassed by a 1v9 stomper calling us "filthy silvers" (let's face it, it's me). Every game feels hopeless, every engage is doomed, every objective is given up 30 seconds in advance, pings ignored, chat is spammed, everyone's mad.
Is this normal? As the saying goes, "A bad dancer says the floor is crooked." The frustration is getting to me. Every game I'm ruining everyone's game, because I'm both bad at the game and bringing a better player onto the enemy team (according to the system).
Maybe it's quitting time. OP.GG link: https://op.gg/lol/summoners/eune/Wool%20Scarf-143
I recently launched my app, Ban It, which helps people break bad habits like p*rn, scrolling, caffeine, smoking, or gambling. You can bet with your friends on the leaderboard.
I built it solo but what surprised me most wasn’t the tech.
It was how emotional habit change actually is.
I thought people needed motivation.
Turns out they mostly needed a system that doesn’t punish them when they slip.
In the first month, over 1,000 people tried it.
What I’m still figuring out:
- retention after 2 weeks
- how to balance social features without making it toxic
If you’ve built something in the behavior / productivity space, I’d love to know:
What was your biggest early mistake?
Happy to share numbers or lessons if helpful 🙏🏻
Why was I sent to heaven?
As the title says. I'm considering getting myself either a Mac Mini or Custom PC for AI and Gaming. PC is the obvious winner here for gaming, but I'm curious on the AI performance before I decide, especially:
Thanks!
Question for those who used to eat pussy and don't anymore:
Why did you stop?
Do you generally not like pink taco anymore, or is your partner unattractive?
Neglected hygiene?
author:~$ Non-native English; AI helped with translation/structure. All numbers are from my logs.🙇
I benchmarked GLM-4.7-Flash (IQ5_K GGUF) across three different execution modes. The goal was to quantify the performance impact of offloading MoE (Mixture of Experts) to the CPU versus keeping everything on the GPU, especially with high-end server hardware.
Mode 1: CPU only. Pure CPU execution. Prompt processing is slow, and generation feels sluggish for long-form content.

| # | PP(tok) | TG(tok) | Ctx_used | T_PP(s) | S_PP(tok/s) | T_TG(s) | S_TG(tok/s) | total(s) |
|---|---|---|---|---|---|---|---|---|
| 1 | 31151 | 427 | 31577 | 310.51 | 100.32 | 19.85 | 21.51 | 330.37 |
| 2 | 980 | 6284 | 38413 | 21.51 | 45.55 | 316.57 | 19.85 | 338.09 |
| 3 | 2886 | 2921 | 37935 | 59.46 | 48.53 | 151.03 | 19.34 | 210.50 |
| total | 35017 | 9632 | 37935 | 391.49 | 89.44 | 487.47 | 19.76 | 878.96 |

Mode 2: Hybrid (MoE experts on CPU). Offloading only the MoE experts to the EPYC while keeping attention on the GPU. Massive leap in PP speed.

| # | PP(tok) | TG(tok) | Ctx_used | T_PP(s) | S_PP(tok/s) | T_TG(s) | S_TG(tok/s) | total(s) |
|---|---|---|---|---|---|---|---|---|
| 1 | 31151 | 774 | 31924 | 19.04 | 1635.35 | 11.05 | 70.01 | 30.10 |
| 2 | 981 | 4091 | 36221 | 1.23 | 792.91 | 61.01 | 67.04 | 62.25 |
| 3 | 2388 | 2692 | 37209 | 2.65 | 900.82 | 40.62 | 66.26 | 43.27 |
| 4 | 874 | 2106 | 37496 | 1.40 | 619.90 | 31.85 | 66.10 | 33.26 |
| total | 35394 | 9663 | 37496 | 24.34 | 1453.76 | 144.56 | 66.84 | 168.90 |

Mode 3: Full GPU. Maximum performance. Prompt evaluation is nearly instantaneous.

| # | PP(tok) | TG(tok) | Ctx_used | T_PP(s) | S_PP(tok/s) | T_TG(s) | S_TG(tok/s) | total(s) |
|---|---|---|---|---|---|---|---|---|
| 1 | 31151 | 630 | 31780 | 8.36 | 3723.34 | 5.90 | 106.67 | 14.27 |
| 2 | 981 | 4325 | 36455 | 0.59 | 1638.04 | 43.61 | 99.16 | 44.21 |
| 3 | 2373 | 1918 | 36420 | 1.46 | 1619.97 | 19.60 | 97.84 | 21.06 |
| total | 34505 | 6873 | 36420 | 10.43 | 3308.19 | 69.12 | 99.43 | 79.55 |

Video:
- cpu-only: 0:00~
- hybrid (exps=CPU): 05:07~
- hybrid (no exps=CPU): 07:50~
Trying to figure out what this says on the death certificate of my paternal grandfather. Cause of death was skull fracture with brain damage but I can’t decipher what it says under “contributing factors to death” maybe someone can help. Thank you.
Okay, so I’ve been spiraling down a rabbit hole regarding the "deleted" parts of Genesis, specifically the Midrash stories about Lilith. I always thought she was just some edgy demon figure, but reading the actual breakdown of the myths (specifically the Graves/Patai stuff) is wild.
Two things really stuck out to me that I can't stop thinking about:
The "Filth" Detail: We all know Adam was made from dust. But the texts say Lilith was made at the same time, but God used "filth and sediment" instead of pure dust. It’s such a specific, nasty detail. It sets her up to be "wrong" from the very start.
The "Position" Argument: The reason she left Eden wasn't because she ate an apple. It was literally because Adam demanded the "superior" position during sex. She asked, "Why must I lie beneath you? I also was made from dust, and am therefore your equal."
But here is where it gets trippy. I found an essay arguing that Lilith isn't just a monster who kills babies (though the myths say she does that too, specifically uncircumcised boys, which is terrifying). The argument is that Lilith represents Adam's "Shadow."
Because Adam couldn't accept his equal, he tried to dominate her. She fled. And because he repressed that part of himself, it came back as a "demon" to destroy his legitimate children (Eve's kids). It’s basically a 2,000-year-old lesson in Jungian psychology: If you reject your Shadow, it turns into a monster.
It’s honestly tragic. They were originally a "dual-bodied being" (androgynous), and splitting them apart caused all this mess.
👉 Here is the deep dive: https://youtu.be/2HX1U-tFOmU
Last week I posted my System Design benchmark here and got roasted (rightfully so) for focusing on closed models.
I listened. I spent the weekend doing two things:
The Tool (Run it Locally): The library is model-agnostic and supports OpenAI-compatible endpoints. To be honest, I haven't tested it with purely local models (via Ollama/vLLM) myself yet, but that is next on my list. In the meantime, I’d really appreciate it if you could try running it locally and let me know if it breaks!
Note on the leaderboard: since I am using community-driven scoring, the results will only become statistically significant once I have enough score submissions. Still, I will add a live leaderboard by next weekend.
The Ask: Please check out the website and score some of the solutions if you have time. I would also love your feedback on the open source library if you try running it yourself.
Website: hldbench.com
Repo: github.com/Ruhal-Doshi/hld-bench
Let me know which other models/quants I should add to the next run, or if you have any interesting problems you'd like to see tested!
I don’t mean changing your external features like physique, dressing etc. I mean changing your mindset, your behavior, the way you speak, your confidence; basically transforming yourself from the inside out. I’ve met people 4–5 years after school who were completely different in the best way, more confident, more articulate, more outgoing, just knowing what to say and how to carry themselves. I want that kind of change for myself, but I honestly don’t understand how to make it happen. Where do you even start?
I run on a treadmill at a gym and find it annoying to transcribe my distance/time into Strava activities. I do this instead of connecting to my treadmill via bluetooth because that's also annoying (the connection process takes time, I don't always get the same treadmill.)
So I created an application that lets me upload a picture of the treadmill display and transcribes the run data into a Strava activity using OCR. It makes it easier to track my treadmill runs.
Does anyone else want to use my application? I want to check, before I pay to publish it, whether anyone else would use it. I'll make it free if people want it. My app uses the "Sign in with Strava" functionality to sync.
(image is AI generated)
I’m 2nd from the right and I was caught blinking when the photo was taken. I’d like to have my eyes opened because my friends want to post this picture. I have $5 to spare for whoever does it. I provided another picture with my eyes open as reference and I’m fine with the lower picture quality and would like nothing except my eyes opened.
TL;DR: I built Pebble Docs — a free iOS app where AI works inside your document. Instead of “chat with your docs”, you select text and run actions like: “turn it into a table”, “rewrite tighter”, “connect these ideas”, etc.
Why I built it: I kept noticing something: people don't actually want to chat with documents. They only want help when the doc becomes hard to continue.
Typical moments:
What's different: Most "AI doc tools" start with a chat box. Pebble Docs starts with your selection. You point at the text → choose an action → the doc changes.
Current status
What I’d love feedback on (5 minutes is enough)
If you try it, tell me what you did + what confused you — I’ll reply to everyone. (If you want to support the launch, I’ll drop the Product Hunt link in a comment.)
Hey folks 👋
I’m building a product to automate customer support.
Our product is live and working well for basic chatbot flows (FAQ, knowledge base retrieval, simple automation).
Now we're adding a support agent.
The goal is:
Detect user intent from chatbot conversations
Create a support ticket when needed
Sync that ticket with our CRM
I built an agent that works fine in isolation (it can create tickets properly).
But when I integrate it with the chatbot flow, things break:
It starts hallucinating
Gets stuck in loops
Keeps searching the knowledge base instead of asking the required structured questions to create a ticket
Ignores the “create ticket” flow even when intent is clear
It feels like the retrieval + agent decision logic is conflicting.
Has anyone dealt with this kind of multi-agent / RAG + action orchestration issue?
Specifically looking for advice on:
Preventing looping behavior
Forcing structured questioning before tool execution (see the sketch at the end of this post)
Better intent → tool routing patterns
Guardrails or architectural patterns that worked for you
Would love to hear how you handled this in production 🙏
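To make "forcing structured questioning before tool execution" concrete, this is the shape I have in mind: a deterministic gate outside the model that refuses to run the tool until the required fields exist, with the LLM only filling slots. A rough sketch (all names hypothetical, not a real framework):

```
#include <iostream>
#include <map>
#include <string>

// Rough sketch: a deterministic gate around the "create ticket" tool.
// The LLM only extracts field values; this code, not the model, decides
// whether the structured questions are answered before the tool may run.
struct TicketDraft {
    std::map<std::string, std::string> fields;  // "category", "email", "summary"
};

// Placeholder for asking the user one missing field through the chatbot.
std::string askUser(const std::string& field) {
    std::cout << "bot asks for: " << field << '\n';
    return "<user answer>";
}

int main() {
    TicketDraft draft;
    draft.fields["category"] = "billing";  // pretend the LLM extracted this

    // Loop over the required slots; the agent cannot wander back to
    // knowledge-base retrieval while fields are still missing.
    for (const auto* field : {"category", "email", "summary"})
        if (draft.fields.count(field) == 0)
            draft.fields[field] = askUser(field);

    // Only now is the tool call permitted.
    std::cout << "createTicket(category=" << draft.fields["category"] << ", ...)\n";
}
```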
Denis has a 90s hip hop ride from this week in which he addresses this.
He looks directly into the camera and says he was made aware that people do not like a 15-minute warm-up. "Anything shorter is not a warm-up," he says.
take care of your bodies, guys, you only have the one.
"Storm Front" Focus: Seascape Oil on canvas 40cm x 30cm Finally finished. This is my second oil on canvas painting on the theme of nature. I worked on this painting every evening throughout January until it felt right to me. The biggest challenge was capturing the breaking ocean waves and the structure of the clouds. I painstakingly corrected it many times and was close to giving up. Then I made one last attempt and was able to finish it relatively satisfactorily. Portfolio Artwork If interested, please get in touch... small correction: Just sold! 😁 Cheers, David
Any idea what this could be?
Yes I have depression
But how do you not get bored of life anyway? I don't know if I'm bored because I'm depressed or just bored. Nothing seems interesting or fun: meeting friends, going to the gym, working. Everything feels so boring. I mean, I've already done it all many times; how do I not get bored of it?
If I understand right, when I chose to integrate Z-Wave JS yesterday, the systems after Core 1.0 in January 2026 installed both JS and JS UI. So the term now is apparently "switching" between the two instead of "migrating" or "installing." If that's true, when I disabled JS (for practice) I should have been able to access JS UI somehow. I did find the place where yesterday I chose between "Recommended" (JS) or "Custom," which probably leads to JS UI, but is that the way to "switch?" (It also asks if I want to use the "Supervisor" app.)
I'd never installed a Z-Wave integration before yesterday, so I thought the "Recommended" approach was more likely to work for me. Now I'd like to run JS UI on bare metal, not in a container. Guidance, please?
Hi everyone,
I’ve been thinking a lot about how unpredictable daily transportation affects people’s lives. Many struggle with being late for work, not having enough time to pick up their kids from school, or simply juggling too many responsibilities every day. It feels like mobility is still largely reactive rather than planned.
Over the past months, we’ve been working on a project called FixDrive that explores a different approach — something closer to “scheduled personal mobility” rather than on-demand rides. The idea is to allow people to plan regular trips in advance, choose a driver and vehicle, and create a predictable routine instead of relying on last-minute bookings.
We’re currently at a late pre-seed stage, building and validating the concept, and trying to understand whether this could meaningfully improve everyday life for both riders (predictability) and drivers (more stable income).
At the moment, we’re especially interested in feedback: • Does scheduled mobility solve a real problem in your daily routine? • Would you trust the same driver for recurring trips? • Where do current ride-hailing services fall short for you? • Are there use cases we might be overlooking?
For context, we’re exploring both individual users and potential business scenarios (e.g., commuting, school runs, regular appointments).
Building a startup right now feels challenging — a lot of attention is focused on AI, but practical time-saving solutions in everyday life still seem just as important.
Curious to hear your thoughts, criticism, or similar experiences.
Here is our website for more information.
Hi. So I'm trying to do vinyl window inserts to cover the windows in a very large sun room that I'm converting into a "batcave".
I need to get this upscaled sufficiently that I can get it printed at a scaled size of 302 inches by 69 inches and then have the full file cut into smaller window sized chunks as outlined below (losing some strips between windows). The second picture is a mock up of what the strips will look like and the cutout gaps between windows.
Here are the width measurements in inches for each of the 17 sections shown in the mock up picture (spacer panels are obviously just place keepers so really 9 printable panels):
Window 1: 19
Spacer to door: 15
Door panel 1: 19
Spacer between doors: 12
Door panel 2: 19
Spacer to door: 15
Window 2: 19
Corner spacer: 16
Window 3: 27
9-to-9 spacer: 6.5
Window 4: 27
9-to-6 spacer: 14
Window 5: 19
9-to-6 spacer: 14
Window 6: 27
9-to-9 spacer: 6.5
Window 7: 27
Total: 302
Over the last 5 years I've noticed a decline in LLBean's quality. I bought a pair of sneakers and after 5 uses the soles started to separate. I used Krazy Glue to fix them, but it isn't just shoes; it's clothes and rain gear too. Has anyone else noticed this?
Having a bit of fun making some Sunday brunch :) go big or go home amirite?
So I had a cursed thought today:
What if the Shopkeeper just didn’t show up for one day on Rift? No items. No components. No consumables. You’re stuck with starting stats, runes, and whatever your champ gets from leveling.
Initially I thought the obvious answers would just be:
Infinite scalers like Smolder
Or champs who randomly generate stats through their kit (like Zaheens passive)
But the more I thought about it, the more complicated it gets.
Because if nobody can spend gold:
No Legendaries
No pink wards (so stealth is WAY stronger)
No Zhonya’s
No heal cut
No QSS
No armor/MR stacking
No mana items
No boots even 💀
That means:
You can’t counter stealth champs like Evelynn or Twitch outside of base trinkets.
You can’t buy anti-heal vs Dr. Mundo or Soraka.
You can’t Zhonya’s against Zed or Fizz.
Mana-gated champs might suffer way more than resourceless ones.
Base stat monsters suddenly look insane.
So now I'm thinking the "strongest" champs wouldn't necessarily be hyper scalers; they'd be:
Champs with insane base stats and level up growth
Kits that don’t rely on item multipliers
Built-in sustain or true damage
Resourceless champs
Champs that don’t need boots to function
What do you guys think ?
With the recent Mayhem update I have gotten many more inters than in previous releases, and this is not just one bad game.
Like inting as in 10 deaths in 5 minutes, 20 deaths by 12 minutes. Running it into 5 people every time, dying off cooldown.
Got another game where someone died like this as well, and after going 4/20/3 they AFKed in base for the rest of the game and refused to ff.
My question is: why do you even play if you are going to do this? Rant over.
I've previously posted about potentially good tabletop scenarios based on creepypastas, but which particular monsters would you find fun to encounter in a TTRPG as a one-off scenario?
I ask because I'm working on a creepypasta-themed campaign and I'm pulling all sorts of monsters to fill encounter tables, adapting stat blocks for more traditional enemies accordingly.
The short list includes pasta monsters like The Glutton, the worm people from They Come, Long Jack, the Willow Men, West Virginia Scarf People and more.
More famous creatures like the Rake are a bit on the nose for my tastes but I'm open. Also might help me discover more good pastas!
I started a backdoor Roth account this year at Fidelity, where I opened a traditional IRA, transferred in the max $7,500, opened a Roth IRA, and waited about two weeks for the funds to be available to move to the Roth. At the time I transferred, I converted the full $7,500 in the account, with zero remaining in the traditional. About a week later, the end of the month comes around, and earned interest of about $6 is added to my traditional IRA, meaning I now have that small balance in there.
What do I do? I already hit the limit, so I don’t think I can just move it into the Roth. Do I just let it sit there, and factor that into my conversion in 2027? Is there anything I can do now?
I’m not worried about any sort of tax penalty because the amount is so small. I just don’t know how to practically deal with it.
Over the last 3 years of freelancing, I’ve noticed a recurring pattern that I think is worth discussing.
Many founders are coming to me after investing significant time and money into their products. With the rise of AI tools, building an MVP has never been faster, but there’s a massive hidden cost: technical sustainability.
From what I’ve seen, the core issue isn’t the idea or even the AI itself—it’s the lack of architecture. When development becomes purely prompt-based (“Add this feature,” “Build this page”), the project often loses its structural integrity. AI doesn’t automatically understand long-term architectural decisions unless you guide it within a strict framework.
Most starter kits focus on shipping speed, but few focus on "production-ready" boundaries that survive the first 1,000 users. In my recent work, I’ve been moving toward a specific stack to solve this:
I believe that for a project to be production-ready, it needs to handle the "boring" stuff (emails, payments, and clean boundaries) without becoming a spaghetti mess.
I’m curious to hear from other builders here:
I've put together a lite version of my own architectural approach (open source) if anyone wants to see how I've structured these boundaries: https://github.com/codelifynet/turbostack-lite
My friend managed to get this very smol Braum with just one augment (Shrink Engine); now I want to see what it would look like with Giant Slayer and the anvil one that makes you smaller.
I had a weird moment last week where I realized I am both excited and honestly a bit scared about AI agents at the same time.
I’m a C-level leader at a small company. Just a normal business with real employees, payroll stress, and customers who expect things to work every day. Recently, I watched someone build a working prototype of a tool in one weekend that does something our team spent months planning last year. Not a concept. Not slides. A functioning thing.
That moment stuck with me.
It feels a bit like the early internet days from what people describe. Suddenly everything can be built faster, cheaper, and by fewer people. New vertical SaaS tools appear every week. Problems that used to require teams now look like they need one smart person and some good prompts. If a customer has a pain point, it feels like someone somewhere is already shipping a solution.
At the same time, big companies are moving fast too. Faster than before. They have money, data, distribution, and now they also have AI agents helping them move even faster. I keep thinking… where exactly does that leave smaller companies like ours?
We see opportunity everywhere. Automation, new services, better efficiency. But also risk everywhere. Entire parts of our business model could become irrelevant quickly. It feels like playing a game where the rules change every month and new players spawn instantly.
I don’t want to build a unicorn. I don’t want headlines. I just want to run a stable company, keep our employees, serve customers well, and still exist five years from now.
Right now I genuinely don’t know what the correct high level strategy looks like in a world where solutions can be created almost instantly and disruption feels constant.
So I’m asking people who are thinking about this seriously:
If you were running a small company today, how would you think about staying relevant long term?
What actually creates defensibility now?
How do you plan when the environment changes this fast?
TL;DR: I watched AI make months of work look trivial, now I’m quietly wondering how small companies survive the next five years… and I want to hear how you’re thinking about it.
The energy cost of AI is becoming a critical constraint. As models scale, power consumption is growing faster than efficiency gains. This article explores whether AI growth is sustainable - and what tradeoffs we may face between AI capability and environmental impact.
Liking more than one flavour of yoghurt.
I really like Azir and I kind of OTP him because I love his kit, identity, lore, actually everything about him, but it's really hard to lane with him, play with him, be helpful to your jungler, get a win, etc. His win rate has sat around 44% for maybe more than a couple of years now, I don't know. If I remember correctly, in one of the devs' streams a dev stated that Riot is OK with Azir being projailed. It feels like they've acknowledged that they can't do a better job, and that in a multi-million-player game, one of their champions is going to be played only by pro players. They are okay with this champ's ideal play rate being around maybe 0.001%.
Hey everyone! After seeing too many side projects fail (including my own), I decided to analyze 9 years of successful "Show HN" posts from Hacker News.
I looked at 700+ projects that reached $500+/month and coded them for:
- Pricing strategies
- Time to first customer
- B2B vs B2C split
- Acquisition channels
- Industry categories
Some surprising patterns emerged:
• 72% were B2B (way higher than I expected)
• $20-49/mo pricing had the best conversion + retention
• Most got their first paying customer in 6-8 weeks
• Early validation beat "perfect products" every time
The three takeaways wrap-up:
I also wrote an article about all the insights I've learned, and I sincerely hope you get something from the dataset to enhance your business and projects.
https://medium.com/@katnissstoa/analyzing-9-years-of-hn-side-projects-that-reached-500-month-here-are-the-3-patterns-nobody-8a2ae9d6e623
Happy to answer questions about the methodology or specific findings!
https://www.producthunt.com/products/sideproject500?utm_source=other&utm_medium=social
Solved!
Hi all, I bought a Connect ZBT-2 for my Homeassistant, running on a Raspi4, to control some of the new IKEA devices via Matter over Thread. However something went wrong while installing and configuring the ZBT-2 and its software.
I connected the ZBT-2, Homeassistant immediately recognised it and I chose the firmware for Matter over Thread, which then was applied to the device successfully.
After this it should have installed the OpenThread Border Router, but it didn't. When I try to install it myself it always asks: "Provide URL for the OpenThread Border Router's REST API".
What would this URL be if the hostname is homeassistant.local and the IP address 10.1.1.253?
Also is it expected that applying the thread firmware again on the ZBT-2 ends with an error?
Log entry: "2026-02-15 14:56:32.306 ERROR (MainThread) [homeassistant.components.homeassistant_hardware.firmware_config_flow] Failed to flash firmware"
How should I try to repair the situation?
Would it make sense to disconnect the ZBT-2, uninstall the add-on as well as the OpenThread Border Router and Matter integrations, and try anew?
TIA
I guess it could have become an autocratic dictatorship. Why? He has both a streak of populism and the potential for it. We saw that quite clearly in his Iron Rule, where he even used the Shimmer as a weapon to control the people. We even saw how he had other people dominate the Council.
Your brain gets smart but your head gets dumb when it’s weighed down by neon lyrical nonsense.
Last time I cared about LoL, before I left it in favor of other games and other private stuff, there was big talk about making a solo-exclusive ranked mode (or so I think).
This leaves the question: did it get scrapped in the end, or am I just gaslighting myself?
I've been running Claude Code and OpenClaw as daily coding agents for months now, and the biggest pain point isn't hallucination or context limits — it's amnesia.
Every time context compacts or a session ends, all the decisions, bug fixes, and architectural choices just vanish. I was spending the first 10 minutes of every session re-explaining things my agent already knew yesterday.
So I built something to fix it. Open source, runs entirely locally, no API calls.
**How it works:**
The system hooks into your agent's lifecycle (session start, pre-compaction, session end) and automatically extracts what matters using salience scoring:
- Architecture decisions → 0.9 weight
- Error resolutions → 0.8
- Code patterns → 0.7
- User preferences → 0.7
Anything above threshold gets saved. Everything else decays naturally:
score = base_salience × (0.995 ^ hours_since_access)
Each time a memory is accessed it gets a 1.2× boost. Frequently used short-term memories consolidate into long-term storage — just like a real brain. Old, unused memories fade out.
Three memory types: short-term (session-level, high detail), episodic (specific events), and long-term (consolidated knowledge).
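For reference, the scoring rule above fits in a few lines. This is just an illustrative sketch of the formula, not the actual ShieldCortex internals:

```python
import time

DECAY_PER_HOUR = 0.995   # score halves roughly every 6 days if untouched
ACCESS_BOOST = 1.2       # each read strengthens the memory

def current_score(base_salience: float, last_access: float) -> float:
    """score = base_salience * (0.995 ^ hours_since_access)"""
    hours = (time.time() - last_access) / 3600
    return base_salience * (DECAY_PER_HOUR ** hours)

def touch(memory: dict) -> None:
    """Accessing a memory boosts it and resets its decay clock."""
    memory["base_salience"] = min(1.0, memory["base_salience"] * ACCESS_BOOST)
    memory["last_access"] = time.time()
```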
**The part I didn't expect:**
Once you have persistent memory, that memory becomes an attack surface. Your agent reads a web page with hidden instructions like:
A naive memory system auto-extracts that and now every future session starts with poisoned context. So I added a 6-layer defence pipeline: pattern detection, semantic analysis, credential scanning, behavioural analysis, content integrity, and quarantine.
The whole thing runs in under 50ms per scan.
**Setup is 3 commands:**
npm install -g shieldcortex
shieldcortex setup
shieldcortex doctor
Works with Claude Code, Cursor, VS Code Copilot, and OpenClaw. MIT licensed, free forever. The memory database stays on your machine — nothing phones home.
GitHub: https://github.com/Drakon-Systems-Ltd/ShieldCortex
Happy to answer questions about the architecture or the memory poisoning vectors — it's a rabbit hole.
Please, if you can, sharpen this and fix the damage. Without AI.
I’ve been reflecting on my last relationship and realized something that surprised me.
Sex and physical intimacy were some of the only moments where I truly felt loved, wanted, and chosen. Not just sex itself, but the closeness, attention, and warmth that came with it. Outside of that, I often felt emotionally unsure or distant, even though we were together.
Since the breakup, I’ve noticed that what I miss isn’t just her, but the feeling of being desired and emotionally safe — and that feeling was strongly tied to intimacy.
This made me wonder:
Why did I need sex to feel loved?
I know there’s a common narrative that women feel valued or loved through intimacy or being desired, but I don’t hear men talk about this nearly as much. As a man, it feels uncomfortable to admit that physical closeness was how I felt emotionally validated.
I’m curious:
• Have other men experienced this?
• Is this about attachment, love languages, or emotional availability in the relationship?
• How do you separate genuine emotional connection from seeking reassurance through intimacy?
Not trying to blame anyone — just trying to understand myself better.
Raider has gone 18-2 with 9.8 CSPM over the last 20 games, with average gains of +17LP and -23LP.
In his past 10 games, though, he has been awarded double LP for 4 of them, at +33 each, with an average game skill level of 963LP GM.
He is currently rank 2 on the server, with rank 1 being DeseperateNasus at 2,432LP.
Raider's second pick, Jax, has a 92% win rate over 25 games with 9.4 CSPM.
He will return to Korea in April to push for rank 1 on the server, a feat only a few bootcampers have been able to accomplish!
When my solar energy production stat resets to 0 every day, the solar production energy dashboard reports that as a huge negative production. Any tips on how to fix this?
Jo Koy was at the Climate Pledge Arena last night. Did anyone catch the show? Curious how it turned out! I had tix but wasn't able to go. Wanted to see if I missed out.
Hello there,
I am currently studying electrical engineering and I'm considering getting a paid AI subscription to make learning/researching easier.
Currently I use Perplexity Pro (1-year free student trial), but they keep restricting their research usage, so I want something reliable.
Which AI can help me with learning, summarizing, and math? And do you have any tips for using AI to its fullest potential?
For me it's Veigar bot, Yasuo top, Teemo support, Kalista anything. I'm sure there are more. What are yours?
Hi! I was wondering if anyone might be able to help with swapping my husband’s face in this first picture to one of his smiling faces in the other two reference photos? I have a difficult time with photos due to a facial neuralgia disorder that causes facial pain/inflammation/muscular dysfunction so we don’t take pictures too frequently. He made a silly face (to get me to laugh lol) but of course the photo I prefer my smile in is the one with his silly face. I like the last one as well but feel as my smile was more genuine in the first. Any help is greatly appreciated!
I used to use Impact Wildcard Processor (and loved it) but it doesn't read YAML files anymore. Is there any node out there that is as simple to use as Impact Wildcard Processor and reads YAML files?
For a newbie, what is the best place to install HA? I have a NUC mini PC, and I also have a mini PC running Unraid with a few apps.
Is there a downside to one or the other? Just want the best experience.
Also, how hard is it to move HA from one mini PC to another? Is it as easy as just restoring a backup? It seems like you would have to tell HA where your dongles have moved to.
Thanks for the help.
I built an open-source MCP server using Claude Code that allows Claude Desktop to delegate heavy tasks to external models. By offloading long-form analysis and research, you can cut Claude token consumption by up to 10x.
The first version I shared earlier used GLM-5 (Z.ai's 744B model). While helpful, it suffered from reliability issues: random mid-session outages and frequent downtime during peak hours. So I decided to replace GLM-5 with the more reliable Gemini 3.x.
v2 is now live with Google Gemini 3.x integration. Gemini is now the recommended provider for stability and performance.
The tools exposed are ask_gemini, ask_gemini_pro, web_search (with Google Search grounding), web_reader, and parse_document. The MCP server exposes these Gemini tools directly to Claude Desktop: Claude acts as the high-level orchestrator while Gemini handles the heavy lifting, like code generation or document analysis.
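To give a feel for the shape of it, here's a hedged sketch of how a delegation tool like ask_gemini could be wired up with the Python MCP SDK and the google-generativeai client. This is illustrative only, not the repo's actual code, and the model id is a placeholder:

```python
import os
import google.generativeai as genai
from mcp.server.fastmcp import FastMCP

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
mcp = FastMCP("gemini-delegate")

@mcp.tool()
def ask_gemini(prompt: str) -> str:
    """Delegate heavy generation to Gemini; Claude stays the orchestrator."""
    model = genai.GenerativeModel("gemini-1.5-pro")  # swap in your Gemini 3.x id
    return model.generate_content(prompt).text

if __name__ == "__main__":
    mcp.run()
```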
It follows a 3-tier priority system:
Opus 4.6 is highly capable but burns through message limits rapidly. This setup stretches your usage cap significantly.
Additionally, many users have reported Sonnet 4.5 degradation since the 4.6 release. By using this MCP, you let Sonnet handle orchestration while Gemini handles the heavy processing. Opus 4.6's parallel sub-agent orchestration is preserved; each sub-agent can delegate to Gemini independently.
The project is MIT licensed and free to use and improve. I've included CLAUDE.md templates in the repo to help enforce delegation logic.
Contributions and feedback are welcome.
Jam sesh was through the roof
I see people here on Reddit, all lonely and holding tightly onto their self-respect. People with partners are disrespected many times, but at least they are not alone. Is respect in a relationship a reality or just a hypothetical concept?
Good morning everyone, I updated Home Assistant Cloud and it no longer works…
It stays loading… but nothing. What should I do?
Thanks
Hi! I'm developing an online Reversi (Othello) web game.
The beta version is now live and I'm looking for testers and feedback.
Features:
- 1 / 5 / 15 / 30 min time modes
- Ranked matchmaking
- Spectator mode
- Replay system
- NEW: AI mode (Level 1–10 difficulty)
If you have time, I'd really appreciate it if you could try a few games and share feedback 🙂
Playable Link: https://othellio.com
A 30 foot cock that wants to reach out and touch someone.
Found this in my grandma's kitchen. No clue what it is. Seems like some sort of shredder. Maybe for cheese or herbs (chives?) but no idea what exactly. Anyone seen this before?
preferably with coffee.
This is Monterrey. I noticed that in Mexico they build IN the mountains. I was wondering if this is also a thing (or even allowed?) in the US.
I'm thinking of cities like LA or Seattle, but there the mountains are always just in the backdrop.
Is there a reason for this?
I’m actively doing research but wanted some local advice.
I want to change career paths and work towards more money for my wife and me.
I want to try a trade and become a plumber, but I really don't know where to start.
If you are a plumber or know anything about the field, let me know the best way to get into it! At my current job I deliver to a plumbing company, so I've been meaning to stop by and ask, but I wanted more opinions out there.
Thanks
I know this has been posted here before but I just wanted to start some more conversation around this show.
This was a really fun show. I like how it touched on some of the issues of the era: the Cold War, the JFK assassination, race relations, etc. It was trying to play the prestige TV game a la Mad Men, but airing on ABC it had the typical hallmarks of a network TV show: cheesy plots and dialogue and sitcom-tier set design (the uniforms, airport, and plane interiors were great, but some of the overseas scenes were obviously shot in a studio). I found that to be part of the charm, though; there were never really any high stakes, and it made for nice, relatively clean, light viewing.
It was eventually canceled after 14 episodes. I, like many others, was disappointed that I didn't get to spend more time with the characters, but maybe it was for the best: you could tell towards the end that the writing was going to lean more into the show's soapy aspects had it continued, and I think the showrunners admitted this was their intention too; it was meant to be Desperate Housewives' replacement, after all.
The ending does set up some plots that were clearly meant to be part of a second season, but it doesn't end on any unsatisfying cliffhangers, so I'd definitely say it's worth watching as a miniseries if you're interested.
If the streaming services are looking for another IP to reboot, I think this is the one; there are many talented young actresses who could play the roles of the stewardesses very well!
Small fun fact: Christina Ricci is only 5'1" and would not have met the height requirements of a Pan Am stewardess!
You know them, you see eye to eye on most big things, you're close with them, but they aren't what you find attractive. Would you be flattered? Uncomfortable? How does that affect the friendship going forward?
Would you ditch them? Give it a shot? Tell them no and why then try to remain friends?
Edit: Thank you all for the responses, it has been enlightening :) hope you enjoy the rest of your weekend
Why don’t orbiting spacecraft re-enter at a shallow angle (using a much longer flight path, of course) so that the thin atmosphere has plenty of time to absorb all the energy. Reentry would be less violent (less risky), and spacecraft would not require such heavy heat shielding. Is it because reducing the speed makes the spacecraft de-orbit into the thicker atmosphere faster than the forward speed can bleed off? Or is there another reason?
I love this engagement photo but my stomach looks bloated (I ate a lot of Italian food). I’m on the right in brown.
First time doing this. Thank you in advance. I will Venmo. Cashapp. Zelle. Whatever y’all normally do!
Hi folks. I found this many years ago and have always wondered about its origin. I was hoping for some insights here from all you clever people. There are 2 dates on it: 1813 on the front and 1914 on the back.
The durability patch was 3-4 years ago; does anyone even know or feel the impact of it anymore? No?
So where's the full-on overhaul of odd scalings? It would take a day to adjust these things, but apparently trying to have fun isn't allowed anymore; making your game harder comes with zero payoff. Most off-meta builds used to be scaling-heavy with items, and most builds don't come online until 4-5 items, so how could they ever be considered "OP" or broken? There's no such thing as a balancing nightmare here, because they're so bad right now that it's unplayable. I don't want OP; I want it in a playable state.
Back in the day you could pick ANY champ that had a weird odd scaling, like a 100% AP ratio on a single ability, and make it work. I even remember playing AP Jarvan with his E that only has an 80% ratio, and you could make it work and have success. Build full AP on him today and his E would deal 20% of someone's HP at level 18.
People on here are complaining about the game becoming easier; I believe that's the case as well, but at least those are the terms for everyone playing the game the normal way. The changes over the past 3-4 years have literally made the game completely unplayable for like-minded people who love off-meta. Before, you had a reward waiting at the end if you managed to do well; now you just play the game on hard mode and it stays that way no matter the game state or how well you play.
Back in the day, "fun modes" were the golden place for off-meta picks because you got free gold and always-guaranteed items. Even in those game modes, any off-meta build is complete trash compared to playing the normal way.
I'm gonna keep "ruining" people's games (not my words). I won't stop playing what's fun 1 out of 10 games just because a certain company refuses to make logical changes to their game. I'm not the problem; they are.
Hey everyone, I’m trying to build a 🥵 capable workflow where I can take a reference image and generate a new image that:
• closely follows the reference (same background, clothing, pose, camera angle, quality)
• but applies my own LoRA model as the subject style/person
Has anyone done something similar?
What models / techniques worked best (Qwen, ZIT, Flux 2 Klein)?
Any help or pointers to similar posts/tutorials would be appreciated 🙏
Artwork 749: 54 Piscium B
54 Piscium B is a planet outside our Solar System, about 36 light years away in the Pisces constellation. It orbits an orange dwarf star named 54 Piscium. Scientists describe it as a "hot Saturn" with an unusually stretched orbit.
Time Taken: 11 minutes
Program Used: paint.net
If you have any suggestions for what you'd like me to draw next, feel free to share them!
My grandfather's ring… no clue if this is terrible artistry, something passed down from Hungarian ancestry, or neither!
Hi, I'm 18. I'm in my last year of HS. I was recently kicked out of my (already abusive) home. I'm connecting myself with services- I'm already employed with the City of SD and looking for more employment. I really want to get a car or a scooter or motorcycle or something. I have about 4K saved up, and I don't have expenses at the moment. I'm looking to resume work - but does anyone know the process of getting any of those? Sorry
I already have a CA driver's license, and have been driving a car for maybe 1.5 years?
I'm not sure what the next steps are. Do I need to get a credit card and start building credit? How much should I have saved up before I even think about buying a scooter or motorcycle?
Thank you sincerely!
I guess I should add I'm college-bound, I have a 4.07 gpa that is expected to rise. Likely a UC or Humboldt. I'm applying for smaller liberal arts colleges in hopes for more scholarships.
I'm trying to get a projection mapping setup going but can't seem to install the NDI nodes. They always just fail to install. I did a manual pull from git and tried various versions of ndi (cyndia), and nothing seems to stick. Any thoughts?
Turquoise horse , oil on canvas, 16x12 in.
from ThePlantSlant - https://youtube.com/shorts/QLMi380mKck?si=5Co4bPpngsbni_WB
I’m hoping someone can help restore and retouch this very special photo.
This is a picture of my great uncle and his husband on the day they met in 1961 in Gilcrest, Texas. They were together for about 60 years. This photo has always sat on his mantle. It was part of the tour of his home when guests visited and has become something of a legendary image among friends and family.
Over time, it has faded so badly that it’s hard to see the details anymore. Now, at almost 90 years old, he can barely make it out at all.
I would love to have this restored so I can enlarge and reprint it for him to see clearly again.
I’ll tip $20 to my favorite restoration. Thank you so much for helping preserve something meaningful to my great uncle and the memory of his husband.
I recently picked up a Fisher F75 metal detector that had a bit of corrosion on the battery terminals. I ended up resoldering all the wires on the battery box and to the circuit board. Before I did this, the volume was really low. After doing this, it's a little better, but still pretty low. I cleaned all the terminals in the battery box and it looks great now. I also used a lot of contact cleaner in the pot and in the headphone jack, which had a lot of crusties in it. Still pretty low. What's next? Headphones are pretty faint also, so it's a common problem somewhere. I'm guessing the pot is pretty bad inside and could be taken apart and lightly sanded down.
I know people who do these things are going to come here and disagree, but I can't be the only one who thinks they just have to buff towers in Mayhem. Tanks should not enable you to dive a full enemy team 6 minutes into the game (or even earlier), clear their whole team, and lose at most 1 of their own doing so. Towers should hit much harder vs champs, or scale based on max HP.
And match these hyper-sweaty players against their own kind. It ain't fun playing against a team that plays like someone has their family at gunpoint, with games ending before the 4th or even 3rd augment. I don't care if it takes a few more seconds to find a lobby.
I love this mode, I really do, but there's also an insane amount of frustration involved because of these kinds of things.
Anyone else experiencing the iOS app weirdly zooming in by itself?
Acrylic on paper
I asked this 10 months ago; I just want to be able to formulate a better argument. I am in the process of getting a new car right now, and my stepdad called me and kept going on about how leasing a car is the "smartest" choice. He cited that there is no headache when trading it in, you get a new one every 3 years or so, and you can renew the lease if you like it. I told him that, unlike leasing, if you own it you can eventually sell it, even if it is a depreciating asset.
He kept asking how I don't understand why leasing is so much better, but chose not to listen to my side: I'm not trying to show off, and I don't really care about rolling around in a brand-new car every three years. I just care about reliability and want my money to go a bit further, especially since I started earning more than I have before. How can I articulate a proper argument so he calms down and stops pushing the idea?
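One framing that might help the conversation: a toy cost comparison. Every number below is made up; the point is the structure of the argument, not the totals:

```python
# Hypothetical 9-year horizon: three 3-year leases vs buying once and keeping it.
lease_monthly = 450
years = 9
lease_total = lease_monthly * 12 * years      # $48,600, and you own nothing after

buy_price = 30_000
resale_value = 9_000                          # rough depreciation guess at year 9
maintenance = 1_000 * years                   # extra upkeep vs always-new lease cars
buy_total = buy_price + maintenance - resale_value   # $30,000

print(f"lease: ${lease_total:,}  buy: ${buy_total:,}")
```

With those made-up numbers, ownership wins on pure cost, and the resale value is exactly the asset he keeps waving away.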
My laptop is an ASUS Vivobook 16X K36052ZC with an RTX 3050 Laptop GPU (4 GB), 16 GB RAM, and an Intel i5-12500H. Can my laptop play LoL?
One thing I kept noticing is that more and more people are asking AI what tools, products, and services they should use.
Questions like:
So I built Suggesterly.
It’s a platform where you can add your product, service, or brand, and have it recommended inside AI conversations when it’s relevant.
The goal is simple:
Help great products get discovered naturally, right at the moment people are looking for solutions.
You can add your project here (it’s free):
https://app.suggesterly.com
I’d genuinely love your feedback, ideas, or criticism. It’s still early, and I’m actively improving it.
If you end up adding your project, let me know what you think. I'm happy to answer questions and improve things based on your input.
When I sat at the summit and the fever finally broke into a perfect, ringing clarity, I understood: the mountain hadn't invited me up—it had invited me to stay.
I just finished the 10-hour Arduino video by freeCodeCamp on YouTube, and I'm confused about what I should learn next. I want to start making basic projects to get more familiar with Arduino, so any good playlists for that would help. I've also been thinking of doing the 4-hour C programming crash course by Bro Code (or the 12-hour C++ course) just to get a hang of the syntax. I know some Java already, so I don't think doing a crash course would be too bad. Thanks
I'm dealing with an issue with our mortgage company, Mr. Cooper (with Rocket Mortgage). They overpaid our school taxes by nearly $3k for 2025, which caused our escrow account to go negative. They have admitted the error, and our tax office is sending the refund check back to the lender.
The issue is that it’s expected to take 6-12 weeks and Mr. Cooper has adjusted my mortgage +$1000 due to the escrow being negative. They said they can adjust it down once the refund comes in but that still throws off our budget for the next 2-3 months.
Has anyone dealt with this before? Am I required to pay the shortfall of their admitted mistake? Any insight would be so helpful!
I'm not sure I can mentally survive being a 911 operator.
Probably won't be finishing this, but here's our favorite handyman in my style!
I am north of Chicago, about a mile inland from Lake Michigan; definitely moraine territory, and the soil is clay. Based on my observations, there are two colors of clay.
Two years ago our town dug 6 feet down in the space between the sidewalk and the curb to work on water supply pipes. About 2 weeks after they backfilled, the trench settled about 10 inches. They re-filled it and all looked good. This area is about 50' from the house, between the sidewalk and the curb (village property, thankfully).
Late January we needed sewer work done. The trench included the same area but went to a depth of 10-12 feet. Most of the re-filled area is acting as expected, but the part that had settled before has dropped again, now down 3-4 feet as best as I can see.
The longest part of the trench, from the house to the sidewalk (40' or so), is stable.
Question: why is this localized area settling so much and so quickly? Is there likely a leak in the supply line causing underground erosion? What does this predict for future soil stability, both in the grassy parkway and under the sidewalk?
The house is 90 years old. Does this give me information about the foundation settling?
Thanks for any information you can offer.
My employer has a 100% match up to 10%, and I’ve been contributing 10% Pre-Tax so far. I have a $6k profit-sharing bonus coming next month and I can elect where to put it. Currently, I have $0 in Roth. Would it be wise to use this bonus to start building a Roth balance, or should I stick with Pre-Tax to lower my current taxable income? Also, what percent of my bonus should I put into my 401k?
edit: currently in the 22% bracket. I'm single, but hopefully by the time I retire I will be married.
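For what it's worth, here's the toy math I've been staring at (the growth multiple is made up):

```python
bonus = 6_000
growth = 2.5            # hypothetical growth multiple by retirement
rate_now = 0.22         # my current bracket
rate_later = 0.22       # unknown; assume the same for the baseline case

pretax_after = bonus * growth * (1 - rate_later)   # taxed at withdrawal
roth_after = bonus * (1 - rate_now) * growth       # taxed now, grows tax-free

print(pretax_after, roth_after)  # identical when the brackets match
```

If the bracket at withdrawal matches today's, the two come out identical; the real bet is which bracket ends up higher.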
I'm an 18 year old who recently got put on salary at the family company in December. The salary comes out to ~$44k per year post-tax. I plan on investing $7k per year in a Roth IRA, maxing out 2025 first because I have until April to do so. I also estimated my living expenses aggressively at $5,000 per year. With $12,000 total set aside (on the high side), that leaves me with $32,000 to work with. What should I do with that money to get a head start on my financial freedom? Also, I will not be paying for college for the next 4 years because of a scholarship.
My weekly allowance may have run out but isn't it absurd that I can't at least use my remaining session allowance?
Over the past year I taught myself development and coding and built a platform that personalizes public-domain classics by inserting a chosen name into the story (Alice in Wonderland, Wizard of Oz, etc.).
It generates custom ebooks and print books automatically, and I’ve launched print versions for 20 titles so far (print orders are live and working) and I am adding a few more each day.
The system handles personalization, checkout, file generation, and delivery end-to-end.
I’m not here to sell — I’m genuinely looking for feedback from people willing to try it and tell me what feels:
– Confusing
– Slow
– Gimmicky
– Or surprisingly good
I temporarily made the ebooks FREE so anyone can test the full flow without paying.
I built the whole thing solo (backend, Stripe integration, file workers, etc.), so I know I’m blind to some UX issues.
If you’re open to testing it and giving honest feedback, here’s the link:
https://www.giftbookz.com/
I appreciate brutal honesty, feel free to roast me!
Hey
Can you help an HA newbie get started? I've installed UTM on my spare MacBook and created a virtual machine running HA with haos_generic-aarch64-17.0.qcow2.
However, I don't seem to be able to install the HACS add-on. There is no custom app store, so to speak, even though advanced mode is on.
help please :)
Last time, Mallard won for Captain Holt
Like the title says, what episode of Brooklyn 99 is Charles’ most questionable episode e.g. one where he broke the law or was unnecessarily mean.
Comment with most upvotes wins
Jake - Captain Kim
Amy - Casecation
Captain Holt - Mallard
hey, been building this for a while and want to get some real feedback before I keep going in a direction nobody cares about.
the problem: I use multiple AI chatbots daily and the thing that kept bothering me was re-explaining how I wanted answers. every. single. time. quick answer? gotta say "be brief." fact checking something? gotta paste a paragraph about citing sources. learning mode? another paragraph. it's dumb. the models can do all of this — you just have to tell them every time.
what I made: chrome extension called AI Overmind. adds a sidebar with clickable "profiles" that change how the AI responds. works across chatgpt, claude, copilot, gemini.
comes with premade ones like:
you can also build your own. and there's a prompt library + prompt chains (multi-step automated sequences).
tech:
what I'm genuinely unsure about:
link in comments. roast welcome. this is a solo thing and I'd rather hear "this sucks because X" now than find out in 3 months.
What is the general consensus on the two models, especially when it comes to tool calling? I expect both will be replaced soon, but which of the two is optimal?
Courland Pocket. Is this a Mosin or a K98? What are some details to look for when trying to identify one of the two? First rifle find of mine 👍👌
I am running 2x $20 subs right now.
Within 3 days I hit the weekly limit, even using it very, very sparingly. I'm at the point where I either get the $100 or the $200 sub. I use it for my work and my fiancée's work, plus side projects.
Since Opus 4.6, the amount of things I can do is insane, and 2 accounts is not enough.
I'm wondering if I should go with the $200 tier and whether it's really worth it.
Because can I really get near $3,000 worth of Opus for $200? Or should I just test the $100 one and then maybe upgrade?
Spinning up another $20 account seems like a scam.
a manippleator
I needed a simple tool where I could upload my CSV contacts and bulk-send to them using my own SMTP config, for my side projects. Every tool I found only supported Gmail integration for free and charged for connecting your own email domain via SMTP.
So, I built this simple tool over the weekend. Keep in mind that, like any other weekend project, the code isn't perfect, but I hope it helps someone.
This is not a promotion; the tool is open source and completely free to use.
repo link: https://github.com/danesto/cold0 - feel free to fork or contribute
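For anyone curious what the core loop looks like, it's roughly this (a minimal sketch with hypothetical column names, not the actual cold0 code):

```python
import csv
import os
import smtplib
from email.message import EmailMessage

# Your own domain's SMTP server, not a locked-in Gmail integration.
with smtplib.SMTP(os.environ["SMTP_HOST"], 587) as server:
    server.starttls()
    server.login(os.environ["SMTP_USER"], os.environ["SMTP_PASS"])
    with open("contacts.csv", newline="") as f:
        for row in csv.DictReader(f):   # expects "email" and "name" columns
            msg = EmailMessage()
            msg["From"] = os.environ["SMTP_USER"]
            msg["To"] = row["email"]
            msg["Subject"] = "Hello"
            msg.set_content(f"Hi {row['name']}, ...")
            server.send_message(msg)
```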
Your spouse goes on a trip. Their plane goes down. Some bodies are found, and it is assumed there are no survivors. You (and your children if you have any) hold out hope as long as you can until after a year, you accept in your heart that you'll never see your spouse again.
You meet someone new. Things are going really well. You spend a lot of time together and you (and your children if you have any) have grown to love this new person. They move in with you, and things are going wonderfully.
One day, you and your partner are talking on the couch, just enjoying each other's company when there is a knock at the door. You tell your partner you'll answer it. You get up, walk to the door, and you open it, half-expecting it to be an Amazon package or maybe a Mormon you'll need to shoo away.
Instead, you see a face you never thought you'd see again. It's your spouse who you have presumed dead for the last 3 years. Turns out they just barely survived the crash. They were picked up by a passing fishing vessel from a foreign country and have been in a coma this whole time. They had no identification on them and nobody knew who they were.
They smile and take you into their arms and embrace you with all the love and urgency you would imagine from someone who has been away from their spouse for so long. From the other room, you hear your partner ask who's at the door.
Would you rather stay with your current partner, or leave them and reunite with your long lost spouse?
Nobody, especially the Soviets, needed to know the first words spoken on the moon were an astronaut pleading for his life as the insectoid scurried to the real lunar lander
Ordered in 1885 as part of an effort to modernize the outdated U.S. Navy, USS Maine took more than nine years to complete. Nine years is a long time under any circumstances, but in an era of rapid advances in naval technology, it was an eternity.
By the time she was commissioned in 1895, Maine measured 324 feet 4 inches long, 57 feet wide, and displaced 6,682 long tons. She carried two gun turrets arranged en echelon and was powered by triple-expansion steam engines capable of 16.5 knots. Though the United States had never built a ship quite like her, she was already outdated by European standards.
After years of routine service, Maine was sent to Havana Harbor in January 1898. Earlier that month, riots had broken out when Cubans loyal to Spain destroyed local newspapers that had criticized the very real atrocities committed by Spanish forces during Cuba's ongoing war for independence. Maine's deployment was meant to protect American citizens and property in Havana.
On the night of February 15, Captain Charles Sigsbee sat in his quarters writing a letter home as the ship’s bugler played taps. At 9:40 p.m., a massive explosion shattered the calm.
“It was a bursting, rending and crashing sound or roar of immense volume, largely metallic in character,” he later recalled. “It was followed by a succession of heavy, ominous, metallic sounds…”
The lights went out instantly. The deck lurched. Thick, choking smoke poured through the passageways as the ship took on a violent list. With fires racing toward the forward magazine, Sigsbee reluctantly ordered the ship abandoned. Men slipped into the harbor or scrambled into boats amid sparks, darkness, and the groan of twisting steel. Sigsbee was the last to leave.
261 of the 355 men aboard were dead. Among them was the bugler.
The cause of the explosion was immediately disputed. But an outraged public, amplified by a sensationalist press and pro-war voices in government, soon forced President McKinley’s hand. Within weeks, the United States was at war with Spain.
If you're interested, I cover the sinking of the Maine in more detail here: https://open.substack.com/pub/aid2000/p/hare-brained-history-volume-67-the?r=4mmzre&utm_medium=ios
What really grinds my gears is that we're back to 2024 again. The CSS has fundamental structural issues, div soup, broken specificity chains, the usual mess, and what does Claude Code do? Slaps !important on everything like that's going to fix it. I thought we were past this. I haven't seen this loop in at least a year or two. But here we are, Opus 4.6 confidently duct-taping over a burst pipe. Over and over. You tell it the layout is broken, it adds another !important. You tell it that broke something else, it adds another !important. Rinse and repeat. And the thing is, I know slapping !important on a CSS rule inside a class won't do shit when the problem is structural. The cascade is fighting itself three layers deep. But Opus insists on doing it anyway, like it forgot everything it learned. Has Opus 4.6 rotted?
Hi everyone, after a few weeks of work I've put Prism AI online. This has been a passion project for me and I'm not seeking financial gain from it (it's fully open source), I just dislike getting hit with a massive "text dump" every time I ask an AI to research a complex topic.
If you ever start researching something like "The History of Roman Architecture" and just want to see how the concepts connect without reading a 2,000-word essay, Prism might be for you :)
It recursively scrapes the web, builds a knowledge graph, and renders it in 3D so you can explore the connections visually instead of just reading.
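If you're curious, the core recursion is simple. This is a rough sketch of the idea (fetch_links stands in for the scraping step; it's not the actual implementation):

```python
import networkx as nx

def build_graph(topic, fetch_links, depth=2, graph=None):
    """Recursively connect a topic to the concepts scraped from it."""
    if graph is None:
        graph = nx.Graph()
    if depth == 0:
        return graph
    for related in fetch_links(topic):   # hypothetical scraper callback
        graph.add_edge(topic, related)
        build_graph(related, fetch_links, depth - 1, graph)
    return graph
```

The 3D rendering is then just a layout over the resulting graph.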
Repo: https://github.com/precious112/prism-ai-deep-research
Let me know what you guys think!
Background: I was stuck at 800 rating and read about the Woodpecker Method—repeating the same puzzles until pattern recognition clicks. Wanted a tool to make this easier.
What it does: 4,462 puzzles from Susan Polgar's collection, spaced repetition tracking, minimal interface (Lichess board component).
Tech: React, Stockfish WASM
Status: Free, donation-based. Built it for myself, went from 800 to 1200 using it. Figured others might find it useful.
Happy to discuss the build process or answer questions.
Everyone's rushing to deploy AI workloads in production.
but what about observability for these workloads?
AI workloads introduce entirely new observability needs around model evaluation, cost attribution, and AI safety that didn’t exist before.
Even more surprisingly, AI workloads force us to rethink fundamental assumptions baked into our “traditional” observability practices: assumptions about throughput, latency tolerances, and payload sizes.
Curious to hear more insights on this topic from others here.
From Eric Idle’s S2 episode.
I'm experimenting with running LLMs entirely on mobile hardware, without cloud dependency. The challenge isn't the model itself; it's dealing with memory limits, thermal throttling, and sustained compute on edge devices. How do others optimize for reliability and performance when inference has to stay fully local? Any tips for balancing model size, latency, and real-world hardware constraints?
Our mortgage will be paid in full in 2 months. Other than needing to pay our taxes and insurance, is there anything I'm overlooking? Our house is 16 years old so we know we will be needing to replace the roof, HVAC, appliances, etc in the next 5-10 years so we're going to funnel the extra money into our HYSA until we have enough to pay cash for all of these items. Currently have $55k liquid.
Only other debt is a vehicle loan, $32k at 1.99%. We both have state pensions and contribute to 403(b) accounts. Hubs (57) has 4 years until retirement, I (52) have 6 years.
Hi everyone!
I've just wanted to share my latest poster. With Artemis II approaching I felt inspired to create a poster to mark the moment.
I grew up fascinated by the classical sci-fi movie posters of the 1970s, especially the original Star Wars era artwork. I wanted to echo that visual language while honoring this new chapter of human spaceflight.
I’m a dual French–Canadian artist training to become an astronaut-scientist, and of course for me Artemis represents something deeply meaningful: international collaboration in pursuit of a shared objective.
The piece is handmade and was created using a mix of traditional media: airbrush, watercolor, ink, pencils, colored pencils, ecoline, and poster color.
I’d genuinely appreciate feedback from this community — especially from those who love vintage space art and classic sci-fi poster design.
Thanks for looking 🚀🌙
I apologize for the watermarks and low quality, unscrupulous people tend to "borrow" my work (especially these days...). So if anyone offers you prints here, it's not me.
An Old Man drops by a newsstand each morning to look at the newspapers, but never buys anything.
After several months of this, the newsstand owner asks the old man, "Sir, you come by every morning but you never buy anything. Is there something I could start carrying that you'd be interested in?"
The Old Man says, "No, I just come by to check the obituaries."
The Newsstand Owner says, "Oh, well of course. But there's your problem: obituaries are on page 6."
The Old Man raises his hand. "Thank you, but no. The obituary I'm waiting for will be on the front page."
Got the below notification that I needed to renew my Garmin connection, but there's no option to choose Garmin as a connection?
EDIT: Model selector added (clean dropdown, options now):
• Qwen3-VL-4B-Instruct-abliterated-v1 (~5–8 GB VRAM, fast)
• Qwen3-VL-8B-Abliterated-Caption-it (~14–18 GB VRAM, max detail)
✨ What OmniTag does in one click
💾 How to use (super easy on Windows):
🖼️📁 Batch Folder Mode
→ Throw any folder at it (images + videos mixed)
→ Captions EVERY .jpg/.png/.webp/.bmp
→ Processes & Captions EVERY .mp4/.mov/.avi/.mkv/.webm as segmented clips
🎥 Single Video File Mode
→ Pick one video → splits into short segments
→ Optional Whisper speech-to-text at the end of every caption
🎛️ Everything is adjustable sliders
• Resolution (256–1920)
• Max tokens (512–2048)
• FPS output
• Segment length (1–30s)
• Skip frames between segments (3 skip + 5s length = 15s skip between clips)
• Max segments (up to 100!)
🔊 Audio superpowers
• Include original audio in output clips? (Yes/No)
• Append transcribed speech to caption end? (Yes/No)
🖼️ How the smart resize works
The image is resized so the longest side (width or height) exactly matches your chosen target resolution (e.g. 768 px), while keeping the original aspect ratio perfectly intact — no stretching or squishing! 😎
The shorter side scales down proportionally, so a tall portrait stays tall, a wide landscape stays wide. Uses high-quality Lanczos interpolation for sharp, clean results.
Example: a 2000×1000 photo → resized to 768 on the long edge → becomes 768×384 (or 384×768 for portrait). Perfect for consistent LoRA training without weird distortions! 📏✨
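In code, the resize rule is tiny. Here's a sketch of the behavior described above using Pillow's Lanczos filter (illustrative, not OmniTag's exact code):

```python
from PIL import Image

def smart_resize(img: Image.Image, target: int = 768) -> Image.Image:
    """Scale so the longest side equals `target`, keeping aspect ratio."""
    w, h = img.size
    scale = target / max(w, h)
    return img.resize((round(w * scale), round(h * scale)),
                      Image.Resampling.LANCZOS)

# 2000x1000 -> 768x384, 1000x2000 -> 384x768; no stretching or squishing.
```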
🧠 Clinical / unfiltered / exhaustive mode by default
Starts every caption with your trigger word (default: ohwx)
Anti-lazy retry + fallback if model tries to be boring
Perfect for building high-quality LoRA datasets, especially when you want raw, detailed, uncensored descriptions without fighting refusal.
* Edit: you can change the default prompt ("Describe the scene with clinical, objective detail. Be unfiltered and exhaustive.") to anything for different LoRAs, e.g. "Focus only on the eyes and do not describe anything else in the scene; tell me about their size and colour, etc."
Hi everyone, 32M here, US federal employee.
The plan: retire as soon as I'm able and likely move abroad to Southern Spain. My question: what kind of life can I reasonably afford?
Currently, I have $71,000 in my TSP (401K with 5% match), and I put 14% of my $112k salary into it every paycheck (every other week); that comes to $600 a paycheck, plus the 5% match. At a 10% return (my current fund has a lifetime average of 11.3% and a 10-year average of 15.54%), I'm estimating about $1.8 million when I'm eligible to retire at 52.
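Quick sanity check I ran on that estimate (rough annual compounding, contributions at year-end):

```python
balance = 71_000
annual_contrib = 0.14 * 112_000 + 0.05 * 112_000   # my 14% plus the 5% match

for year in range(20):          # age 32 -> 52
    balance = balance * 1.10 + annual_contrib

print(f"${balance:,.0f}")       # ~$1.7M, in the ballpark of my $1.8M estimate
```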
For the pension: it'll come to about $50-60k a year. I'm a special-class employee and will get a pension supplement until I can collect Social Security, which would be an additional ~$30,000 a year. At 55, I can collect a separate pension from a previous job at about $2,300 a month. So the total pension per year would be about $105,000 before I touch the TSP money.
I know that seems like a decent amount now, but with inflation and such, I'm worried that it won't be enough. I don't know what the kind of retirement I'm looking for is called. I'm not looking for rich. But I imagine:
Waking up to a view of the Mediterranean Sea in a decent-sized home (2-3 bedroom, 2 bath to host friends visiting), reading a book by the sea, drinking wine/eating lunch with friends, getting an afternoon massage. Maybe 2-3, two week vacations per year (a few shorter 1 week trips to visit family in the U.S.). Not overly spending, but also not having to worry if I find something I want to buy/an emergency comes up and I have a big expense.
I don't know what kind of money that will require and whether I'm on the right track.
Hi! I just became a Medical Laboratory Scientist and have been working in an IRL in Texas for a while now. I'm fortunate enough to have my food and housing taken care of. I just paid off my car and credit cards and also have no student loans. Naturally there are things I'd like to do with my income, but I want to be smart and save/invest while time is still on my side. I have $1k left before I max out my Roth IRA for 2025. I still haven't contributed for 2026, but I'm planning to max that out as well. I'm very frugal and live well beneath my means. What else should I be doing to maximize my financial growth and set myself up for success down the line? (I'm also a premed hoping to start medical school in fall of '27.)
I’m loving watching the various winter sports but I can’t work out how the snow is such thick coverage for the snowboarding and skiing events.
I know it has been snowing a fair bit as well, but if it's also raining, won't that turn the snow into a slurry at some point?
The surrounding mountains are full of snow but it doesn’t look thick enough for what is seen for the events, e.g snowboarding or alpine skiing.
Are they using artificial snow? Would that impact the athletes?
ELI5 please. I live somewhere where there is very little snow nowadays and when there is, it very quickly turns into slush.
When you buy it last, you can't undo the item you just bought even if you want to (seems to happen nearly all the time).
You also can't sell wards from that slot, so if you wanted to get that gold back to rush-buy an item for whatever reason, you can't.
Cyclocosmia ricketti is its scientific name
A while ago I shared an early version of ScreenSorts app here because I was frustrated with my screenshots folder turning into chaos.
The feedback I got from r/SideProject genuinely changed the direction of the app. People pointed out performance issues, asked for proper local-only processing, wanted better duplicate detection, and asked for a clearer folder structure. And some of you told me very directly what felt clunky 😅
I went back and rebuilt a big part of it.
The new version now:
– Automatically organizes screenshots into structured folders
– Detects and removes duplicate images
– Tags images based on what’s inside them
– Detects links visible in screenshots (like YouTube pages)
– Compresses images to save space
– Runs fully locally on your Mac (no cloud, nothing uploaded)
Privacy was a big concern in the last thread, so to be clear: all analysis happens on-device.
About the Pricing: It’s a one-time purchase of $19. No subscription. Free trial included so you can see if it’s actually useful for you.
I built this because I was tired of spending time managing screenshots instead of using them and this subreddit really helped shape it into something better.
If you’re willing to try it, I’d genuinely appreciate more feedback. What still feels missing?
And feel free to grab a copy here : ScreenSorts
Claude will find a way to think.
Mixed media on paper.
18x24
I am very new to Claude and am curious whether I can install it as an extension in VS Code or on my laptop, but I keep finding that I must pay for API keys.
If Claude Code costs money, how much is it?
Throughout my life I have been crippled by social anxiety + introversion, including never overcoming the fear of asking girls out or pursuing any form of romantic relationship. It always seemed I was the odd one out amongst my friendship groups growing up, so I ended up abandoning the idea altogether, including during my secondary & uni years. My undergraduate was during COVID, and I graduated just over 2 years ago with only a small circle of friends from that time, who have since moved abroad.
I’m 24 now and still never dated. It’s beginning to dawn on me that I’m massively behind socially and emotionally. It seems like everyone has dated, had a SO, sex etc. I feel I missed the boat as an adolescent…
I’d like to actually start dating, but I have no idea how to meet someone (I've heard horror stories about dating apps from friends + family) or how to read general cues (i.e. preferences, overcoming the fear of rejection, etc.).
Any general advice for a first-timer?
I launched my first SaaS about a month ago and immediately jumped to #7 on product hunt that day. This push got me a lot of users for the first few weeks where almost 2K people visited and over 200 tried the app.
I also managed to get 15 paid users in that week itself. But after that it's been pretty dull out here.
My app gets around 50 views daily and around 3-4 people actually try it, but there are no conversions at all. I wonder what the main issue could be?
I always knew the closing scene with Kim Jong Un at the typewriter was from my childhood but couldn’t place it until now. I now know every single 30 Rock Easter egg. Time to die!
Hello community
My girlfriend just moved in with me and I want to provide her access to my Home Assistant stuff via the iOS app. I added a new person in my setup, but when she tries to log in via the app we get stuck here (see picture):
Second screenshot in German basically says „we couldn’t connect to home assistant“
Any advice?
You put a little boogie in it.
I saw this product called Divoom Times Gate, but I couldn't buy it, so I thought: let's try to DIY this. Turns out it was pretty hard.
Initially I just wanted to play GIFs on it and make my setup look cool, but as I worked on it I realized it could do a lot more than just play GIFs: I saw potential for reducing clutter on my desk and keeping everything in one place. I figured I could put a clock on it, reminders, calendars, etc.
So I started thinking about adding functional apps and the infrastructure to achieve this. Fast-forward 6 months:
The device is built as infrastructure to run apps on, and it now supports up to 24fps on each display. I can show independent apps on all three displays, like a clock, sports scores, reminders, Google Calendar, and notifications. Each app can be controlled using the knob.
The device is voice-controlled as well, just like Alexa, so you can use voice commands to add reminders and alarms or change apps. I recently added a little bit of the OpenAI API too, so I can ask it random stuff while working.
I can also build an app in any web-based framework and upload it to the device. It can also be controlled from a Flutter app, which is in progress.
This project taught me a lot of stuff.
Let me know if you have any feedback and if you would prefer to see this as an actual product.
Technical Details
Hardware
Software
Currently I'm working on building a custom PCB for the device. Let's see where this goes.
This video was meant for Instagram, so in case you want to see a more detailed version of the project, you can check this Reddit post:
https://www.reddit.com/r/embedded/comments/1qk4lmu/have_been_working_on_this_for_over_an_year_after/
Hey,
I’m a solo founder, and I just launched Blogator — a platform where you give it one raw idea, and it generates a ready-to-publish, structured blog post automatically, with proper headings, SEO optimization, and clean formatting.
The goal: save hours of planning, outlining, and formatting content. You just tweak, polish, and publish.
It’s already helping me speed up content creation massively, and I think it could be a game-changer for AI content workflows.
Would love feedback from this community — especially on the AI workflow itself. How would you improve it?
His sister knew this the moment he.. undulated into her room, and knew nothing the moment she stepped off the train.
It’s the middle of February.
The season of muted tones, teacups, long evenings, blankets and chocolate cookies.
But my heart was missing summer 🥹
Colour. Sun. Sweetness.
That feeling when strawberry juice stains your fingers and the sun touches your shoulders ☀️
So I painted it 🎨
Not because it’s summer outside.
But because it’s summer somewhere inside 🫠
Please never forget: even when the world feels heavy or grey, we carry something bright within us ❤️
Dm
A few weeks ago, I posted about an idea to stop "ghost" subscriptions from draining my bank account. Today, I’m excited to say the project is officially live/completed!
SubTrack is now a fully functional dashboard where you can:
Built with Next.js, Tailwind, and Supabase. It was a journey of handling complex date logic and clean database schemas, but it's finally here.
I’d love for you guys to check out the VIDEO. What feature should I add next?
I know it might not be that long, but I am so happy that I am making these steps. I started smoking cigarettes in high school, when vapes were just starting to be a thing. I switched to vapes and continued for nearly 10 years. As for smoking weed, I started with one of my exes and have been smoking daily for about 5 years.
I’m proud of myself, honestly. I knew I wanted to quit for a while but just couldn’t get myself to. What was I going to do with my stress? How was I going to break this habit? Honestly, I have to chalk it up to willpower. I didn’t think I had any, but I quit cold turkey and it’s a lot easier than I thought it would be, for me at least.
(I know it’s really difficult for many other people so I want to acknowledge that it is NOT easy for everyone)
I built an open-source MCP server that allows Claude Desktop to delegate heavy tasks to external models. By offloading long-form analysis and research, you can cut Claude token consumption by up to 10x.
The first version I shared earlier used GLM-5 (Z.ai's 744B model). While helpful, it suffered from reliability issues: random mid-session outages and frequent downtime during peak hours. So I decided to replace GLM-5 with the more reliable Gemini 3.x.
v2 is now live with Google Gemini 3.x integration. Gemini is now the recommended provider for stability and performance.
The server exposes five tools: ask_gemini, ask_gemini_pro, web_search (with Google Search grounding), web_reader, and parse_document. These Gemini tools are exposed directly to Claude Desktop: Claude acts as the high-level orchestrator while Gemini handles the heavy lifting, like code generation or document analysis.
It follows a 3-tier priority system:
Opus 4.6 is highly capable but burns through message limits rapidly. This setup stretches your usage cap significantly.
Additionally, many users have reported Sonnet 4.5 degradation since the 4.6 release. By using this MCP, you let Sonnet handle orchestration while Gemini handles the heavy processing. Opus 4.6's parallel sub-agent orchestration is preserved; each sub-agent can delegate to Gemini independently.
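For anyone curious what the delegation layer looks like, here is a minimal sketch of one such tool using the official MCP Python SDK (the model id and API-key wiring are placeholders, not the repo's actual code):

```python
# Minimal sketch of a delegation tool with the official MCP Python SDK.
# Model id and API key are placeholders, not the repo's actual code.
import google.generativeai as genai
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("gemini-delegate")
genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder

@mcp.tool()
def ask_gemini(prompt: str) -> str:
    """Offload a heavy analysis or generation task to Gemini."""
    model = genai.GenerativeModel("gemini-pro")  # placeholder model id
    return model.generate_content(prompt).text

if __name__ == "__main__":
    mcp.run()  # stdio transport, so Claude Desktop can spawn it
```

Claude only sees the tool name and docstring; the heavy tokens flow through Gemini's API instead of your Claude quota.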
The project is MIT licensed. I've included CLAUDE.md templates in the repo to help enforce delegation logic.
Contributions and feedback are welcome.
What are these made of? I think the polar bear is ivory but not sure. The carved thing has the "hear no evil, See no evil, speak no evil" imagery carved into it.
As per the subject title. All I am seeing now is 4.6 and no option to select 4.5 using /model. I want Opus 4.5 and 4.6 is token hungry. :)
I find Conan's friends really tell him how he is, and he also dishes it back.
What’s the real difference between hiring a virtual assistant and using an AI agent?
A VA needs training and management.
An AI agent needs setup and automation rules.
Both cost money. Both save time.
If you’ve tried either (or both), which one gave better results?
From sports writer Rodger Sherman in his daily Olympics newsletter.
As the question suggests, how do I deal with being ugly? It's not something that I can change. Do I just accept the fact that I'll have a hard time making friends and finding a partner? Idk where I'm going with this. Really low on confidence. Would appreciate any advice.
https://github.com/komikndr/comfy-kitchen-distributed
Yep, 2 months working on one of the most unfun and barely documented parts of torch.
So what is this? This is a fork of the Comfy Kitchen backend that adds additional operators to enable FSDP2 and DTensor operations. For now, it supports TensorCoreFP8Layout. NF4 support is quite possible, but no promises.
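If you haven't touched this corner of torch: DTensor shards a tensor across a device mesh so each rank only holds a slice, and FSDP2 is built on top of it. A minimal sketch of the primitive (import paths have moved between releases; this assumes a recent PyTorch with the public torch.distributed.tensor namespace):

```python
# Minimal DTensor sketch: shard a tensor across 2 GPUs along dim 0.
# Assumes a recent PyTorch with the public torch.distributed.tensor API;
# launch with: torchrun --nproc_per_node=2 this_file.py
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

mesh = init_device_mesh("cuda", (2,))                # one mesh dim, 2 ranks
full = torch.randn(1024, 1024)
sharded = distribute_tensor(full, mesh, [Shard(0)])  # each rank keeps 512 rows
print(sharded.to_local().shape)                      # torch.Size([512, 1024])
```

The hard part the fork deals with is making quantized layouts like TensorCoreFP8Layout behave under these sharding ops, which is exactly the barely documented bit.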
Have fun.
The picture was generated using Chroma on the Raylight dev branch. It looks ugly because I ran it without any proper settings in the workflow.
I reuse all of my equipment: scrubs, gloves, needles - nothing goes to waste.
Even though it made sense in the moment, I’ll never forget watching my sister fade into the distance in the rearview mirror.
I shipped 3 SaaS products in 30 days using Claude Code exclusively.
Product 1: Analytics dashboard → 13 hours → 8 paying customers
Product 2: Feedback widget → 11 hours → 3 signups
Product 3: Content calendar → 9 hours → Just launched
Total: 33 hours. Three revenue-generating apps. All still running in production.
Before this, I spent 4 months on 6 different projects. Zero revenue. Every single one abandoned halfway through.
The pattern was always the same:
Weekend 1-3: Build auth. Email, OAuth, password reset. The boring stuff.
Weekend 4-6: Wire up Stripe. Webhooks, subscriptions, customer portal.
Weekend 7: Lose momentum.
Weekend 8: Abandon.
Then I'd start over because I couldn't remember how I did it last time.
Was about to build project #7. Sat down. Realized I was going to spend another month rebuilding auth and payments.
Said "fuck it" and made two decisions:
Here's what kept breaking:
Claude would build perfect auth. Then while adding payments, it would rewrite the auth code and everything broke.
It would create beautiful database schemas. Then forget them 20 prompts later and create duplicate tables.
It would implement Stripe webhooks. Then add a feature that bypassed the entire payment flow.
The issue isn't Claude. It's context management.
By message 30, Claude has forgotten your database schema. By message 50, it's guessing at your auth flow. By message 100, it's writing code that contradicts earlier decisions.
I stopped treating Claude Code like a chatbot. Started treating it like a developer who needs a project brief.
Now before I write a single feature request, I force Claude to:
1. INITIALIZE the project context
Run /propelkit:new-project (I built this as a command in my system)
Claude asks deep questions I always skip:
Then it researches my domain. Competitors, data models, workflows, everything.
Then it generates three files:
PROJECT.md → The vision. Who it's for. Business model. Core features.
REQUIREMENTS.md → Database tables. Auth flow. Payment flow. API integrations.
ROADMAP.md → Phases. What order to build in. Mapped to requirements.
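To make that concrete, a hypothetical excerpt of what a generated REQUIREMENTS.md might hold (an invented example, not PropelKit's actual output):

users table: id, email, plan, created_at
auth: email/password + Google OAuth, no magic links
payments: Stripe subscriptions; webhook updates the plan column

The point is that every agent re-reads these few lines instead of trusting a 100-message chat history.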
2. DISCUSS each phase before building
Run /propelkit:discuss-phase 1
"Phase 1 is auth. Here's what I'm planning to build. Any preferences?"
This is where I catch stupid decisions before they're coded.
"Actually, don't use magic links. Just email/password and Google OAuth."
3. PLAN with research
Run /propelkit:plan-phase 1
Claude researches auth patterns. Creates a verified task plan. Shows me what it's going to build.
I approve or course-correct.
4. EXECUTE with fresh context
Run /propelkit:execute-phase 1
Here's the magic: Claude spawns parallel agents for different tasks.
One agent handles database. Another handles routes. Another handles UI.
Each agent reads PROJECT.md, REQUIREMENTS.md, and ROADMAP.md before writing code.
Fresh context. No drift.
Each task gets its own atomic git commit. Easy to rollback if something breaks.
5. VERIFY the work
Run /propelkit:verify-work
I test it. If it's broken, Claude debugs automatically.
Once it works, we move to Phase 2. Payments.
Phase 1 (Auth) is now locked in PROJECT.md. Claude can't "forget" it because it re-reads the file every phase.
Phase 2 (Payments) builds on top without touching auth.
Phase 3 (Features) knows the database schema because it's in REQUIREMENTS.md.
No more rewriting working code. No more context drift. No more hallucinations.
Stop solving solved problems. Auth is solved. Payments are solved. Multi-tenancy is solved.
I built all the boring infrastructure once (Next.js 16, Supabase, Stripe, Razorpay, email templates, admin panel, 90+ components). Production-grade. Battle-tested.
Now I only build what's custom to each product.
Stop letting Claude wander. Give it structure. Force it to plan before coding. Lock completed work in markdown files.
Context engineering > prompt engineering.
How many projects have you abandoned because Claude forgot your database schema halfway through?
What would you build if the context window wasn't a problem?
I'm genuinely curious how others are solving this. Or if everyone just accepts that Claude will rewrite working code and you have to manually fix it.
Hello, I'm currently a student studying Translation and Interpretation, and I need to translate an article about AI for school. It needs to be 10-15 standard pages long; the more reliable the source, the better. All of the ones I found so far were either too short or too long, so I'd like to ask for your help. Thank you.
@ bradtjonas for more comics
So guys, I can get an old server with 40 cores. Any idea what tokens/sec I can get out of it, and whether it's worth the electricity cost, or am I better off subscribing to one of the top token magicians online?
I have been looking all over the internet for a certain creepypasta. I remember listening to it when I was young and want to re-listen to see if it still holds up. It features MrCreepyPasta, CreepsMcPasta, and CreepyPastaJr, and it's about three friends who stay in a haunted house. I don't remember too much, but I do remember that they are watching TV and notice a fly on the screen. When they try to kill it, they find out it's inside the screen. They then hear scurrying from upstairs. I remember one of them dying from being pulled under a bed. If anyone can help me find this, it would be much appreciated; I have been looking all over for this story.
An extremely insane game btw; I also had an augment that gave my CC a burn based on max health. It ended up doing more damage than anyone else.
Anybody have an idea on how to fix the wrong timestamp for my Frigate instance?
I have a standalone Frigate server added to Home Assistant via the official integration. The detection and streams work great, but the timestamps are wrong in the HA UI. I’ve checked the container and Proxmox system clocks and they are both accurate.
One of the biggest issues with AI-generated characters is identity drift.
After 3–5 generations, facial proportions subtly change.
Here’s the workflow I’ve been using locally in Stable Diffusion to maintain long-term consistency across different environments and outfits.
Setup:
Key components:
1️⃣ Seed Anchoring
Instead of random seeds, I reuse base identity seeds and modify only environment prompts.
2️⃣ Prompt Structure Locking
I separate prompts into:
[IDENTITY BLOCK]
– Face shape
– Jawline
– Eye distance
– Skin texture
– Hair structure
[STYLE BLOCK]
– Lighting
– Environment
– Camera
– Wardrobe
This prevents accidental identity overrides.
3️⃣ Low CFG for Identity Stability
High CFG increases drift when scene complexity rises.
4️⃣ Controlled Environment Variation
Instead of rewriting the full prompt, I inject small environment modifiers.
Example:
“luxury villa office, golden hour light, 85mm lens”
without touching identity tokens.
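If it helps, here is the whole recipe condensed into a minimal diffusers sketch (model id, seed value, and prompt blocks are placeholders; the same idea applies in any SD UI):

```python
# Minimal sketch of the recipe in diffusers: anchored identity seed,
# locked identity block, swappable style block, low CFG.
# Model id, seed and prompt text are placeholders.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

IDENTITY = "oval face, soft jawline, wide-set green eyes, freckled skin, wavy auburn hair"
STYLE = "luxury villa office, golden hour light, 85mm lens"  # the only part that changes

image = pipe(
    prompt=f"{IDENTITY}, {STYLE}",
    guidance_scale=4.0,  # low CFG: less identity drift as scene complexity rises
    generator=torch.Generator("cuda").manual_seed(1234),  # anchored identity seed
).images[0]
image.save("scene_01.png")
```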
Still experimenting with LoRA-based reinforcement for micro-features.
If anyone else is working on character locking in SD, I’d be curious how you’re handling identity drift.
I was just reading and came across the word “blush” and I cannot not say it in her voice. So that’s just me saying blush like that forever now in my mind.
Thanks Mr. Burger.
I'm looking for an inpainting solution in ComfyUI that can process images in under 1 second.
My main use case is e-commerce product photography, specifically removing unwanted elements or small artifacts from the products while maintaining a high-quality realistic look.
I've tried standard SDXL inpainting, but it's way too slow for my workflow. I need something that feels almost real-time.
My requirements:
So far I’ve only found that Manus is capable of creating a usable link for any client-based HTML files I send it. Has anyone found any other AIs that are capable of this? I ran out of credits for Manus lol
I’m trying to think of the best game plan for 2026 and going forward. I was going to reach out to a financial advisor, but want to do my own research first.
I work a full-time union job where I have good benefits, an annuity, and access to overtime. For at least the next few years, I am able to use the no-tax-on-overtime deduction. Additionally, we own a small business. We do well with it; it gives us a cushion to do things, and I have some plans to grow it slightly this year. But we don’t rely on that money.
Right now I claim 2 on my tax withholding. And from what I've read, with my small business I have various write-offs: vehicle, equipment, supplies, repairs, etc.
So finally my question. Should I adjust my withholding to say 4? And take more in my check per week? Use that money extra per week to make an investment? Or I thought to put that extra towards my mortgage? Right now my rate is 3.25% with about 19 years left on it.
This is also the first time GB have won more than one gold medal at a single Winter Olympics.
I'm extremely new to all things personal finance and keep seeing recommendations to contribute to my 401k up until the company match.
If my company match is 5% and I make $100k, would it be recommended to contribute only $5k pre-tax to my 401k?
What's the pros/cons of maxing out my 401k vs investing elsewhere?
Thanks for your help! Just a newbie trying to better understand finances and prepare for retirement.
So I am working on a very basic project (or at least I thought it was simple). The client wanted an automation that scrapes business leads from Google Maps, so I thought of using SerpApi for it. The problem I am facing: if I want 40 leads, it only scrapes 20; the pagination and the "need more pages?" node are not working, leaving me with only 20 leads at a time. Please help me...
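For context, this is what I'm effectively trying to reproduce outside n8n: SerpApi's Google Maps engine returns up to 20 local results per call and pages with a start offset in steps of 20. A minimal Python sketch of the idea (query and API key are placeholders):

```python
# Minimal sketch: page SerpApi's google_maps engine manually.
# Each call returns up to 20 local_results; "start" offsets in steps of 20.
# Query and API key are placeholders; location is baked into the query here.
from serpapi import GoogleSearch

leads = []
for start in (0, 20):  # two pages -> up to 40 leads
    page = GoogleSearch({
        "engine": "google_maps",
        "q": "plumbers in Austin, TX",
        "type": "search",
        "start": start,
        "api_key": "YOUR_SERPAPI_KEY",
    }).get_dict()
    leads.extend(page.get("local_results", []))

print(len(leads))
```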
how can i recreate this?
Recently came across sirchmunk, which seems to be a refreshing take on information retrieval, as it skips the embedding pipeline entirely.
It works on raw data without the heavy lifting of embeddings. Compared to other embedding-free approaches such as PageIndex, sirchmunk doesn't require a pre-indexing phase either; instead, it operates directly on raw data using Monte Carlo evidence sampling.
It does require an LLM to do "agentic search", but that seems surprisingly token-efficient: the overhead is minimal compared to the final generation cost.
From the demo, it looks very suitable for retrieval from local files/directories, potentially a solid alternative for AI agents dealing with fast-moving data or massive repositories where constant re-indexing is a bottleneck.
The baby. He was a little Bigger.
I’m an electronic music producer working in more niche styles like bloghouse, hyperpop, and digicore. I’m trying to find active communities (Discord servers, forums, smaller subreddits, etc.) that are really focused on these kinds of sounds and sharing projects.
I’m not looking for huge generic EDM servers full of promo spam, but more tight-knit communities where people actually share music, give feedback, talk about production, and help each other out in their music journeys.
If you’re part of any good servers or forums in that lane, I’d really appreciate it if you could point me in the right direction 🙏
Thanks in advance!
Hello!
I did some llama-bench tests on the ik_llama.cpp fork (it has SOTA quants like iq4_kss and others, and is faster on prompt processing in both CPU-only and CUDA + CPU modes).
Step 3.5: 529 tk/s on prompt (16k), 30 tk/s on text gen (4k)
(batch size 2048 instead of 4096 gives 300 tk/s on prompt)
Step 3.5 is a GREAT model, very nuanced, but the thinking time and token consumption are crippling (up to 10k-20k tokens of thinking with all the details).
I didn't want to wait as long as the five repeats used with Step 3.5, so I ran only two repeats for MiniMax M2.5: 470 tk/s on prompt (16k), 26.5 tk/s on text gen (4k)
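(For anyone reproducing these numbers, an invocation along these lines matches the setup, with the model path as a placeholder: llama-bench -m /path/to/model.gguf -p 16384 -n 4096 -b 2048 -r 2. The -p/-n sizes mirror the 16k prompt / 4k generation above, and -r sets the number of repeats.)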
With new models able to perform at the level of the top paid models, I'm starting to have a feeling of freedom.
I invite everyone to discuss the new models and the methods and optimizations for running them locally!
I didn’t attend his second wedding, although I did watch it through the camera of a Predator Drone as it circled the tent overhead, ready to release the Hellfire missile.
Hey everyone!
Like many of you, I set up headless Raspberry Pis often, and I got tired of manually creating wpa_supplicant.conf.
So, I built a modern, open-source tool to automate the whole process directly from Windows.
Introducing: Raspberry Pi Boot Configurer
Key Features:
– Writes the config straight to the boot partition, and the tool detects existing configs.
– Creates a user (pi or custom) with a properly hashed password (SHA-512), just like the official imager but faster for tweaks.
It's completely open-source and comes with a standalone .exe.
Screenshots :
Link to Project & Download: GitHub Repository & Release
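"Properly hashed" on the Wi-Fi side, for the curious: wpa_supplicant accepts a pre-derived PSK instead of the plaintext passphrase, and the derivation is plain PBKDF2. This sketch mirrors what wpa_passphrase does (an illustration, not the tool's exact code):

```python
# Derive the 256-bit WPA-PSK that wpa_passphrase would emit:
# PBKDF2-HMAC-SHA1 with the SSID as salt, 4096 iterations, 32 bytes.
import hashlib

def wpa_psk(ssid: str, passphrase: str) -> str:
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode(), ssid.encode(), 4096, 32
    ).hex()

print(wpa_psk("MyNetwork", "correct horse battery staple"))
```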
I’d love to hear your feedback or feature requests! (used AI to generate the content since my eng is bad)
I’m not going to boast, but my wife and I do very well with 2 kids in coastal CA. We are both in our mid-30s. All of our friends in the same boat are also doing extremely well here: college grads, homeowners, great careers, retirement, savings, etc.
Drop your wins below!
Hi everyone!
I just wanted to share my latest poster. With Artemis II approaching, I felt inspired to create a poster to mark the moment.
I grew up fascinated by the classical sci-fi movie posters of the 1970s, especially the original Star Wars era artwork. I wanted to echo that visual language while honoring this new chapter of human spaceflight.
I’m a dual French–Canadian artist training to become an astronaut-scientist, and of course for me Artemis represents something deeply meaningful: international collaboration in pursuit of a shared objective.
The piece is handmade and was created using a mix of traditional media: airbrush, watercolor, ink, pencils, colored pencils, ecoline, and poster color.
Thanks for looking!
I apologize for the watermarks and low quality, unscrupulous people tend to "borrow" my work (especially these days...). So if anyone offers you prints here, it's not me.
I love Claude. It's my favorite AI tool. But I just got destroyed by the pricing model.
I'm on Claude Max (£200/month). I used Claude Code for development work and:
That's £400 in 3 days as a single individual developer.
I wasn't running it 24/7 or abusing it. I was just coding. Using it as intended.
"Extra usage" charges you full API rates ($3-25 per million tokens). Claude Code burns through tokens insanely fast because it sends your entire codebase with each request.
The app warns you're "approaching limits" but never says "this will cost you £50 in the next hour."
There's zero real-time cost visibility.
Meanwhile:
Honestly? I'd rather have ads. Give me:
Instead of surprise £200 bills that make me choose between finishing my project or moving to another sustainable AI.
Anthropic, please:
I've contacted support. But this is a product design problem, not just my problem, and all I get through their chat is "check your logs." Thanks!!!
I can't afford to use the tool I love anymore, and that genuinely breaks my heart.
Context: I'm on Max 20x - the HIGHEST individual tier. It has weekly limits that are supposed to last 7 days. I burned through them in 3 days of normal development work. Without extra usage enabled, I literally cannot work. With it enabled, I'm on track for £4,000/month bills.
#AI #DeveloperTools #SaaS #Pricing #ProductFeedback
#ArtificialIntelligence #TechIndustry #SoftwareDevelopment
Hi everyone!
I’ve always felt that the SaaS world is a bit 'homeless'—we are everywhere, but we don't have a shared space to see each other. So, I built StartupsAtlas.
It’s not just a map; it’s a way to claim your spot in the ecosystem. I wanted to create a visual home for our projects, where you can pin your startup and see who else is building nearby or on the other side of the world.
I’m doing this for fun and to help us discover each other. You are all invited to join and pin your project!
AI agents are becoming more than just chatbots.
Instead of only answering questions, they can now take actions like replying to emails, booking meetings, qualifying leads, updating CRMs, or even handling support tickets automatically.
For small businesses and startups, this feels like hiring a digital employee that works 24/7 without breaks.
Are you using any AI agents in your workflow yet? What’s actually working vs just hype?
I've used Opus 4.5 for a lot of front-end (macOS + web) and it's been generally quite good.
Now, for the past few weeks I've been trying to do the same with Opus 4.6 and it just seems to have a terrible eye. The UX it creates is really rough, not even attempting to blend in, and it makes weird mistakes like creating collapsible structures that are collapsed by default, even when there are mandatory fields in there to fill.
Any help in getting it closer to that one-shot magic would be great, considering going back to 4.5 since results now are really frustrating.
A knowledge graph seems like the best way to link AI diffs to structured evidence, to mitigate hallucinations and prevent the duplication of logic across a codebase. The idea behind KGs for agents is, rather than an agent reconstructing context at runtime, they use a persistent bank that is strictly maintained using domain logic.
CLI tools like CC don't use KGs, but they use markdown files in an analogous way with fewer constraints. What do people here think- are there better approaches to agent orchestration? Is this just too much engineering overhead?
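For concreteness, this is the kind of structure I mean (a toy sketch with networkx; every name is invented):

```python
# Toy sketch: a persistent graph linking an AI-generated diff to evidence,
# so an agent can query "why does this code exist?" instead of guessing.
# All node names are invented for illustration.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("diff:4821", "func:calc_tax", relation="modifies")
kg.add_edge("diff:4821", "ticket:TAX-103", relation="justified_by")
kg.add_edge("func:calc_tax", "spec:tax_rules.md#rounding", relation="implements")

# Agent-side lookup before proposing a new change:
evidence = [(u, v, d["relation"]) for u, v, d in kg.out_edges("diff:4821", data=True)]
print(evidence)
```

The markdown-file approach gets you the persistence but none of the queryability or the constraint enforcement.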
I'm 27f and single but currently dating. I want to know more about men and their secret sexual fantasies or desires in bed. I have many of my own and want to know what it is that men desire but do not often share with women. I like it far rougher than most females and have experienced several men that do too and it shocked me how rough it was (i liked it). But is this common with men? Do you guys have secret fantasies you really want to try and either have/have not?
The Chelyabinsk meteor was a superbolide that entered Earth's atmosphere over the southern Ural region in Russia on 15 February 2013 at about 09:20 YEKT (03:20 UTC). It was caused by an approximately 18-meter (60 ft), 9,100-tonne (10,000-short-ton) near-Earth asteroid that entered the atmosphere at a shallow 18‐degree angle with a speed relative to Earth of about 19.2 km/s (68,980 km/h; 42,860 mph).
The light from the meteor was briefly brighter than the Sun (which is about -26.7 magnitude), visible as far as 100 kilometers (62 miles) away. It was observed in a wide area of the region and in neighbouring republics. Some eyewitnesses also reported feeling intense heat from the fireball.
The object exploded in a meteor air burst over Chelyabinsk Oblast, at a height of about 30 kilometres (18.6 miles). The explosion generated a bright flash, producing a hot cloud of dust and gas that penetrated to 26 kilometres (16 mi), and many surviving small fragmentary meteorites. Most of the object's energy was absorbed by the atmosphere, creating a large shock wave.
The asteroid had a total kinetic energy before atmospheric impact equivalent to the blast yield of 400–500 kilotonnes of TNT (1.7–2.1 petajoules), estimated from infrasound and seismic measurements. This was approximately 30 times as much energy as that released by the atomic bomb detonated at Hiroshima.
Credit: Aleksandr Ivanov
Willing to tip, thank you! 😊
Hi! I’m 24yo and looking to be smart with my savings. Is there a high yield savings account that I can withdraw from in a few years?
I’m aware of the Roth IRA and 401k, but with these the money is locked in the account until retirement, right?
I’m looking to save money that I can withdraw in about 5 years versus leaving it sitting in my personal savings account where it’s not growing.
Thank you!
LPT: Don’t wait until your cat has mats. Everyone searches "cat matted fur removal tips" after it’s already bad. I did too! Here’s the thing nobody frames properly: mats aren’t a grooming failure, they’re a consistency failure. What worked for me wasn’t some magical tool. It was this: brush during something they already enjoy.
Mine gets brushed while half-asleep after dinner. Two minutes. Done! No dramatic grooming sessions, no wrestling, no "spa day."
Also: if you feel a tiny knot, deal with it immediately. Small mats are easy. Big ones become a whole event. We overcomplicate pet care. Most problems are just small neglect stacking over time. What tiny maintenance habit saved you long term?
Sunday, February 08 - Saturday, February 14, 2026
It reminded me of my teenage days, when I used to see other teenagers kissing in the park.
If you’re searching for the highest Sintra AI coupon code, the current maximum verified discount is 85% OFF.
I tested it today and it applied instantly at checkout.
Details are in the top comment to avoid spam filtering.
Basically a meme that's in a non-English language but you can clearly understand it through context.
I want to run a model that will actually hold opinions. I tried a bunch of ways to manipulate an LLM, but I think I am terrible at it because I get told "I am an AI that generates human-like responses".
I just want to talk to a computer like i do to a normal person
I have a list of 10,000 contact numbers (a mix of mobile and landline numbers, so texting would not work). How can I efficiently contact them in bulk using AI? Any suggestions?
I'm not trying to scam, trying to efficiently categorize a set of data for company research.
It had a dear face and a bare bottom.
Original video (in English) generated with Seedance 2.0, then dubbed with LTX-2 dubbing LoRA to French.
NO masking, NO voice-cloning is needed. JUST one pass.
link to original video: https://x.com/NotAnActualEmu/status/2021568393120489824
link to the code: https://github.com/justdubit/just-dub-it
What other examples do you want to see?
I built TexGuardian — an open-source CLI that uses Claude to help researchers prepare LaTeX papers for conference submission.
It connects to Claude via AWS Bedrock or OpenRouter and uses it to run reviews and fixes over your paper (e.g. /review full). The key difference from just pasting into Claude: TexGuardian reads your full .tex and .bib files, understands LaTeX structure, generates proper unified diffs, and has checkpoint safety so you can always roll back.
Default model: Claude Opus 4.5. Also supports Claude Sonnet 4, GPT-4o via OpenRouter, or any model on openrouter.ai.
26 slash commands covering verification, LLM-powered fixes, anonymization, camera-ready prep, and more.
pip install texguardian
After a lot of tweaking and some AI help, I wrote a pretty decent Python script to import a KNX project file into HA, create devices, and give them pretty names. Maybe someone can use this.
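The heart of it is just parsing the project file and prettifying names. A stripped-down sketch of that step, assuming the xknxproject library (one way to parse .knxproj; the path, password, and naming rule are placeholders, and my full script does more):

```python
# Stripped-down sketch of the parsing step using the xknxproject library.
# Path, password and the "pretty name" rule are placeholders.
from xknxproject import XKNXProj

project = XKNXProj(path="house.knxproj", password=None).parse()

for address, ga in project["group_addresses"].items():
    pretty = ga["name"].strip().title()  # simplistic pretty-name rule
    print(f"{address} -> {pretty}")
```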
Hi, new to this group. Has anyone let an audio recorder (such as Audacity) run for several hours and then analyzed the results? If you've done the analysis, have any spirits come through? I have an old laptop with plenty of storage space and would like to try this. Audacity has a feature where you can eliminate long stretches of silence and isolate any audio that gets recorded, which sounds ideal for this purpose.
Is there an API for Claude Pro usage? I could only find the info for API usage.
Maybe a little off subject, but I was curious if any of you guys film your hunts, and if so, what camera would you recommend or stay away from?
I was thinking a 360 style may work great but would love to hear opinions.
Hiya. A few months ago, I was looking for a place to practice speaking languages with real people, but couldn't find one. So, I decided to build my own platform called Hovorly with the help of a friend. It's pretty simple: you pick your language and level, press a button, and we match you for a 7-minute audio (or video, if both users agree) call with another learner.
Hope you find it helpful: hovorly.com
I recently finished the first version of mathwibe.com and I am looking for feedback on the core concept. Most AI math tools today are basically just fancy calculators: they show the solution, the student copies it, and no actual learning happens. I wanted to build something that does the opposite. Instead of dumping a full explanation on the user, my tool forces the student to drive the problem-solving process.
The AI acts like a coach that stays one step ahead. It guides the student with hints and prompts for the next step, but it won't move forward or give the final result until the student actually works through the logic themselves. The goal is to bridge the gap between "getting help" and "cheating" by ensuring the student is the one doing the heavy lifting. I'm really curious to hear from this community if this approach feels more useful than the standard AI solvers that dominate the market right now. If you have a minute to check out the flow, I’d love to know if the interaction feels encouraging or if it’s too frustrating for a student who is truly stuck.
We’re in the annual Winter Break for Bob's Burgers. The air date for the continuation of Season 16 has not yet been announced.
As a reminder, seasons now have 15 episodes in total, so there should be 5 more new episodes left in this season, and they may not air back to back. Even though there isn’t a new episode this week, that doesn’t mean there’s nothing to talk about!
Be nice, respect others opinions, and have fun.
Check out the rest of FOX's Animation Domination at the following subreddits.
But will it actually leave this time? I really hope not!
Basically, I’ve spent the last 11 months working on something that caused me real pain: LLMs running out of context because I needed them to read so many files.
So I created OCTAVE, which is a structure + compression layer that makes AI coding workflows more reliable and cheaper.
But I’m all alone: a solo developer not in the industry. And because I built this organically and not from a traditional dev background, I seem like a crackpot, as I have no validation. I just need anyone with AI experience to give me at least some feedback. Bad stuff welcome, as friction is the only way you grow.
Can anyone look at https://github.com/elevanaltd/octave-mcp and give me any pointers.
It works for me (it took a 1M-token API manual and turned it into a better API reference matrix totaling 100k tokens), so I know it works. But I think my biggest issue is that I don’t know what to do with it to help others use it.
Hi everyone 👋🏻
I’ve spent the last 6 months oscillating between "this is a great idea" and "why am I even doing this?" but today I finally hit the launch button on Pala - Productivity Planner.
I built Pala because I felt like most planners were either too complicated or just... soulless data entry. I wanted something that felt like a calm workspace rather than a stressful "to-do" factory. It’s been a massive labor of love, and seeing it live on Product Hunt today feels surreal.
If you have a spare moment to check it out and leave some feedback (or an upvote if you like what you see 🌚🎀), it would honestly mean the world to a solo dev like me.
Product Hunt link: https://www.producthunt.com/posts/pala-productivity-planner
Thanks for letting me share my "baby" with you all.
I've been building an offline-first app with p2p sync as a side project. The kind of thing where two devices can edit data independently while disconnected and everything merges when they reconnect. I plan to monetize it, so data integrity is non-negotiable.
Last week I had all my tests passing and was feeling good. I was using the app and it was syncing between my 2 laptops on different networks (I have multiple networks in my home, one with Optimum, the other with Verizon @ Home). I noticed something small in the sync: some minor missing things. Checked the logs; they looked clean. I didn't think I was just seeing things, so I sat down, planned, then wrote adversarial test suites: 33 tests simulating nasty real-world network conditions and 20 full end-to-end tests that spawn actual P2P processes on localhost.
They found four bugs that would have shipped to users and one that was causing that little itch that I felt something wasn't right:
None of these showed up in unit tests. They only appeared when I simulated things like:
After 20 years of engineering I've seen production incidents caused by similar kinds of bugs. Writing the adversarial tests took a few days. Debugging them in production with angry users would have taken a lot longer. If your side project touches sync, payments, or anything where silent failures mean data loss, write at least a few tests that try to break it under realistic conditions (Even if it means investing time to build up test infrastructure for that specific purpose). It's the highest-ROI testing you can do as a solo dev.
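To make that concrete: even a toy adversarial test like the one below catches real convergence bugs. It hammers a last-writer-wins register with reordered and duplicated deliveries (a sketch, not my actual suite):

```python
# Toy adversarial test: a last-writer-wins register must converge
# no matter how the network reorders or duplicates operations.
import random

def merge(state, op):
    # op = (timestamp, value); keep the newest write
    return max(state, op)

ops = [(t, f"v{t}") for t in range(50)]
expected = max(ops)

for trial in range(1000):
    delivered = ops + random.choices(ops, k=10)  # inject duplicates
    random.shuffle(delivered)                    # reorder deliveries
    state = (-1, "")
    for op in delivered:
        state = merge(state, op)
    assert state == expected, f"diverged on trial {trial}: {state}"

print("converged under reordering and duplication")
```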
Hello everyone. I recently started using llama.cpp; previously I used ollama. I have a Ryzen 7700X + 64 GB DDR5-6400 + a 16 GB 5070 Ti. In the BIOS I use the EXPO profile so that the memory works with optimal timings and frequency. I also set the Infinity Fabric frequency to optimal.
I use Ubuntu, the latest version of llama.cpp and the Unsloth/Qwen3-Coder-Next-MXFP4 model with 80k context.
After a recent update of llama.cpp, the token generation speed increased from 35-41 t/s to 44-47 t/s. I check the speed when generating a response inside VS Code using Cline. I open the same repository and ask: "What is this project?".
The command to run is:
/home/user/llama.cpp/build/bin/llama-server -m /home/user/models/Qwen3-Coder-Next-MXFP4_MOE.gguf -c 80000 --temp 1.0 --top-p 0.95 --top-k 40 --min-p 0.01 --jinja --fit on -np 1 --no-webui
I really like the combination of the current speed and the intelligence. But what other settings can I check/change to make sure I'm getting the most out of my current PC?
Thank you in advance for your answer!
Hey r/sideproject,
I'm a dad and a staff software engineer by day, and I just shipped my latest side project — a weekly family planner called Homeboard.
The idea is dead simple: a visual board where you see your whole family's week at a glance. You add family members, drag them onto a time grid, set where they're going and what they're doing. Print it for the fridge if you want. Done.
I built it because planning the family week shouldn't require a SaaS subscription, a shared Google account, or an app that sends you 15 notifications a day. Homeboard is a single HTML file — you open it in your browser, everything stays on your machine, works offline. No account, no cloud.
It's free right now. I'm not trying to maximize sales at this point — I'm trying to get it in front of actual parents who deal with the weekly chaos of school, activities, routines, and two schedules that never quite align.
What I really need is feedback:
- Does this actually solve a problem you have, or is it a solution looking for one?
- What's the first thing you'd want to change or add?
- Is the weekly view enough, or do you need monthly?
- Would recurring events (same thing every Tuesday) be a game changer or a nice-to-have?
Here's where I need your help: I can't post this on parenting subreddits because of self-promotion rules, but that's exactly the audience I need to hear from. If you're a parent and this resonates, or if you know someone who'd find it useful — I'd be grateful if you shared it. Even just showing it to one friend who's always juggling the family calendar would mean a lot.
Link in comments. Thanks for any feedback!
Welcome to week eight of eight of the Reddit Strength program. I hope you've all been enjoying it so far. If you missed any previous weeks, this is something you can jump into any given week; you don't need to have been there week one for it to work. This strength schedule is designed to help you progress from one weight to the next (if that is what you want; you are of course free to maintain the weights you use as well). There is a #RedditStrength tag for those interested in joining it.
Wait, you just said eight of eight; does this mean it is ending? No, it will just loop back around again. Next week we will go back to week 1's schedule, with a couple of substitutions based on feedback, any purged classes, and the desire to occasionally mix in a new class.
Each class will list "light" or "heavy" (or "benchmark", see below); this is not to be confused with the Light/Medium/Heavy weights instructors ask you to use for each move. What it means is: let's say you normally use 5/10/20 but you are hoping to level up to 10/15/25. When my class says "light" you would use your 10s when the instructor calls for medium weights, and when my class says "heavy" you would use your 15s for the same move. This will help us progress to where the 15s are your new normal.
Monday:
Wednesday:
Friday
Extra Credit:
Benchmark: Every 4 weeks each body area has a Benchmark class. These are from the Benchmark Strength collections. By repeating these classes you are able to monitor and record your progression from cycle to cycle. You are encouraged to adjust the weights and reps to meet your personal goals.
Extra Credit: These are extra classes for those who want more work than the standard schedule. They are also good substitutions if for any reason you don't like a class or instructor on the schedule.
You can stop reading here unless you want to know why I am scheduling what I am scheduling.
The basis is a 4 week progression. In each body area we will use light weights for 2 weeks then heavy weights for two weeks per the definition of light and heavy above.
It follows this pattern, aiming for 45-60 minutes a day 3 days a week:
Each body area has the schedule staggered, so you're not trying to go all heavy in every area in the same week. While you are going heavy with LB you will go light with UB and vice versa.
FAQ
Feedback
I would love to hear what everyone thinks. Specifically: are there any classes you would like me to rotate out (and why)? Is this the right amount of strength work for you? Do you have a favorite class you are hoping I work into the mix?
Hello,
I've built a very basic phone with the ability to receive calls, make calls, send and receive SMS, plus a contact book and a settings feature. In settings you can choose from 4 languages (English, German, Croatian, and Romanian) and toggle the clock on/off.
these are my components:
- Raspberry Pi Zero 2W (Raspbian installed)
- A7670E LTE Cat-1 HAT (with LBS)
- U.FL Antenna
- Solar Power Manager Module (D)
- Battery Tray
- 3x18650 Batteries
- ILI9488 3,5" Touch Display (I took from another project of mine)
- USB-A-to-C Adapter
- An old headset from a Nokia phone with 3.5mm jack (to make and receive phone calls)
For the screen I use the ili9488_adafruit.py library. For the LTE module I use the serial library and AT commands. For the screen and touch controller I use two SPI buses because I didn't want to use a breadboard to wire them to the Raspberry.
Some tips: your ILI9488 3.5" display may have either an XPT2046 or an HR2046 (a copy of the XPT) touch screen controller. Use a small self-written GPIO wrapper class with ili9488_adafruit.py if you run into display issues. Calibrate your touch screen with a helper script. Disable Bluetooth and enable SPI1 in config.txt, free up the serial port, and add the user who executes the Python entry script to the dialout group.
I've also used the A7670E LTE Cat-1 HAT in another project with an ESP, and it can act as a modem if initialized as one. If you drive it directly with AT commands, you can make direct HTTPS calls if you need to (this can be very useful) and much more. It's also a pretty cheap LTE module, about $39.
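If you want to try the AT-command route yourself, the serial side is only a few lines (a minimal pyserial sketch; the port, baud rate, and phone number are placeholders):

```python
# Minimal sketch: send an SMS through the A7670E with standard AT commands.
# Serial port, baud rate and phone number are placeholders.
import time
import serial

ser = serial.Serial("/dev/ttyS0", 115200, timeout=2)

def at(cmd: str, wait: float = 0.5) -> str:
    ser.write((cmd + "\r").encode())
    time.sleep(wait)
    return ser.read_all().decode(errors="ignore")

print(at("AT"))              # module should answer OK
at("AT+CMGF=1")              # SMS text mode
at('AT+CMGS="+40700000000"') # placeholder number
ser.write(b"Hello from the Pi phone\x1a")  # Ctrl+Z terminates the message
```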
Looking at these two projects today is a trip. On the left, ELIZA showed us how a few scripts could mirror human emotion well enough to trick our brains. On the right, Shakey was the first "mobile intelligent agent" to reason about its physical surroundings.
We often think agents are a 2024 phenomenon, but the DNA of LLM-reasoning and robotic navigation has been evolving for over 60 years.
Discussion point: If you could give Shakey a modern LLM "brain," or give ELIZA a physical body back then, how much faster would the field have moved?
Lately I’ve been thinking about how weird this cycle has become.
First, we use AI to draft.
Then we run it through a detector.
Then we tweak or humanize it to reduce the AI score.
It’s like we built a system… and now we’re optimizing against our own system.
What’s interesting is this: once you understand why detectors flag text (high predictability, uniform sentence rhythm, overly clean structure), you start noticing those same patterns in your own writing — even when you didn’t use AI.
Out of curiosity, I tested a few drafts and refined them using “aitextools” just to see how structure changes affect detection. After small adjustments in flow and variation, the AI score dropped significantly — sometimes close to 0%.
Not because the ideas changed.
But because the rhythm did.
That’s the part people miss. It’s not just about “AI vs human.” It’s about statistical patterns.
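You can even eyeball one of those patterns yourself. Here's a crude sketch measuring sentence-rhythm variation (real detectors use far more signals; this is just the intuition):

```python
# Crude "burstiness" proxy: how much sentence lengths vary.
# Uniform rhythm (a low score) is one pattern detectors pick up on.
import re
import statistics

def rhythm(text: str) -> float:
    lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]
    return statistics.stdev(lengths) / statistics.mean(lengths)  # higher = burstier

print(rhythm("Short one. Then a much longer, winding sentence follows it. Tiny."))
```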
Now the bigger question:
Are we improving writing quality… or just learning how to outplay detectors?
Curious how others here are navigating this.
In 1995, Edward’s Show was pitched as a simple moral cartoon.
“The Height” was supposed to be about overcoming fear of tall places.
The episode starts bright.
The characters visit a tall watchtower near their school. They joke. They laugh. One character says they’re scared of heights, and everyone gently teases them.
Nothing feels wrong.
But halfway through, the colors get slightly duller.
The sky isn’t as blue.
The music becomes thinner.
The character who was afraid of heights climbs the tower anyway.
The animation slows.
When they reach the top, they look down — and instead of a detailed ground, it’s just a flat gray texture.
They turn around.
The other characters are not reacting.
They’re just… watching.
Then the character slips.
The fall isn’t shown clearly. The frame cuts mid-motion.
There’s no sound.
When the camera returns to the group, they’re standing around something just off-screen.
No crying. No screaming.
Just silence.
For the rest of the episode, every character moves slower. Their smiles never return.
The editor said the final frame shows the tower empty.
And for one second, all the characters turn their heads at the same time — toward the viewer.
Production notes labeled it:
“Emotionally inappropriate for broadcast.”
It never aired again.
Who's there?
Jacob Marley.
CRATCHIT!!! Quick, call off the funeral!
I have tried so much online, but this is the one. Just sharing what’s worked. With a few survey apps, I earn $400–$600 every month without doing anything stressful. It’s become a nice side income. I even have proof if you want.
These are the exact apps I’m using: AttaPoll
https://attapoll.app/join/qvkmx
It pays via bank or paypal.
They’re legit, they pay, and you get bonuses for joining; with this link you get $0.50. If you want to get the most out of them, I can show you what I do. I also have proof with pictures if you want.
Hey everyone,
Hoping ya’ll can help.
I remember a creepypasta from when I was younger which I am DESPERATELY trying to find. It involved a room, stairs (old/abandoned, I think?) and a group of friends.
Unfortunately I can only remember the ending and I know that probably makes it completely useless but I am HOPING someone else saw this years ago.
The ending sort of went like this:
The protagonist returns to the staircase, where he goes down to find the room he was originally in with his friends. The room stunk, I believe, and when he entered he discovered one of his friends looking absolutely destroyed: malnourished and disgusting. The other friend had quite clearly been eaten, which caused the protagonist to run out. I think he may have called the authorities?
Unfortunately that's all I can remember, and even that may not be enough, so I apologise.
If anyone can help I would really appreciate it.
I wanted to paint this delicious, almost electric feeling when you touch the hand of your date for the first time :-D
Hi everyone. I’m planning to build/buy a PC within the next ~6 months (it’s a gift, so the timing isn’t fully up to me). I want to use it for both gaming and local AI/LLM projects.
I’m currently choosing between:
My environment / goals:
Questions:
Any advice from people who’ve used these GPUs for local LLMs would be appreciated.
With all the technological advancements we've had since 1969, why does the Artemis mission feel so much more complex and time-consuming? I understand the safety standards are higher now, but I'd love to hear from the experts here—what are the biggest technical or political hurdles we are facing today that didn't exist during the Space Race?
You can choose their gender and appearance, the werewolf is not a wolf while you are doing it. Also why?
Hey all,
I’ve been experimenting with building a local-first AI desktop that works with self-hosted gateways and local LLM setups.
Instead of another browser chat UI, this project explores an NPC-style desktop interface where agents, games, and document workflows live together.
Most local LLM tools feel like wrappers around chat.
I wanted to try something closer to a local AI environment — almost like an experimental AI desktop.
It’s still very much a playground, but I’m curious what people here think about the NPC + agent interaction direction.
Repo & demos:
👉 https://github.com/stormixus/openClaw-Desktop
Feedback welcome — especially from anyone running Ollama / local gateways.