myflash13 3 days ago [-]
I've been planning to build something like this for a while now (just for myself). Love the planning workflow, will likely steal that idea.

But code review is more than just reviewing diffs. I need to test the code by actually building and running it. How does that critical step fit into this workflow? If the async runner stops after it finishes writing code, do I then need to download the PR to my machine, install dependencies, etc., to test it? That's a major flow blocker for me; it defeats the entire purpose of such a tool.

I was planning to build always-on devcontainers on a baremetal server. So after Claude Code does its thing, I have a live, running version of my app to test alongside the diffs. Sort of like Netlify/Vercel branch deploys, but with a full stack container.
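That per-branch container idea can be sketched with plain Docker (a rough sketch; the `app` image name and the `deploy_branch` helper are invented for illustration, not part of Async or any existing tool):

```shell
# Hypothetical sketch of a branch deploy: build and run one container per
# git branch, so every agent-written branch gets a live instance to test.
deploy_branch() {
  branch=$(git rev-parse --abbrev-ref HEAD)

  # Build an image tagged with the branch name.
  docker build -t "app:$branch" .

  # Replace any previous container for this branch, then run the new one
  # with published ports so it is reachable for manual testing.
  docker rm -f "app-$branch" >/dev/null 2>&1 || true
  docker run -d --name "app-$branch" -P "app:$branch"
}
```

Pointing a wildcard DNS entry at each container would get you the Netlify/Vercel-style preview URL on top of that.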

Claude Code also works far better in an agentic loop when it can self-heal by running tests, executing one-off terminal commands, tailing logs, and querying the database. I need to do this anyway. For me, a mobile async coding workflow needs to have a container running with a mobile-friendly SSH terminal, database viewer, logs viewer, lightweight editor with live preview, and a test runner. Diffs just don't cut it for me.

I do believe that before 2025 is over we will achieve the dream of doing real software engineering on mobile. I was planning to build it myself anyway.

wjsekfghks 3 days ago [-]
Completely agreed. The first version of our app was on mobile. We implemented preview deployment for frontend testing (and we were going to work on backend integration testing next). But yeah, without a reliable way to test and verify changes, I agree it's not a complete solution. We are going to work on that next.

FYI, our initial app demo: https://youtu.be/WzFP3799K2Y?feature=shared

arjun810 2 days ago [-]
We had exactly the same desire and built it as well, with a nice mobile UI and live app previews. Would love to get your feedback — let me know how to contact you if you’re curious.
myflash13 2 days ago [-]
Would love to check it out! interpreterslog-removesuffix@protonmail.ch
selinkocalar 19 hours ago [-]
The hard part with tools like this is maintaining context across different data models. GitHub PRs, Linear tickets, and LLM conversations all have different information architectures. Are you doing any semantic linking between related items, or just surface-level aggregation?
wjsekfghks 16 hours ago [-]
We use a single model called Task across all three surfaces. Each surface does have a different information architecture: Linear tickets have correspondence and comments, LLM conversations have chat history, and code reviews have diffs and comments. But at the end of the day, all of that information exists to produce the correct code, and that is what we focus on.
7thpixel 1 days ago [-]
If you'd like some feedback, I ran this through my algo and analyzed what is unclear and potentially risky as you bring this forward:

What's unclear:

Exact AI coding capabilities, free tier limitations, and the revenue model beyond the hosted version.

Risky Assumptions:

- Users will find the UX/UI sufficiently intuitive for immediate adoption.

- Companies will see ROI in reduced dev time/cost when using Async.

- The AI agent can clarify requirements accurately on a variety of tasks.

Hope this helps!

wjsekfghks 20 hours ago [-]
Thank you so much for the feedback! Could you elaborate more on the point about UX/UI? We are trying to see what we can do to make onboarding and issuing the first task as easy as possible. I'd love to hear your insights on that.
reilly3000 3 days ago [-]
Thumbs up for dark mode. I really want to love this but I can’t get over the idea of paying GCP to have cloud run clone my repo over and over again every time I interact with Async. I’m still going to try it, but I think I’d rather rent a VM and just have it be faster. This is coming from someone who deals with big fat monorepos, so maybe it’s not that bad for the average user.
wjsekfghks 16 hours ago [-]
We are trying to run execution locally using the local Claude Code.
chis 3 days ago [-]
Great pitch, you've articulated the pain point super well and I agree with it.

I have personally had no luck with prompting models to ask me clarifying questions. They just never seem to think of the key questions, just asking random shit to "show" that they planned ahead. And they also never manage to pause halfway through when it gets tough and ask for further planning.

My question is how well you feel it actually works today with your tool.

dbbk 2 days ago [-]
Interesting you say that. My workflow is just to use Claude Code with Opus in Plan mode, have it write a plan, and ask "What clarifying questions do you have for me" and it always prompts me to answer very good questions.
wjsekfghks 3 days ago [-]
Honestly, it's not there yet, and I'm iterating to make it better and more consistent. But I've had a few moments where it got the questions and the implementation right, and it felt magical. So I wanted to share it with more people and see how they like the approach.
Terretta 3 days ago [-]
> Show HN: Async – Claude code and Linear and GitHub PRs in one opinionated tool

Sadly, this seems inaccurate. Appears to be Claude Code and GitHub PRs, but not Linear.

It should be Linear, since Linear does an extraordinary number of useful things beyond "issue list".

Since it seems to have nothing to do with Linear, I'm surprised the headline says it's those three things, by trademarked brand name.

Speaking of tracking tasks:

> Tracking sucks. I use Apple Notes with bullet points to track tasks...

Claude Code seems very good at its own "org mode": using a .md file outline and checklists to organize and track progress, as well as keeping an easy-to-leverage record.

It can also sync the outline-level items with GitHub issues, then plan and maintain checklists under them as it works, including the checklist items in commits and PRs, and even help you commit that roadmap outline snapshot at the same time, so you have progress through time as diffs...
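As a concrete illustration of that style (the file name and items below are invented, not from this thread), such a roadmap file might look like:

```markdown
# Roadmap

- [x] Auth: session handling (synced to a GitHub issue)
  - [x] login endpoint
  - [x] token refresh
- [ ] Billing: usage metering (synced to a GitHub issue)
  - [x] event schema
  - [ ] aggregation job
```

Committing this file alongside the code is what gives you that progress-through-time-as-diffs record.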

motoxpro 2 days ago [-]
I had the opposite reaction, in that it seemed exactly like those three things. Sans the years of development on each but the idea seems really clear. "Build Linear, but the person who does the work is Claude and the 'state' of the work (code) is git (GitHub)"
mmargenot 4 days ago [-]
I think this is a neat approach. When I interact with AI tooling, such as Claude Code, my general philosophy has been to maintain a strong opinion about what it is that I actually want to build. I usually have some system design done or some picture that I've drawn to make sure that I can keep it straight throughout a given session. Without that core conception of what needs to be done, it's a little too easy for an LLM to run off the rails.

This dialogue-based path is a cool way to interact with an existing codebase (and I'm a big proponent of writing and rewriting). At the very least you're made to actually think through the implications of what needs to be done and how it will play with the rest of the application.

How well do you find that this approach handles the long tail of little corrections needed before finally merging? Does it solve the fiddly stylistic errors on its own, or is it more that the UI / PR review approach you've taken makes them more ergonomic to solve?

wjsekfghks 4 days ago [-]
hey! that's awesome to hear, thanks for the feedback.

we've tried a lot of things to make code more in line with our paradigms (initially tried a few agents to parse out "project rules" from existing code, then used that in the system prompt), but have found that the agents tend to go off-track regardless. the highest leverage has just been changing the model (Claude writes code a certain way, which we tend to prefer vs GPT, etc.) and a few strong system prompts (NEVER WRITE COMMENTS, repeated twice).

so the questions here are less about that and more about overall functional / system requirements, acknowledging that for stylistic things, the user will still have to review.

pjm331 3 days ago [-]
Very cool! I’ve been building an internal tool at work that’s very similar but primarily focused on automatically triaging bugs and tech support issues, with MCP tools to query logs, search for errors in bugsnag, query the db etc. also using linear for issue tracking. They’ve been launching some cool stuff for agent integrations.

And sorry I’m a light mode fan

wjsekfghks 3 days ago [-]
Nice, are you building a linear app? I saw their recent post about integrating cursor, devin, etc into their platform.

And, light mode? I'm sorry, we can't be friends anymore

pjm331 3 days ago [-]
yup was building it as a linear agent https://linear.app/developers/agents
basic_banana 1 days ago [-]
It is pretty similar to async-code, but I guess Async is more like Linear, while async-code is more like a Codex cloud for Claude Code.

https://github.com/ObservedObserver/async-code

wjsekfghks 1 days ago [-]
cool, let me check it out
JoshPurtell 4 days ago [-]
Something I'd consider a game-changer would be making it really easy to kick off multiple claude instances to tackle a large researched task and then to view the results and collect them into a final research document.

IME no matter how well I prompt, a single claude/codex will never get a successful implementation of a significant feature single-shot. However, what does work is having 5 Claudes try it, reading the code and cherry picking the diff segments I like into one franken-spec I give to a final claude instance with essentially just "please implement something like this"

It's super manual and annoying with git worktrees for me, but it sounds like your setup could make it slick.
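Under stated assumptions (a git repo, the `claude` CLI on PATH, and `fanout`/the branch names invented here for illustration), the fan-out part of that workflow could be scripted roughly like this:

```shell
# Rough sketch: run the same prompt in N parallel git worktrees, one
# Claude Code instance per tree, then diff the attempts afterwards.
fanout() {
  prompt=$1
  count=${2:-5}

  i=1
  while [ "$i" -le "$count" ]; do
    branch="attempt-$i"

    # One isolated checkout per attempt, kept under .worktrees/.
    git worktree add -b "$branch" ".worktrees/$branch" >/dev/null 2>&1

    # -p runs Claude Code non-interactively; each attempt logs separately.
    ( cd ".worktrees/$branch" && claude -p "$prompt" >claude.log 2>&1 ) &

    i=$((i + 1))
  done
  wait
}

# Afterwards, cherry-pick the diff hunks you like, e.g.:
#   git diff main..attempt-3 -- path/of/interest
```

The final step (feeding the franken-spec to one last instance) stays manual, but the fan-out and collection become one command.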

wjsekfghks 4 days ago [-]
Interesting. So, do you just start multiple instances of Claude Code and ask the same prompt on all of them? Manually cherry picking from 5 different worktrees sounds complicated. Will see what I can do :)
JoshPurtell 4 days ago [-]
Yeah, exactly, same prompt.

I agree, it's more complex. But, I feel like the potential with a claude code wrapper is precisely in enabling workflows that are a pain to self-implement but nonetheless are incredibly powerful

frankfrank13 4 days ago [-]
Looks cool. tbh I think I'd be more interested in just a lightweight local UI to track and monitor Claude Code; I could skip the Linear and GitHub pieces.
wjsekfghks 4 days ago [-]
Thanks for the feedback. Yeah, that is where we are heading, as mentioned in the demo video. We will follow up shortly with a release of the local tool :)
ahinchliff 3 days ago [-]
I second this. I love the flow you are building but I want this to run locally :)
bjtitus 3 days ago [-]
I've really been enjoying the mobile coding agent workflow with [Omnara](https://omnara.com/). I'd love to try this as well with a locally hosted version.
wjsekfghks 3 days ago [-]
you can also give our mobile app a try :)
mgrandl 4 days ago [-]
Your docs on self-hosting are a bit light. Can you use the mobile app while self-hosting? That would be the main selling point for me.
brainless 3 days ago [-]
I love your video; it is very clear. I am building in this space, so I am very curious and happy about all the products coming in to fill the current tooling gap. What is not clear to me is how Async works: is it all local, or a mix of local and cloud? I see "executes in cloud" but then I also see a downloadable app.

I see a lot of information on API endpoints in the README. Perhaps that is not so critical to getting started. A `Getting Started` section would help, explaining what the desktop app is and what goes into the cloud.

I have been hosting online sessions for Claude Code; I have 100+ guests for my session this Friday. And after "vibe coding" full time for a few months, I am building https://github.com/brainless/nocodo. It is not ready for actual use, and I first want to use it to build itself (well, the core of it, to build the rest of the parts).

wjsekfghks 3 days ago [-]
To clarify, most of the execution (writing code or researching) happens in the cloud, and we use Firestore as the DB to store tasks. The app (both desktop and mobile) is just an interface to those backends. We are currently working to see if we can bring the majority of the execution local. Hope this makes it a bit clearer.
brainless 3 days ago [-]
Thanks for the clarification.

Does this mean that my codebase gets cloned somewhere? Is it your compute or mine, with my cloud provider API keys?

wjsekfghks 3 days ago [-]
If you use the app as is, it will be cloned to our server. If you choose to host your own server, it will be on yours.
brainless 3 days ago [-]
OK thanks.
rylan-talerico 3 days ago [-]
Super cool. Have been looking for something like this. Nice work!
wjsekfghks 3 days ago [-]
thank you :) let us know how it feels
k__ 4 days ago [-]
I hope it works better than GitHub Copilot Agent.
furyofantares 3 days ago [-]
> Traditional AI coding tools

I love this phrase :)

wjsekfghks 3 days ago [-]
:)
artur_makly 4 days ago [-]
Whats the benefit of cloud hosting it?
wjsekfghks 4 days ago [-]
The main benefit is that you can issue tasks on mobile. Initially we were just a mobile app, and when we decided to build a desktop version, we reused all the infra we had. We realized that for desktop, the cloud isn't necessary, so we are trying to migrate to local now.
saaammm 14 hours ago [-]
been working on building exactly this lol
chris_st 3 days ago [-]
thumbs down
wjsekfghks 3 days ago [-]
:( light mode gang