Teaching Claude to QA a mobile app

111 points by azhenley a day ago on hackernews | 12 comments

devmor | 23 hours ago

Reading through this reminds me of how bot farms regularly consist of stripped-down phones that are essentially just the mainboard hooked up to a controller that simulates the peripherals.

While struggling to reverse engineer mobile apps for smart home devices, I’ve considered setting something like this up for a single device.

maxbeech | 22 hours ago

the worktree discipline failure is the most interesting part of this post to me. when claude is interactive, "cd into the wrong repo" is catchable. when it's running unattended on a schedule, you find out in the morning.

the abstraction is right - isolated worktree, scoped task, commit only what belongs. the failure is enforcement. git worktrees don't prevent a process from running `cd ../main-repo`. that requires something external imposing the boundary, not relying on the agent to respect it.
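the shape of that external boundary can be pretty small. a minimal sketch (names hypothetical, not from the post): a pre-write hook in the scheduler resolves every path the agent wants to touch and rejects anything that escapes the worktree, so `../main-repo` tricks fail regardless of what the agent decides to do:

```python
from pathlib import Path

def is_inside(worktree: Path, candidate: Path) -> bool:
    """True iff candidate resolves to a path inside the worktree.

    resolve() normalizes '..' components, so an agent emitting
    'wt1/../main-repo/file' is caught even though the string
    starts with the worktree prefix.
    """
    worktree = worktree.resolve()
    candidate = candidate.resolve()
    return candidate == worktree or worktree in candidate.parents

# a scheduler would call this before allowing any write:
# if not is_inside(run_worktree, target): abort the run
```

string prefix checks (`str(candidate).startswith(str(worktree))`) are the classic bug here - they pass `/repos/wt1-evil/` for worktree `/repos/wt1` and miss `..` traversal, which is why the sketch compares resolved path components instead.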

what you've built (the 8:47 sweep) is a narrow-scope autonomous job: well-defined inputs, deterministic outputs, bounded time. these work well because the scope is clear enough that deviation is obvious. the harder category is "fix any failing tests" - that task requires judgment about what's in scope, and judgment is exactly where worktree escapes happen.

i've been working on tooling for scheduling this kind of claude work (openhelm.ai) and the isolation problem is front and center. separate working directories per run, no write access to the main repo unless that's the explicit task. your experience here is exactly the failure mode that design is trying to prevent.

cmeiklejohn | 21 hours ago

yeah, it's curious. I sometimes ask it why it ignored what is explicitly in its memory and all it can do is apologize. I ask -- I'm using Claude with a 1M context, you have an explicit memory -- why do you ignore it? And the answer I get is "I don't know, I just didn't follow the instructions."

seba_dos1 | 18 hours ago

Genuine question - what else did you expect?

fragmede | 17 hours ago

For it to follow the instructions I had for it. Call me naive and stupid for thinking the 1M context window on the brand new model would actually, y'know, work.

Natfan | 16 hours ago

why would a further chance at context pollution be a good thing? i feel like it's easier for data to get lost in a larger context

quesera | 15 hours ago

That's a bit anthropomorphic though.

When LLMs become able to reflectively examine their own premises and weight paths, they will exceed the self-awareness of ordinary humans.

grey-area | 13 hours ago

It doesn’t reason or explicitly follow instructions, it generates plausible text given a context.

hgoel | 9 hours ago

Just dealt with this last night with Claude repeatedly risking a full system crash by failing to ensure that the previous training run of a model ended before starting the next one.

It's a pretty strange issue; it makes me feel like the 1M context model was actually a downgrade, but it's probably something weird about the state of its memory document. I wasn't even very deep into the context.

ptmkenny | 21 hours ago

It’s interesting to see the solution that the AI came up with, but WebdriverIO and Appium already exist for this use case, are open source like Capacitor, and come recommended from the Capacitor developers. https://ionic.io/blog/introducing-the-ionic-end-to-end-testi...

mrbombastic | 19 hours ago

Also https://maestro.dev/ is pretty good these days

darepublic | 7 hours ago

I'm sorry, but just because you got the automation working doesn't mean you're getting meaningful QA from Claude analyzing your screenshots.