Hacker News | BrandiATMuhkuh's comments

Howie.systems (EU VC funded) | Full-Stack (AI) Engineer | ONSITE Vienna, Austria | Full-time | Start date: ASAP

We’re building Howie.systems, an AI platform for the architecture/engineering/construction industry. Our focus: automating knowledge extraction and retrieval from large document sets (100k+ files, multi-tenant, multi-user, with Supabase/Postgres + pgvector under the hood). We’re fully funded by international investors and are expanding our team in Vienna.

Role: We’re looking for a Full-Stack Engineer with a love for strong typing and AI.

- Must: Next.js, Supabase, a very type-safe TypeScript mindset

- Super plus: experience with AI frameworks (Vercel AI SDK, Mastra, LangChain, etc.)

- Should have good experience with AI coding tools like Cursor, Claude Code

You’ll help us scale our ingestion, RAG, and AI interaction layers into production-grade tools for enterprise customers.

What we offer:

- Competitive salary + equity possible

- Onsite in Vienna (no remote option)

- Small, hands-on team with big international backing

- Work on a fully modern stack (TS, Supabase, Vercel, AI frameworks)

If this sounds like you, send your CV and a cover letter to contact@howie.systems, detailing your experience and why you think you’d be a fit for the role.


Does this also work if I have data on SharePoint, Dropbox, etc. and want to pull it (sync with a local machine)?

My use case is mostly ETL-related: I want to pull all of a customer's data (enterprise customers) so I can process it, but also keep the data updated, hence the pull?


In an ideal world the rclone special remote would support git-annex's importtree feature. Then you could periodically run `git annex import <branch>:<subdir> --from <sharepoint/dropbox-remote>` to "pull" from those remotes (it is not really a pull, as you aren't fetching version-controlled data from those remotes; rather, you are importing from a non-version-controlled source and recording the result as a new revision).

Unfortunately this is not (yet?) supported, I think. But you could also just do something like this: `rclone copy/sync <sharepoint/dropbox-remote>: ./<local-directory> && git annex add ./<local-directory> && git commit -m <message>`.


git-annex does support rclone as a special remote iirc

Yes it does, but I don't think the special remote supports the importtree feature, which would be necessary for this.

Chroma looks cool. Congratulations on the Cloud version.

For my client, I've "built" a similar setup with Supabase + pgvector, and I give the AI direct SQL access.
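Roughly, the shape of it; a sketch with the (v4-era) Vercel AI SDK, where the tool name, schema, table, and prompt are illustrative and the query should run under a read-only role:

    import { generateText, tool } from "ai";
    import { openai } from "@ai-sdk/openai";
    import { z } from "zod";
    import postgres from "postgres";

    // Supabase exposes a plain Postgres connection string; pgvector lives inside it.
    const sql = postgres(process.env.SUPABASE_DB_URL!);

    const { text } = await generateText({
      model: openai("gpt-4o"),
      maxSteps: 5, // let the model query, read the rows, then answer
      tools: {
        querySql: tool({
          description: "Run a read-only SQL query against the document tables.",
          parameters: z.object({ query: z.string() }),
          execute: async ({ query }) => sql.unsafe(query), // restrict to SELECT in practice
        }),
      },
      prompt: "Which documents mention retention clauses?",
    });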

Here is the hard part: just last week I indexed 1.2 million documents for one project of one customer. They have PDFs with 1,600 pages, PPTX files of >4 GB, plus lots of 3D/2D architecture drawings in proprietary formats.

The difficulties I see:

- Getting the data in (ETL). This takes days and is fragile.

- Keeping RBAC intact.

- Supabase/pgvector needs lots of resources when adding new rows to the index. I wish the resources scaled up/down automatically, instead of me having to monitor and switch to the next plan.
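(For context on that last point: the resource spikes mostly come from HNSW index maintenance. The statement below is real pgvector syntax; the memory value, index parameters, and table name are illustrative.)

    import postgres from "postgres";

    const sql = postgres(process.env.SUPABASE_DB_URL!);

    // HNSW builds are memory/CPU hungry; run the SET on the same session
    // as the build in practice (pooled clients may hand you another one).
    await sql`set maintenance_work_mem = '4GB'`;
    await sql.unsafe(`
      create index concurrently if not exists documents_embedding_idx
      on documents using hnsw (embedding vector_cosine_ops)
      with (m = 16, ef_construction = 64)
    `);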

How could Chroma help me here?


> Supabase/pgvector needs lots of resources when adding new rows to the index. I wish the resources scaled up/down automatically, instead of me having to monitor and switch to the next plan.

Many ways, potentially. But one way is that Chroma makes all this pain go away.

We're also working on some ingestion tooling that will make it so you don't have to scale, manage or run those pipelines.


I'll for sure take a deeper look. Ingestion has been by far the biggest pain and the least fun. Those infra parts hold us back from the cool things: building agents/search.

I started treating everything as images when multimodal LLMs appeared. Even emails. It's so much more robust. Emails especially are often used as a container to send a PDF (e.g. a contract) that itself contains a scanned image of the printed contract. Very, very common.

I have just moved my company's RAG indexing to images and multimodal embedding. Works pretty well.
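The pipeline is roughly this; a sketch where `pdftoppm` is poppler's real CLI, but `embedImage` and the table layout are placeholders for whatever multimodal embedding provider and schema you actually use:

    import { execFileSync } from "node:child_process";
    import { readFileSync, readdirSync } from "node:fs";
    import postgres from "postgres";

    const sql = postgres(process.env.SUPABASE_DB_URL!);

    // Render every page of a PDF to PNG via poppler's pdftoppm.
    function pdfToPageImages(pdfPath: string, outPrefix: string): string[] {
      execFileSync("pdftoppm", ["-png", "-r", "150", pdfPath, outPrefix]);
      return readdirSync(".").filter((f) => f.startsWith(outPrefix)).sort();
    }

    // Placeholder: call your multimodal embedding API here (one that
    // accepts images and returns a float vector).
    async function embedImage(png: Buffer): Promise<number[]> {
      throw new Error("wire up your embedding provider");
    }

    for (const page of pdfToPageImages("contract.pdf", "page")) {
      const vec = await embedImage(readFileSync(page));
      // A documents table with a pgvector column is assumed.
      await sql`insert into documents (source, page, embedding)
                values ('contract.pdf', ${page}, ${JSON.stringify(vec)}::vector)`;
    }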


   Location: Linz/Austria/EU 
   Remote: OK (Any timezone OK)
   Willing to relocate: NO 
   Technologies: AI (LLM, RAG, Agents, Voice, AI-SDK), Typescript, React, Next.js, Supabase, Firebase, Postgres, GCP
   Résumé: https://brandstetter.io/Resume_Brandstetter_Jurgen.pdf
   Email: j@brandstetter.io
   LinkedIn: https://www.linkedin.com/in/j-brandstetter
   Salary: USD 150k / year
I'm an AI engineer and senior full-stack developer with an aspiration for a team/product lead or CTO-like position. For ~7 years, I was co-founder and CTO of an ed-tech company called amy.app. My co-founder and I scaled the fully remote company to about 25 employees. Before that, I earned a PhD in Human-Robot Interaction in NZ and Oxford. After winding down my company, I've spent the past year contracting, learning and applying as much as possible about AI solutions. I'm talking: RAG (multiple TB), agentic search, deep research, LLM-based email automation, AI tools, MCP, vibe coding, ... Currently, I'm designing and building an agentic enterprise search engine for the AEC industry for a customer (Supabase + Vercel AI SDK, Mastra, multimodal embedding, etc.).

I'm very product/customer-focused and pragmatic. I like to move super fast (not only because of Cursor :D).

My dream environment is an early-stage, fully remote startup.

PS: I have a family, which means compensation mainly via shares isn't an option for me.


Those things are not mutually exclusive. We use RAG and vector stores to index terabytes of data, then use tool calls (MCP) to let the AI write SQL that directly queries the data (vector store).
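Concretely, the MCP side can be a tiny server exposing one SQL tool. A sketch with the TypeScript MCP SDK; the server/tool names and schema are illustrative, and the DB role should be read-only:

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";
    import postgres from "postgres";

    const sql = postgres(process.env.DATABASE_URL!);
    const server = new McpServer({ name: "vector-sql", version: "0.1.0" });

    // One tool: the model writes SQL, we execute it and hand back the rows.
    server.tool(
      "query_sql",
      { query: z.string().describe("A read-only SQL query") },
      async ({ query }) => {
        const rows = await sql.unsafe(query); // read-only role in practice
        return { content: [{ type: "text" as const, text: JSON.stringify(rows) }] };
      }
    );

    await server.connect(new StdioServerTransport());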


Thanks for that. Haven't seen it done quite like that before, but it makes sense.


I was just building a SharePoint integration for some enterprise customers (I do RAG on their data), and I find it brutal that I now have access to all their SharePoint data across all SharePoint sites, even the ones I don't want to index. And that's even though I use user login rather than admin/service-key login.

AFAIK, the OAuth claims for SharePoint don't allow scoping access to particular projects only. (BTW: the same goes for platforms like ACC/BIM360.)


Congratulations on the launch! This is really cool.

I'm not super active in the humanoid robot space anymore; however, I did my PhD in HRI about 9 years ago. That was the time of Boston Dynamics, the DARPA Robotics Challenge, and Aldebaran's Pepper and Nao robots.

You mentioned you are building everything open source. What happened with ROS and related projects? Do you build on top of that, or is that all so outdated that a reboot was needed?

Another question I have: why are you choosing a two-legged humanoid over a four-legged design?

My experiments with two-legged robots were mostly bad. Not only did they fall basically all the time, but they also had a lot of drift. So far, I have not seen any large improvements. But again, I might be very outdated.

I always said to my colleagues: the main thing stopping robots from catching on is a stable platform. And by platform I mean walking.


Eh. I think we got a bit nerd-sniped on some things and ended up trying to build most of our stack ourselves. The control loop is just a pretty simple Rust loop for running ML models; ROS is kind of annoying to work with, and since the loop is so simple, we just wrote it ourselves.

For two-legged: I think it will eventually be quite a bit cheaper, and it's mostly a software problem to get them to be stable. RL-based control has gotten much, much better. For example, I was talking to Trevor Blackwell a few weeks ago, and he was saying the IMU on the original Anybots robot was over $2k, whereas if you throw a noisy IMU into sim, you can get away with something basically from a cellphone. So yeah, personally I'm a big believer in needing to solve the robotics intelligence problem, and once you solve that, humanoids will make the most sense as a form factor.
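To make the noisy-IMU point concrete: in sim you corrupt the clean reading every step, plus a per-episode bias, so the policy never learns to trust a clean signal. A sketch; the noise and bias magnitudes here are invented:

    type Vec3 = [number, number, number];
    type ImuSample = { gyro: Vec3; accel: Vec3 };

    // Box-Muller: zero-mean Gaussian sample with the given std deviation.
    function gaussian(std: number): number {
      const u = 1 - Math.random();
      const v = Math.random();
      return std * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
    }

    // Per-episode bias (cheap IMUs drift) plus per-step noise.
    function noisyImu(clean: ImuSample, bias: ImuSample): ImuSample {
      const jitter = (x: number, b: number, std: number) => x + b + gaussian(std);
      return {
        gyro: clean.gyro.map((g, i) => jitter(g, bias.gyro[i], 0.02)) as Vec3,
        accel: clean.accel.map((a, i) => jitter(a, bias.accel[i], 0.1)) as Vec3,
      };
    }

    // Draw a fresh bias at the start of each training episode.
    const bias: ImuSample = {
      gyro: [gaussian(0.01), gaussian(0.01), gaussian(0.01)],
      accel: [gaussian(0.05), gaussian(0.05), gaussian(0.05)],
    };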


   Location: Linz/Austria/EU 
   Remote: OK (Any timezone OK)
   Willing to relocate: NO 
   Technologies: AI (LLM, RAG, Agents, Voice, AI-SDK), Typescript, React, Next.js, Supabase, Firebase, Postgres, GCP
   Résumé: https://brandstetter.io/Resume_Brandstetter_Jurgen.pdf
   Email: j@brandstetter.io
   LinkedIn: https://www.linkedin.com/in/j-brandstetter
   Salary: USD 150k / year
I'm an AI engineer and senior full-stack developer with an aspiration for a team/product lead or CTO-like position. For ~7 years, I was co-founder and CTO of an ed-tech company called amy.app. My co-founder and I scaled the company to about 25 employees. Before that, I earned a PhD in Human-Robot Interaction in NZ and Oxford. After winding down my company, I've spent the past year contracting, learning and applying as much as possible about AI solutions. I'm talking: LLM-based email automation, AI call agents, RAG, agentic search.

Currently, I'm designing and working on an agentic enterprise search engine for the AEC industry.

I'm very product/customer-focused and pragmatic. I like to move super fast (not only because of Cursor :D).

My dream environment is an early-stage, fully remote startup.

PS: I have a family, which means compensation mainly via shares isn't an option for me.


Congratulations on the launch.

I wish you great success.

Focusing on speaking first, and not writing, makes so much sense. As a father, I experienced first-hand how my child learned three languages (German, English, Arabic) without reading/writing first.

The hardest part for you will likely be the "curriculum". It's "easy" to make something that works for a couple of weeks. But language learning takes years.

Btw, if you are up for it, I would enjoy chatting with you. I co-founded an AI math tutoring company and focused my PhD on how to influence human language with AI. Hint: social connections between humans and AI.

What you are trying to solve is something I dreamed about for years.

