Hacker News: arauhala's comments

If I'm reading this correctly, a few themes PG touches here are:

1) Loss of control when hiring a professional manager without visibility into the sub-organization, because you rely on information the manager provides. If the manager is not straightforward, the sub-organization may become a black box where issues go unseen

2) Lack of access to frontline people and their understanding, which is opened up by Jobs-style meetings with key people from across the org

3) I'd also imagine that if you have a founder with deep domain knowledge, who has worked across all aspects of the business, then going fully hands-off with the details and replacing decision makers with more generic managers, potentially from other industries, means that a lot of expertise gets disconnected from the relevant decisions.

Ultimately the outcomes are all about the decisions, the decisions are all about understanding, and understanding is all about information. As such, it's not surprising that cutting the seasoned founder off from both key mid-level decisions and firsthand information flows brings disadvantages.

I'd overall interpret the article to be about how hands-on the founder should be with the different aspects of the business, rather than about leadership as such, as implied elsewhere


Author here. The article seeks to answer the question: "How do you make ML/LLM development faster and with higher quality?"

The long-form article addresses the pain points of the current tooling (Jupyter notebooks, unit tests) used to support intelligent-application R&D. It proposes a new approach to tooling and development, which combines the benefits of notebooks (a review-driven process, caches) with the benefits of unit testing (repeatability, regression testing).

The tool has been successfully used to support the development of topic models, analytics, and GPT-based functionality. Here's an example of how to create a simple test that creates a snapshot of the results and also snapshots the environment and (e.g. GPT) API calls, so that the test interaction can be replayed, e.g. in CI.

    import booktest as bt
    import os
    import requests
    import json

    @bt.snapshot_env("HOST_NAME")
    @bt.mock_missing_env({"API_KEY": "mock"})
    @bt.snapshot_requests()
    def test_requests_and_env(t: bt.TestCaseRun):
        t.h1("request:")

        host_name = os.environ["HOST_NAME"]
        response = (
            t.t(f"making post request to {host_name} in ").imsln(
                lambda:
                requests.post(
                    host_name,
                    json={
                        "message": "hello"
                    },
                    headers={
                        "X-Api-Key": os.environ["API_KEY"]
                    })))

        t.h1("response:")
        t.tln(json.dumps(response.json()["json"], indent=4))

https://github.com/lumoa-oss/booktest/blob/main/getting-star...


Regarding this topic, I found the following paper on the TNT (three nightmare traits) interesting; it brings some clarity to the issues/confusions you mention:

https://www.frontiersin.org/articles/10.3389/fpsyg.2018.0087...


Hi,

Author here! :-)

The project feels transformative because the ML part was implemented by an RPA developer using predictive database queries. It wasn't a very complex data science project, but it demonstrates that simple ML can be done by (RPA) developers on somewhat tight budgets and schedules.

If you have questions, feel free to ask! We@Aito also love feedback!

Regards, Antti & aito


I wonder if you are familiar with predictive databases?

We at Aito.ai have gotten a lot of interest from different RPA/no-code users and providers, and predictive database queries seem like a good fit for intelligent automation.

https://aito.ai/blog/could-predictive-database-queries-repla...

It would be interesting to integrate predictive functionality deeply into your system, especially as it naturally integrates a DB. This could be used to offer predictive functionality from the platform out of the box.
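To make the idea concrete, here is a rough Python sketch of what a predictive query could look like; the table name, field names, and query shape are illustrative assumptions rather than Aito's exact API:

```python
import json

def build_predict_query(table, known, target):
    """Build a predictive query: like a normal query, but one field
    (`target`) is predicted from the known field values instead of
    being looked up."""
    return {
        "from": table,       # the table to predict from
        "where": known,      # the fields the RPA bot already knows
        "predict": target,   # the field a human would normally fill in
    }

# Hypothetical invoice-automation example: predict the GL account
# from fields the robot can read off the invoice.
query = build_predict_query(
    table="invoices",
    known={"vendor": "Acme Corp", "description": "office supplies"},
    target="gl_account",
)
payload = json.dumps(query)  # sent as the body of an HTTP POST
print(payload)
```

The point is that the interface stays declarative and database-like; the model training hides behind the query, which is what makes it approachable for RPA developers.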


Isn't this the same as the inverted honesty/humility personality trait from the HEXACO personality model?

https://en.m.wikipedia.org/wiki/Honesty-humility_factor_of_t...


Looks similar to me, but the same group has published on HEXACO and honesty/humility, so they may make a distinction.

https://www.uni-ulm.de/en/in/psy-pfm/research/publications/

from researchgate:

> "In theory, D ... is distinct from the low pole of Agreeableness or Honesty-Humility ... in several defining features, especially the representation of sadistic and spiteful tendencies and the broad inclusion of justifying beliefs (see Moshagen et al., 2018). ..."


Hi,

One of the aito.ai founders here! :-)

We feel that the project was transformative because the ML project was done end-to-end by RPA developers without help from a data science team. One RPA developer's comment about Aito was that 'it was easy to use', which is a big thing in the machine learning space and in the context of ML democratization.

If you have any questions, I'm happy to help.

Regards, Antti & Aito


I feel this is an interesting and rather fresh take on the innovation/startup aspects.

I was reminded of this Rands blog post after PG's blog post and its comments, and also after the "why companies stop innovating" article.

It seems that there is a tension between people who are very high on openness (the personality trait) and people who are very low on openness and also high on risk aversion and orderliness (as pointed out in Rands's article).

I feel it's still less about good vs bad, and more about explorative vs optimizing & conforming tendencies, as in Rands's post. The same topic has been discussed multiple times before, for example in the "pioneers, settlers and town planners" article.

While there are obviously risks in conformism going wild/aggressive, in general it strikes me that Rands is right in the sense that you need both 'explorers' and 'optimizers' in any successful organization.


Preaching to the choir! :)

Make does so many things correctly:

- it gives the user total freedom to modify the build

- yet it has excellent defaults for most situations

- it is a full-blown programming language,

- yet its syntax is extremely specialized for the purpose and familiar at the same time (it's bash)

- its basic assumptions/structure are extremely simple (timestamps, dependencies)

- yet it's extremely powerful and can take into account most situations.

It's an extremely Unix-style tool with very simple building blocks that combine in an extremely powerful way. It's fast to learn and easy to master, and as such, the best kind of design.

Sometimes making a build with it is a small programming project, yet after using all kinds of build tools, I end up just wishing that I could use make instead.


bash is a pretty terrible programming language, and its terribleness increases geometrically with size. This is a large part of why people don't like make, I think.


This is in fact another misconception:

> and familiar at the same time (it's bash)

Make is an orchestrator that defaults to sh (not bash, actually). But what I've seen in some really good ones is taking advantage of the "orchestrator" part rather than the "sh/bash" part: One-line recipes that simply call a short script in any other language, including python, that do the desired thing for that recipe. The Makefile is then just used for coordinating partial-running those scripts.

On top of that, you could in fact set SHELL/ONESHELL and then the recipes themselves are python inside the Makefile:

  SHELL := python
  .ONESHELL:

  testfile:
      import csv
      writer = csv.writer(open("$@", 'w'))
      writer.writerow(['one', 'two'])
      writer.writerow(['three', 'four'])
And:

  $ make testfile
  import csv
  writer = csv.writer(open("testfile", 'w'))
  writer.writerow(['one', 'two'])
  writer.writerow(['three', 'four'])
  $ cat testfile
  one,two
  three,four


Whoah! I didn't know about that feature. For most build tasks sh is probably best, but this could be incredibly useful for some non-build-related things.

"Build tools are only for building software" is probably another falsehood to add, make can be a great tool anywhere you've got a dependency graph.


That's pretty neat, but I have never ever seen it used (not even in python projects). I'd be skeptical until I'd seen it used in anger.

It would also need to basically support virtualenvs and pip install to be really compelling, I think. Python without those is a pale imitation. The csv module is known to be pretty bad, for instance (like many built-in modules).


True, but it is familiar and quite useful for its purpose (running scripts) :)

For builds it mostly works fine. Doing anything else with it would be a total pain.


This is a good point, and Aito's inference engine has a lot of similarities with search engines. As an interesting aside, we can provide TF-IDF-scored full-text search functionality from the same indexes we also use for inference.

Still, while there are tons of similarities, I feel that inference engines are fundamentally different from search engines. The data structures are different, and I can see them diverging even more in the future. The algorithms and modes of operation are very different, even if there is some overlap.

From the user's point of view, there is still a striking similarity between Aito and Elasticsearch. Both now act as auxiliary databases (although we would like to make Aito fully ACID with an SQL interface in the future) and provide search-engine/inference-engine-like functionality rather than full database functionality.
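As a toy illustration of that shared ground (a textbook sketch, not Aito's or Elasticsearch's actual implementation), the same inverted index that answers full-text searches can produce TF-IDF scores:

```python
import math
from collections import Counter

# Toy corpus; a real engine keeps the inverted index on disk.
docs = {
    "d1": "predictive database queries for automation",
    "d2": "search engine queries rank documents",
    "d3": "database indexes speed up queries",
}

# Inverted index: term -> {doc_id: term frequency}
index = {}
for doc_id, text in docs.items():
    for term, tf in Counter(text.split()).items():
        index.setdefault(term, {})[doc_id] = tf

def tf_idf_score(query, doc_id):
    """Classic search-engine scoring: sum of tf * idf over query terms."""
    n_docs = len(docs)
    score = 0.0
    for term in query.split():
        postings = index.get(term, {})
        if doc_id in postings:
            idf = math.log(n_docs / len(postings))
            score += postings[doc_id] * idf
    return score

# "queries" appears in every document (idf = 0), so "predictive" and
# "database" decide the ranking: d1 contains both and scores highest.
scores = {d: tf_idf_score("predictive database", d) for d in docs}
best = max(scores, key=scores.get)
```

Both kinds of engine start from the same per-term statistics; the divergence is in what gets computed from them, which is roughly where the comment above expects the data structures to part ways.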

