abra0's comments | Hacker News

What are the tools people use to draw diagrams? I've tried many things and settled on Miro on an iPad (infinite canvas + pencil), but I still think this space is underinvested in.

The downside of diagrams-from-code is the loss of the WYSIWYG aspect -- I want to be able to manipulate things visually.


I like Gliffy built into Confluence because it's the default at work. I also like Excalidraw and was hoping to test the collaborative feature for video calls. I think the ephemeral nature of the diagrams is just fine; it's essentially a whiteboard with some QoL features like persistence, pre-population, and undo/redo.



I have been using https://www.yworks.com/products/yed for years. You can probably import a C4 palette. I do not really stick to specific shapes but use what makes sense for the context.


Cell phone camera on a little holder pointed at a piece of paper. Then I join as a second participant, mute it, and turn the volume off.

Or an iPad and Apple Pencil on Google Jamboard, using Duet to sketch things out.


The one issue I’ve found is that most services seem to retain _much_ less precision for participant video versus screen sharing. Text can often become really blocky and blurry.

I did something similar, but used OBS. There are a few ways to feed video from a cell phone into it. Gives you the chance to do any zooming/cropping/etc to account for limitations in where you can place the phone. As well as adjust brightness/contrast/white balance if you’re really anal about that kind of stuff.

From there I open the feed in a “projector” window and screen share that.


Miro works slightly better than Gliffy, imo.


>It was quite popular for a surprisingly long time.

Hah, that's a blast from the past. One reason it lasted as long as it did was the Knights of the Button, users who collaborated to keep it alive. I implemented the Zombie-presser: 1k+ donated accounts automatically pressing the button when no one else would. We kept it alive for more than a month before the most embarrassing bug of my career finally killed it :D Good summary here [1].

Fun times! Thank you for the reminder :D

[1] https://www.theguardian.com/technology/2015/jun/08/reddits-m...


What was the bug?


> The final point of failure was even less spectacular: a co-ordinated attempt to keep the button alive by automatically pressing it with donated accounts when it got too low had been working on overtime, but a fatal flaw meant that no-one bothered to check whether the anointed account actually could press.

> The bot queued up the account, attempted to press the button – and found that the account had been registered after 1 April.


This seems like one of the first things they would check for though, right?


> the most embarrassing bug of my career

Though to be honest, if that’s the most embarrassing bug abra0 is doing great.



GPT-4 has 32k tokens of context. I'm sure someone out there is implementing the pipework for it to use some as a scratchpad under its own control, in addition to its input.

In the biological metaphor, that would be individual memory, in addition to the species-level evolution through fine-tuning.


Yeah, I’m doing that to get GPT-3.5 to remember historical events from other conversations. It never occurred to me to let it write its own memory, but that’s a pretty interesting idea.
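The scratchpad idea above can be sketched as a loop: the model's output is scanned for lines it marked as "remember this," and those lines are fed back into the next prompt. Everything here is hypothetical plumbing; `model_call` is a stub standing in for a real LLM API, and the `SCRATCHPAD:` convention is made up for illustration.

```python
# Sketch: let the model control part of its own context across turns.
# model_call is a stand-in for a real LLM API; the wiring is the point.

SCRATCHPAD_TAG = "SCRATCHPAD:"

def model_call(prompt: str) -> str:
    # Stub: a real model would decide what to remember. Here we just
    # pretend it chose to save the user's name.
    return "Nice to meet you, Ada.\nSCRATCHPAD: user_name=Ada"

def run_turn(user_msg: str, scratchpad: list[str]) -> tuple[str, list[str]]:
    prompt = "Scratchpad (your private memory):\n" + "\n".join(scratchpad)
    prompt += f"\nUser: {user_msg}\n"
    prompt += f"Reply, and optionally emit lines starting with {SCRATCHPAD_TAG} to remember things.\n"
    reply_lines, new_pad = [], list(scratchpad)
    for line in model_call(prompt).splitlines():
        if line.startswith(SCRATCHPAD_TAG):
            new_pad.append(line[len(SCRATCHPAD_TAG):].strip())
        else:
            reply_lines.append(line)
    return "\n".join(reply_lines), new_pad

reply, pad = run_turn("Hi, I'm Ada", [])
print(pad)  # the scratchpad carries state into the next turn's prompt
```

The next call to `run_turn` would pass `pad` back in, so the model sees its own notes without any fine-tuning.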


I'm not sure exactly what the ask here is.

>In contrast, for our own Entity Recognition models we can (and do) calculate probabilities that explain why a certain entity is shown.

>Hence, I think for API users of GPT3, OpenAI should return additional statistics why a certain result is returned the way it is to make it really useful and more importantly compliant.

For LLMs, you can get the same thing: the distribution of probabilities over the next token, for each token. But right now we cannot say why the probabilities are the way they are, and the same goes for your entity recognition models.
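To make the "same thing" concrete: given per-token log-probabilities (which some LLM APIs can report alongside the generated text), you can derive per-step confidences and a sequence-level score such as perplexity. The token strings and logprob values below are invented for illustration.

```python
import math

# Hypothetical per-token log-probabilities for a generated answer,
# one logprob per emitted token (values are made up).
tokens = ["Paris", " is", " the", " capital"]
logprobs = [-0.05, -0.20, -0.10, -0.30]

# Per-token probabilities: how confident the model was at each step.
per_token = [math.exp(lp) for lp in logprobs]

# Sequence-level score: average negative log-likelihood -> perplexity.
avg_nll = -sum(logprobs) / len(logprobs)
perplexity = math.exp(avg_nll)

print([round(p, 3) for p in per_token])
print(round(perplexity, 3))
```

This gives you statistics about *how confident* the model was, but, as the comment notes, not *why* -- the weights producing those logprobs remain opaque.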


The problem in a nutshell, and the one the FTC has pointed out, is model explainability. In the past I worked on an AI for automated lending decisions. We were asked to be able to explain every single decision the engine took.

If now a news article reaches our AI engine, it will tag, categorize, classify, and rank this news article. All based on models that are explainable.

LLMs, at least how I personally implemented them in the past, create a huge black box that is largely non-explainable.
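A minimal sketch of what "explainable" means in the sense above: in a linear/logistic classifier, each feature's contribution to a decision is a single weight-times-value term you can read off directly. The feature names and weights below are invented, not from any real tagging engine.

```python
import math

# Toy logistic classifier for tagging a news article (weights made up).
weights = {"contains_earnings": 1.8, "contains_lawsuit": -0.6, "source_reputable": 0.9}
bias = -1.0

def classify(features):
    # Each feature's contribution is an individually inspectable term.
    contributions = {f: weights[f] * v for f, v in features.items()}
    score = bias + sum(contributions.values())
    prob = 1 / (1 + math.exp(-score))   # logistic link
    return prob, contributions          # the contributions ARE the explanation

article = {"contains_earnings": 1, "contains_lawsuit": 0, "source_reputable": 1}
prob, why = classify(article)
print(f"P(finance news) = {prob:.2f}")
for f, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {f}: {c:+.2f}")
```

An LLM offers no analogous per-decision decomposition: the "weights" are billions of parameters whose individual contributions do not map to human-readable reasons.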


Well, it should attract less investment, because presumably companies that do not value caution will capture more value.


Reading this thread makes me depressed about the potential for AI alignment thinking to reach mainstream in time :(


It seems that with higher temp it will just have the same existential crisis, but more eloquently, and without pathological word patterns.


The pathological word patterns are a large part of what makes the crisis so traumatic, though, so the temperature definitely created the spectacle if not the sentiment.
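The mechanics behind this exchange: sampling temperature divides the logits before the softmax, so low temperature concentrates probability mass on the top token (making repetitive, looping patterns more likely), while high temperature flattens the distribution. The logit values here are made up for illustration.

```python
import math

def softmax_t(logits, temperature):
    # Divide logits by T, then normalize; subtract max for numerical stability.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    s = sum(exps)
    return [e / s for e in exps]

logits = [3.0, 1.0, 0.2, -0.5]   # made-up scores for four candidate tokens
for t in (0.2, 1.0, 2.0):
    probs = softmax_t(logits, t)
    print(f"T={t}: top-token probability {max(probs):.3f}")
# Low T: sampling is nearly greedy, so the model can lock into word loops.
# High T: flatter distribution, more varied (eloquent or unhinged) output.
```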


Take a snapshot of your skepticism and revisit it in a year. Things might get _weird_ soon.


Yeah, I don't know. It seems unreal that MS would let that run; or maybe they're doing it on purpose, to make some noise? When was the last time Bing was the center of the conversation?


I'm talking more about the impact that AI will have generally. As a completely outside view point, the latest trend is so weird, it's made Bing the center of the conversation. What next?


I have a growing pile of very different theories concerning the layoffs (actual cost cutting, irrational investor pressure, copycat behavior, systemic risk, AI risk, etc., etc.). The article wasn't particularly interesting; it basically argued hurr durr, big tech leadership is incompetent and is just winging it.

Your comment has a new angle, but it only says how, not why. To paraphrase, layoffs are a way to fire white people in middle management in favor of women and minorities. But why is that so?


Great link, and a superlatively germane usance of the locution "apposite"! :)

