A few times in my career I've looked for exactly this role: a place that would value good sense and design/UX sensibility as well as an understanding of what is technically possible. Larger companies don't hire for this role because it doesn't have a name. You could write "Temple Grandin for the web", but that is closer to magic: something non-scalable, a position created for a specific person. I ended up working in small teams, often in experimental research positions, but that too is very special and esoteric.
If the role had a name, not even a theoretical foundation but simply a name, then and only then could it actually exist.
Right, and there are better burgers than McDonald's. My point here is just to bring up an example of instincts getting horribly miscalibrated in a way that highlights the need for thoughtfulness and self-control.
Make no mistake, the term self-control doesn't just apply to food instincts, it applies to people instincts too. Your instincts want you to go around assuming that ugly people are bad and pretty people are good, but if you avoid every uggo you're gonna miss out (especially in tech) and if you trust every handsome salesman you meet you're gonna get rolled. Thoughtfulness and self-control are always warranted.
You can calibrate your gut feeling, though. You do it, every day, as you go through life. You get a gut feeling that a specific person might be difficult, and you can override it consciously.
But I find that the gut feeling is more often right, and the unexplainability of it comes from the fact that it takes hundreds of little things into account, models future interaction outcomes, and presents the feeling you will have in the end as a "gut feeling".
Your own black box of neural networks in your gut.
After having worked with Latour (for two years, on his last exhibition and my artistic research), I got the feeling that he is much more a speaking philosopher than a writing one. I found his speeches easier to grok and, once I started treating his books as speeches, they too became easier to understand.
Still, I remember proposing something more artistic for the exhibition, and he countered by saying it would make things less clear, harder to understand. He was genuinely looking for ways to express ideas so that they would be easier to grasp. It's just that, for some things, the more direct way towards understanding might actually be the winding, poetic way.
If you reach for the top left corner your thumb will naturally come into contact with the bottom right corner of the screen, assuming you are holding the device one-handed (in your right hand).
I'm trying to picture this, but I can't. If I hold and lock my iPhone with my right hand, I press the lock button with my thumb. If I try to reach the top left corner, I do it either with my left thumb (mostly) or left index finger (sometimes), or with my right thumb (a very awkward movement on a Pro Max). In none of these cases does my right thumb come into contact with the screen, or even close to it. Maybe that's because I use the backside of my pinky finger to lock the phone in place.
Well, it's not that they couldn't do it; the iPad has always had bezels, and the apps are expecting bezels, not thumb rejection. They could be built with an on-screen safe area, but they're not.
They extract value from an open source project, use the resources/bandwidth of plugin repositories, position(ed) themselves as WordPress affiliated (the branding can easily be understood as WPengine being core WordPress), and contribute nothing back. It is the “socialize the losses, privatize the profits” of open source.
> use the resources/bandwidth of plugin repositories
As does every single other host that offers WordPress, and every user.
> position(ed) themselves as WordPress affiliated (the branding can easily be understood as WPengine being core WordPress)
One: "WP" was explicitly allowed, by WordPress, for use of WordPress. Matt yoinked this after all of this started, in the last two weeks or so. He also tried to make it retroactive.
Two: nominative usage says that if you factually offer WordPress hosting (or MySQL hosting, or whatever), you are allowed to say so. It doesn't mean you are maliciously "positioning yourself" as "affiliated", in any way, shape or form.
> and contribute nothing back
Not true at all, despite Matt's venom. They contribute to and maintain several of the most popular plugins, they contribute to the codebase (just not to Matt's liking), and they sponsor conferences and community events. In fact, this all started around the time they sponsored a WordPress conference to the tune of $75,000 and were then banned from attending it. That's odd, because WordPress (the open source project) and the Foundation are supposedly independent (per all of their own filings with regulatory bodies and the IRS), yet they were banned because they were in a dispute with Automattic (CEO: Matt), so the WordPress Foundation (President: Matt) decided so. To add insult to injury, they were banned, but the Foundation kept the sponsorship money.
With fal, you can train a concept in around 2 minutes and only pay $2. Incredibly cheap. (You could also use it for training a style if you wanted to. I just found I seem to get slightly better results using Replicate's trainer for a style.)
$2 for 2 minutes? Can't you get an hour for less than $2 using GPU machines from providers like Runpod or AirGPU? I found Replicate and fal a bit expensive after 10 minutes of prompting.
I have not used Runpod or AirGPU, and I'm not affiliated with them.
Yes, renting raw compute via Runpod and friends will generally be much cheaper than renting a higher-level service that uses that compute, e.g. fal.ai or Replicate. For example, an A6000 on fal.ai is a little over $2/hr (they only show you the price per second, perhaps to make it harder to compare with ordinary GPU providers); on Runpod an A6000 is less than half that, $0.76/hr in their managed "Secure Cloud." If you're willing to take some risk of boxes disappearing, and don't need much security, Runpod's "Community Cloud" is even cheaper at $0.49/hr.
Similar deal with Replicate: an A100 there is over $5/hr, whereas on Runpod it's $1.64/hr.
And if you use the "serverless" services, the pricing becomes even more astronomical; as you note, $1/minute is unreasonably expensive: that's over 20x the cost of renting 8xH100s on Runpod's "Secure Cloud" (and 8xH100s are extreme overkill for finetuning image generators: even 1xH100 would be sufficient, meaning it's actually 160x markup).
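For concreteness, here's a quick back-of-the-envelope comparison in Python using the rough hourly prices quoted above. Treat the figures as snapshots from the time of writing, not live prices:

```python
# Rough hourly prices quoted above (USD/hr); providers change
# pricing often, so check their pages before relying on these.
prices_per_hour = {
    "fal.ai A6000": 2.10,             # "a little over $2/hr" (approximate)
    "Runpod A6000 (Secure)": 0.76,
    "Runpod A6000 (Community)": 0.49,
    "Replicate A100": 5.04,           # "over $5/hr" (approximate)
    "Runpod A100 (Secure)": 1.64,
}

def markup(service: str, baseline: str) -> float:
    """How many times more expensive `service` is than `baseline`."""
    return prices_per_hour[service] / prices_per_hour[baseline]

print(f"fal.ai vs Runpod A6000:   {markup('fal.ai A6000', 'Runpod A6000 (Secure)'):.1f}x")
print(f"Replicate vs Runpod A100: {markup('Replicate A100', 'Runpod A100 (Secure)'):.1f}x")

# And a "serverless" rate of $1/minute works out to:
print(f"$1/minute = ${1 * 60}/hour")
```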
Happy to help! It's a lot of fun. And it becomes even more fun when you combine LoRAs. So you could train one on your face, and then use that with a style LoRA, giving you a stylised version of your face.
If you do end up training one on yourself with fal, it should ultimately take you here (https://fal.ai/models/fal-ai/flux-lora) with your new LoRA pre-filled.
Then:
1. Click 'Add item' to add another LoRA and enter the URL of a style LoRA's SafeTensor file (with Civitai, go to any style you like and copy the URL from the download button; you can also find LoRAs on Hugging Face)
2. Paste that SafeTensor URL as the second LoRA, remembering to include the trigger word for yourself (you set this when you start the training) and the trigger word for the style (it tells you on the Civitai page)
3. Play with the strength of each LoRA if you want the result to look more like you or more like the style, etc. (a scripted version of these steps is sketched below)
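If you'd rather script this than click through the web UI, here's a minimal sketch using fal's Python client against the same fal-ai/flux-lora endpoint. The LoRA URLs, trigger words, and scales are placeholders, and the parameter names are my reading of fal's docs, so double-check them there:

```python
# pip install fal-client; expects FAL_KEY in the environment.
import fal_client

result = fal_client.subscribe(
    "fal-ai/flux-lora",
    arguments={
        # Include BOTH trigger words: yours (set when you started training)
        # and the style's (listed on its Civitai page).
        "prompt": "photo of MYFACE1 person, NKSTYLE illustration style",
        "loras": [
            # Your face LoRA (the URL fal gives you after training):
            {"path": "https://example.com/my-face-lora.safetensors", "scale": 1.0},
            # A style LoRA's .safetensors URL, e.g. from Civitai's download button:
            {"path": "https://example.com/some-style-lora.safetensors", "scale": 0.8},
        ],
    },
)
print(result["images"][0]["url"])  # URL of the generated image
```

Lowering one `scale` relative to the other is the scripted equivalent of step 3 above.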
I want to make a LoRA of Prokudin-Gorskii photographs from the Library of Congress collection, and they have thousands of photos, so I'm curious whether there's an effective way to autogenerate captions for the images.
It's funny you should ask. I recently released a plugin (https://community-en.eagle.cool/plugin/4B56113D-EB3E-4020-A8...) for Eagle (an asset library management app) that allows you to write rules to caption/tag images and videos using various AI models.
I have a preset in there that I sometimes use to generate captions using GPT-4o.
If you use Replicate, they'll also generate captions for you automatically if you wish. (I think they use LLaVA behind the scenes.) I typically use this just because it's easier, and seems to work well enough.
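If you'd rather roll your own captioning pass (along the lines of the GPT-4o preset mentioned above), a minimal sketch might look like this; the folder layout, prompt wording, and one-`.txt`-per-image convention are assumptions based on what most LoRA trainers expect:

```python
# pip install openai; expects OPENAI_API_KEY in the environment.
import base64
from pathlib import Path

from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Write a single-sentence training caption for this photograph. "
    "Plainly describe the subject, setting, and composition."
)

for image_path in Path("dataset").glob("*.jpg"):
    # Send the image inline as a base64 data URL.
    b64 = base64.b64encode(image_path.read_bytes()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    # Most trainers pick up a .txt caption sitting next to each image.
    image_path.with_suffix(".txt").write_text(response.choices[0].message.content)
```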
In early photographic history there was a pictorialist movement whose members used special soft-focus lenses to introduce more of a painterly quality. These lenses, such as the Rodenstock Imagon, still exist and are sought after.
It is by the same author who created Ottercast[0] and has apparently been developed during GPN[1], a Chaos Computer Club event in Karlsruhe, Germany that ended two days ago.
I'll repeat my comment from the previous discussion, as ultrasound can be used to extract many things quickly:
Alternatively you could use a (quite affordable) ultrasonic machine designed for gentle cleaning of jewelry, dentures, glasses, and so on.
I've used one to extract fragrance from biological material for an artistic project[0], and it worked really well. Instead of waiting a few weeks for a tincture to finish, you put the same tincture (alcohol plus the material you want to extract fragrance from) into a plastic bag in the machine for just 15 minutes. Sure, it doesn't smell quite the same, but the speed is often worth it. I've even heard about some guy trying to turn vodka into whiskey with an ultrasonic machine and wood chips.
There are quite a few ultrasonic machines on the market. I've tried EMAG and multiple Chinese no-name machines that are just as powerful but cheaper. Sadly, the no-name machines are quite a bit louder; you basically can't stay in the same room while one is running. Still, they all work well for this kind of quick and dirty extraction.
A long while back, I looked into how viable ultrasonic acceleration of the "aging" of Dit Da Jow (https://en.m.wikipedia.org/wiki/Dit_da_jow) would be, and came across a paper looking into the same.
If I remember correctly, while ultrasonic treatment did a decent job of quickly extracting the various chemicals from their carriers, there were some caveats.
The ratio of chemicals extracted could differ from normal, and it only did a partial job of accelerating the formation of very complex secondary compounds that form when the whole mix is properly aged.
So the difference in smell you found could be some chemicals being preferentially released over others, and/or the lack of those secondary compounds.
Indeed, it is after all more of a mechanical extraction than a chemical one. I remember seeing tiny bits of grass floating in the tincture after extracting the scent of fresh grass - the cavitation bubbles practically shattered the cell structure of the plant, and a dusty green soup was floating around the blades of grass.
https://www.youtube.com/watch?v=YlQT4ptwLKs might be interesting - it's been a while since I watched it, but IIRC this video covers a variety of ultrasonic infusion experiments. According to the autogenerated timeline in search results, it includes coffee as well.