The real issue is that modern-day purchasing is not ownership. It's not even leasing. You pay the full list price for a PDF or music file that can disappear from your library anytime the corpo-rats decide it shouldn't be available to you anymore.
If purchasing isn't ownership, then piracy shouldn't be theft.
I would like to have a trustworthy, privacy-respecting digital asset licensing service. Ideally free from profit motives, maybe run by a government, maybe a nonprofit. Regardless, I want my right to engage with a work separated from the manifestation of that work. Movies Anywhere kinda does this, but I want it to go way, way further.
Buy a book? You now own the right to read that book from whatever file. Subscribe to Disney+? You have the right to watch all the Marvel movies, even when those movies are chopped and re-edited into a massive full chronology.
If a digital store ceases operations, you retain the rights to the bits. Maybe you have to do some work to get access to them, but you’re perfectly within your rights to torrent those bits.
Heck, maybe you could link the rights management to distribution. Have a right to the bits? Please seed.
I disagree, I thought the article highlighted the differences beautifully. I'm on a professionally color calibrated 27" monitor that came with one of those color calibration "certificates" at the time of purchase. The second I loaded the article, the differences were just stark. The skin tones alone were a dead giveaway.
It is no secret that Apple does a lot of post-processing on their mediocre photos to make them look good - more so than most Android makers - because it's all software. But, from the article, it is understood that the author is trying to point out that Apple could have done a better job of representing skin tones more accurately, at least. The fish-eye defense for Apple is totally understandable, but why are we defending the weak skin tones? Every year they keep launching and making grandiose claims like "this is the best smartphone camera out there".
And no, this is not a limitation of smartphone sensors. In fact, if you look at the latest Xperia series from Sony, they have the same software from their DSLRs translated to the smartphones, and it addresses skin tones perfectly well.
I hope we can skip past the biases and personal preferences we have towards Apple and treat them neutrally like any other manufacturer. This "Apple can do no wrong" narrative and attacking anyone who points out their flaws is just tired and boring at this point.
In the old days, Apple used to somewhat pride themselves on taking more "realistic" photos, while Android had it the other way around, doing a lot of post-processing and color tweaking, mostly aimed at social media like Instagram.
And then came the iPhone X. They started changing the colour of the sky and sharpening a lot of things, to the point that a lot of photos taken by my camera looked great but also looked fake.
Did the iOS/Android situation actually swap, or was the X an outlier? I have photos from a recent event taken entirely with phones, and the result mirrors my experience for the past many years.
iPhone (11-15 including Pro Max) photos look "normal". Very, very similar to what my eyes saw in terms of colors. Photos taken with Android phones (Pixel 9 Pro XL, recent Oppo or Samsung A series, etc.) look terribly unnatural. The blue of the sky, the green of the plants, the red of the dress, they look "enhanced" and unnatural, nothing like what my eyes saw. I can tell apart almost any iPhone vs. Android picture just by looking at the colors on the same display.
The resolution or sharpness are harder to judge with one look and I wasn't trying to compare quality. But the colors are too obvious to miss.
> And then came the iPhone X. They started changing the colour of the sky and sharpening a lot of things, to the point that a lot of photos taken by my camera looked great but also looked fake.
The phone processing is largely shaped by social media culture. Camera makers also started to incorporate in-camera editing features on vlogger-targeted models.
The G100 was introduced in 2020 and is a mirrorless camera with significantly higher processing power. D-Lighting and other similar post-processing features came at least half a decade before.
Modern cameras like the Nikon Z6 III can also do similar processing in camera during shoots, to reduce the post-processing load afterwards and accelerate the production pipeline.
Faces look different on different camera phones. I don't think this will stop. I think it will only get worse, and everything will be fake, like what Samsung did with Moon photos: they just outright substitute a stock photo, since they assume the Moon looks the same everywhere humans will use their phone.
It's really the iPhone 11 where things got crazy with Deep Fusion. The iPhone X actually has the most detailed screen. When I look at the same photos, somehow the X has better color and detail than my iPhone 14 Max (I have every iPhone up to the 14). If you look at the raw images you will be blown away by how little has changed since even the iPhone 4!
But at the end of the day it's about the photographer more than the equipment. Just ask chefs: 20 students, same kitchen, same ingredients, same recipe, 20 different-tasting dishes.
What really drove that home was when my art teacher took a shot of me with my camera and, holy shit, it was one of the best pics I've seen. He just has the eye, at the end of the day.
My hunch is that you'll find more fans of Apple's color profile than detractors. This particular shot may have done it badly (to your eyes, some people prefer the more saturated look) but as a whole I have my doubts.
Color profiles vary per body at the least and are variable based on what post processing you do. I can load up Adobe Vivid and it'll look completely different than Adobe Portrait.
Shoot a Canon, Sony, and Fuji in JPG on the same scene (so same focal length and DOF) and auto white balance. Each body will output a different image.
I get the point… but I would counterargue, perhaps facetiously, that if one needs a professionally color calibrated screen to notice the difference, then it is really not something that would matter for mere mortals.
Without hyperbole I could give people a badly calibrated CRT from the 90s and it doesn’t matter to some. Some people just don’t see anything wrong with pictures, and don’t even know what to look for or what it’s called.
The inverse is the professional photographers who work with pictures day in and day out; they see everything.
I am amazed at the eye professional photographers have. A shot of a building that is suddenly really interesting, versus my shot of that building. Colour. Angle. etc.
I just don't have the eye for it, despite having a decent amateur setup.
BUT, yes, lots of people might look at a random photo on their phone and not notice skin tones, the fisheye, etc. If you then give them a pile of 10 photos from a pro versus 10 from an amateur's phone, they'll notice. Particularly if they're blown up a bit on a print or a decent screen.
It might not matter if you are just flicking through 20 shots on your phone, but as the article implies, we have a perception of these things, even when it is subconscious.
> I just don't have the eye for it, despite having a decent amateur setup.
Check out this book by the late† Bryan Peterson, where he shows photos taken by his students as well as his own of the same location, and explains the differences in techniques/settings:
Way back when, I bought a couple of his books, probably on the recommendation of someone or other in an online photography forum - 'Understanding Exposure' and 'Learning to see creatively'. The latter in particular was wonderful for someone who had the technical aspects of photography more or less sorted, but was - ahem - deficient in the artistic department.
Anyway, I felt his style was incredible - down-to-earth, but not afraid to go into a bit of background if needed - so I sent off a brief letter of thanks through his publisher.
Lo and behold, I got a very nice letter back, thanking me for the kind words and encouraging me (I had mentioned that I shot both film and digital, seeing as at the time a wonderful film camera like the F5 could be had for a fraction of what even an entry-level APS-C DSLR cost) to experiment A LOT using the DSLR, as the instant feedback it provided would help my analog hit rate progress by leaps and bounds.
I was already thinking a bit along those lines, but became a lot more conscious about trying to improve my skills using the DSLR upon his encouragement - and my photos improved a lot over the following years as a result.
I mean we are used to completely different colour profiles in reality also. You don't perceive the colors compared to the room light, you perceive them relative to the other colors on the screen, so the screen doesn't really matter.
Do those photos look similar to you? Those color differences are huge to me. And some of the stylistic choices the image processing has made make them look like photos of different people.
I don't think the point was to say you need the calibrated monitor to notice, rather that it's _even more stark_, and clearly points to the issues raised in the article.
And to be fair, the thrust of the article was "Why don't you see printed and framed iPhone photos?", and these things that might be a bit subtle on an uncalibrated screen are going to be a big deal if you professionally print them.
You’re somehow both reading far too much into my comment (none of my comment is specific to Apple) and not reading my comment enough (because you missed the point about color profiles).
I’m not defending the default color choices, I’m saying that they’re comparing apples to oranges because they’re comparing an output designed to be opinionated with one that’s designed to be processed after the fact. The iPhone is perfectly capable of outputting neutral images and raw files.
You don’t need to shoot RAW to have neutral images. In-camera JPEGs on most cameras still default to being as neutral as possible unless you opt for a different picture style.
This is the opposite default to phones where the defaults are to be punchier, but where you can still select a neutral profile.
The argument is basically comparing defaults and claiming it’s endemic to the format.
Even if one is using in-camera JPEG and does not want to spend 1hr/picture in Darktable, they can still play with many more things: lenses, exposure, shutter speed, physical zoom, aperture, etc.
I'd even go the other way around: if you just bought a camera, just use in-camera JPEGs for the first months and familiarize yourself with all the rest (positioning, framing, understanding your camera settings, etc.) before jumping into digital development.
Photography for me is about the physical and optical side of things. Choosing a lens for a situation, framing the shot, aperture, shutter, etc.
When I switched to digital I was seduced by post-processing, partly as a substitute for the look I could achieve with different films, but mostly I suspect because all those sliders and numbers and graphs are naturally attractive to a certain type of person :)
I eventually pretty much stopped taking photos.
Changing my workflow from post-processing RAW photos (and barely ever looking at them again) to using in-camera JPEGs that I can immediately share, print, or whatever was enough to get me taking photos again regularly as a hobby.
More unexpectedly, in addition to the obvious time saving of removing the post processing step (aside from occasional cropping), the satisfaction benefit of the immediacy with which I can now print, share, display, etc. my favourite photos has been huge. It’s so much more rewarding getting photos right after you took them and actually doing something with them!
Now I’m not even sure I’d call all that digital image processing “photography”. Sure, it’s an art in its own right, and one some photographers enjoy, but the essence of photography lies somewhere else. I’d encourage everyone to try a camera with decent in-camera JPEG production. You can always shoot RAW+JPEG if you’re scared to go full cold turkey.
When I pick up a camera, my intent is one of two things: the experience of photography itself, or the best quality I can reasonably obtain. Neither of those goals is attained with a smartphone.
Every other time I take a photo, it's with a smartphone. It's easily good enough for the vast majority of use cases.
> Even if one is using in-camera JPEG and does not want to spend 1hr/picture in Darktable,
That's... absurd. Granted, I lean toward a more "street photography" style, but it's exceptionally rare that I spend more than ~30s on a photo in Lightroom. Most of that time is spent cropping. White balance, exposure correction, etc. are all done in bulk.
> they can still play with many more things: lenses, exposure, shutter speed, physical zoom, aperture, etc.
Sure - and why wouldn't you want to play with RAW as well? It's not like the profile the camera would have used isn't embedded in the RAW anyhow.
> I'd even go the other way around: if you just bought a camera, just use in-camera JPEGs for the first months and familiarize yourself with all the rest (positioning, framing, understanding your camera settings, etc.) before jumping into digital development.
I don't disagree with this at all. Of course there are edge cases; that's why I said "probably".
To put it another way: if you're shooting JPEGs regularly, you're almost certainly not doing it for the craft. There are very few reasons I can think of to choose a traditional camera if you're not going to take advantage of the improvements in ISO and dynamic range that it offers - and those are two things you give up[0] shooting JPEG.
0: You give up ISO in that you are discarding much of the information that you could use to push/pull process, which is very often preferable to very high ISO.
ETA: I just looked it up. In 2024, I kept 767 photos from my iPhone and 1,900 from my cameras. That includes multiple performances of my wife's dance studio, so the latter is heavily skewed by that. Excluding those, I kept 376. In other words, I appear to be taking my own advice here.
>and those are two things you give up[0] shooting JPEG
No you don't? Good in-camera JPEG engines will utilise push-pull processing and expose for maximal dynamic range for you. You don't lose the advantages of the better optics and sensor just because the JPEG is produced in camera.
How would the camera know if you're exposing two stops below your intended EV because you plan to push it in post or if that _is_ your intended EV?
Furthermore, JPEG supports ~8 stops of dynamic range while my X-Pro3's raw files support ~14 stops. You lose almost half your total DR when you shoot JPEG (with that camera).
Because some will choose the exposure and decide when to underexpose and push for you, e.g. Fuji's DR feature. You choose your intended EV for the image, and it chooses whether to underexpose and push based on the dynamic range of the scene.
>You lose almost half your total DR when you shoot JPEG
No because the camera is applying a tone curve that compresses that DR when producing the JPEG. You lose precision, not DR, but if you don't intend to process the image further it doesn't matter much.
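To make the "precision, not DR" point concrete, here is a toy Elixir sketch - not any camera's actual pipeline, and the numbers are made up purely for illustration - of a log-style tone curve squeezing a ~14-stop linear scene range into 8-bit JPEG codes, so deep shadows still land on distinct output values instead of clipping to black:

```elixir
# Toy illustration only: map linear scene luminance spanning ~14 stops
# below the clipping point onto 8-bit output with a simple log2 tone curve.
defmodule ToneCurveSketch do
  @stops 14

  # linear scene value in (0.0, 1.0], where 1.0 is the clipping point
  def to_jpeg_code(linear) when linear > 0 do
    ev = :math.log2(linear)                              # 0 at clipping, negative below
    normalized = min(max((ev + @stops) / @stops, 0.0), 1.0)
    round(normalized * 255)
  end

  def to_jpeg_code(_), do: 0
end

# Two shadow values one stop apart (~12 and ~11 stops down) still map to
# distinct 8-bit codes rather than both crushing to 0:
IO.inspect(ToneCurveSketch.to_jpeg_code(1 / 4096))  # => 36
IO.inspect(ToneCurveSketch.to_jpeg_code(1 / 2048))  # => 55
```

Real in-camera engines use far more sophisticated curves, but the idea is the same: the wide scene range is remapped into the 8-bit container rather than truncated, at the cost of per-step precision.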
All that you said is perfectly valid for your use case. But you can't just make your use case a generality.
Some people have a camera because they want to take better pictures than their smartphone but don't want to bother with post-processing, some have tried manual processing and found that the work/result balance was not doing it for them, some think that JPEGs look perfectly fine, some just don't have the time to do the processing... there are myriad reasons for which people would like to land somewhere between “let iOS do it” and “I systematically choose my ISO according to this Darktable script I've developed over the last few years”.
Cellphones absolutely can produce high quality results. Especially if you add the constraint "best quality I can reasonably obtain" as many consider carrying a dedicated camera all the time to not be reasonable. And this was the case even before the advent of the smartphone. How many people did you see carrying a camera in 1980, or 1990, or 2000? Almost zero.
It very well might be, and it shouldn’t be easy to find good training pictures in the sea of amateur shots online, with different development techniques and light conditions; the last batch was also produced in 2005. Some previous generation of that film line might have similar skin tones.
I would think that with modern techniques it isn’t necessary to apply the emulation to the whole image but to just the parts desired?
You and most Apple neggers are not really any better, because you ignore what all this comes down to: choices and trade-offs. Apple’s primary objective is clearly taking photos that have a positive impact on the user while being as easy to take as feasible, not accuracy of the image. They likely care more about satisfying the greatest number of users, not accurate reproduction of an image.
I would not be surprised if they don’t actually want accuracy in imaging at all; they want a positive impact on the user, and most people don’t want reality. If that means causing “hotdog skin” under some conditions or with some skin tones, or maybe even if most users prefer “hotdog skin”, while having an overall positive photo outcome for most other users, they will likely always choose to produce “hotdog skin”. They are also serving a far greater and, frankly, an increasingly less light-skinned audience than most understand. Maybe it’s just an effect of “whites” having given away their control over things as ever more “non-whites” become increasingly important and an ever-increasing number and percentage of Apple’s users. Do Asians and Africans get “hotdog skin”? I don’t know the answer to that.
It is the narrow minded perspective of DSLR purist types that this stuff bothers, largely because they cannot look beyond the rim of their plate. Some platforms are for accuracy, others for impact and user experience.
People should maybe stop saying things like “this Apple is an absolutely horrible, awful, no good orange!”
Whenever Yahoo! comes into discussion, no one talks about the other side of Marissa Mayer, the former CEO who fucked up Yahoo so badly. I had a friend back then who used to tell me how toxic the work culture had become there. She apparently openly hated men and discriminated against them in the name of empowerment (and lawsuits were subsequently even filed). She was basically a glorified senior manager from Google who had no idea how to run a company. The Tumblr acquisition was proof. Most of the employees had lost faith in her after that acquisition. She, on the other hand, took millions home while everyone who had invested their sweat equity and trusted her even one bit got screwed royally.
I don't know the finer details of that, but weren't they already travelling towards obscurity by the time she arrived?
Yahoo was a de facto entry point into the wider web, and as Google gained popularity I think people's growing web savviness meant they had fewer reasons to land on Yahoo. Having a search deal with Google at least guaranteed their relevance for a while.
Cut the grandiose talk. You stole someone's work and now you just shrug it off as "incorrectly attributed as Apache". That's not a mistake, that's a deliberate action plan. The force push others have mentioned is the proof. At least be honest in your apology.
I hope YC takes serious action and eliminates you guys from their cohort if you're still in one. This reflects very poorly on them otherwise.
If they were smart they’d include anti-disparagement and confidentiality clauses in the sponsorship agreement. They aren’t, though, so maybe it’s just a pathetic attempt at bribery.
The point I found myself resonating with a lot is:
>Apple’s App Store policies disproportionately favor the surveillance capitalism business model employed by companies like Meta and Google and therefore entrench an online business model that routinely violates consumers’ personal privacy.
I'm building (and have been for the last few years) an open source, high-performance WordPress alternative in Elixir. It aims to achieve 1:1 feature parity. One thing that WordPress has built up over the years that will take a little longer for me is the plugin ecosystem. But other than that, I think everything else should be on par. If you're an enterprise, you should easily save over 30-40% in server costs just by switching from WordPress. This has been tested and proven with one of our enterprise clients, who just recorded 500 million requests on a fork of the CMS.
But I'm determined to see its completion even if there is just one user. I didn't take the WordPress fiasco, and how they handled it, lightly at all, and it only fueled my motivation even more. ETA is the end of this year, right on time for Christmas.
I don't know if WordPress has any kind of customizability or scripting, but it's now possible to add Lua scripting, natively, to an Elixir application. If that's handy, it's something to consider.
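For instance, here is a rough sketch of evaluating a Lua snippet from Elixir via the Erlang `luerl` library (which the Elixir Lua wrappers build on). Treat the exact return shapes as assumptions to verify against the version you install:

```elixir
# Rough sketch, not verified against a specific luerl release:
# evaluate a Lua chunk from Elixir using the Erlang :luerl library.
state = :luerl.init()

case :luerl.eval("return 40 + 2", state) do
  # Lua numbers typically come back as floats, e.g. [42.0]
  {:ok, result} -> IO.inspect(result)
  # error/return shapes differ between luerl versions
  other -> IO.inspect({:unexpected, other})
end
```

Because the Lua state is just Erlang data, this kind of sandboxed scripting could in principle back a plugin system without NIFs or external processes.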
Obligatory note - if you're a backend developer, you do not need MCP. MCP is just tool/function calling, which has been around for a lot longer. MCP is only needed if you need to integrate with frontend chat applications. If you need to automate something with LLMs, you do not need MCP. Right now, it's just the new cool buzzword to throw around.
You don't need to use RESTful JSON to get two computers to communicate with each other, either. You can just implement your own thing, and if someone wants to interface with it, they can write their own adapter! Who needs decades of battle-tested tooling? This can be a Not Invented Here safe space. Why not.
You think you're being sarcastic, but you don't get the point - implementing 3rd-party SaaS tools in your SaaS backend means one MCP server per service, which can quickly add up, not to mention it is bad design. I'm not opposed to the MCP protocol itself; it's just not needed if you're providing a SaaS that talks to many vendors.
AI coding tools have been improving/updating like crazy over the past six months.
Honest question: what are some of the AI Dev tools (I prefer command line) that have leapt ahead recently, with good tool/function calling? Do you (or others) have a clear preference for Claude Code vs aider vs some other alternative? Or is the meta shifting toward the orchestrators like Taskmaster and Puppeteer?
I meant to say that MCP is just a wrapper around good old function/tool calling; it's not a new superpower by itself. So if you're designing a SaaS, you don't need to use MCP yourself, you can just use good old function/tool calling.
To answer your specific queries, I use the autocomplete in VS Code and I chat directly with ChatGPT (o3) for advanced problems, because my background is in Elixir and most of the hyped-up models fail badly with Elixir. I'm a huge fan of o3 as it can solve the most complex problems I throw at it.
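To make the earlier point concrete: plain tool calling is just a JSON schema in the request and a structured response back, with no MCP server involved. Here is a rough Elixir sketch assuming an OpenAI-style chat completions endpoint and the Req HTTP client; the tool name and its fields are hypothetical:

```elixir
# Illustrative sketch of plain function/tool calling against an
# OpenAI-style chat completions endpoint - no MCP server involved.
# The tool definition and model name are assumptions for the example.
tools = [
  %{
    type: "function",
    function: %{
      name: "get_invoice_total",            # hypothetical backend function
      description: "Return the total for an invoice ID",
      parameters: %{
        type: "object",
        properties: %{invoice_id: %{type: "string"}},
        required: ["invoice_id"]
      }
    }
  }
]

body = %{
  model: "gpt-4o-mini",
  messages: [%{role: "user", content: "How much is invoice INV-42?"}],
  tools: tools
}

resp =
  Req.post!("https://api.openai.com/v1/chat/completions",
    json: body,
    auth: {:bearer, System.fetch_env!("OPENAI_API_KEY")}
  )

# If the model decides to call the tool, the response carries the function
# name and JSON-encoded arguments; dispatch to your own code from there.
IO.inspect(get_in(resp.body, ["choices", Access.at(0), "message", "tool_calls"]))
```

The point is that your backend dispatches the call itself; MCP only becomes interesting when an external chat client needs to discover and invoke your tools over a shared protocol.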
Yes, but if I'm implementing "good old function/tool calling" today, why would I not implement it as MCP? I suppose it complicates things? MCP, in addition to being tooling endpoints, would also open up my software to a rapidly growing ecosystem that requires minimal additional work.
AND I can still use those same endpoints as tools. What would be the argument for "you don't need MCP when implementing new stuff"?
I think you're confusing implementing MCP for your own software offering/SaaS, which is different and which I think you should do, with implementing third-party MCP servers in a SaaS backend, which is what I'm talking about.
Because, to do the latter, the standard design is one hosted MCP server per vendor. If you use even 5, that's a lot of servers to maintain in a SaaS context.
This is a really nifty CLI tool that lets you fit an LLM into a shell environment. Check out examples like piping context into and out of the `llm` command.
Who cares, it's Adobe. They'll eventually steal your work under some under-the-hood TOS update and even charge you for it. Don't believe me? Ask the existing CS customers. Fuck Adobe. They should be boycotted to oblivion.
>As between you and Adobe, you (as a Business User or a Personal User, as applicable) retain all rights and ownership of your Content. We do not claim any ownership rights to your Content.