I don’t think you know anything about the ad business other than being an end user. “They replace worse and spammy ads”? What does that even mean? Your post is spam.
I know the direct impact is on Intel, but I think AMD just can’t catch a break. They finally have something going against Intel, and now Apple and Microsoft will eat its lunch.
> Microsoft’s efforts are more likely to result in a server chip than one for its Surface devices
In my opinion, the majority of people have too much faith in their ability to predict the future.
I was terrible at predicting the real world performance of the Apple Silicon M1. It is, in fact, much better than I expected. On the other hand, Microsoft has thus far only had slightly modified AMD chips in their Surface Laptop, and poor performing ARM-designed Qualcomm chips in their Surface Pro X. Maybe I'll be bad at predicting the future, but I do not expect excellent performance out of Microsoft's Surface chips in the next 365 days. Probably longer.
In the meantime, more Windows computers will be sold than macOS ones, and they will mostly have Intel chips, though an increasingly large number will have AMD chips.
AMD has survived with less diverse revenue streams and much worse product portfolios. I'm optimistic for how they'll do over the next several years.
> I was terrible at predicting the real world performance of the Apple Silicon M1. It is, in fact, much better than I expected.
I expected what came out in the end. Apple would never have put up that amount of money and promised to ditch Intel if they were not absolutely certain that they could actually beat Intel performance-wise and have a working Rosetta to keep "old" software running.
The writing was on the wall for a long time: iPhone and iPad processors have been taking decent shots at moderately powerful PC hardware for years now. The key reason Apple didn't do it two years ago was software support and developer tooling; they wanted to avoid repeating the Windows RT fiasco, which fizzled out because no one had working software and there was no translation layer.
Not sure why you're getting voted down - this is an excellent point.
It's also the case that now that Apple has shown what can be done, more firms will be looking at how they can reproduce it, which will spur more investment in the Arm ecosystem.
Anybody remember the Zune? How different is Microsoft nowadays to make this actually work?
Most of their hardware products (not all... but most) end up being good, but not earth-shatteringly so, and either keep kind of moving on without taking any crowns (like Xbox), or slowly wither and die (Zune, Windows Phone).
Surface seems to be a good Halo product (not the game, ha!) so far, but I see very few Surfaces in the wild compared to any other laptop/tablet/desktops (mostly HP, some Dell, etc.).
Be careful with anecdotes! Even your own experiences! The world is a very big place. Across the few different jobs I've worked over the past, say, five years, I've seen lots of places that were Dell from door to door, a mix of Macbooks and Surfaces, all Macbooks except a few Surfaces as personal machines/toys, etc. At some places they are popular, and I think when you see them all the time you're more likely to buy one yourself; when no one you know has one, you're less likely to get one. So it's helpful to dig up sales statistics if you can find them...
Perhaps the closest thing to a Zune that Microsoft makes today is the Surface Duo. It's very expensive with nice hardware, but the initial launch got some bad press from mediocre software. The software has likely gotten a lot better over time, so it will be interesting to see a version 2 here. But no one will be buying a $1400 Zune :)
Microsoft is terrible at marketing anything to the common consumer, and they always have been. That Windows dominates the consumer desktop is a side effect.
Microsoft will always make any consumer effort second tier to its core business of serving businesses.
I'm pretty inclined to think any custom CPUs will be for Azure specifically. They already have some custom hardware (e.g. FPGAs, ASICs) for different parts of the stack like storage and networking.
I really liked the Zune. Also, it had a streaming service before Spotify was a thing, and you got to keep 10 songs per month permanently even if you canceled your subscription. I think it was much better than it got credit for.
After M1, I am definitely holding out for Mac Pro rather than going the AMD route. I am sure there will be a ton of professionals thinking the same. While I’ve always cheered for AMD as a company, I think Apple will gain substantial market share in the next decade. And AMD just doesn’t have the resources to fight that wave.
They aren't going to have their lunch eaten. Apple doesn't have nearly the volume to make a meaningful impact, and neither does Microsoft. Neither of them makes up a substantial segment of CPU sales.
It does put healthy pressure on them, but I think AMD is fine for the near future; they're working on ARM chips as well.
As a CS major, I can honestly say the ONLY thing of value was a couple of algorithm classes I took. And even then it was from a text I could have read on my own. Unless you’re what I call one-percent talent, CS theory is not going to help you much in your career.
That's a broad statement I don't agree with. Different personality types take wholly different things from the same content. I personally feel (and this may sound hubristic, but it isn't) that the CompSci background raised me to a whole new echelon of depth of meta-knowledge and ability to solve large problems.
That really depends. Where having CS knowledge matters is both low-level systems and libraries programming (working on the Linux kernel, embedded systems, building platforms or SDK stuff like AWS, etc.) and really big scaling, like at the FAANGs. Most people in this industry don't work on such things, so having CS knowledge probably does matter less to them.
Isn’t it really that the semi industry is unwilling to invest because they don’t want to get burned just a couple of years down the road? I just don’t see this demand lasting.
I hate to break this to you but front running happens all the time in finance. In fact that’s the entire business model for bulge bracket trading desks. I know it’s easy to point fingers at RH but what they’re doing is very benign.
I believe this is to punish RH for offering zero commissions. Especially on options. I seriously think what RH offers, for anyone who’s traded options on the retail side, is a game changer.
Sometimes what's good for one person isn't good for everyone.
For example, if a bank starts giving out 0% interest 100-year mortgages (with some fine print that gives it some obscure revenue source that makes such an appealing offer possible), the bank should still get in trouble, in my opinion, if the consumer isn't informed about how they're getting such an amazing deal, because it removes the consumer's ability to make an informed decision.
I believe for this example to be correct, the bank would also be hiding in its documentation that you have to pay an annual $100,000 mortgage paperwork fee for this 0% loan.
The article states that RH cost its consumers more in bad trade execution than it saved them in commissions. How are you sure this didn't happen to you?
I trade on multiple platforms and I put in limit orders (RH doesn’t even allow market orders, afaik). To be fair, I am probably not your typical trader, since I’ve been a fintech developer in the past and have been involved in analyzing many SEC cases for/against major financial institutions. I will say this: it’s extremely hard to prove execution quality on options trading, let alone doing it in 2020 when the market was so volatile.
I have sympathy for that sentiment, but in practice that train has long left the station.
My own comment wasn't a plea for government to interfere, btw. Just a cynical admission that active trading is bad for the vast majority of people.
(My own money is sitting in the most boring ETF I could find, but loaded with about 3x leverage. Given the regulatory requirements of the industry I work in, I wouldn't be allowed to trade actively anyway, but I don't feel like I'm missing out.)
They call themselves "libertarians" while being, in fact, quite authoritarian. Libertarians support criminal justice reform, including reducing the power of police.
Libertarianism has changed a great deal in America over the past 20 years. It’s not what it used to be and has been usurped by the right wing and corporate types.
And Github is missing? How can I take this list seriously? There are a lot of golden nuggets when you dig through github. You also get immediate credibility checks.
I am going to suggest Eizo (and similar high-end) monitors. High-quality, uniform monitors do wonders for prolonged use. You might also want to use greyscale-only on alternate days if your work can accommodate it. Last but most important, train yourself to blink 2x more often.
As a hobby photographer, this is simply amazing and the most intuitive article I’ve come across. This is a must read.
I am curious, however, why we still can’t digitally reproduce bokeh. Apple is getting close. I thought LiDAR would theoretically solve that and could yield renders indistinguishable from those of analog lenses. That would be a game changer in my view, and it’s why I would like to see Apple develop a full-frame sensor coupled with their technology.
A large lens captures information over an area, and so to a certain extent can "see around" out of focus objects. A selective blur of a fully focused scene captured from a single viewpoint (i.e. a small lens) can only approximate this effect, because it simply doesn't have access to the same information. Even with a perfect depth map, you still don't know what's behind occluded objects.
If instead of resolving points of light on the image sensor, you use a group of pixels to resolve an entire tiny image, you can effectively also see around things. You end up with a picture of many small sections of the large image, each at a different angle. The image on the far left would see a different angle than the image on the far right. This is exactly what the Lytro camera did, and it's why you can take the picture first and focus later. Of course you sacrifice overall image resolution quite severely:
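For anyone curious how the "focus later" part works in practice, here's a rough sketch of the shift-and-sum refocusing trick used with light fields of this kind: each sub-aperture view gets shifted in proportion to its angular position and the results are averaged, so the shift factor picks which depth plane lands in focus. The 4D array layout and the toy data are my own assumptions for illustration, not the Lytro's actual format.

    # Minimal shift-and-sum refocusing sketch for a 4D light field laid out as
    # (views_v, views_u, height, width); layout and test data are assumed.
    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def refocus(light_field, alpha):
        """Average the sub-aperture views, shifting each one in proportion
        to its angular position; alpha selects the in-focus depth plane."""
        n_v, n_u, h, w = light_field.shape
        cv, cu = (n_v - 1) / 2.0, (n_u - 1) / 2.0
        out = np.zeros((h, w))
        for v in range(n_v):
            for u in range(n_u):
                dy, dx = alpha * (v - cv), alpha * (u - cu)
                out += nd_shift(light_field[v, u], (dy, dx), order=1, mode="nearest")
        return out / (n_v * n_u)

    # Toy usage: a 5x5 grid of sub-aperture views of random "scene" data.
    lf = np.random.rand(5, 5, 64, 64)
    near_focus = refocus(lf, alpha=1.5)
    far_focus = refocus(lf, alpha=-1.5)

Because each view sees the scene from a slightly different angle, the averaged result also blends in light that a single small lens would never have captured behind occluders, which is what "seeing around" things means here.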
So do I! If I had a bit more photography budget I'd try them out.
Also, I'm excited about cameras with dual/quad pixel AF, that are kind of a hybrid between lightfield and traditional cameras. I wonder what kind of sorcery one would be able to do with the light field data in those cameras!
One of the limiting factors for modeling bokeh and flare-like effects is dynamic range limitation. You need extreme HDR capturing to accurately reproduce these effects, as they often play the largest part with bright, especially colored, light sources. I did work on flare simulation and while many effects can be modeled by a rather simple convolution (in the spectral space of course -- you cannot make a rainbow out of RGB straightforwardly), the problem is that kernels (PSFs, point spread functions) for these convolutions have very long tails and it's the shape of these tails that gives most of the 'natural' artistic feel.
The thing is, these tails become apparent only when you convolve with a very very bright source -- which on a typical 12-bit level linear raw image would amount to something like 10⁵-10⁶, i.e. needing 4-8 additional bits of HDR.
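If it helps make that concrete, here's a tiny numerical illustration of the clipping problem; the power-law PSF tail and the 1e5 source brightness are just numbers I picked for the demo, not values from any real lens:

    # Convolve a scene containing one very bright point source with a
    # long-tailed PSF, once with the true HDR values and once after
    # clipping to a 12-bit range; PSF shape and brightness are assumed.
    import numpy as np
    from scipy.signal import fftconvolve

    size = 257
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    r = np.hypot(x, y)

    # Long-tailed PSF: a bright core plus a slowly decaying halo.
    psf = np.exp(-(r / 2.0) ** 2) + 1e-4 / (1.0 + r) ** 2
    psf /= psf.sum()

    scene_hdr = np.zeros((size, size))
    scene_hdr[size // 2, size // 2] = 1e5             # bright light, linear units
    scene_clip = np.clip(scene_hdr, 0, 2 ** 12 - 1)   # what a clipped 12-bit raw keeps

    flare_hdr = fftconvolve(scene_hdr, psf, mode="same")
    flare_clip = fftconvolve(scene_clip, psf, mode="same")

    # 100 px from the source, the halo differs by roughly the clipping factor
    # (1e5 / 4095, about 24x); that lost tail is the 'natural' look at stake.
    print(flare_hdr[size // 2, size // 2 + 100],
          flare_clip[size // 2, size // 2 + 100])

With the clipped input the simulated halo is dimmer by exactly the factor the sensor threw away, which is why the extra HDR bits matter so much more than the choice of convolution kernel.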
Here are some useful links on the topic of flare simulation; I believe bokeh has many similar aspects:
Here's another interesting paper [1] on that topic. It shows that synthetically blurred images are significantly more realistic if they're based on recovered radiance maps (HDR).
Phone cameras are extremely wide-angle, so everything is in focus and there is no natural bokeh. To add bokeh, you have to separate the subject from the background and then also determine how far away different parts of the background are. This requires very advanced AI for non-trivial images (see the imperfections in Photoshop's "select subject" tool), which is what Apple is actually doing (that's what portrait mode is). But if it's not perfect, it quickly becomes worthless. So in short: they are doing it, but only the most advanced companies can even try.
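To make the "selective blur from a depth map" idea concrete, here's a deliberately naive sketch (nothing like Apple's actual portrait pipeline): it just blurs each pixel more the further its depth is from the focus plane, which is exactly the approximation that falls apart at occlusion edges.

    # Toy depth-based selective blur; image/depth shapes and the banded
    # Gaussian approximation are assumptions for illustration only.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fake_portrait(image, depth, focus_depth, strength=8.0, bands=6):
        """Blur each pixel more the farther its depth is from focus_depth.
        image: HxWx3 float array, depth: HxW array in arbitrary units."""
        blur = strength * np.abs(depth - focus_depth) / (np.ptp(depth) + 1e-6)
        edges = np.linspace(blur.min(), blur.max(), bands + 1)
        out = np.zeros_like(image)
        for i in range(bands):
            mask = (blur >= edges[i]) & (blur <= edges[i + 1])
            sigma = 0.5 * (edges[i] + edges[i + 1])
            blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
            out[mask] = blurred[mask]
        return out

    # Toy usage: random data standing in for a photo plus a left-to-right depth ramp.
    img = np.random.rand(120, 160, 3)
    depth = np.tile(np.linspace(1.0, 5.0, 160), (120, 1))
    result = fake_portrait(img, depth, focus_depth=1.0)

Even with a perfect depth map, this still bleeds foreground pixels into the background blur along the subject's silhouette, which is why real implementations lean so heavily on segmentation and matting.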
Not sure, but since film, for example, captures actual photons from the scene, some kind of information is probably encoded through that.
Bokeh is a kind of space representation, similar to how you can basically "see" a sound stage, with instruments properly separated, when someone has a really good sound system, or how dogs have a "5.1/7.1" sense of smell.
Look into light field photography tech. It is possible to capture a ”volume of light”, within which bokeh & more can be adjusted after the fact. Issue is the amount of data generated and complexity of tech versus getting a ”good enough for most situations” image via simpler means (regular photo). Regular + depth images (Apple LiDAR etc) with help of AI can create something vaguely similar to actual bokeh, but they’re missing a lot of source data.
In the world of 3D rendering (content created from scratch) very advanced & realistic bokeh effects are possible, as an example see http://lentil.xyz for the Arnold renderer.
Wow. I left the CG industry 3 years ago in a sad bout of defeat, involving both an inability to make a decent living and a realization that it would never meet the standards of creative engagement that were set by my lifetime love of photography and film. But this project is very cool.