When the "R" goes missing from R&D (2021) (madned.substack.com)
218 points by kogir 7 months ago | 104 comments



Tangential, but it seems to me that as organizations grow, more and more resources are poured into everything other than what made them successful in the first place. Bureaucracy grows, hierarchies deepen, teams upon teams organize, things are envisioned and realized and KPI'd, volumes of messages and emails shift back and forth, endless hours are spent in meetings...

At the same time, productivity is reduced, actual communication diminished, gatekeepers slow everything and everyone down, fiefdoms form with their territorial turf wars, naked emperors run amok fanned by yes-men. On average three people out of a hundred are doing something actually useful, while the company slowly loses its grip on whatever niche monopoly has allowed it to so grotesquely exist thus far.

Everyone else is gradually PTSD'd into a corpo version of Homo Sovieticus, filling out time sheets and RTO attendance records while duly marching towards V17 in the most recent two-year plan, aligned with the corporate values writ large on the HR site's main banner.


Regression to the mean.

All companies start small. Only the absolute best survive, so there is tremendous survivorship bias for the remaining companies to be composed of fabulous people.

Naturally, when the company starts growing, it is insanely hard to keep on recruiting only such fabulous people. If nothing else, it would slow the growth, and as such it is usually not prioritized.

That’s when the regression to the mean starts. Then, you start needing more process and more bureaucracy so that the middling people have any chance of keeping up.

Unfortunately, there is no deus ex machina at the end of this path, like in TFA.


I do wonder if there is a U shape here - i.e. at some point the company gets so large that central control breaks down and innovation starts to happen in the cracks.

My experience of very large companies is that there is an official way stuff gets done, which is different from how stuff actually gets done - which is through peer networks.


I wonder how much of that is... human nature?

it's like... we go like gangbusters in our lives doing the things that positively need to be done.

Then when we get a moment to breathe... we do the things we want to do more than the things we need to do.

Things work better with a slight sense of urgency, but less than full-on burnout (although pushing right up to the edge of burnout is probably pretty ruthlessly efficient).


In my experience, this was likely entirely driven by one person, my guess would be two levels above the author in the org chart. It's sometimes frighteningly easy to convince business leaders that the dev teams are wasting a ton of time, doing the wrong thing, etc. It's even easier when that direction is coming from a consultant (might not be in this case, but I've seen it happen a few times).

Someone who was supposed to be advocating for their team (maybe the author's boss) wasn't, or was being out-advocated by others, and that led to breakdowns. As a manager, I keep a lot of KPIs and do a lot of postmortems (lean), because you need to be able to counter the gut feeling of "development should be faster."


I've worked at plenty of places where the dev team was wasting their time, doing the wrong thing. You know who we blamed?

Someone two, three levels above us in the org chart.


Did that have any effect? Usually people lower in the org chart have less power than those closer to the root, so I'd expect that complaint to be filed away and no-op'ed.


I think it’s different for developers. They know that for other people (even for their own management), they are just doing mystical, abstract "work". In my experience, a lot of developers learn to use this to convince themselves and everyone above them that they HAVE TO do this full refactoring.

imo, this all comes down to the fact that management is still not able to assess the quality (or even the real nature) of developers' work. Of course there are managers who are developers, but they themselves speak a very different language with the people above them, so it’s really easy for them to convince upper management that they (and their team) are doing the right thing.

I’m not blaming anyone here, we are just workers who are a little less abused than the others. But as a developer who is not really fond of tinkering with things (at work), I’ve seen things. This industry has never stopped promoting developers based on their LoC count, regardless of business efficiency.

In fact I’ve seen people become millionaires because they wrote extremely generic frameworks (akin to netcore, angular or bootstrap but "invented here") which then became the company’s SPOF once they left. Good for them, but I’m still amazed that upper management never realizes when they are paying someone for 4 or 5 full-time years because he didn’t like Bootstrap (won’t blame him) and convinced his management that we needed to develop our own competitor.

Oh and don’t read any jealousy in my comment, the only bitterness I have is against myself, because at the end of the day, if management doesn’t care about the real output, they deserve to be served a $500k brand new css framework.


Oh this is such a hard path to tread, and at the same time it's hard to know without hindsight what the right path is.

I come across lots of situations, with legitimately bright developers, who have replaced a standard subsystem with a home-grown one, usually for performance reasons.

In that moment in time, in that context, the general solution was less performant or had an ugly bug, or whatever, so they replaced it with something designed for the one specific context.

But then the context changed, and now here's this bastard child that's been narrowly designed and optimised for performance (and so is hard to maintain or change). And the developer is somewhere else.

But equally I've spent a career writing the "new standard solution", which of course started out as an alternative to "the standard solution". And in our context that -has- made a huge difference. So it would be hypocritical of me to say "stay inside the box".

If you do need to get outside the box, I would give some advice though.

A) documentation. Nobody likes doing this, but this is the most important part. If it's not documented properly, it doesn't exist. And just the act of documentation will show you where the design is deficient.

B) write your code as clean as you can. Make it easy to follow. Make it easy to read. Try not to be clever, and where you are clever, don't skimp on the comments. You'll thank yourself for this later.

C) as much as possible try and build for the general case, not the one in front of you. The more useful this code is, the more it'll be used. The more it's used, the easier it is to identify flaws.

In summary - stay on the main road. But if you do need to forge your own path, make sure it's done properly, not some single lane gravel dirt thing.


At the end of the day, I do agree with you. My point wasn’t against the developers who acted like this, or about whether they were right or wrong to do so. My point was that the knowledge barrier between developers ("R&D") and upper management is frequently so high that nobody ever questions those choices, because leaving aside the "story points" the damn thing is costing, nobody understands what it is about or whether it is necessary.

However, that’s just my personal experience, it’s possible that I’ve only seen the wrong kind of companies.


Sadly the wrong kind of companies are the majority. Which is to say, my experience mirrors yours.


For several of my bosses, the scenario you describe has fallen under the concept of 'kill your heroes'.

Sometimes the person really is brilliant and you should do anything to keep them. But if it's because they're the only one who understands their code, then they are a liability.


Is it really so difficult to train other people in the code?

Or is the hero deliberately sabotaging such efforts to maintain scarcity, and thus their higher value?


Catharsis, most of the time.

If you have a middle manager who's really, really good at managing upward, they can make things a little better.


Nothing in this article pertains to actual research - development has always included elements of design. Interesting article otherwise though.

I've been in an organisation that was actively winding down the research side of R&D. Lots of chemists and physicists let go, or at least not replaced. Projects that had gone nowhere for years canned; people with no output for years canned. More focus on product roadmaps. What's really weird is that every step seemed pretty reasonable, but the overall capability was much less in the end. It's really tricky.


The opposite hasn't really worked out in Detroit, and I don't think it worked particularly well for Boeing either, but I still think organizing R&D under one human is a huge part of the problem. Too much 'efficiency' in research and effectiveness drops to zero. Lack of urgency or excessive urgency (read: burnout) will lead to team after team running out of fucks to give, and one day someone will look up and see nothing has been accomplished with the last several millions of dollars.

I believe having different people running 'competing' groups that report to the board instead of a tall narrow org chart with clear choke points may not fix the problem entirely, but it would slow down the cycle. Worst case you disband one of the orgs and create a new group to replace it, run by someone from one of the more efficient divisions, or new blood from outside.


> people with no output for years canned

Often I see this happen, and the result is the company loses out somehow. I think maybe metrics for “output” are wrong in many cases, and you’ve just canned someone who had a useful or even critical role you didn’t know about. A lot of people who are important to company operations are invisible!


output doesn't capture accumulated know-how. If you fire researchers you loose know-how, hence you loose potential.


Output also doesn't capture 'enabling' functions: as a researcher I love to help others get to their goals, I love working on tricky problems and using my know-how to feel needed. I think I'm not alone in that.

Unfortunately that means that on paper, I have zero output while the people I help have all the output. Especially in computing/data science it's easy to fall into this trap; it takes a daily check-in to make sure that I'm working on the stuff that counts for me and my group.


You need to document everything you do.


lose*. It's not so hard, man.

Also, accumulated know-how can be made partially visible by writing out docs, patents, guides and the like. Definitely not enough, but it can increase visibility.


yeah lose. I don't know why I keep typing this wrong..


One of the challenges is that there are always more smart people outside the org than in - so the chances of the same company out innovating the world again and again is slim.

On the other hand, if your management decides to cut R and just buy in the innovation at the right time to develop it - well, if you don't have any internal R people trying to do the same sort of things, you will find it really difficult to make the right acquisitions at the right time.

So you need both - and treading that line is really tricky.

I also think that having people with an R mindset is important - people who are interested in pushing the envelope, doing things better etc.

Having people like that in the organisation means you are less likely to be blindsided by a shift in technology that could kill the company.


I know this story is about company scale RnD. It can also be applied at any level. Research lives on a gray scale. At its core is growing understanding of your area so that you can do things you didn't know how to do before. I've always gravitated to the hardest problems to be solved so that I can learn something and make something that no one else had the vision or perseverance to make. So almost all my jobs have been RnD though only a few formally.

The most fun I'd say I've had was recognizing something ineffective and making (software) tools for it. Now that I think about it, one of the first programs I made on my Atari 400 as a kid was Room, which let me move/rotate my to-scale bedroom furniture outlines around to see which layouts were possible and might be good, before actually moving the furniture.


Alternatively one can just use grid paper and some scissors. I bet you learned a lot writing that program as a kid though!


Sometimes a computer is just more accessible than paper.


I remember being laughed at when I cut coffee filters to a certain angle in order to plan how I would position motion-capture cameras to cover the room optimally. My manager liked the pragmatism though.


There are some incredible things being fabricated on YouTube with “cardboard-aided design.”


How so?


Now, where is that pad of paper? Bah, the pencil is broken — who took the sharpener??

My computer goes with me everywhere and is ready as soon as I open the lid. Unless I forgot to charge it.


The Atari 400 would be connected to a separate CRT monitor.


The entire industry of CAD


What does that have to do with implementing a 2D model of a room with movable furniture?

How is paper not accessible? Do people not have note paper? What about junk mail? The last few pages of a book? You can make a scale ruler with any uniform markings.


Actually using paper for designing objects, especially complicated objects, is an extremely annoying process. Lots of erasing causes the paper to tear, areas with internal complexity require sub-diagrams, often adding nested complexity to blueprints on large projects. It makes it so much harder to simply view and talk about the current state of a design when that means sorting through filing cabinets for revision B12 part A.3.1, only to find the paper has torn. Not to mention making physical carbon copies of things.

It's one thing to draw a rough sketch of a part, and have someone else do the creative effort of turning it into an object. But quite another to specify in exacting detail how that part should be, so that given to several manufactories you would get back parts that are even remotely interchangeable. In that way it's a lot like the difference between a specification for a computer program, and the actual program.

When designing parts I often use pencil and paper in an initial meeting with stakeholders, to quickly demonstrate ideas. But these sketches are not analogous to the real thing. They simply represent a direction the work could take.


We are talking about an application for moving furniture around a room, not designing mechanisms!


This doesn't necessarily apply to every situation, but drawing, cutting out, and positioning paper takes more manual dexterity (and potentially artistic ability) than moving things around on a computer.

Pieces of paper are also likely to shift if you need to move things around frequently.

In a small apartment, surface space could be limited.


And yet I still want nothing more than to see Solidworks burn in the deepest depths of Dante's Inferno.


Wait - isn't the 9th circle of Dante's Hell a frozen wasteland?


He meant the Lake of Fire which is reserved for heretics, non believers, pedophiles, and CAD software


Frozen wasteland? Solidworks would blend right in.


I can use Blender, but I can't draw or paint to save my life.


For making it really exact to the actual measurements (which presumably was the goal), grid paper isn't going to work, not without large sheets and drafting tools. An illustrative drawing can be done freehand and allowed to slop and distort things a bit, but for practical interior design you need to get down to centimeters of space.


I have a new one. A PM had determined that their workload is diminished if a project is killed. So they deliberately recommend that projects be terminated, and/or do things that increase the likelihood of termination.


Sounds like that could be a good PM taking into account their team's capacity and prioritizing.


Unless mildchalupa was talking about the PM's personal workload!


Even then, understanding your own limits is a good thing.


Research is one of those things that feels like “work” for me. My least favourite part of grad school. I just want to dive in and touch stuff and prototype. I find myself often jumping to the prototype phase as a way to justify skipping research. Maybe I’ll review a few related libraries and some blogs and such.

It’s definitely something I’d like to work on, while keeping the practicality of not being caught in research hell like some peers have been in the past. Their end products ended up late and no better than my third iteration of the same thing.

There’s a balance I’m still fighting to find.


It's interesting thinking about this. In my career, I don't think much of anything I've done resembles research. Just pumping out development tasks.

The one thing I can think of that was like research was really enjoyable.

I should think about how to get more of this in my career. Even making personal projects isn't exactly "research".


I was on a moonshot team in a previous role. Research is a lot of fun to get paid for, and it certainly doesn’t have to mean academic work (being a DS lends itself to a bit more of this than typical SWE). In my experience it’s big open problems that no one really expects you to solve, and rarely would there be any top-down direction on how to do so. And those problems aren’t always e.g. mathematical. It could be figuring out how a new product could enter a market, quantifying demand for some product, testing out a new algorithm, or doing a greenfield rebuild of something that exists but could only be meaningfully improved by starting over.

I think what is satisfying about this is the fact that your day-to-day is largely self-directed and open-ended. It’s not the type of thing that lends itself to backlogs and well-defined tickets, and typical productivity methodologies like agile/scrum tend to fall flat in teams like this for this reason. You just sort of dive deep on a problem, put together prototypes, figure out how to quantify their utility, and keep trying new things. There also tends to be less pressure on deadlines because of the lack of top-down direction.


This sounds right up my alley. Any suggestion on roles/titles/companies to keep an eye out for? I’ve been a SWE for 20+ years and have a background in mechanical engineering


Research scientist is one common role I’ve seen, but there are often supplemental engineering roles for these as well. Another way to find these is look for moonshot projects at any major company. Basically divisions that are outside of the core product and business operations. Some risk in these though, since they can be the first cut in a bad economy.


Thank you!


What does the acronym DS mean? Digital Systems?


Oh, my mistake for assuming familiarity! Data Science in this case.


In your average job, instead of “research” it’s really “discovery”, which is trying to decipher what some business guy at your company or a customer really wants.


If people are asking you to do something that doesn't make sense, there's still nothing to discover. There are only a finite number of holistic social needs, simply do all of them.


I've seen a surprisingly low rate of research conducted in 'R&D' roles through my admittedly short career. The research segment of any work has been limited to testing ideas that are highly likely to work; the bulk of the work is product or prototype development. The R&D technologists employed tend to act as rapid-response personnel for tasks not predicted by project managers or systems engineers.


"And that was exactly what had happened here. It wasn’t that people were deliberately trying to sabotage progress, they were showing up to work and doing their jobs as instructed. But nothing more."

In labor market conflict situations, isn't this what's called an Italian strike?


Work to rule!

Do work exactly as specified, not including all the little things needed to actually make work happen. All the glue work needed to make an organization function sometimes just doesn't happen!


So why would anyone do an actual strike? This sounds better and more convenient.


The point of a strike is often to make a big statement, say “fire me if you dare,” and show that your unit has cohesion and resources. You want something dramatic and noticeable. For example, if you want to kick contract negotiation out of stagnation, you want to do something that the business can’t ignore for a couple months.

Work-to-rule doesn’t really accomplish that.


It's more likely to kill the parent organization than enact change. This may not be a problem for the individuals in an organization if they have reasoned that

1. Personal Growth is limited, or further upward movement is undesirable.

2. They intend to be with the organization for a finite remaining time, or would welcome an early exit.

A proper strike can be differentiated from a lazy workforce; self-sabotaging work cannot be.


Work to rule can be differentiated from a lazy workforce if it’s done well.

Typically, work to rule is used to highlight specific bad rules, regulations, or enforcement practices at a company.

Say a company expects employees to do non-rule “glue” work to keep the company functioning. But, randomly and capriciously the company punishes workers for doing this “non-rule” work. A union can then announce that they will only be sticking to the letter of the rules until either the rules are changed, or the arbitrary and capricious enforcement of the rule is changed.


It can also be a rational response to a company that follows "management to rule". For example, I was once on a team where almost all of my time was spent coordinating with other teams and helping other developers instead of developing myself. When performance reviews rolled around I was told that none of that stuff mattered; only the number of tickets that I completed matter.

So I switched my focus to completing tickets. A few weeks later I overheard my manager complaining about a breaking change made by another team that I had previously been coordinating with: "Why is this happening so much? We didn't used to get surprised by these sorts of problems."


This is super common in engineering organizations. Any shop with a standard performance review will have a "score card" where your contributions are summarized in a form that can be compared to others'.

Being a great team player can't be quantified, and gets dropped.


I often advise teammates to follow destructive rules set by management, in order to force management to overrule or cancel those rules. The employee has cover for following the rules, versus breaking rules set by management to meet goals set by management.


I think they’re typically targeting different changes and different outcomes in an organization.

Work-to-rule is most effective when you’re trying to highlight particularly bad individual rules, or arbitrary punishments, etc. The work to rule action serves to clearly highlight to management why the current status quo rules are broken. This is, naturally, the most effective when there are very specific problems that lead to pretty direct consequences.

Work-to-rule would be much less effective when used for the kinds of things a strike might be used (increased pay, improved benefits, etc).

Basically, they’re just different tactics that highlight different things, and are each best used to achieve different kinds of goals.


>That's the American Way! If you don't like your job you don't strike. You just go in every day and do it really half assed.

- Homer Simpson


Work-to-rule is less dramatic than a strike and thus less effective. If, however, you can't strike for whatever reason it's a good tool to be aware of.


Also Soviet civil disobedience. You can’t break the rules or you’ll get punished, but you can follow them literally and not get punished. You’re not paid to think, after all.


"malicious compliance". Do exactly what you're told!


> You’re not paid to think after all

This is an unfortunate issue with most of modern society. It's often compared to communism, yet how many capitalist bosses really want you to do much other than implement their "vision?"

> When I give food to the poor, they call me a saint. When I ask why they are poor, they call me a communist. - Hélder Câmara


> yet how many capitalist bosses really want you to do much other than implement their "vision?"

Most of them? Or, more accurately to my knowledge, most of the ones I've had. I mean, hell, even when I delivered pizza, they didn't really get all bent out of shape when I tried to do things differently as long as it was something vaguely towards the goal they wanted.

In my experience, unless someone is really hardwired for micro-management, people cool off and let you do things your way once you have a couple months at the place and have demonstrated some competence (this includes some very traditionally corporate environments).


The boss pretends to pay and you pretend to work, it all works well.


Many managers will be happy to have people who are showing up to work and just doing their job. Isn't that the reason why people are hired? To do their job.

Of course everyone wants to get more for free; that is why there are so many complaints that people are lazy and don't do any extra work that could benefit their employer.


If you are subject to more of the corporate performance review shenanigans, it feels like anything less than 10x performance is insufficient to upper management. Maybe you will even be subject to something as unhinged as being told not to rate your entire self-assessment too highly, because it's "not possible" to be a high performer in all the meaningless "company values" they put into their performance rubric. Perhaps you will even be given "working overtime" as a good example of something to put in your self-assessment.


I enjoyed the read and was quite surprised that there was a happy ending. I didn't think that would be possible. Probably that speaks to my own personal experience more than anything.

Not really relevant, but does anyone know where Mad Ned is these days? I haven't seen any new posts of his in a while, and I enjoyed a bunch of them.


I'm still alive. But I've retired, or at least taken an extended hiatus from my writing hobby, which in retrospect was probably a pandemic coping mechanism more than anything lol.

I only came here because my inbox is blowing up due to the traffic Hacker News is driving to my site, and then I see that this article is like #3 today. Not bad considering I don't really remember writing it!


Cool. I liked your posts and if you ever feel like getting back to them, I’ll get an email when the next one comes out.


>Various attempts of mine to convince the UX team to meet with us were rebuffed.

I don't know how wrong things must be going for someone to decide to sabotage / avoid collaboration like that.


The classic management-empire-driven development, where each management rung up to the lowest common ancestor wants credit for their own teams' "moving the needle". They don't want to collaborate too much with other branches and in fact want to dominate other branches.

Entirely a problem of deep nested trees in corporate hierarchies that is so easily alleviated with better incentive structures.


I think most bigger organizations have, to the left and right of R&D, product managers, architects, program managers and UX groups. The head of all that is the real head of R&D. The real question is whether you want interdisciplinary teams. And the answer to that is more often than not: no. Why? Because of the illusion of control it gives management.


The missing letter in R&D is E (experimentation). You have to validate assumptions and ideas and bridge the gap.


Research without experimentation anyone can do. "What if we had lasers strapped to the heads of sharks? What if dolphins could do the work if we gave them waterproof paper?" It almost seems trite to call that research. Of course, you can use previously existing statistics to come to these ideas in a way that feels like work. But it's still deeply unhelpful.


I haven't seen any research done without experimentation so far.


You can do research without experimentation, but then you can't call it science.


Chip design software GUIs are known to be unintuitive and unfriendly. I'm using them daily and they get in the way all the time. So I don't know what changed after these efforts, but I'm not seeing it as a user.

Oh, also, when will we get version control support? It's 2023 and no chip design SW has this.


I am scratching my head as to why Cadence Virtuoso is the de facto standard in custom analog design. Barely anyone uses Synopsys Custom Compiler. Having worked at SNPS, I prefer the schematic entry features of CC. Probing is more efficient, as is adding stubs to instances, plus a lot of QoL stuff like the dynamic alignment guides, and finally most tool options are not hidden under F3 (as in Virtuoso) but show on the toolbar instead. Now I work at a small company which uses Virtuoso and have to re-learn a lot of stuff even if the keybinds are 95% the same. I don't know how ADE compares to PrimeWave (formerly SAE), as we at Synopsys did not dogfood it and used custom text-based DSLs for all simulations instead.


There is literally no good reason other than it being the standard, I think. Analog design engineers are very averse to change. Everyone has learned the quirks of Virtuoso by now, and nobody dares to try something else because the alternatives are not so much better in the end.


This team did it to themselves. “R” has little to do with it.

They worked on the technical bits that they liked, created a terrible UX that sounds user-hostile, and then shocked-pikachu discovered that their jobs got cut in half.

The decision to whisk UX duties to a team miles away was moronic, of course. But that was a reaction to the bad acts this team did - to their customers, to the business, and to themselves.


It seems that the appropriate design skillset was lacking in the R&D group. Also maybe it was a first attempt to make something, and they did not get through a second iteration to improve it.

Why do we expect that skilled SEs are also skilled UX designers? Like everything, design requires training. The problem seemed to be that people trained in design were missing from the R&D team, which sounds like management's fault rather than the engineers' in the first place. Then management, while correctly identifying the lack of design skills, instead of strengthening the R&D team with that missing talent, put the designers in a different group, creating a different set of issues within the company. Seems like a case of overall bad management in my eyes.


Sure, the judgment that was missing is better assigned to the “management” skillset rather than the “technical” skillset.

But everyone needs some of both - the most purely technical engineer still needs the personal judgment to hit dates that matter, show up when others need them, and avoid overinvesting in activities that are purely play.

In this case, better managers would help but honestly any experienced engineer would know that constant customer complaints mean that something is going to change.


The article does not criticise the fact that "something changed", it criticises what specifically changed. The point of dealing with a problem is not to point fingers and find who is guilty, it is to actually find solutions to the problem. And yeah, frankly, having customers complain about a UX is not the end of the world and nobody needs to be scolded about it. They just have to understand what the complaints are about and make a better UX, which they did. It happens all the time.


> having customers complain about a UX is not the end of the world and nobody needs to be scolded about it

Sometimes that's true. Sometimes it isn't, and the bad UX is the seed that leads to a terrible destructive management overreaction.

Should an engineer be able to tell the difference? In some companies, I think that's a reasonable expectation, but in other companies, engineers are cordoned off and told what to do.

I'm reading the same article that others are, so I don't know. But I do see a lot of engineers get surprised in a way that could be prevented by just a bit of proactive thinking.


This sounds like a classic case of someone (/whole team) mistaking their title for their role.


I think the first mistake is when companies try to diversify when they haven’t quite nailed their first product or service. Perhaps they are forced by investors because of the valuation or perhaps the founders always had a greater vision.



Hope the IRS has seen this blog post.


Our situation at work isn't quite analogous to this, but boy oh boy did this part stand out to me:

> But a larger part of it was that people in the development team were just showing up to work, and not much else. I had a friend once at Digital who gave me this unforgettable advice, right after we were bought by Compaq:

> “When captured by the enemy, it is best to display model prisoner behavior.”

> And that was exactly what had happened here. It wasn’t that people were deliberately trying to sabotage progress, they were showing up to work and doing their jobs as instructed. But nothing more.


2021: When programmers still sourced images for their blog from Getty Images rather than just generating whatever they needed.


Generating = using generative models trained on Getty Images and the like.


Getty-rating?


I mean, I wasn’t making a value judgement with my post. I’ve just noticed a lot more art (as opposed to diagrams) in blog posts recently and I was surprised that this one actually had licensed images in it. Then I noticed the date. That is all.


I did not make a value judgement either; I just wanted to point out that at the end of the day it is all about searching for images on stock websites in one way or another.



