> If our reality is built on top of a lattice, there’d be a fundamental coarseness to it, since there could be no details in our mock-universe smaller than the resolution of the simulation.

I'm pretty sure I read an article about some research team starting on exactly that project at least ten years ago. I may even have read about it on Slashdot.


Planck length?


"There were 25,339 homicides in Mexico [in 2017], a 23% jump from 2016 and the highest number since at least 1997, the year the government began tracking the data." (CNN)

Good to see that the Mexican government is focusing on what's important.


They need to stop it before the first memecidio happens.


I'd argue that you shouldn't be driving a car unless you've rebuilt a transmission yourself, but that argument wouldn't go far.


That'd be like saying your end user should understand cache misses. However, if you're starting a car design/repair shop, you might want to know how transmissions work.

The primary reason for dropping down to native is to get better performance. If you're going to do that, you'll leave 10x-50x performance on the table if you don't understand cache misses, prefetching, and the other things that manual memory placement opens up.
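Concretely, here's a minimal C++ sketch of the kind of effect I mean (illustrative only, not from any real codebase; the exact speedup depends on the cache hierarchy):

    // Summing the same matrix in two orders. Row-major order touches memory
    // contiguously; column-major order strides N ints per access and misses
    // cache on nearly every element for large N.
    #include <cstddef>
    #include <vector>

    constexpr std::size_t N = 4096;

    long long sum_rows(const std::vector<int>& m) {
        long long s = 0;
        for (std::size_t i = 0; i < N; ++i)
            for (std::size_t j = 0; j < N; ++j)
                s += m[i * N + j];   // sequential: each cache line fully used
        return s;
    }

    long long sum_cols(const std::vector<int>& m) {
        long long s = 0;
        for (std::size_t j = 0; j < N; ++j)
            for (std::size_t i = 0; i < N; ++i)
                s += m[i * N + j];   // strided: near-certain miss per element
        return s;
    }

    int main() {
        std::vector<int> m(N * N, 1);
        return static_cast<int>(sum_rows(m) + sum_cols(m) != 2LL * N * N);
    }

Same arithmetic, same data; on typical desktop hardware the second version is several times slower purely because of memory access order.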


I'm not going to play any more metaphor/semantic games. It's nice that you did that project, but it's not at all necessary to engage in that in order to understand performance issues.


You're the one that raised the metaphor, but okay?

I'm not saying that you can't do performance work without having done that. Just that you'll be at a disadvantage since you're at the mercy of whatever your HW vendor decides to disclose to you.

If you know this stuff you can work back from first principles. With a high-level memory architecture of a system (say, a tiled vs. direct rendering GPU), you can reason about which operations will be fast and which will be slow.


> You're the one that raised the metaphor, but okay

And your response was absurd. You don't rebuild a transmission in order to run a shop. You don't even rebuild a transmission as an engineer creating cars, you shop that out to an organization specializing in the extremely difficult task of designing and building transmissions. I wanted to avoid this waste of time, but here we are.

As for the rest of your comment about reasoning about performance, none of that requires the work you did. Again, neat project (for you), but completely unnecessary in general.


It would be valid, though. A computer programmer must understand how a computer works lest she or he write slow, bloated, inefficient software.


Given how many person-years have been saved and how much value has been produced by "slow, bloated, inefficient software", I must disagree in the strongest possible terms. Producing "slow, bloated, inefficient software" is far, far preferable to not producing software at all.


I would rather have no software, or write the software myself, than waste all the time of my life I've had to waste because of such shitty software. And I have indeed had to write such software from scratch because the alternatives were utter garbage. So we deeply, vehemently disagree.


> A computer does not need to implement a stack

What general purpose computer exists that doesn't have a stack? Push/pop have been fundamental to all the architectures I've used.


Hardware stacks are a relatively recent feature (in historical terms) even though subroutine calls go way back. Many systems did subroutine calls by saving the return address in a register. On the IBM 1401, when you called a subroutine, the subroutine would actually modify the jump instruction at the end of the subroutine to jump back to the caller. Needless to say, this wouldn't work with recursion. On the PDP-8, a jump to subroutine would automatically store the return address in the first word of the subroutine.

On many systems, if you wanted a stack, you had to implement it in software. For instance, on the Xerox Alto (1973), when you called a subroutine, the subroutine would then call a library routine that would save the return address and set up a stack frame. You'd call another library routine to return from the subroutine and it would pop stuff from the stack.
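In sketch form, implementing one in software amounts to something like this (hypothetical C++ as illustration; not the Alto's actual calling convention):

    #include <cstdint>

    // Memory set aside to act as the stack, plus a software stack pointer.
    uintptr_t call_stack[256];
    int sp = 0;

    // The library routine called at subroutine entry, given the return
    // address the hardware left in a register (or in the subroutine's
    // first word, PDP-8 style).
    void save_return(uintptr_t return_address) {
        call_stack[sp++] = return_address;   // "push"
    }

    // The library routine called to return: yields the address to jump to.
    uintptr_t restore_return() {
        return call_stack[--sp];             // "pop"
    }

    int main() {
        save_return(0x1234);
        return restore_return() == 0x1234 ? 0 : 1;
    }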

The 8008, Intel's first 8-bit microprocessor, had an 8-level subroutine stack inside the chip. There were no push or pop instructions.


I don't have direct experience, but I believe that System/360 programs have historically not had stacks, opting instead for a statically allocated 'program save area'.

https://people.cs.clemson.edu/~mark/subroutines/s360.html

XPLINK, a different linkage convention, also for MVS, does use a stack-based convention:

https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/...


RISC-type architectures with a branch-and-link instruction (as opposed to a jsr- or call-type instruction) generally have a stack by convention only, because the CPU doesn't need one to operate. (For handling interrupts and exceptions there is usually some other mechanism for storing the old program counter.)


Can you point me to a RISC architecture that doesn't have push and pop instructions?


Nearly all of them?

ARM's pop is really a generic ldmia with update. You can use the same instruction in a memcpy.

MIPS, PowerPC, and Alpha don't have anything like push and pop, splitting load/stores and SP increment/decrement into separate instructions.

AArch64 has a dedicated stack pointer, but no explicit push/pop instructions.

In general the RISC style is to allocate a stack frame and use regular loads and stores off of SP rather than push and pop.


ARM64 uses regular loads and stores to access the stack, I believe. It also is one of the architectures with branch-and-link. https://community.arm.com/processors/b/blog/posts/using-the-...


From the perspective of "stack" as an underlying computation structure: Have you ever played a Nintendo game? You've used a program that did not involve a stack. The matter of "stack" gets really fuzzy in Haskell, too; it certainly has something like one because there are functions, but the way the laziness gets resolved makes it quite substantially different from the way a C program's stack works.

If a time traveler from a century in the future came back and told me that their programming languages aren't anywhere as fundamentally based on stacks as ours are, I wouldn't be that surprised. I'm not sure any current AI technology (deep learning, etc.) is stack-based.

From the perspective of "stack" as in "stack vs. heap", there's a lot of existing languages where at the language level, there is no such distinction. The dynamic scripting language interpreters put everything in the heap. The underlying C-based VM may be stack based, but the language itself is not. The specification of Go actually never mentions stack vs. heap; all the values just exist. There is a stack and a heap like C, but what goes where is an implementation detail handled by the compiler, not like in C where it is explicit.

To some extent, I think the replies to my post actually demonstrate my point to a great degree. People's conceptions of "how a computer works" are really bent around the C model... but that is not "how a computer works". Computers do not care if you allocate all variables globally and use "goto" to jump between various bits of code. Thousands if not millions of programs have been written that way, and continue to be written in the embedded arena. In fact, at the very bottom-most level (machines with single-digit kilobytes), this is not just an option, but best practice. Computers do not care if you hop around like crazy as you unwind a lazy evaluation. Computers do not care if all the CPU does is ship a program to the GPU and its radically different paradigm. Computers do not care if they are evaluating a neural net or some other AI paradigm that has no recognizable equivalent to any human programming language. If you put an opcode or specialized hardware into something that really does directly evaluate a neural net without any C code in sight, the electronics do not explode. FPGAs do not explode if you do not implement a stack.

C is not how computers work.

It is a particularly useful local optimum, but nowadays a lot of that use isn't because "it's how everything works" but because it's a highly supported and polished paradigm, used for decades, with a lot of tooling and utility behind it. But understanding C won't give you much insight into a modern processor, a GPU, modern AI, FPGAs, Haskell, Rust, and numerous other things. Because learning languages is generally good regardless, yes, learning C and then Rust will mean you learn Rust faster than just starting from Rust. Learn lots of languages. But there's a lot of ways in which you could equally start from Javascript and learn Rust; many of the concepts may shock the JS developer, but many of the Rust concepts that just make the JS programmer go "Oh, good, that feature is different but still there" will shock the C developer.


Why not apply that logic to digital devices?

They did, at least according to the article. But it's a low hurdle, and cops quickly figure out the script to bypass restrictions like that.

It's no different than the mishandling of trained drug-sniffing dogs so that they alert on the handler's command, rather than at drugs/bombs/money/whatever. Countless cars are stripped at border crossings and by the side of the road because a cop didn't like someone's attitude.


By that logic, if you owned property that was rented by a murderer, you should be banned as an accessory to murder. It's absurd.


It's more like leasing a room knowing it was Dexter's kill room. That said, I'll go no further justifying US pot laws. But the failure is in the severity assigned to the "crime"; everything that follows is rational if you accept the premise that it truly is akin to murder.


> I guarantee that TSA is going to check for hidden partitions, patched kernels, suspiciously clean OS, etc.

How would they hire enough manpower smart enough to do that?


They'll just buy an eyewateringly expensive tool from Cellebrite (or their ilk), then let the minimum wage goons loose with it...

"Hey Hank, the machine says we've got another hidden encrypted partition, go get the rubber hose..."


I wonder what the overall adoption rates are for UWP. As someone who's written Windows software since the 90s, I can't shake the feeling that Microsoft just churns out completely new development stacks every few years.


This is exactly what is holding back Win32: there has been no serious development of the desktop developer story since WPF in 2005 (WPF has only received token updates since then; the last major update was in 2010, when they finally added a DataGrid control, lol) - but WPF is not well-suited for many types of software, which have to fall back on "pure" Win32, and the story there is nothing but depressing.

Apple got it right with Cocoa (née NeXTSTEP) - it's something they have been gradually building on with incremental improvements, continuously forwards and upwards - whereas Microsoft has been launching new, short-lived, incompatible (and incomplete) platforms for the past 15 years: WinForms, WPF, Silverlight, Jupiter (Win 8.1), UWP (Win10), etc. The story is so bad that Microsoft's own flagship software has to build its own UI frameworks (Office has its own, Windows Media Center had some arcane prototype of WPF, and even the Windows 10 Start Menu uses a UI+graphics framework that's not exposed to third-party developers).

Win32 needs some serious work - not least because Win32 is “infected” with GDI which has its own issues. WinRT was nice but nowhere near sufficient.


> The story is so bad Microsoft’s own flagship software has to build their own UI frameworks

I think this might be the other way round: if they're not eating their own dogfood, it's much easier for the UI framework to come adrift from actual use cases.

It's really telling that in Windows 8/10 they "modernised" some, but not all, of the control panel UI. It's basically random as to whether a setting you need will be in a Metro-flavoured window or a Win32-flavoured one.

To me, the trouble with everything after WinForms is that it lacks a really compelling reason to upgrade. It's not easier to develop for and it's not nicer to use, and on many desktop systems the font rendering is much uglier in the new system. It does perform better at high DPI, but (catch-22) few people use high-DPI Windows systems because high DPI isn't well supported by even the OS, let alone the applications.


"It's really telling that in Windows 8/10 they "modernised" some, but not all, of the control panel UI. It's basically random as to whether a setting you need will be in a Metro-flavoured window or a Win32-flavoured one."

That's really infuriating. Makes me wonder what all these devs they have are doing the whole day.

"To me, the trouble with everything after WinForms is that it lacks a really compelling reason to upgrade. It's not easier to develop for and it's not nicer to use, and on many desktop systems the font rendering is much uglier in the new system. It does perform better at high DPI, but (catch-22) few people use high DPI windows systems because it's not well supported by even the OS let alone the applications. "

WPF had great potential and I think they could have made it a wonderful environment if they had kept improving it. Clean up the XAML syntax (maybe something like they did with ASP.NET and Razor), simplify data binding syntax and debugging, make MVVM a first-class citizen, and it would be great. Instead they (almost) stopped development of WPF and cranked out a series of half-baked successors like WinRT, Silverlight and UWP.


WPF was really nice from what I remember of it. MVVM was super easily integrated with XAML.


MVVM is great if you want to unit test all your UI code but who wants to do that.

For me the whole thing was cumbersome and nowhere near as productive as WinForms. Not only that, but problems it purported to solve, like high-DPI support, still had issues.


With some more work like better Visual Studio integration and maybe some compiler magic it could have been fantastic.


I've been doing Windows development since 3.0, and currently maintain a couple of Forms and WPF applications. The Forms ones are the hardest to maintain, due to event-handling spaghetti code instead of proper MVVM with data binding, no use of Table/StackLayout components, and background work still done in BackgroundWorker classes.


How is the Visual Studio WPF designer these days? I worked on a WPF app around 5 years ago, and the form designer in Visual Studio was appallingly slow and quite buggy.

I also wasn't a big fan of XAML - no matter how much time I spent using it, it mostly seemed more difficult and awkward to get components where you wanted them than with WinForms.


Quite good. I always do a mixture of Blend and Studio.

Never had any big problem that I can remember since Visual Studio 2012.


2012 had big internal architecture changes that did a lot to stabilize the designer.


The designer is fine now.


High DPI support for WinForms has been improved in the .NET Framework 4.8 https://github.com/Microsoft/dotnet-framework-early-access/b...


The majority stopped right before WPF, it seems

> In fact, at a recent conference I was at with a large group of C# consultants. When the audience was polled on which desktop UIs their clients were using, the vast majority were WinForms, a small group was WPF, and almost no one was UWP

https://iamtimcorey.com/ask-tim-is-winforms-dead/

And many did long before WinForms

> Less than a year ago, a manager asked me if I could write a library for a client that would allow VB6 to access a RESTful API.

https://blog.submain.com/death-winforms-greatly-exaggerated/


When Apple introduced Cocoa they also introduced Carbon, an extension of the legacy Mac OS APIs. Apple has already slowly killed off their equivalent of Win32.


It's actually kind of impressive how much transitional stuff Apple has introduced and then removed with Mac OS X, and how right many of their decisions turned out. I guess Cocoa itself was an extension of the legacy NeXTSTEP API too. They also had Classic for running old Mac apps directly in OS X, and later Rosetta for running PowerPC binaries on Intel (I guess the same tech was used for the 32->64-bit transition). So many technologies have been smoothly deprecated and removed over the years by Apple.


I don't think you can really compare Apple's and Microsoft's situations here. Think of the far larger amount of software that was written for Windows, including all that custom-made business software for small companies. Apple was always in the situation that a lot of their 3rd-party developers were also „fans“, who are more willing to keep up with their transitions, whereas most Microsoft developers were just doing their job and want to keep their stuff running with the least amount of work necessary.


> Apple was always in the situation that a lot of their 3rd-party developers were also „fans“, who are more willing to keep up with their transitions

That's true, but Apple also got behemoths like Adobe and Microsoft to port over their applications to modern APIs (Cocoa and Cocoa Touch).


Because Apple does the "take it our way or leave" approach.

Which is also a reason why they hardly have a meaningful market share across the enterprise world.


Sure they do. It’s on mobile. While MS isn’t going anywhere in the Enterprise, anyone who is focused on the desktop when it comes to MS development instead of web, cloud, or even cross platform mobile development is headed down a dead end.


A little secret: laptops and 2-in-1 convertibles are desktops as well, while being mobile.


If we want to be pedantic, a desktop could be considered mobile too since you can pick it up.

But in the real world when people talk about “mobile software” everyone knows that people are talking about iOS and Android. No one is chasing after the Windows software market, it’s been a diminishing platform for the past decade.


In the real world people are working on laptops, while using iOS and Android mostly to consume content, play games, browse web and show plane tickets.

When they actually become a match for laptops and 2-in-1 convertibles, maybe.

And there Chromebooks are nowhere to be seen outside the US school system, iPad Pros are mostly a gimmick in rich countries, and Android 2-in-1s are basically phone apps with a keyboard.


I am not saying that people aren't using desktops to do work. What I'm saying is that relatively few companies are putting money into writing desktop software. The jobs are limited compared to writing web-based software.

If I ever got back into desktop software, it would be highly specialized low level C/C++ code and not WPF/Sharepoint/UWP software.


Yes and no. They've taken down the documentation, but there are still a number of (non-GUI, non-kernel) Carbon APIs that are not deprecated (as of 10.11, at least) and still allowed in the Mac App Store.

I'm using AHGotoPage() because NSHelpManager has no equivalent and nobody has been able to explain to me how to make that class do a similar task reliably. (In hindsight, AppleHelp is such a disaster that I should have just avoided it entirely, as almost every other app does.) MAS reviewers have given me grief over many things my app does, but never any Carbon calls.

The last time I saw any Carbon APIs deprecated was 10.8, I think (6 years ago).


They're about to be removed (the Carbon APIs), AFAIK, with the move to 64-bit only, no?


What's bad about GDI? (I used to do Windows but haven't been following. I remember that you need to do GetDC and ReleaseDC on a window handle to get the device context and then to release it - in pairs, else...)


It's a graphics API designed for early-1990s needs. It's missing things we expect today like 30-bit colour support and all-VRAM memory (GDI was hardware accelerated but they gimped it in Windows Vista to ensure compatibility with the DWM).

When working with Win32 it just isn't possible to avoid GDI: when painting to a window surface, Win32 gives you a GDI device context by default, and handling Win32 messages like WM_PAINT uses parts of the GDI API. Each process on Windows is also limited in the number of GDI objects it can use. Oh, and using GDI/GDI+ in a service context (e.g. ASP.NET or a Windows Service) is not supported: https://blogs.msdn.microsoft.com/dsui_team/2013/04/16/using-...
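For example, the pairing the GP remembers looks like this (a minimal sketch; real Win32 calls, but error handling elided):

    #include <windows.h>

    // Outside WM_PAINT: every GetDC must be matched by a ReleaseDC,
    // because each process only gets a limited pool of GDI objects.
    void draw_something(HWND hwnd) {
        HDC hdc = GetDC(hwnd);              // borrow the window's device context
        if (!hdc) return;
        TextOutA(hdc, 10, 10, "hello", 5);
        ReleaseDC(hwnd, hdc);               // paired with the GetDC above
    }

    // Inside WM_PAINT: BeginPaint/EndPaint instead, same pairing rule.
    void on_paint(HWND hwnd) {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        TextOutA(hdc, 10, 10, "hello", 5);
        EndPaint(hwnd, &ps);                // always called, even if nothing drawn
    }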

Direct2D is nice - but setting it up in your code isn't easy - and Microsoft does not maintain an up-to-date C#/.NET library for Direct2D.
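For comparison, the minimum Direct2D bring-up looks roughly like this (a sketch from memory, error handling elided; treat the details as approximate):

    #include <d2d1.h>
    #include <d2d1helper.h>
    #pragma comment(lib, "d2d1")

    ID2D1Factory*          g_factory = nullptr;
    ID2D1HwndRenderTarget* g_target  = nullptr;

    // One factory per app, then a render target bound to the window;
    // all drawing then happens between BeginDraw()/EndDraw() on the target.
    void init_d2d(HWND hwnd, UINT width, UINT height) {
        D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &g_factory);
        g_factory->CreateHwndRenderTarget(
            D2D1::RenderTargetProperties(),
            D2D1::HwndRenderTargetProperties(hwnd, D2D1::SizeU(width, height)),
            &g_target);
    }

And that's before you handle device loss, DPI, or text (which needs DirectWrite on top).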


SharpDX and SlimDX are fantastic wrappers for Direct2D, last time I was doing anything with it. SharpDX is the more active of the two.


> Direct2D is nice - but setting it up in your code isn't easy - and Microsoft does not maintain an up-to-date C#/.NET library for Direct2D.

They do for Direct2D on UWP, https://github.com/Microsoft/Win2D


It doesn't do antialiasing.


GDI+ does.


GDI+ Is better in a lot of ways, but has the major downside of being limited to software rendering only.


GDI+ is a completely different replacement API, not just an extension/update of GDI.


> The story is so bad Microsoft’s own flagship software has to build their own UI frameworks

The story is so bad because the Microsoft units making flagship desktop apps aren't productizing their UI frameworks (not suggesting that this is their choice), instead leaving the firm with public UI framework offerings that aren't tightly grounded in that kind of real-world use.

Good frameworks don't lead to internal flagship use, internal flagship use leads to good frameworks through tight feedback between real-world use and framework development.


Remember WinJS? Another piece of vapourware.


> As someone who's written Windows software since the 90s, I can't shake the feeling that Microsoft just churns out completely new development stacks every few years.

That's the theme behind one of Joel Spolsky's old essays, Fire And Motion:

https://www.joelonsoftware.com/2002/01/06/fire-and-motion/

"Think of the history of data access strategies to come out of Microsoft. ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET – All New! Are these technological imperatives? The result of an incompetent design group that needs to reinvent data access every goddamn year? (That’s probably it, actually.) But the end result is just cover fire. The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features. Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP. The companies who stumble are the ones who spend too much time reading tea leaves to figure out the future direction of Microsoft."


Very very very low.

Win32 is a lower portability risk than UWP. Genuinely that’s what I’ve heard people say. Most of the UI and event stuff can be moved to wx or equivalent in a few months and deployed on other platforms.

UWP and you’re up shit creek.

Most companies are on the fence about running away from Windows. It moved too fast, costs too much money to keep up with, and has too much friction.

Take a look at the Windows Store for the massive uptake. Not. Half of it is still Win32 and the rest is written by MSFT, crapware, or abandoned.

Everything is in a right state despite the marketing to the contrary.


Compared with everyone else, the biggest difference is that Microsoft keeps the old tech around, while most other companies are willing to lose customers that don't want to move to the new stacks.

On one side it is a big reason why they have achieved their market size on the desktop; on the other side we reach this kind of situation, where new, improved tech like Singularity, Midori, Longhorn or now UWP gets thrown into the garbage can in the name of backwards compatibility. Quite sad.


Microsoft shot themselves in the foot with UWP. It's actually pretty nice to write apps with it (in my personal opinion, anyway). But UWP only supports Windows 10.

Very few commercial products are able to drop Windows 7 support (and like, literally zero enterprise products can drop Windows 7 support).

So, no matter how nice UWP gets, most companies that would still be willing to write a native Windows app are 5+ years out from being allowed to adopt it, entirely because of how Microsoft chose to release it.


Yeah, UWP not supporting win7 pretty much killed it.

Might not be an issue in 5 years, but by then MS will probably have a new new framework.


"Very few commercial products are able to drop Windows 7 support (and like, literally zero enterprise products can drop Windows 7 support)."

You would be suicidal if you based any product you want to sell to business on UWP. Not only does it not support Win 7 or 8, but I don't see in what way UWP is better than WPF.


I have found one working enterprise use case.

UWP is great if you are exclusively deploying to brand new Windows 10 Embedded devices. (I think this is now called "Windows IoT Core" or something similar). That's what I've been using it for, and that's worked well. It carries a nice side benefit that our sales team all has Windows 10 Pro laptops, and UWP 'just works' there too for sales demos, presentations, and such.

But I agree, selling UWP apps to businesses for traditional desktop use is ridiculously difficult.

> I don't see in what way UWP is better than WPF.

For a long time, UWP was the only way to get a working WebView (one based off of Edge instead of IE). But I think they fixed that and backported Edge to WPF a few months ago. (EDIT: they did kind of fix that - https://blogs.windows.com/msedgedev/2018/05/09/modern-webvie... )


"UWP is great if you are exclusively deploying to brand new Windows 10 Embedded devices. (I think this is now called "Windows IoT Core" or something similar). That's what I've been using it for, and that's worked well. It carries a nice side benefit that our sales team all has Windows 10 Pro laptops, and UWP 'just works' there too for sales demos, presentations, and such."

This would be a great thing for us. But we already have a huge investment into WPF. How do we get this to UWP?


Apparently via XAML Islands as shown at BUILD and this week's Ignite.

Now in face of this announcement, not sure.


You need Windows 10 machines to develop for/deploy IoT Core last I checked, so that killed it for us (corp world still holding onto Windows 7).


One way in which it is better is the visual layer and accelerated composition engine.

Naturally this matters little to those enterprises doing plain CRUD applications with default L&F.

One big improvement in UWP is that Visual C++ finally merits the name Visual, although it still falls short of C++ Builder's RAD tooling.

And overall the sandbox model, however from the last Build it appears that Redstone will get an improved sandbox for Win32 apps as well, alongside MSIX support.


Exactly. One of the biggest reasons why Carbon (and Mac OS X) succeeded is that Carbon apps also ran on OS 8/9. Big apps like Acrobat, Final Cut Pro, Mozilla, and many others were able to use Carbon and run on OS 9 and X.


UWP doesn't even support Windows 8, it is strictly Windows 10. Not that 8 is very relevant.


Oops, sorry, you are correct, I got it mixed up with the 8/8.1 SDK. Fixed.


WinForms will outlive UWP, WPF and pretty much everything else Microsoft develops in terms of GUI frameworks. It works for the vast majority of programs that businesses need, it's easy to use, and so far it's been the best investment in terms of learning a "framework".

If Microsoft wants to move forward, they need to consider moving new stuff into WinForms and expanding it with the features of UWP, WPF, XAML and the other technologies they've tried to introduce over the years.


"If Microsoft want's to move forward, they need to consider moving new stuff into WinForms and expand that with the features of UWP, WPF, XAML and other technologies they've tried to introduce over the years. "

They could have done this 10-15 years ago. It's too late now.


There is also Electron/Javascript.

Skype/Teams are developed in it and they have invested a lot of effort into TypeScript.


Honestly, if I had to develop a new desktop app for Windows I would seriously think about Electron. It's frustrating that there is no premier desktop app framework for Windows anymore. UWP is too limited, WPF is pretty much deprecated. Win32 has a huge learning curve and finding devs would be very difficult. Pretty sad situation.

Maybe their secret plan is to push people into writing web apps with ASP.NET? Even that doesn't work, because a lot of people will use Node instead.


I have been developing new WPF applications during the last couple of years.

Those customers were thinking of eventually starting to move to UWP after the Windows 10 migration was complete, so given this, they will continue using WPF.

Also Forms, WPF and EF 6 support are the major roadmap items for .NET Core 3.0

Node is no match against .NET or Java for those that care about performance.


Why not Qt?


True. A while ago I looked into Qt, and one problem was that there weren't too many 3rd-party components on the market. For WPF and WinForms you can buy extremely powerful components like data grids and tree views (I have been using Developer Express), but I didn't find any for Qt.



Skype has become so clunky and slow lately that I find myself avoiding it whenever possible. Could this be the reason behind the worse quality?


There's also Qt, which they used for OneDrive.


Yup, and you don't even need to touch C++ these days. All C# and QML!

https://github.com/qmlnet/qmlnet

(I'm the author)


I’m not sure if it’s true but it seems to me like they replaced their own client with an alternative one they bought from another company, which has parts written in Qt.


I don't even know how to use UWP. I'm learning Rust and writing some applications using WinAPI is my plan. WinAPI usage seems to be fairly straightforward. But I have no idea how to write UWP application with Rust.


> Click the hamburger (3 vertical lines) in the top right corner of the screen

I found it humorous that they simply call it 'the hamburger', and that they confused vertical with horizontal.


Could be 3 very wide and short-in-height lines stacked vertically?


That's not vertical lines.


This blew my mind. I'm aware that HN frowns upon such comments, but I just had to take my hat off to you.


They could have confused orientation with arrangement.


I always call it the hamburger menu, especially when doing tech support for everyone.

¯\_(ツ)_/¯


GP notes that they omit the 'menu' part of it.


I still call them toaster slots.


That button really is called that, but it's odd on a professional site.


I've heard "hamburger menu", but never just "the hamburger".


I'd love to see a website or app stylize their hamburger menu button as an actual hamburger.


I'm working on a food-related app and this might actually be a consideration.



I've seen the three lines adorned with a bun and some lettuce.


With regular inheritance, you affect behavior from the bottom up.

With the "Curiously Recurring Template Pattern", you affect behavior from the bottom up and from the top down.

To be honest, I don't know if it has value for most tasks people care about (outside of libraries, I mean). But it was fantastic for optimizing OOP-abstracted desktop applications. If you have some subset of functions that need to be called a lot, then avoiding virtual functions really helps.
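A minimal sketch of the pattern (my own illustration, not any particular library's code):

    #include <cstdio>

    // The base class is parameterized on its own derived class, so it can
    // call "down" into it statically - no vtable, no virtual-call overhead.
    template <typename Derived>
    struct Widget {
        void draw() {
            // "Top down": base-class logic dispatches to the derived type
            // at compile time, so hot calls like this can be inlined.
            static_cast<Derived*>(this)->draw_impl();
        }
    };

    struct Button : Widget<Button> {
        void draw_impl() { std::puts("button"); }  // "bottom up": derived detail
    };

    int main() {
        Button b;
        b.draw();  // resolved statically; no virtual dispatch
    }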

