Hacker News
Lambda: The Ultimate Excel Worksheet Function (microsoft.com)
236 points by elemeno on April 22, 2021 | hide | past | favorite | 109 comments



Funny how the top of the article starts with "custom functions without code" and then immediately shows code.

I get that calling code by its name can make it sound scary, but this whole notion of it being 'easy because it is not code' seems to be a big fat lie told for comfort. The same goes for the magic no-code systems where code is replaced with 'expressions' or graphical 'workflows', which are essentially the same thing, only shaped slightly differently.

This makes me wonder if it wouldn't be much better to focus on enabling people to code and building more 'coding capacity', instead of having less of that capacity and then reducing it even further by spending some of it on 'let us pretend this is not code' applications.


There is one discipline which truly allows defining functions without code: Programming By Example[1]. It is particularly useful in data wrangling, at the 'Transform' part of ETL.

Microsoft has some initiatives exploring it[2] (including the limited autocomplete in Excel as a toy version of the concept), but this is not it.

[1] https://en.wikipedia.org/wiki/Programming_by_example

[2] https://www.microsoft.com/en-us/research/wp-content/uploads/...


While formulas are definitely code in a general sense, 'code' in the Excel-verse generally refers to VBA code embedded in the workbook.

VBA's potential as an attack vector results in it being unavailable or heavily restricted in many corporate environments through stuff like Group Policies[1]. And I've worked with some clients whose IT goes a step further and completely blocks sending or receiving emails with .xlsm[2] attachments.

Since lambda-defined logic is all formula-based, it's not considered 'code' in that sense and can be used and passed around as a standard Excel file without any of the VBA-oriented restrictions. So you can approach your Excel workbook more like a programming project, centrally defining your complex logic once and referencing it elsewhere every time you want to use it. This is super helpful for auditability and maintenance, while staying within the bounds of what'll be applicable/usable across any Excel environment.

[1] https://4sysops.com/archives/restricting-or-blocking-office-...

[2] .xlsm is the extension Excel uses for spreadsheets containing VBA code


I guess they mean without writing VBA code.


Where’s the immediate code?

I think those are basically Excel formulas, which in context, are not what Excel users would consider “code”.


What do you think is the difference between Excel formulas and code?


Using an inbuilt Excel formula vs creating a formula with VBA?


But they're both code.


If you agree to the statement that any accountant who uses Excel formulas to do bookkeeping is also a software developer, then yes, you and OP are correct.


> If you agree to the statement that any accountant who uses Excel formulas to do bookkeeping is also a software developer

They are as much a software developer as they are a writer because they write e-mails; a presenter because they present the annual accounts to the CFO; and a cleaner because they put away their mugs at the end of the day (usually.. hopefully..)

You don’t have to be someone just because you sometimes do something that person also does.


Why do you think you have to be a 'software' developer to write code though?

Lots of people write code, using lots of applications and devices - spreadsheets, MATLAB, database queries, it's all code.


Great to see Excel adding more features!

The argument I've heard against doing this sorta thing was that they wanted to keep Excel simple enough to not alienate many non-technical users, sorta forcing it to be a simple, accessible environment for everyone.

It'll be neat to see how the user-base adapts to a more powerful feature-set. I mean, it'd seem like a lot of folks will be thrilled, finally having some extra functionality without having to use macros/VBA/VSTO/COM/etc., though how might non-technical folks feel about a coworker sending them a spreadsheet with function-values?


Most organizations I have seen already only have a couple people who can actually make and edit the advanced sheets used by the org and lots of people who use those sheets with a very, very rudimentary knowledge of Excel to generally get their jobs done.

I don't really see the addition of new advanced functionality changing that paradigm.


> The existing Name Manager in Excel allows any formula to be given a name. If we name our function PYTHAGORAS, then a formula such as PYTHAGORAS(3,4) evaluates to 5. Once named, you call the function by name, eliminating the need to repeat entire formulas when you want to use them.

That's the biggest issue with LET / LAMBDA at the moment. Users are terrified of the Name Manager and simply do not understand what names are for or what "scope" means. On top of that, copying content from one workbook to another copies names over as well, which is how I often end up with ancient names such as FXRATE1997.
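To make the quoted mechanism concrete, the article's PYTHAGORAS example would be defined in the Name Manager roughly like this (a sketch; the exact body isn't shown in the excerpt):

```excel
PYTHAGORAS    =LAMBDA(x, y, SQRT(x^2 + y^2))
```

With that in place, =PYTHAGORAS(3, 4) in any cell evaluates to 5, and the formula body lives in exactly one place.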


I have a BS in Computer Science and whenever I help people pick up programming I notice scope is always one of the toughest concepts for them to grasp.

It could of course be a reflection of my teaching ability, but it always seems to be a tough one.


Strange! Sadly I don't remember the experience of learning about scope myself (I was too young), so now I find it hard to see the "mind state" that makes it hard to understand.

Isn't it a feature of natural languages to have the same word assume different meanings depending on where it's used? The concept translates nicely, and in PLs it's completely explicit whenever this happens.


That is an interesting thought, although it is something that non-native speakers struggle with when learning a new language. I wonder if that is a factor when learning a new computer language/concept too.

I suspect it is also related to the Curse of Knowledge (https://en.wikipedia.org/wiki/Curse_of_knowledge). Once you are past the hurdle of initially learning a concept, it is hard to imagine not being able to grasp it, especially when dealing with abstract concepts such as scope.


Exactly what I was trying to express (the Curse of Knowledge); thank you for bringing it up, I wasn't aware of the phrase.


This is almost what I always wanted, but it feels a bit bolted on to be honest. I don't like that you have to put a complex nested function in one cell, rather than using multiple cells with temporary results. While this allows you to concatenate Excel functions, I think it doesn't allow you to write a function in "idiomatic" Excel. If I had to implement functions in Excel, I would use one of two strategies:

- You have a special area or special kind of sheet, where some cells are inputs, one is output, and all others are used for temporary calculation

or:

- You define your calculation as usual, e.g. =10*A5 in B5, and then in C5:

    =MYLAMBDA(B5; A5)(3)
Meaning: Take the formula in B5, treat A5 as an argument, and return a function. Then call this function with the argument 3.

The benefit of this? You can have an area in your sheet where the user can enter formulas and multi-cell-calculations, not just numbers, and they are applied elsewhere.


There is, perhaps surprisingly (or not), already a relevant XKCD mentioning this feature while poking fun at the computational abominations that were already possible in Excel:

https://xkcd.com/2453/

On that note, TFA claims that the introduction of LAMBDA finally makes Excel Turing complete, unlike the kind of Turing machine simulators the stick figure is referring to in the XKCD comic...

> (In contrast, Felienne Hermans’s lovely blog post about writing a Turing machine in Excel doesn’t, strictly speaking, establish Turing completeness because it uses successive rows for successive states, so the number of steps is limited by the number of rows.)
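A hedged sketch of how LAMBDA lifts that restriction: once a lambda is bound to a name, its body can refer to that name, so recursion no longer consumes rows. An illustrative Name Manager definition:

```excel
FACTORIAL    =LAMBDA(n, IF(n <= 1, 1, n * FACTORIAL(n - 1)))
```

=FACTORIAL(5) then evaluates to 120, with the number of steps bounded by the recursion limit rather than by the sheet size.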


To be fair, no computer is an actual TM since memory is finite.


It can pause and ask you to temporarily attach another hard disk. Could be infinite as long as one can afford buying disks.


But the amount of raw material on the planet needed for manufacturing more disks is finite.

The amount of raw material in the universe is finite at a given point in time (it could be infinite over time, we don't know if time is infinite either).

I think we've already established (especially over the past year) that fiat money is infinite.

The Turing machine is a mathematical model. Infinity only exists in the world of mathematics. The physical world is by definition finite.


A paper clip machine scenario, but instead it wants to turn all atoms into hard drives so it can keep running itself. A machine whose purpose is itself. A stupid ouroboros that will kill us all.


> The physical world is by definition finite

This isn't obvious to me, would you elaborate?


Is the universe finite?

https://answers.yahoo.com/question/index?qid=20200123104919A...

Sadly that thread will soon be gone due to the approaching Yahoo Answers apocalypse. Hopefully Internet Archive will save it!


Take everything in the world. Every physical piece. Break it into the smallest slice you care to (atoms, quarks, whatever). The count of those things is bounded. It's a huge number, but it's finite. Infinity is not a real concept; it's imagining that there is no number that can bound it.


This demonstrates countability, not finiteness. If by "world" you mean the universe, there's no guarantee that you can "take everything in it", because it might be infinite.


Maybe; there are hypotheses that the Universe is infinite. The observable Universe is finite.


If the universe is infinite, then the observable universe is only as finite as the length of your life and your capability to traverse space.


Not necessarily. The observable Universe could also be reducing over time. Distant galaxies are accelerating in their travel away from our observation position because space itself is expanding. This means that over long periods of time, objects at the periphery of the observable Universe will red shift out of view.


Classic HN: from article on new Microsoft Excel feature to semantic debate on the definition of the universe in record time.


Right, all computers have a bounded amount of state, so they can only parse regular languages.


This is cool. I work on a lot of spreadsheets at work and am religiously against dropping into VBA for anything. It would often be easier, but it then destroys the portability of the spreadsheet (now it has to be .xlsm, people get scary warnings, etc.).

Also, does this:

> With LAMBDA, Excel has become Turing-complete.

sound like a threat to anyone? :)


The blog post's title is an homage to a series of papers from the 70s, which formulated the Scheme programming language and the beginnings of what we now know as "functional programming".

https://en.wikipedia.org/wiki/History_of_the_Scheme_programm...


The authors of the post have both been central to Haskell's development, among other PL things – I'm guessing they've read them all :)


Funnily enough the two examples they provide in this article (reverse string and factorial) were the exact two first problems I encountered in the Scheme class I took in college. I'm guessing both of these problems are defined in those papers.


Specifically the "Lambda: The Ultimate" bit.


Which is one of my favourite PLT websites:

http://lambda-the-ultimate.org


Where I heard about this development nearly three months ago:

http://lambda-the-ultimate.org/node/5621


Naive but serious question: couldn't Excel create a special / hidden sheet for lambda functions, thereby allowing them to be easily written on multiple lines, with some kind of standard formatting, then calling a lambda from a cell would be of the form:

=LAMBDA(global_function_name, [cell_input_1, cell_input_2, ...])

Wouldn't this be a cleaner design? Trying to deal with cells whose formulas are way too long to be put in a single cell is Excel's Achilles' heel (and a footgun that you are nearly guaranteed to encounter sooner rather than later). This LAMBDA proposal as written seems to exacerbate that problem, not improve it.


You can create a multiple line formula. Alt+Enter when editing the formula.

If I'm writing a longer formula that's going to be tough to read, I make it multiple lines and add spaces at the start of the lines for indentation. Makes readability so much better!
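For instance, a LET formula broken over lines with Alt+Enter and indented with leading spaces (purely illustrative) might read:

```excel
=LET(
    radius, A1,
    area,   PI() * radius^2,
    area
)
```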


A good tip there for folks to help readability. I also love to paste really crazy formulas into this tool to see them in a well-formatted form:

https://www.excelformulabeautifier.com/


One small recent thread:

Lambda: The ultimate Excel worksheet function - https://news.ycombinator.com/item?id=25923628 - Jan 2021 (4 comments)


Am I the only one who read the title and was immediately reminded of http://lambda-the-ultimate.org/?

(Is the title a deliberate call to that site, or something even older?)


The common parent is "Lambda: The Ultimate Imperative" (1976) https://dspace.mit.edu/bitstream/handle/1721.1/5790/AIM-353....


And more generally, the Lambda Papers by Guy Steele and Gerald Sussman, which also include "Lambda: The Ultimate Declarative", "Lambda: The Ultimate GOTO", and "Lambda: The Ultimate Opcode".

https://commons.wikimedia.org/wiki/Lambda_Papers


And it's being discussed there (... shallowly, so far) as well :D

http://lambda-the-ultimate.org/node/5621


Writing a lambda like that would require documentation. I wish Excel had a mode to look kinda like Jupyter except reactive (or observablehq except a desktop app), so I could easily add documentation for any cell and fold/unfold as I wish.


If you save it as a named function you can add a comment, and that will appear as the tooltip while you're working on calling the function. If you just want to comment on how/why the function works in the cell where you're defining and immediately using it, Excel has comments and notes you can attach to the cell or a group of cells.


I've used Excel to automate things like splitting spreadsheets into separate files, and I write functional code with lambdas as a programmer. The way they phrased this makes it sound like their goal is to allow you to do hideously complex things, not to let you do more powerful things simply. I started out excited and ended up scared.


This is great and all, but it struck me that only Microsoft could come up with phrases like "LAMBDA is available to members of the Insiders: Beta program" and "LAMBDA complements the March 2020 release of LET". It sounds like something someone might write in a parody press release on comp.lang.scheme 30 years ago.


So they are building LISP inside Excel.... It is now a corollary of Greenspun's Tenth Rule of Programming...


This is great. There were a few network functions I used a lot (validate IP format, find the network address, find the broadcast address, check if an address is in a subnet, etc.), and I would always just stick a "scratch" sheet in a workbook, duplicated for each time I needed to perform an operation, that nobody was ever going to be able to decode after I hit "send" on the email. The additions they've made make this so simple and ubiquitously available that I wouldn't even call it ugly for that use case anymore.


Now we have a legitimate use case for the Y combinator.

What's next? A full implementation of scheme? Common lisp?
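Self-application does appear to be expressible with LAMBDA, so anonymous recursion works without even touching the Name Manager. An untested, purely illustrative sketch of factorial via the self-application (U-combinator) trick:

```excel
=LAMBDA(f, f(f))(LAMBDA(self, LAMBDA(n, IF(n <= 1, 1, n * self(self)(n - 1)))))(5)
```

The outer LAMBDA(f, f(f)) applies the inner function to itself, giving it a handle for the recursive call; the trailing (5) should yield 120.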


Check out this paper from my friend Ronen, and Prof. Fateman, putting Lisp into Excel:

"A paper written with Ronen Gradwohl on Lisp and Symbolic Functionality in an Excel Spreadsheet: Development of an OLE Scientific Computing Environment, August, 2002. (code available on request) "

https://people.eecs.berkeley.edu/~fateman/algebra.html


Implement Emacs Lisp and throw in evil-mode. Now you can run the world's most extensible editor with Vim bindings INSIDE EXCEL!!

Also, Emacs's org-mode implements some basic spread-sheet utilities, so...


If you want to stay on top of what is going on in Excel, the team's Excel Blog is worth checking out now and then.

https://techcommunity.microsoft.com/t5/excel-blog/bg-p/Excel...


I really like what these guys are doing https://numbrz.com/ You can build data stores that can be published to other numbrz users. They can augment your data and republish. As the source data changes, everyone's stores are updated.


I always thought the Excel computational model was elegant and useful as it didn't allow for non-termination.

Now they lose termination for the use case where someone knows how to program but can't or won't program in some "normal" programming language.


Oh no - can't they just integrate M instead? It's a great (skeleton of a) language with a similar basis but far nicer to write.



What is M?


M is the language of Power Query.

https://docs.microsoft.com/en-us/powerquery-m/


I wonder how a programming language “extension” to Excel works together with the fact that the excel language is localized. Will the lambda function names be similarly localized?


> lambda function names

I'm not sure what you mean by this.


I was under the impression that it would ship with a standard library of functions, and not require the user to define common ones in every workbook (HEAD, MAP, REDUCE, ...)

I can see the definition of HEAD in their example but having to do that in my Swedish Excel would be bloody terrible.
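For context, the kind of hand-rolled definition being referred to looks roughly like this (a sketch; the article's actual definition may differ):

```excel
HEAD    =LAMBDA(xs, INDEX(xs, 1, 1))
```

In a localized install, the built-ins inside the body (INDEX here) would presumably appear under their localized names, which is where the pain comes in.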


The PhD work of Sruti Ragavan appears to lie behind part of this development. She hasn't defended her thesis yet, but she's been recruited by the MSR Cambridge team behind this work, where she interned in 2018, IIUC just before she began her PhD work.

https://sruti-s-ragavan.com/research/


I don't understand the point of this function, when Excel already has Power Query. It doesn't seem like anyone who is literate in functional programming would want to use this, and anyone who isn't up to it wouldn't either.

One of the most annoying things about Excel is it has so many parts apparently designed by people or groups that didn't talk to each other and didn't have a grasp of all the rest of it, let alone the world of the (various groups of) users.

Who ordered another Turing-complete system in Excel? One that is, like all the others, a pain and a half to debug or analyze? Has anyone figured out how to turn this into a security vulnerability yet?

Saying "yay people are making videos" only makes me think of all the horrific tutorials on Power Automate. And this: https://xkcd.com/763/


The fact that this is part of the main formula language and not some bolted-on thing means that it is somewhat incremental. Apply more of the incremental-Datalog-type research, and this really could be something neat. (The "grid calculus" would seem to indicate that better linear algebra and maybe even tensor things are on the way, too!)

I hope this gets implemented in libreoffice too; I will certainly tell non-programmers to stop using python or whatever and go back to spreadsheets!


> I don't understand the point of this function, when Excel already has Power Query.

Because Power Query is not a spreadsheet application, and has some much more severe performance cliffs than Excel proper does.


To you and easton, my point is that even if Power Query has shortcomings, it's clearly the best thing to build on and improve, assuming VBA is dying a slow death and can't be revived. Even if, like, you wanted to make another separate language, it should still resemble Power Query, only better.

I don't think people at Microsoft are looking at Excel as a whole; they're like lost souls squatting in a mansion, building sand castles in the rooms they live in that have no relationship to the actual building and what it needs to keep from falling down.

I'm not sure what you mean by performance cliffs. Can you give an example of where and how you would better accomplish something without Power Query? Are you talking about processing data in the range of a few hundred megabytes?


PowerQuery isn’t a replacement for Excel though - it’s a data preprocessing tool for analysts.

It’s not going to replace functionality of core spreadsheet-based excel for accountants, for instance, who typically won’t have a use for PowerQuery as their data is structured differently.


I am not an accountant per se, but I work for an accounting organization. Could you give an example of what you mean by "structured differently"?


PowerQuery assumes tabular data, where each column has a data type and each row is a data element / entity. It is structured similarly to a database.

In a spreadsheet the data is much less structured which is where a lot of the power comes from - for instance PowerQuery doesn’t really support things like subtotals easily, or doing scratch-calculations, or building quick financial models. It is closer to a paper-ledger with calculations scribbled into the margins than a big-data database.

PowerQuery is more about ingesting lots of data and cleaning it, while finance is often about working stuff out and playing with numbers to see what happens - and playing with numbers is easier in a less-structured, loosely-typed environment.


>PowerQuery doesn’t really support things like subtotals easily,

Subtotals? I was used to using GROUPING SETS with Oracle SQL, and found I could roll my own in Power Query. It's a good example of exactly why I like it.

Also, Power Query doesn't prevent you from using the regular table total feature or a pivot table based off of the Power Query output.

That is, even if Power Query doesn't provide all the subtotaling features you'd like in the way you'd like, it doesn't restrict you from anything, does it?

> or doing scratch-calculations, or building quick financial models

I do use it to do all sorts of ad hoc calculations - for instance, it can ingest PDF files or HTML with tables.

It sounds as if you're saying it's too complicated for really trivial calculations?


> It sounds as if you're saying it's too complicated for really trivial calculations?

I'm saying it's not the right tool for some classes of calculations.

For instance I work in designing warehouses, and use both tools. Here are some use cases where Excel doesn't do well and I would use PowerQuery:

* Ingesting millions of historical orders

* Handling relational data

* Data cleaning and aggregations

Here are some example use cases where PowerQuery doesn't work as well, but Excel is perfectly good:

* What height should the pallet racking bays be in this warehouse, and how many pallets am I likely to fit in the building envelope? (considering my other space requirements)

* What's the likely transport impact of opening a new distribution point?

* Running lots of scenarios or sensitivities.

Why are these better in Excel? Well, there are just some things PowerQuery doesn't do well; for instance, Excel can take any other arbitrary cell's value into its own calculation, while in PowerQuery you generally have to use an intermediary table and joins to handle this.

Can both tools physically do it? Yes, it's just that some problems suit one rather than the other, and identifying the right tool for the right problem saves you lots of time. One thing that makes Excel better for scratch calculations, for example, is the fact that it's a live environment (with PowerQuery you have to run it after changes to get the results back, and this can be really slow compared to Excel).


Power Query's streaming semantics for tables and lists can lead to severe performance issues with even modest data volumes. Table.Buffer and List.Buffer offer some small amount of control, but it's likely that you have a pipeline that creates a series of intermediate table and/or list values. Every single table and list function (with the exception of the buffer functions mentioned above) creates a new lazy stream.

Accumulation patterns perform abysmally even with data in the 100Ks of elements. Say you have a table of inventory movements and want instead a snapshot table of inventory at a point in time. You can do an O(n^2) self-join of the table with itself, matching each record to all records with a lesser date and summing all movements to derive the total quantity at that time.

If you want to use an accumulation pattern, you can sort and cast your table to a list of records and then use List.Accumulate to iterate over each list element, deriving a new field with the running total of inventory amount. If you do this, you will find that it falls right over even with 1Ks or 10Ks of records. This is because the intermediate list that you're appending to through the accumulation is itself a lazy stream. Thus, you have to use List.Buffer at each step. Even with List.Buffer at each step, this solution falls over at high 10Ks or low 100Ks of records.
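The pattern described reads roughly like this in M (mocked-up data, illustrative only), with the per-step buffering workaround:

```m
let
    // illustrative: movement quantities already sorted and cast to a list
    Movements = List.Buffer({5, -2, 7, 1}),
    RunningTotals = List.Accumulate(
        Movements,
        {},
        (state, current) =>
            // buffer at each step, or the appended list stays a lazy stream
            List.Buffer(state & {List.Last(state, 0) + current})
    )
in
    RunningTotals    // {5, 3, 10, 11}
```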

Incredibly unintuitively, you can use List.Generate with an already-buffered input list to derive a new list that can then be cast back to a table, though this still struggles with 100Ks of records.

If your snapshots can be aggregates, then you can happily throw out the idea of such an accumulation pattern and just join to a date table at the appropriate grain with all movement records less than or equal to the date in that date table.

I'll note that I regularly speak with several of the people whose blogs you will inevitably come across when performance tuning Power Query. The approaches above are the current state of the art in PQ for iteration and accumulation patterns. This is not an appeal to authority or a brag. This is to highlight the difference from the Excel spreadsheet formula approach below, which even beginners can derive from first principles.

In an Excel spreadsheet, for the same challenge, you just define a new column with a special first row formula, and each subsequent cell referencing the row above. This will happily run right up to the spreadsheet row limit with no performance concerns. If you really want, you can spill over to multiple spreadsheets, which is clunky to manage, but still performs just fine, and degrades slowly. The M approaches above hit a cliff and start hanging.
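For contrast, the spreadsheet version of the same running total is just (cell references illustrative):

```excel
B2:  =A2
B3:  =B2 + A3    (filled down to the last row)
```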

Excel formulas make it trivial to reference arbitrary cells. M is a nearly-general purpose language. PQ uses M, but as a framework for writing M, it has a strong emphasis on a query/table paradigm. A table-based operation model cuts against the grain of a spreadsheet, because a spreadsheet is a collection of arbitrary cells. A tabular approach is a collection of similarly shaped records stacked one upon the other. These two paradigms have a fair amount of overlap, but are not isomorphic. There are things trivial to express in one that become difficult bordering on impossible in the other.


As someone developing essentially a competitor to Excel-and-PowerQuery/M, I find all this very interesting.

My language is strict and statically typed. However, after arrays (tables are arrays of records conceptually) exceed a certain length, rather than processing them in-memory as arrays, they will be offloaded to storage and processed (transparently) in a streaming fashion.

I’m surprised that this doesn’t work well in PowerQuery. I would have thought that 100K would be peanuts for it.

Mine is a SaaS however, so the user’s laptop isn’t a constraint, and I can transparently throw a million records in BigQuery or some other data warehouse and use its aggregates if needed. Although at the 100K scale you can use SQLite and it can handle that scale of data trivially on commodity laptops.

So your experience is interesting indeed.


Feel free to reach out via email if you want to follow up. My address is in my profile.

I'll note, as I did to a sibling reply of yours, I made observations about a specific pattern that showcases performance issues in PQ/M. PQ/M easily scales beyond 100Ks of records, but not for arbitrary processing patterns.


I'm skeptical. I want an example, because my experience differs.


Power Query falls over with thousands of items?

That's not my experience. At work, the data usually isn't very large, but I have experimented on my own time with, for instance, a public covid data file that I think was several GB.

I also thought lazy semantics is a good thing, not a fundamental flaw.

Rather than debate, I would be interested enough to spend some time on a sample problem, if you could provide one, where you believe Power Query inadequate, and at the same time have an alternate solution to provide a benchmark of what is adequate.


Not "PQ falls over with 1Ks of items," but rather "the M language does not do well with accumulation patterns on tables; naive approaches can hit significant performance issues in the 1Ks of records and sophisticated approaches struggle with 100Ks."

These are two very different statements. I've happily used PQ to ingest GBs of data. Its streaming semantics are fine to great for some types of processing and introduce performance cliffs for others. There's no binary judgment to be made here. Laziness is neither a fundamental flaw nor an unmitigated good.

I've already shared one specific pattern above. I can share some mocked up data if you need me to, but that might be a day or two. Also, feel free to reach out via email (in my profile).


>I've already shared one specific pattern above

If you mean this:

"Say you have a table of inventory movements and want instead a snapshot table of inventory at point in time"

Then I can make my own data to play with - I only want to be clear about the constraints. Would 500K records be enough to obviate the distinction between naive and non-naive approaches? Can you quantify (not precisely) "struggle"?

I have used Table.Buffer, but I probably don't thoroughly understand its use yet.

(I belatedly realized your problem is something I've done with Sharepoint list history recently, but not that many records, so I'm going to look for a public dataset to try)

P.P.S. I guess it also makes me think - I frequently am getting my data from an Oracle database, so if something is easier done there, I'd put it in the SQL. Analytic functions are convenient.

P.P.P.S. Aha! I found a file of parking meter transactions for 2020 in San Diego, which is about 140MB and almost 2 million records. This seems like a good test because not only is it well over the number you said was problematic, but it's well over the number of rows you can have directly in one Excel sheet.

https://data.sandiego.gov/datasets/parking-meters-transactio...


Ok, I agree that PQ is slow. It is possible to calculate a running total of a column in a million row table before the sun burns out though.

I am very not an algorithm person, but I got a huge speedup from a "parallel prefix sum" instead of the obvious sequential approach or the even worse N^2.

I translated this to M by rote and trial and error (page 2): https://www.cs.utexas.edu/~plaxton/c/337/05f/slides/Parallel...

Implementing the parallel, recursive solution got me a million rows in about three and a half minutes.

Fill down (which I had to do anyway to compare) was about 10 seconds.

So...probably not the first choice in this scenario but could be worse?


Nobody at MSFT talks to their counterparts. I cringe every time I have to copy and paste between Teams, Outlook and OneNote. There have to be internal customers there with the same problems.


One that threw me for a loop recently was the spell check dictionary and suggestions.

Mess up in Outlook, you right click and it gives a couple suggestions. It'll call out the typo right after you finish the word.

Mess up in Teams? It'll wait until you finish the next word (charitably, giving you a second to figure it out?) then will suggest a different word than Outlook would.


It seems like they are long past the days of a CEO being able to say “Make this stuff work together.”

I’m still impressed with their renewal as a company. Rare for a stagnant tech firm to come back.


The truth is you can basically already do this in Excel without involving anything more complicated like VBA, purely in the spreadsheet. You just need a lot more cells and longer formulas to set it up: lots of VLOOKUPs. This will give the people already doing this stuff some much-appreciated shortcuts.


You definitely don't want to write anything complicated with it, but I think it's a nice intermediate: more secure than macros, and closer in spirit to ordinary Excel formulas than to implementing custom macros.

One thing that would greatly improve the experience would be allowing a formula to contain just a lambda, and then referencing that lambda from another cell as a cell reference. Currently you have to manage lambdas under Formulas > Name Manager. This would make debugging a lot easier in my opinion, since you could freely mix data entry with computation. Not sure why they haven't done this already, but I suspect it's because of assumptions baked into Excel.


> allow for formulas to contain just a lambda and then reference that lambda from another cell as a cell reference

My pet project from a couple of years ago[1] had cells-as-functions. I think it works really well. I also think names are important, but yeah they should either be easy or optional. Glad the Excel folks liked my rad idea though, even if they didn't quite hit all the high notes :-).

1: https://6gu.nz/, IMO worth watching the first minute of the video to see it in action


Power Query is only a thing in Excel for Windows though, so it's likely it will eventually be replaced with something cross-platform. And if you don't want .NET to load every time you open your workbook, you have this.


That doesn't sound like a reason to me. What in the fundamental nature of Power Query prevents it from being cross platform? Or, conversely, what prevents something new that is cross platform from working basically the same as Power Query if it must be strictly incompatible?

It's possible we're not talking about the same thing. Microsoft has slapped "Power" on so many different things. When I google "Power Query" I get a lot of "Power BI" stuff and I try to avoid that like the plague. In my limited experience, it's flaky, unstable, and adds negative value to my reports.

From my perspective, Power Query appears to be similar to the scripting language in something like Qlikview. Except much less painful (for me). I also think "grokking" Power Query could lead to improving SQL, even. The split between SQL and things like PL/SQL or T-SQL always felt wrong to me. Just having functions as a seamless part seems like the thing that was always missing.



I think it's mostly that Power Query (the thing in Excel that pops up when you click "Get and Transform Data" is what I'm talking about) requires .NET 4 at the moment, which means they'd need to get it onto .NET Core to bring it to the Mac, and even then would still have no way to get it on the web or on mobile. It's a serious pain point for cross-platform compatibility in Excel at the moment, but I concur with you that we need to keep it, or something similar, around (especially since Microsoft keeps dragging their feet on adding first-class support for a scripting language that isn't VBA).


I just clicked on the XKCD link and chuckled a bit. Then I thought "Hey, I wonder if there's a new XKCD".

https://xkcd.com/2453/ Wouldn't you know it?


This is going to make writing spreadsheets to do basic engineering calculations far more clear.


This has to be an April Fools, it's just posted early and we're talking about it late. It's not real, right? Right?


Coming up next: Excel Lambda back-end experimental support on LLVM. (Just a joke.) Excel is a very useful tool for many non-programmer employees. With LAMBDA they can have a more expressive system without needing to touch VBA, JS, or the PQ M language, nor jump to Python. Are we able to generate side effects with it? Like poking the value of a cell?



Recent and related:

Lambda: The ultimate Excel worksheet function - https://news.ycombinator.com/item?id=25923628 - Jan 2021 (4 comments)


The "Lambda" paper Steele and Sussman never wrote!

http://lambda-the-ultimate.org/papers

Including:

"Lambda the Ultimate Imperative"

"Lambda the Ultimate Declarative"


Doesn't Sussman's propagator model kind of count as "Lambda the Ultimate Spreadsheet"?


Came here to say this: my first thought was that this can provide a nice interface for implementing Radul & Sussman's propagator model

https://dspace.mit.edu/handle/1721.1/44215


Just bringing the Visual Studio Code editor control into Excel is a plus. This will remove several steps when creating and deploying add-ins.


Wow, that recursive example is a nightmare. I can't imagine being the person that's asked to debug what's going wrong here a few years down the line.


I assume you don't mean the string reversal example. The fixed-point fibonacci isn't meant to show the way you'd actually write fibonacci; rather it's intended to show the debatably interesting property that you /can/ write a recursive function without giving it a name.
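The knot-tying trick the article demonstrates isn't Excel-specific; here's a hedged Python sketch of the same fixed-point idea (a Y-combinator-style helper written for illustration, not Excel's actual formula syntax):

```python
# fix takes a "recipe" f that expects its own recursion as an argument,
# and ties the knot so the recipe can call itself without ever being named.
def fix(f):
    return lambda n: f(fix(f))(n)

# The recipe: given "self" (the eventual recursive function), return fib.
fib = fix(lambda self: lambda n: n if n < 2 else self(n - 1) + self(n - 2))
```

The binding `fib = ...` is only for convenience; `fix(...)(10)` works just as well, which is the whole point: the recursion never refers to a name.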


The Calc Intelligence project at Microsoft Research Cambridge has a long-standing partnership with the Excel team to transform spreadsheet formulas into a full-fledged programming language.

Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.


"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html



