The change of focus is perhaps hard to understand without more context. Basically, there are two really different modes of programming:
Most of what we see on HN is about building applications, servers, websites etc. Big, monolithic things that take weeks to years and are deployed to somewhere else and used by lots of people. Most programming tools are built for this kind of work, where the time between writing the code and actually using it is days or months.
But the people we want to make programming accessible to are mostly knowledge workers. Their work is characterised by a mixture of manual work and automation, throw-away code and tools rather than applications. It's better supported by Excel, SQL, shell scripting etc than by the big languages and IDEs.
We realised that we can do much more good focusing on that kind of programming.
I am currently working as an Actuarial Analyst, but I have also worked several years as a programmer.
As an analyst the tools I use are Excel, Access, SAS Enterprise Guide, and Oracle SQL Developer. One of the big problems I face is that we have no good way to abstract away a process and really make it reusable.
My general workflow is to use SAS to pull data from multiple sources, combine it, and run it through some series of logic/calculations, then take the resulting data and copy it to Excel for some additional analysis or a report. This might be for a monthly/quarterly report, or an analysis that needs to be updated with the additional runout of data.
But these steps are all tightly coupled together. If I want to rerun the same logic on a different data set, or an updated data set I will copy and paste all of the files, update the queries. I have no way to bundle them together so that I can easily reuse with different data sources, or refreshed data.
Really what I want is some way to encapsulate different sets of data transformations/calculations into functions, so I can reuse them in different contexts and among different people.
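For what it's worth, the kind of encapsulation being asked for maps naturally onto plain functions. A minimal sketch in Python (the column names, values, and threshold are all invented for illustration):

```python
def monthly_report(rows, rate_threshold):
    """A reusable transformation: filter rows, then aggregate.

    The column names ("rate", "premium") are hypothetical; the point is
    that the logic is decoupled from any one data source.
    """
    selected = [r for r in rows if float(r["rate"]) > rate_threshold]
    total = sum(float(r["premium"]) for r in selected)
    return {"count": len(selected), "total_premium": total}

# The same function runs unchanged against any source that yields these
# columns: a SAS export, a refreshed CSV, a database query...
rows = [
    {"rate": "0.05", "premium": "100.0"},
    {"rate": "0.15", "premium": "250.0"},
]
print(monthly_report(rows, 0.10))  # {'count': 1, 'total_premium': 250.0}
```

The copy-paste-and-update-queries workflow described above is exactly what passing a different `rows` argument replaces.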
Take a look at my EasyMorph (http://easymorph.com). It's a visual replacement for scripted data transformations. People use it to replace SAS and Visual Basic scripting. It also allows creating reusable modules. Contact me at <hnusername>@easymorph.com if it looks interesting to you.
Hey, I clicked through, read the tutorial, got excited about your examples.. tried to download and found out it was windows only! I would have totally evaluated it further if there were an os x/linux option.
Speaking of Tableau (which was founded on the concept of VizQL), how is this different? Doesn't tableau basically enable knowledge workers to create data-centric web applications?
Does this have an API that can be called from .Net?
I'm really liking some of the things Microsoft is doing with Power Query, but I don't like how it is (afaik) only callable from Excel or PowerBI online. I'd like similar capability, but more open, and could be called via scripting, from SQLCLR, etc.
Another big hitch: Microsoft has not published proper APIs for manipulating PowerPivot models in Excel, and I don't think they intend to. I've heard one third party has reverse-engineered the APIs (you can decompile the .Net binaries, but I haven't had the time to look at it yet).
Would you perhaps have any info/reference about where I could learn more about this reverse engineered PowerPivot API? This sounds pretty exciting. :D
To expand on my question: I'd heard (from Rob 'PowerPivotPro' Collie, a former product manager on the project, IIRC) that the core had been written in 'unmanaged code' (probably C++?), so I believe reverse engineering it would be a significantly larger effort than just opening some of its DLLs in, say, DotPeek, at least as far as I've been able to tell.
The PowerPivot engine itself I imagine is in unmanaged code, but the code that just writes datasets and whatnot to the model is (from what I've heard) managed code, and indeed you can decompile the libraries and see all sorts of things; I've only looked around for about 10 minutes or so. I had also read on some obscure thread that someone had successfully found the undocumented API call to write to the model, which is what I'm wanting to do, but I don't even know the name of the product that supposedly does this, sorry.
Right, makes sense, I'll try and check out what's available then. :)
By writing to the model, you mean programmatically adding new measures or the like?
My interest is in programmatically querying models using DAX, though to this end I'd also like to look in the direction of Microsoft's DirectQuery mode in SQL Server, which supposedly did DAX-to-SQL conversion.
If one could use such a conversion plus MDX to start querying models on an Apache Spark cluster through pivot table/chart interfaces...
Not even measures; I'm just wanting to be able to create tables, define relations, etc., with the accompanying SQL or M script. I'm hopeful they'll let us do that some day, but I still don't quite believe they've changed their stripes entirely.
Currently EasyMorph supports integration through the command line only. We do not plan to have an API for the desktop client, but we will definitely make an EasyMorph Server API if we reach that point.
We use Pentaho Kettle for those kinds of transformations. It's FOSS, and connects to a whole bunch of programs and formats.
It's a graphical tool - you drag-n-drop modules, then configure and connect them, though it can also run scripts (it has JavaScript, Java, Bash and Ruby support, besides SQL, of course) - but after configuring the transformation/job, you can also run it on the terminal, which is useful for periodically re-running it.
I've been doing a lot of work with Kettle as well, and it is a handy tool (albeit with a few warts).
What I think would be handy for use in an organizational setting, where "business users" might want to use some of the transforms, would be a way to publish transforms somewhere, making them discoverable and accessible to others. I don't want to make it sound like I'm talking about UDDI or anything (although, thinking about it, maybe you could use that), but just an easy way for a Joe Business User to get a list of available transforms, some explanation of what they do, what input they take, what they output, etc. And maybe a way to make changes to the "small stuff" (like the input and output path, for example) without having to load up Spoon and edit the ktr that way. Since transforms can be parameterized, that should be doable...
You could also picture combining this with something like a Yahoo Pipes-style web interface, to let you define your own chains of transforms and operations as well. And hell, a web-based interface for editing ktr files would be a pretty interesting thing too, if somebody would build it.
The Databricks platform should solve exactly your problem: reusable data pipelining/transformation. I saw a demo of it last night and it was extremely slick. Their product is amazing; it makes data pipelining incredibly easy compared to setting up a Hadoop cluster and running Hive etc. (I don't work for them, but if any Databricks employee sees this, please hire me!) It runs on a Spark cluster over AWS, which is much more modern and powerful than SAS/Excel/SQL. Since you know how to program already, it shouldn't be too hard to pick up Spark (it even has Python bindings).
@rgoddard - May be a bit overkill, but check out Immuta (www.immuta.com). It's a data platform, built for data scientists, that enables you to query across many disparate sets of data using familiar patterns such as SQL, file system, etc. Our SQL interface allows you to hook up Excel, Tableau, Pentaho... so you could write your abstracted logic and connect to many data sources or mashed-up analytic results. Contact me at matt@immuta.com if you're interested after reading through the site.
I'd divide it as essentially "User of Packages" vs. "Writer of Packages", and of course, the dichotomy is not actually a clear one. But the choice was to suggest that there are some people for whom the programming language and its libraries are not an end of itself, and "Just crack it open and write your own thing for X..." is essentially a non-starter.
I think it's useful to distinguish between the two groups, because not only do they have different skill sets, but they have different motivations. For example I will never directly be evaluated on the performance or style of my code in the way a programmer might be - only the paper that code helped me write.
Was there a problem inherent to building large applications that you found intractable, or is the shift solely due to focusing on entry-level accessibility?
We built a Foursquare clone recently and the BOOM guys built an extended version of HDFS and Hadoop (http://db.cs.berkeley.edu/papers/eurosys10-boom.pdf). It works out pretty well. The shift is not really about accessibility either - I've watched people do some pretty advanced data work in fields like physics and biology.
It's more about making computers into personal tools. If you look at the tools the average person uses - email, excel, google etc - they all work really well individually but they are really hard to extend or compose. Each application is a world unto itself and doesn't play with the outside world. What would really help people work is not the ability to build their own applications but the ability to move data around and glue tools together. It's kind of like applying the unix philosophy to office suites.
The shift is primarily due to the fact that relatively few people seemed to want to build large applications, including ourselves.
There are definitely some differences between building large apps and these more communication/analysis tasks, but we think the foundation itself applies to both. The language is an adaptation of the Dedalus[1] semantics, which the BOOM lab did some amazing things in distributed systems with [2]. If it can build a clone of hadoop, chances are it can build most things. We've built a number of our compilers in Eve, several of our editors were bootstrapped, we've built numerous examples, most recently a complete clone of Foursquare. Before we'd want others to try and do that though, we need our tooling to get a bit better. We expect that the Eve editor will get there eventually kind of out of necessity - we're going to bootstrap a lot of it this time too, starting with the compiler.
My take from the abstract of Dedalus is "and adds an explicit notion of logical time to the language"
If you're interested in building a tool that relies on distributed communication and data flow, it makes sense to bake a notion of logical time into the system. Boolean logic has no notion of time. If you propose x != y, say, you could be saying that throughout the lifetime of the system x is never equal to y, or you could be comparing x to y at this instant in time. It depends, of course, on whether these are constants and/or variables.
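A toy sketch of that distinction, in Python (this illustrates timestamped facts in general, not Eve's or Dedalus's actual semantics; the data is invented):

```python
# Facts annotated with an explicit logical timestamp: (name, tick, value).
facts = [
    ("x", 0, 1), ("y", 0, 2),
    ("x", 1, 3), ("y", 1, 3),
]

def value_at(var, tick):
    """Look up a variable's value at one logical instant."""
    return next(v for (n, t, v) in facts if n == var and t == tick)

# "x != y at this instant" is a query against a single tick...
print(value_at("x", 0) != value_at("y", 0))  # True

# ...while "x is never equal to y" quantifies over all ticks.
ticks = {t for (_, t, _) in facts}
print(all(value_at("x", t) != value_at("y", t) for t in ticks))  # False
```

Without the explicit tick, the two readings of `x != y` are indistinguishable, which is exactly the ambiguity described above.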
Type theory shows that different logics map to different type systems so what may be holding programming back is that the logic of a system is not _dynamically_ selectable as the system evolves. Most (all?) programming languages have a simple boolean logic, mutable state, and just tons and tons of syntactic sugar on top of that. Obviously languages like Haskell and Clojure are more advanced (algebraic data types in the former and immutable data structures in the latter) but they still have a fixed/static way of being in the world if you know what I mean.
Natural language shows us that humans use many different types of logic contextually. Logic is not monolithic, maybe Eve is an admission of this?
Sorry if this makes no sense, it's just a hunch that's been percolating for a while.
One of the things we are still working on is expressing non-monotonic logic nicely (things like "birds can fly, but penguins can't, but Harry the Rocket Penguin can"). It's unpleasant in standard datalog but I think we can provide a nicer interface.
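A rough sketch of what "most specific rule wins" reasoning looks like, in Python (a hand-rolled illustration of the general idea, not Eve's actual mechanism):

```python
# Each rule is (condition tags, conclusion). The most specific matching
# rule, i.e. the one with the largest condition set, wins.
rules = [
    ({"bird"}, True),                        # birds can fly
    ({"bird", "penguin"}, False),            # ...but penguins can't
    ({"bird", "penguin", "rocket"}, True),   # ...but Harry the Rocket Penguin can
]

def can_fly(tags):
    matching = [(cond, val) for cond, val in rules if cond <= tags]
    return max(matching, key=lambda cv: len(cv[0]))[1]

print(can_fly({"bird"}))                       # True
print(can_fly({"bird", "penguin"}))            # False
print(can_fly({"bird", "penguin", "rocket"}))  # True
```

Each new exception only adds a rule; nothing upstream has to be rewritten, which is what makes the non-monotonic style pleasant when it's supported.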
10 minutes into the Rich Hickey video and I can see why you responded to me with a link to it. This is indeed what I'm getting at; so many languages use fundamentally the same underlying logical and state model. Rich mentions single-dispatch, stateful OO. To that I would add boolean logic. Our systems are so riven by it we don't even see it. And I reckon it doesn't have to be that way! I can totally see why Eve is written in Rust from the 10 minutes of this talk that I've seen, and I can see that it is the incidental complexity of managing the lifetime of objects in your head in C++ that has forced this shift. Or, as per Hickey, Clojure-wards.
Still, though, both Rust and Clojure presume an omnipresent bivalent atemporal logical discourse. If you're working on a different logic (or sets of logics?) in Eve, then why not make them dynamically user-selectable at run-time in an intuitive manner :) Granted, I have _no earthly idea_ in practice what this means, but when you reflect on how humans manipulate concepts internally you see that we have the machinery for this built into us, or learnt it somehow at a very early age. Tapping into this fluid logical apparatus would be ever so neat.
Is the endgame here making eve applications automatically distributed or parallelized?
I ask because the monotonic logic that Dedalus excels at expressing is quite limiting. Unless you are in an execution environment where operation ordering/synchronization is expensive (i.e. among a set of distributed processes), the nice order-independent properties that CALM analysis gives you don't really buy you much.
I can't see us using CALM for anything in the near future. The focus right now is just on making the basic programming experience smooth. We chose Dedalus because the discrete, synchronous model of time makes it easy to separate things which are truly stateful from things which are not and to handle both in a live, interactive environment. CALM is just a bonus.
It seems like this might be a good fit with what Sandstorm is doing (making personal servers easy to use). Even if I'm writing a program for myself, I still want to access it from multiple computers and share the results.
Granted, shell is useful for concise interactive one-liners. Granted, Java is a terrible replacement for shell. But there's no kind of programming where shell is better than Perl, Python or Ruby - "big languages" which, unlike Java, were designed to be useful for scripting by people who aren't full-time programmers. Because they are "big languages" they also have the property that you can build bigger things on them, starting from your messy prototypes. It's better not to even try that with shell or Excel.
That is a much better and saner explanation than these "revolutionizing programming"-grandiosity talks.
Eve will always tend to be misunderstood on Hacker News or Reddit. It simply is not aimed at professional developers or intended for building production systems.
My opinion is that programming is literally defined by editing text files with difficult to comprehend source code. Anything that is easier or deviates from that is by definition not programming and therefore something users do.
The rationalization is that anyone using an easier tool to program computers without as much code must not be able to use code. Therefore us programmers are better than them.
It's very similar to the earlier era of punchcard programmers scoffing at assembly language programmers. Or hand-tool craftsmen sneering at mass-produced, component-based manufacturing.
That infantile belief system will persist until super-intelligent AIs revise it or (maybe) the next generation wises up.
So after they didn't exactly revolutionize the IDE with Light Table, and pivoted to revolutionizing programming, they seem to have kind of come around to revolutionizing... groupware? Which at least seems more plausible, as it ought to be possible to make a better groupware application than Lotus Notes.
I think the larger implication is: Programming tools are actually pretty good, and the larger process of programming is fairly solid, and while improvements are possible, they're probably going to be evolutionary improvements that build on what we have, rather than throwing everything away in favor of a brand-new approach.
I get the opposite message, which is that programming tools and the process of programming are so bad, it's hard to get a computer to do a simple task for you without tearing your hair out, never mind building a UI. A better IDE (basically a better text editor) doesn't even scratch the surface.
The thing is, it's easy to say something is hard. Programming is definitely hard!
The question is: Is it hard essentially, or is it hard accidentally? That is, can you remove unnecessary complexity from programming and suddenly it'll become easy?
That was the proposition behind first Light Table, and then Eve as originally conceived. But neither of them really found a satisfying answer, a way to say "hey, for making your web app or whatever, if you throw away your existing stuff and use this tool/process, now it's super-easy."
That implies heavily that a lot of the complexity and difficulty is essential. Not all of it -- things will get easier and better over time, as they have over the last ten years -- but enough so that blowing it all up and starting from scratch isn't likely to lead to wins.
> But neither of them really found a satisfying answer, a way to say "hey, for making your web app or whatever, if you throw away your existing stuff and use this tool/process, now it's super-easy."
Actually, we think we did. We're just not choosing that as the primary focus of the workflow in Eve. Now with a better version of the editor and with a bit more work on the UI builder, I suspect we could rebuild the entire foursquare clone in under a week. If we had VCS so that multiple people could work together on it, it might only be a couple of days. This foundation for programming has lots of implications for building "real software" - it's actually based on research for making distributed systems much easier to build. [1]
We'll see more of that as we go since we're bootstrapping bits and pieces. One of the first things that will transition over is the compiler, if that gives you any indication of the level of sophistication you can achieve with this programming model.
There's certainly a lot of accidental complexity that comes from lack of standardization. For example, think about how complicated character sets were to deal with before Unicode. Then compare the mess of different kinds of Unicode encodings to standardizing on UTF-8. Or take file formats before XML/JSON/Protobufs. Or, going further back, floating point before IEEE 754.
I agree that blowing it all up is not a win, which is why accidental complexity goes away only gradually, and it's partially a process of hiding it rather than removing it, with lots of politics along the way.
In its essence programming is transforming data (and code is data too). Everything else is incidental. But if I think of the things I do in my day-to-day work as a software developer it is 99% logistics (getting data in the right place and in the right form) and 1% related to actual meaningful transformation. If my understanding is correct (I only had a cursory glance) eve attacks this problem from the promising angle by making all data available in a ready-to-query database.
Still I think there must be some hard lower limits on amount of incidental complexity. Nature just can't allow you to get rid of all of it (impossibility results from distributed systems theory come to mind) just as in manufacturing transport costs can't be zero (goods and materials can't be transported between factories faster than the speed of light after all). It will be interesting to see how eve team works around these issues.
> But if I think of the things I do in my day-to-day work as a software developer it is 99% logistics (getting data in the right place and in the right form) and 1% related to actual meaningful transformation. If my understanding is correct (I only had a cursory glance) eve attacks this problem from the promising angle by making all data available in a ready-to-query database.
Reminds me of the old:
"It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures." —Alan Perlis
> Is it hard essentially, or is it hard accidentally?
Not exactly either, but certainly a bit of both. Programming is hard primarily because it is so poorly understood. The entire field is in its infancy. Comparing it to art, I'm pretty sure we haven't even reached the "stick figures scrawled on a cave wall" stage yet. As Alan Kay pointed out, we sure haven't invented the arch yet: http://squab.no-ip.com/collab/uploads/61/IsSoftwareEngineeri...
It's nice to see the Eve team trying to do something at least slightly different from the same-old, same-old. Even if it looks a lot like some horrors of yore (FoxPro) when I squint.
I think a lot of it is essential. At its core, programming is about designing processes and procedures, and I'm not aware of any sphere in which the state of the art inspires much hope. Certainly not law, business or accounting.
But I also think there's a lot of value in continuing to try new things. This complexity is so expensive that the payoff from a successful project could be enormous. And even when projects fail, we can often learn a lot from understanding how they fail. See that recent HN discussion on literate programming for an interesting example.
I've also concluded that it's essential: Programming is the solution for the broadest space of problems in computing.
If you can narrow the problem space to some finite number of goals and workflows, you have a viable application. Application environments with sufficiently many workflows always reincorporate programming as a way to let the user build their own solution - and most things are implicitly programmable, whether through a defined API or through tricky reverse-engineered methods.
What I think muddies the picture is the line between "design" workflows and "engineering" workflows - in the first, you're piecing together the existing technology in a different way, while in the second, you are transferring math and science knowledge into original technology. As individuals we experience personal bias as to which side of programming is more "necessary," which is reflected in the resulting choices of tooling, code style, and preferred problem domains.
Sometimes you want design-heavy programming - e.g., you add some business logic and a UI on top of a database. Other times you want to add engineering to an existing design - you're writing hardware drivers using a common protocol, a data formatting plugin for an application, etc.
Library code acts as a way to expose units of engineering, while a framework defines a broad, but still configurable design space. Sometimes you have overlapping design spaces - you can have client code that works with a GUI framework, but also talks to an internal model and remote data sources.
One of the things that is exciting about programming's evolution is how much it is based on an ecosystem of technologies. Outside of some embedded fields, the era where you are given a hardware manual and are told to come up with your own development environment is over. Successful technologies tend to act parasitically on prior ones. This leads to a lot of compromises, but the general direction remains toward "better fit."
Yeah. They said that for doing simple things, rather than building complex systems, the current tools are not very convenient.
I find it really useful to use Ruby's built-in CSV library to process data for spreadsheets then visualize it with a graph in Numbers. I pull data from copy-paste tables, extract from Sqlite databases embedded in applications, etc. But for something like scanning Facebook friends as they suggested I'd have to first figure out the API or how to scrape the data. Once I figured that out it probably wouldn't be too hard to write a script for it. I could probably even put it in a crontab (LaunchAgent plist on a Mac).
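The kind of throw-away data-munging script described above, sketched here with Python's csv module for illustration (the comment uses Ruby's CSV library; the data and column names below are made up):

```python
import csv
import io

# Stand-in for data pasted from a table or pulled from an app's database.
data = io.StringIO("friend,city\nalice,Boston\nbob,Austin\ncarol,Boston\n")
rows = list(csv.DictReader(data))

# Count friends per city: the sort of one-off summary you'd then drop
# into a spreadsheet to graph.
counts = {}
for r in rows:
    counts[r["city"]] = counts.get(r["city"], 0) + 1
print(counts)  # {'Boston': 2, 'Austin': 1}
```

Once a script like this works, scheduling it (cron, or a LaunchAgent plist on a Mac) is the easy part; as the comment notes, figuring out where the data comes from is the hard part.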
Anyway, I think there's a huge opportunity to empower more people who are inquisitive and maybe somewhat technical but are not programmers. Make it easy for these people to solve problems rather than trying to teach them to "program". The end goal in my mind is the Star Trek programming model, where you discuss with the computer what you want to do to solve a problem. Those crew members are scientists and engineers, but only a few of them are "programmers". (Star Trek is a useful yardstick because we seem to like copying technology out of it, e.g. the Star Trek communicator, which became the late-90s cell phone, and the Star Trek PADD, which became the smartphone/tablet.)
Perhaps, but my intuition is that the bottleneck between most people and programming isn't the tools. It's that programming requires the programmer to conceptualize the problem formally, and then to formally define what he/she wants the computer to do.
We can come up with better and better formal languages and ways of editing these languages (and probably should), but I would guess that even a visual formal language will still pose many of the same difficulties for users that existing formal languages do.
1. Modern programming tools have a decades-long head start
2. Why denigrate groupware?
While this may not succeed, I have little reason to believe future improvements will all be evolutionary (I know you hedged there, saying "probably"). A lot can be learned from shedding assumptions. If the result of the project is inspiration for building tools on top of traditional programming models, the originators may be disappointed, but I'd call that success in basic research.
> they're probably going to be evolutionary improvements that build on what we have, rather than throwing everything away in favor of a brand-new approach.
Probably, yes -- but working on a problem starting from the beginning is something more people should do.
It is riskier, for sure. And I am glad their team is doing it.
This seems like more of a competitor for something like WebMethods: a simplified way of programming complex business logic. With most business logic code, the hard part is actually figuring out what the business wants the logic to be and communicating with business stakeholders, and those are tasks poorly suited to most developers.
There is nothing technically difficult about these problems, and it's work that's really better suited to a business analyst anyway. While developers should obviously check their logic to make sure it's sound, gathering the requirements is 90% of the work in these situations.
Anyway, that market seems a lot easier to compete in than the straight up dev tools market. Dev tools are so personalized, with every person/team/project/company having different needs and requirements that it seems like the only way to succeed would be with a niche product (which naturally limits the scale of your success).
> In order to accomplish that, we do need a way to describe processes. We need a way to "program." But switching the goal from building applications to analyzing and communicating information changes everything. Our current programming tools are awful thinking tools. Instead, they were designed to build complex systems. How much effort does it take to write a program to scan through your facebook friends and check to see if someone who usually isn't in your area currently is?...People aren't really trying to build the next Facebook, they're trying to use the information from it in a different way.
The example given here by the OP strikes me as a good example of how and why programming is complicated, and what people generally want their programs to do is unlikely to be doable without knowing how to program.
Case in point: why can't a layperson just make a little app "to scan through your facebook friends and check to see if someone who usually isn't in your area currently is"? The easy, glib answer is: well, Facebook's developer API requires several hoops to jump through, including OAuth for clients and so forth. So that's why there's no drag-and-drop, plug-and-play module system for such a feature.
The bigger answer is the answer to the question of why does Facebook's API have to be so complicated? Well, besides business reasons...FB's API is a public-facing abstraction over a system in which a billion people have agreed to (semi-)authenticate themselves and communicate a variety of real-time things about themselves. As annoying as it is to program your own little FB apps...it's complicated because the system it interfaces with is overwhelmingly and amazingly complicated.
I don't see much room for improvement in making programming easier in this regard. It'd be like making Shakespeare more digestible to people who don't want to learn to read (OK, ignoring oral storytelling, for this limited analogy).
> If Facebook had an "export my friend list to Excel" button, then plenty of non-programmers could perform this task using existing tools.
The task was to send alerts whenever a friend was nearby. Excel is amazing for non-professional programmers, but it doesn't deal well with changing data, unless you want to click the "export my friend list to Excel" button every five minutes. There are tools that will act as real-time data sources in Excel but it's not a natural fit. If we did nothing but make "real-time Excel" it would still be incredibly useful for a lot of people.
An excel power-user is indistinguishable from a programmer. If you are an excel power-user, and don't think you are a programmer, learn javascript, http://eloquentjavascript.net/.
I've known many Excel power users who are technically minded and experts in their domains, but nevertheless are not programmers. (Edit: obviously they're programmers in the sense that they make the computer compute things, but not via a general-purpose language.) They have no interest in learning Python or JavaScript or even VBA—it doesn't fit with how they like to think. Instead, they lay out complex calculations in Excel using long chains of intermediate columns (zeros and ones and COUNTIF, anyone?).
There are definitely some symmetries there, but also differences. These users don't think so abstractly. They get their computation working on one set of numbers and then, if they need to reuse it, copy-paste and modify.
The only thing a super complex formula is missing is the notion of a for loop. In lieu of that, I used to pull a column down as many times as the loop needed to run. Then I discovered macros, then I discovered javascript.
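The copied-down column really is an unrolled loop. A trivial Python illustration of the equivalence (the numbers are invented; a running total, the classic pull-the-formula-down pattern):

```python
values = [10, 20, 30, 40]

# Each entry of `running` is what one row of the copied-down column
# would compute: this cell's value plus the cell above.
running = []
total = 0
for v in values:
    total += v
    running.append(total)

print(running)  # [10, 30, 60, 100]
```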
Some spreadsheet users discover programming and take to it, which is great. Others have the opposite reaction—writing scripts doesn't fit how they like to think or work. It sounds like you are the first kind of user; the ones I was thinking of are the latter.
Excel is a very powerful tool, but it reaches a utility plateau very quickly when you start going outside its intended purpose of being a spreadsheet application. You're correct that a power-user can do pretty much anything in Excel, but when you start talking about actual database operations (like Join), state, and UI, then you're at the point where the tool is working against you.
The so-so way: use MATCH in one column to get the row numbers, INDEX in one column for each column you want to pull in to actually grab them using that row number.
Alternative: join the tables using Power Query so you could have it refresh and give you the combined version even after adding more columns.
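For anyone unfamiliar with the pattern, here is roughly what the MATCH/INDEX combination computes, sketched in Python (zero-based rather than Excel's one-based indexing, and the data is invented):

```python
keys = ["A101", "A102", "A103"]   # lookup column in the other table
amounts = [250, 310, 175]         # column you want to pull in

def match(key, column):
    """Like MATCH(key, column, 0): the row position of the key."""
    return column.index(key)

def index(column, row):
    """Like INDEX(column, row): the cell in that column at that row."""
    return column[row]

row = match("A102", keys)
print(index(amounts, row))  # 310
```

Repeating the `index` step once per column to pull in is exactly the "one column for each column you want" drudgery described above, which is why a real join is nicer.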
I mean, point taken, but in case you were wondering, then yeah.
VBA also does SQL operations on Excel tables, but that's, well, worse.
Yeah, it could be done using the ADODB interface on data in the workbook itself rather than from some actual DB. See this link for an example: http://stackoverflow.com/a/26678696/1502035
That being said, when I encountered someone doing this I was pretty surprised as well.
Pretty wild - it would be nice if MS put a little effort into their cash cows now and then, though, so people wouldn't have to resort to such things.
I would argue that an API, regardless of how complex it is, is far easier to learn than the abstract reasoning and modelling skills needed to transform the problems you face into a precise model.
I respectfully disagree. People who are not programmers use abstract reasoning and modelling skills all the time in their daily lives. They more than have the skills to solve this problem: You have a list of friends and where they are, you have your own location. Find the list of friends that are near your location, and send an alert (text, e-mail, whatever). How hard is that?
There is absolutely no inherent complexity to this problem.
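For what it's worth, the inherent part of the problem really is small. Here is a toy sketch in Python; all names, coordinates, and the distance threshold are invented, and printing stands in for actually sending a text or email:

```python
import math

my_location = (40.7, -74.0)
friends = {"Ana": (40.71, -74.01), "Bo": (34.05, -118.24), "Cy": (40.69, -73.99)}

def distance(a, b):
    # Rough planar distance in degrees; good enough for a toy example.
    return math.hypot(a[0] - b[0], a[1] - b[1])

# "Find the list of friends that are near your location, and send an alert."
nearby = [name for name, loc in friends.items() if distance(my_location, loc) < 0.1]
for name in nearby:
    print(f"alert: {name} is nearby")  # stand-in for texting/emailing
```

Everything beyond these few lines (delivery channels, fallbacks, priorities) is exactly the accidental complexity the replies below point at.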
I frequently use software specifications as a counterargument to this. Non-developers can't write specifications without leaving out key pieces of information. My point is that since they are constrained not by the limitations of a programming language but only by their own written expression, the medium should not be the limiting factor.
Your example is extremely simple and could work for a new coder, but once things get just slightly more complicated, things get messy. I can already find exceptions in your very simple example: how exactly are users supposed to be alerted? Do all users have a mobile number so they can be texted? If not, do you try email? What if they are online; do you send an IM? Should you send to all available addresses? In which order do you prioritize messages if you only send to the first available destination: email first or SMS first?
This can be solved by having sensible defaults. So instead of having to specify everything, there's a baseline behavior. Most may be acceptable, some is not. Fine, then the user can change what needs to be changed.
The most important part is that the user is not spec'ing in the dark, instead they are modifying existing behavior to suit what they want.
The spec is a bit more complex than that. It's not "Friends who are in your location" but "Friends who are not usually in your location but are now" (prodigal sons?)
You cannot just give a smart person with common sense some pages of API docu and suddenly they are software architects. This takes years of experience to do. What you can do is be a code monkey and solve a very specific small problem space that an architect has assigned to you.
You are supporting cmontellas point, not negating it.
His point is that the inherent complexity of the problem is fairly trivial, and he stated it concisely.
And you are correctly pointing out that to realize the solution to the problem requires a great amount of work and expertise. Meaning, the actual programming has a great deal of additional (accidental) complexity.
Maybe some non-programmers manage to solve the problem itself. But only a tiny fraction of them will be able to write down what they actually did, step by step, in a way precise enough to translate into a computer program.
At a glance, it reminds me of what MS Access was (is?)... or could have been if MS hadn't ignored it to death.
So many people (myself included) were/are incredibly empowered by that program, and I still have a fond place in my heart for Access, as it was my bridge from Excel macros to "real programming".
Hopefully Eve doesn't get DabbleDB'd... the world really needs a modern MS Access!
I work here, so disclaimer, etc, but we're working on a modern MS Access at Airtable [1] -- it's very similar to DabbleDB.
It's early days both for Eve and for us, but it kinda feels like we're approaching the same problem from different angles. Eve's more focused on the programming experience while we're starting by focusing on the data.
This looks quite interesting. Do you have plans for a pricing model to support building a public website on top of it? Also, will it only be (your) cloud based?
We're still figuring out pricing, but each base ("database") has a custom API (the endpoints correspond to your specific schema) that you can use to build a website: http://airtable.com/api
If you haven't heard of DabbleDB, this video is a good introduction: https://www.youtube.com/watch?v=6wZmYMWKLkY. It shared some goals with Eve, like letting people easily enter and manipulate data, though as far as I know it never tried to support general-purpose programming.
So every time I've seen something like this (ETL tools, LabVIEW, Scratch, pd/Max, etc.) I've noticed a common problem. They're dead simple for creating simple things, but simple things often grow into complex things over time, and once complex things are implemented in graphical programming languages they become nightmarish to maintain. Subtle logic ends up buried... Simple operations like a full search of a project or diffing two versions become impossible or clunky, and you end up with the one person who knows how to maintain X.
Is there anything here that addresses this problem?
The short answer is we have lots of tools in mind that will help with this, but programming this way just creates a very different kind of system. We've built some complex things and they've remained fairly flat, and we made sure that it is both readily apparent what is contributing to your current query and easy to navigate into it if you want to see more. For the most part these problems boil down to navigation and debugging issues, both of which we have really powerful ideas for. For example, we want our debugging story to be what's called a "why? debugger", where you can click on any value in the system and Eve will show all the data that went into calculating it and every query it went through to get here.
There's assuredly going to be lots more experimentation needed here, but we have every intention of making this handle more complex things. We're bootstrapping the compiler and eventually the editor, we've built clones of websites, and we'll continue to push the edges of what we can do with it :)
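The "why? debugger" idea can be sketched in miniature. The following is an illustrative toy, not Eve's design: every derived value remembers the operation and inputs that produced it, so any result can explain itself.

```python
class Traced:
    """A value that records its own provenance."""

    def __init__(self, value, op="input", parents=()):
        self.value, self.op, self.parents = value, op, parents

    def __add__(self, other):
        return Traced(self.value + other.value, "+", (self, other))

    def why(self, depth=0):
        # Render this value and, indented beneath it, everything it came from.
        lines = ["  " * depth + f"{self.value} ({self.op})"]
        for parent in self.parents:
            lines.extend(parent.why(depth + 1))
        return lines

total = Traced(2) + Traced(3) + Traced(5)
print("\n".join(total.why()))  # the full derivation of the value 10
```

A real system would track provenance through every query rather than per arithmetic operation, but the principle is the same.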
Well, I guess this is as good a comment as any here to say this... I feel morally obligated, as someone who has been frequently skeptical about visual programming on HN, to point out that what I personally really meant is that...
"Along the way to version 0, we tried everything from a Mathematica-like notebook built on functional programming to a purely spreadsheet-like model. We built dataflow languages and madlib based editors and read through papers from the foundations of computing. One thing that ran through all of this, however, was to make sure we never drank our own kool-aid too much. We dug through research and created postmortems and landscape summaries of all the projects that have come before us. We tested our ideas with actual people and against real projects. That meant that we "threw away" most of what we did to get here. It was the best way to keep ourselves honest."
... that is the minimum effort required. I have long advocated that people picking this problem up hit up what's already been done and hit the research to make sure they're not going down a known failure path.
If anyone's going to get to stick my skepticism about visual programming back in my face, it's someone who's doing the stuff in that paragraph, not someone who goes (basically) "Programming sucks, by extension you all suck, visual programming is obviously the solution because, visual! And look, guys, here's my 2-week solution that proves it out!".
I won't lie to you, I remain skeptical, but, well, I'm just generally skeptical about things that don't exist yet. I wish you all the best, and I promise you that if you do succeed I won't do that thing where I pick nits to claim it's a failure anyhow. And I also promise you that I'm happy to say you've been successful in some niche, not set the bar at "REPLACES ALL PROGRAMMING, EVERYWHERE!" or something equally silly.
It also depends very much on the individual psychology and experience of the human using the tool, and on the kind of thing being developed. For example, to lay out the GUI in a GUI builder, you'd rather place it visually, while to specify the logic you currently do it textually. The second is textual because prior experience (namely, practicing programming) made specifying logic textually easy for you.
The question is, when someone is thinking "this is what my app should do," what is the model of the "this" inside their mind? What form does it have, and can we make computers be able to read something closer to that form, rather than having the human have to add more layers on that form before handing it to the computer.
If development becomes a conversation between the computer and the human, it might be more interesting, satisfying, and fun, and spawn directions that the human would not have come to so quickly with earlier forms of development tools. Thus this whole thing is about a search for something 'cool and fun' rather than 'X is bad, Y is a solution': like a kid walking out in his/her backyard and looking for cool things with not much preconceived notion of what to look for. Exploration. Not replacing programming everywhere or anything like that, but "hey, here's a thing we made, why don't you try it and see if you like it."
Do you have any thoughts on how automated tests would work in this world? You mentioned email filters and responders in the OP, and I remember that everytime I start creating filters I quickly hit a point at which debugging or changing them becomes fraught.
We do! One neat thing about our architecture is that literally everything is just data and side-effects happen outside of the system itself. This means if you disconnect the "watchers" that do things like send email based on the presence of rows in the email table, you can safely do anything you want. This resolves a bunch of issues around testing for us.
The overall plan is to provide a bunch of really interesting generative testing facilities. For example, we can generate data based on your integrity constraints that tries to sneak values into the system that would break them. Being a live language also helps a lot here - you're inherently testing as you build things. We want to make it easy to just capture that while you're doing it.
That resonates. I've been trying to teach programming by teaching testing first: http://akkartik.name/post/mu. My basic idea is that the hardest thing about programming is learning to consider all the different situations your program might be thrown in. Static code gets 'uncoiled' into many different dynamic 'traces' at runtime. I try to focus on that hard thing right from the start.
I have to ask: are you familiar with Blizzard's GUI script programming? It introduced me to programming and allowed 10-year-old me, terrible at logic (and having never before programmed!), to make games without reading much external reference material (it is completely self-explanatory).
For example, I think just by reading those you immediately get an idea of what it does:
You can build those from a predetermined set of Events (if you're advanced you can create custom events), so just by reading from a list of events/actions you can figure out how to do whatever you want. I think it's similar to Scratch but it does away with the clutter -- it's not exactly trying to make programming "visual", it's more about merging documentation and code, so you have all the building blocks needed to build any application right in front of you.
I think this kind of rethinking you're doing can truly change how programming is like for beginners or people who want to make specific tools/application (and not learn a plethora of things they'll never use)!
Yep, I've had some good experiences with the old StarCraft editor. It had a surprisingly good first user experience because it was immediately useful (terrain editing was very intuitive, and stock games could be played on custom maps) and had a gentle learning curve (new game features could be added in progressively to the base game).
One of our past examples worked on a similar principle to the SC trigger system [1], but we unfortunately found that it didn't scale as well to larger systems. While it was very easy to write in, it was hard for a stranger to read and intuit the implied data flow. We've had a lot of fun experimenting with different query editors, and I hope to find the time soon to do a more in-depth write-up on our research.
Yes but have you had a marketing and a sales person and a business exec throw random logic and rules into the mix? That's the hardest part; integrating seemingly random or subtle business rules into the reporting or applications.
On this note it seems incredibly important to make sure the medium has something analogous to refactoring in code: ways to transform the system into a better form in discrete, safe steps that do not change the behavior.
Version control, refactoring, code reuse, etc. are big items on the roadmap for the next release. We have some interesting ideas, including the realization that refactoring in this sort of system can take the form of graph rewrites.
The way I understand it Eve is aiming to be both a better Excel and a better Lotus Notes. I think its creators are still on to something.
These tools are often ridiculed and their use by non-programmers for creating business tools is often frowned upon but they allow business users to quickly create flexible, makeshift solutions to their problems. Not every business problem needs to be solved by a complex, cumbersome JEE application and an expensive application server.
While Lotus Notes apps certainly look awful and feel clunky most of the times there is a certain elegance to being able to quickly whip up a solution to a business problem or an urgent information need without having to go through a lengthy collection of requirements and approval process first. The same applies to Excel spreadsheets: They're a great tool for iterating quickly and getting a certain class of jobs done. Something like a REPL for non-programmers.
> These tools are often ridiculed and their use by non-programmers for creating business tools is often frowned upon but they allow business users to quickly create flexible, makeshift solutions to their problems.
They do, which is both great and terrible. They're powerful and easy to get started with, but therein lies the danger.
When used to quickly whip something up they're great, but when those things grow or end up being relied upon they're no better than the hack that the CEO's kid's friend who is 'good with computers' produces.
So, since they're both useful and dangerous, are there things we can do?
Perhaps a spreadsheet that allows some form of testing? Are there simple tests we could start to encourage people to use? When I've used spreadsheets for some financial things, I know that if I increase one cell, I expect another to increase (for example). I know certain combinations of inputs that should result in certain outputs.
Also, perhaps a clear path from spreadsheet -> application? Often intermediate values are displayed somewhere, so could a spreadsheet app lead someone to naming them all (typically they'll have a 'variable name' just to the left of them).
edit - I should really have read the article first, but I think a focus on making things testable is important.
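A sanity check of the kind described above can be written down once the spreadsheet calculation is expressed as a plain function. The loan formula and the numbers here are an invented example, not anything from this thread:

```python
def monthly_payment(principal, annual_rate, months):
    # Standard amortization formula, standing in for a spreadsheet calc.
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

base = monthly_payment(100_000, 0.05, 360)

# "If I increase one cell, I expect another to increase":
assert monthly_payment(110_000, 0.05, 360) > base  # more principal, bigger payment
assert monthly_payment(100_000, 0.06, 360) > base  # higher rate, bigger payment
print("sanity checks passed")
```

Checks like these are cheap, and they are exactly the "known combinations of inputs should give known outputs" discipline that spreadsheets make hard to keep.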
That's definitely something we spend a lot of time thinking about. We have to be able to support easy, exploratory programming but also be able to nail stuff down if it ends up being used a lot.
We have a bunch of ideas queued up. One of the simplest is generating fake data as you write code so that you have a better chance of noticing edge cases. We are also planning to proactively hint about integrity constraints (types, unique/foreign keys, etc.); e.g., if a column only contains integers, show a button that fixes the type to integer. There is an optional typing system in the wings too.
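That column-type hinting could look something like this rough sketch. The function name and the rules are invented here, not Eve's implementation:

```python
def suggest_type(column):
    # Suggest the tightest type that fits every value in the column.
    if all(isinstance(v, int) for v in column):
        return "integer"
    if all(isinstance(v, (int, float)) for v in column):
        return "number"
    return "text"

print(suggest_type([1, 2, 3]))   # "integer" -> offer a "fix type to integer" button
print(suggest_type([1, 2.5]))    # "number"
print(suggest_type([1, "a"]))    # "text"
```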
> but when those things grow or end up being relied upon they're no better than the hack that the CEO's kid's friend who is 'good with computers' produces.
Ronald Reagan allegedly once said "Nothing lasts longer than a temporary government program.". The same applies to throw-away code and 'prototypes'.
> Perhaps a spreadsheet that allows some form of testing?
For complex, interrelated calculations this definitely makes a lot of sense. From my experience the most common problems with Excel sheets are collaborative editing and version control, though. I'm not sure if more recent versions of Excel and Office 365 in particular solve these issues. Google Spreadsheets sort of does (the UX is lacking, though) but then again hardly any company will even consider putting sensitive data in the cloud and rightfully so.
Even convincing users to use complementary tools that improve Excel-based processes can be difficult, because after prolonged, habitual (ab)use many users no longer even see the problems that arise ("Why change this? It's always been done this way."). Implementing such tools in a way users accept is hard. You can't just add Git or some acceptance-testing framework to the flow and expect users to be happy with that. It'd have to be something that feels intuitive and natural to the average Excel user.
A clear path from spreadsheet to application would be an interesting approach, too. Spreadsheets are a conglomeration of model, view and controller logic. If you could somehow separate those semi-automatically and generate a boilerplate application from that, this might be a viable approach.
> While running in the browser is a requirement for Eve, it's always been clear that using javascript directly was not a long-term option. So many of our implementation problems come down to lack of control over data layout. For Eve we need to implement:
> - New types (like intervals) - but there is a space overhead of 24 extra bytes per object
> - Polymorphic comparisons - but dispatching on typeof is slow
> - Cache-friendly indexes - but it's hard to store multiple js objects sequentially in memory
> - Radix tries - but converting strings to bytes is slow
> We also want to be able to distribute native code for mobile devices and use real threads on servers. Lastly, there is some benefit to using reference-counting for the indexes so that we can avoid copying nodes when we know we have sole access.
> We ruled out C++ and D on aesthetic grounds - we have a preference for small, simple languages that we can understand completely. Rust wins points for safety and abstraction but the toolchain is not nearly as mature and there are issues that currently prevent compiling with Emscripten. C gives us less support in the language but is much more future-proof at the moment.
We needed control over memory layout and some path to running in the browser. I considered writing the data-structures in C and the rest in Lua but it would require more manual memory management than Rust and there is no clear path to compiling mixed Lua/C projects into Javascript.
We're currently testing a feature that lets users unkill comments that shouldn't be dead. Once we roll it out to everyone, this will hopefully be less of a problem.
> Imagine what we could do just with a version of office where every bit of information was sourced live from a database, where instead of Power Point presentations of status you could throw together a dashboard and send it to everyone in the organization.
The thing is... you can. Not a presentation / PP, but if you want to create a self-updating dashboard, you can do it in Excel. It's not going to be super easy, but you can do pretty much all of it by clicking.
But if you want to do a "dashboard proper", you can use the Power BI tool: https://support.powerbi.com/knowledgebase/articles/471664 (which looks super amazing by the way for an office product, is free, and I want to have a reason to actually use it...)
It's strange that they didn't even mention those possibilities in the post. I see how they could try to improve a lot in those approaches however.
The new value proposition is reminiscent of DabbleDB, which was a groundbreaking product that didn't seem to get enough traction to sustain a business. Is this comparison apt? Any ideas on why Eve has a better chance of succeeding?
Dabble is the main reason I don't have much hope for Eve.
Dabble was amazing: it did everything in the Eve tutorial much better than Eve does, it didn't require installing anything on your computer, it looked beautiful, it was very easy to use, it had great collaboration, it had a ton more features than Eve, and it had excellent customer support. Dabble is one of my all-time favorite apps and I think it's one of the best webapps ever. And it didn't succeed.
I guess it's hard to know what might have happened with Dabble had they not created a separate analytics tool that was attractive to Twitter. Would someone else have bought Dabble and continued to support it? What if it had been built on a more mainstream architecture? I always thought Dabble would have been sustainable as a small business. Something about recent history (i.e. LightTable) tells me these guys aren't likely to toil away supporting this for the next 10-20 years to earn a modest income.
I can think of at least three limitations Dabble had that some future version of Eve might not have:
1) There were only so many primitive operations (conversions, calculations, filters, etc) and so there were some "programs" that you couldn't create.
2) For creating a UI, you basically had forms and reports. They had a great implementation of both of those, but they were the only tools you had.
3) You couldn't see the code. Just as Excel has the code hidden in a cell somewhere, in Dabble if you had a derived field (some calculation) you had to click on the field to see how it was calculated. It appears Eve has or will have some way of graphing these calculations so a program can be "read" without clicking on fields to see their formulas.
I like to try to experiment with this stuff as game development tools, because games are highly realtime / graphical / interactive and that's hard. It's easy to write an A -> B transform (like a compiler is a lang1 -> Either error lang2 transform for example) functionally, because ultimately that is a function. Doing interactive compute this way is hard, and that's where FRP, FRelP (functional relational programming) and a bunch of stuff could be used.
I was trying some interactive game dev stuff with this: http://ludumdare.com/compo/2014/08/27/reminisce-post-mortem/ (allows live coding and live edit) but some issues popped up as highlighted in the paper about it. I think a prototype-based approach like Self could be a good way to go. Also doing it in lispy languages to abstract the language upward while abstracting the problem downward.
The Out of the Tar Pit paper is really a good one.
Doing things this way allows more immediate connection to the creative spirit, as on the other side of the more 'logic'/'rational'-based one, which is sort of like static typechecking in human thought -- it prevents error, but to move forward you some times have to make leaps of faith/intuition. Like between two paradigms (check out Kuhn on scientific revolutions, or Science, Order and Creativity by Bohm). Need to be in and about the artwork. Sorry, been reading a bunch of Nietzsche / Psycho Cybernetics / Prometheus Rising type stuff and this is on my mind (http://www.paulgraham.com/top.html) right now haha.
Yeah games are a fun example to work with for such tools. Almost as if game development tools are an 'instance' of the prototype of 'development tools,' and rather than simply working on the abstract class it's nice to work on a prototype then expand. :)
Games also have an aesthetically-minded end point, rather than a solution-minded one, which keeps the focus on aesthetics/human values in focus.
OT: I hope that is the case. I've been responsible for building query-like tools for end users, and all the current stuff out there completely sucks.
For example, no non-programmer I've ever talked to can correctly explain the difference between:
A and B or C
A and C or B
And to be fair, it's only because of arbitrary precedence rule choices that those are different at all.
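Python happens to use the same precedence convention ("and" binds tighter than "or"), so the difference is easy to demonstrate:

```python
def f1(a, b, c):
    return a and b or c  # parsed as (a and b) or c

def f2(a, b, c):
    return a and c or b  # parsed as (a and c) or b

# One assignment where the two expressions disagree:
print(f1(False, True, False))  # (False and True) or False
print(f2(False, True, False))  # (False and False) or True
```

The two differ on several assignments, which is precisely what trips up non-programmers building filter rules.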
I've personally found it helps to deal in groups: instead of "AND, OR" you have "ALL, ANY", and you always group rules (even if they're groups of one rule).
But even when you have that, you then have to deal with nesting rules, and nests of nests.
The actual implementation of the backend of such systems is easy, the composite pattern / delegates pretty much deals with the implementation.
But the front-end side? They tend to then be forgotten and universally suck, to the point that either it gets handed off to a developer or query-tool expert to use, or some horrific mistake such as accidentally mail-shotting everyone[1] which causes them to never try to have automatic query rules again.
Graphical query building for the end-user is a really difficult area which hasn't seen enough research.
[1] I want to mail people who visited yesterday and are either men or under 30, i.e. "Visited Yesterday AND (Men OR Under 30)". They forget the brackets and write "Visited Yesterday AND Men OR Under 30". Whoops, that's half their clients hit.
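The composite-pattern backend mentioned above can be sketched in a few lines of Python (class and field names are invented). Note how the explicit nesting makes the footnote's missing-brackets mistake impossible to express by accident:

```python
class Rule:
    """Leaf: a single predicate on an item."""
    def __init__(self, pred):
        self.pred = pred
    def matches(self, item):
        return self.pred(item)

class All:
    """Composite: every child rule must match (AND / "ALL")."""
    def __init__(self, *rules):
        self.rules = rules
    def matches(self, item):
        return all(r.matches(item) for r in self.rules)

class Any:
    """Composite: at least one child rule must match (OR / "ANY")."""
    def __init__(self, *rules):
        self.rules = rules
    def matches(self, item):
        return any(r.matches(item) for r in self.rules)

visited_yesterday = Rule(lambda p: p["visited_yesterday"])
is_man = Rule(lambda p: p["sex"] == "m")
under_30 = Rule(lambda p: p["age"] < 30)

# The intended query: visited yesterday AND (man OR under 30)
query = All(visited_yesterday, Any(is_man, under_30))

people = [
    {"visited_yesterday": True,  "sex": "f", "age": 25},
    {"visited_yesterday": False, "sex": "m", "age": 40},
]
print([query.matches(p) for p in people])
```

As the comment says, this part is easy; the unsolved part is a front-end that lets an end user build the `All(..., Any(...))` tree without understanding precedence.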
I really enjoyed building data applications in MS Access before it was sunset. In fact, I finally started understanding SQL joins by using Access's graphical query builder so that it made me much better at my PHP web programming job. (That was in 2003-2005 time frame. I was also still in college.)
Maybe that's because they are trying to target 'non-programmers', so scale probably is not an issue for them. But that may hurt them if they want 'serious' programmers to take note.
For me, I want to try it out because there are a million little things I would like to do just for myself, not at web scale...
Wow. This is something I had been pondering in the past. A major frustration I noticed with people using Excel is data entry coupled with horizontal scrolling. I was sketching an app with cells that are like MongoDB collections, and computations that can be defined elsewhere.
People have called building generalised CRUD software a nearly impossible task, but I bet that in that domain lies room for an innovative conceptualisation of the problem. Maybe in the next decade we'll see software with which people don't have to look for freelancers just to build a data-entry app. Good luck, Chris!
Honestly I thought from the thumbnail this was the very first iteration of Eve Online - Main thought was "Holy damn they put a nice GUI on that spreadsheet"
Eve looks incredible. I love seeing the UI work these guys put out, as it's a non-trivial problem to wrap a complex programming model with a graphical interface that can expose functionality that is both powerful and generic. The crew at Eve/Light Table are taking on a huge project and are clearly extremely talented. We would all do well to follow this project as it matures. Keep up the great work.
I think the major problem with this approach is trying to solve the general problem.
These graphical abstractions can be excellent when tailored to a domain. We are shipping a product right now that abstracts the database away through a little graphical graph editor like this one to transform it into the domain language of the people (non programmers) that are consuming the data.
It is actually quite simple. There is a graphical designer that connects to a data warehouse. There, someone defines a graph-like structure with nodes that contain a collection of data rows and relations to other nodes of data rows. This results in a library that the customer can interop with from their existing software tools to query that graph model naturally (stuff like Location('airport')=>Car('sedan')=>RentalHistory). It's basically a glorified ORM with a graphical programming language for the codegen of the objects, producing a kind of data DSL.
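That chained style can be sketched roughly like this. This is a Python toy with invented data and names; the real product is presumably generated from the graphical designer rather than hand-written:

```python
class Node:
    """A node in the data graph: some rows plus named relations to other nodes."""

    def __init__(self, rows, relations=None):
        self.rows = rows
        self.relations = relations or {}

    def follow(self, name):
        # Walk a named relation to the next node in the graph.
        return self.relations[name]

rental_history = Node([{"car": "sedan", "days": 3}])
cars = Node([{"model": "sedan"}], {"RentalHistory": rental_history})
location = Node([{"name": "airport"}], {"Car": cars})

# Location('airport') => Car('sedan') => RentalHistory, spelled as method chaining:
rows = location.follow("Car").follow("RentalHistory").rows
print(rows)
```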
What is the domain in question? Have you run into issues where the power users who are non programmers want to do more programming like tasks over time to exert more control?
The domain is (non technical) government data analysts.
Yes, we ran into this a lot, with people wanting more control. There are several hooks and extension points in the software. Basically every graphical element can be replaced with an actual code file that you can substitute at runtime. The customer never uses this; usually we just provide extension libraries that abstract the concepts of whatever they want to do.
The quick instructions didn't look really quick to me, so I created a Vagrant setup for those who don't want to install TypeScript, Rust nightly, and multirust on their local machines.
The above uses a plain precise32 box and installs Eve and its dependencies in the provisioning phase. I've also created a modified box (583MB) with the Eve dependencies hard-coded, which could serve better those who don't have precise32.box anyway.
Is Light Table dead or alive at this point? I hear so little about it I can't help but feel it is on death's door or dead. I'm sad about that. It seemed like it could have been something really cool.
Not sure what you're talking about. I use it almost every day for clojure/js and sometimes python when I just want a quick script. (PyCharm for actual dev).
I have it installed on my machine too. I just hear nothing about its future and it seems like folks are focused elsewhere too. E.g., this topic about Eve. Especially for something YC backed. I guess I was just expecting to hear more and such.
IIRC the next release (version 0.8 I believe) is focused on porting to use Electron for its GUI. It doesn't appear to be dead, though I'm sure the devs wouldn't mind a little help (a willingness to try ClojureScript is probably a big plus if you were interested). ClojureScript is a Lisp so it should be fairly easy to pick up the basics.
He had a blog post about this: how the early design decision to make absolutely every feature in the editor a plugin over-complicated the design to the point that he was the only one who could maintain it, and maintenance was all he could do. So yes, Light Table itself is done. He also said he planned on returning to the goals of Light Table in the future.
I kind of did an eye roll at the breathless language in the announcement. I'm trying to unpack that reaction, because I actually really enjoy the first-principles approach that they have taken with their work.
I think, perhaps, it's because there's a fair amount of praxis already out there in the enterprise and small business sectors about this, and I didn't really see reference to any of that, Lotus Notes notwithstanding. I know it's nitpicky, but it was a strong reaction so I thought I would share it. I'll try to unpack it more:
Basically, creating data processing tools for humans is such a fundamental application of computer science that we even have a name for it: Information Technology. And, ever since the Mother of All Demos we have been trying to make a kind of "omni-tool" for data processing, and pretty much falling on our faces.
This is perhaps because a "general purpose tool" usually turns out to be a particular kind of "special purpose tool". The question is whether a large enough segment benefits from general purpose tooling, which entails taking on the overhead of learning how to use this "tool that makes tools" in order to accomplish their many tasks. In other words, there's a layer of indirection. Or do most people's problems turn out to be disjoint and specific, so that they benefit more from using a few special purpose tools that can be loosely coupled together?
For example, let's say for my job I have to manage the generation of reports, etc, and post them on a company website that I maintain. I can use a document editor to edit documents, a communication service to send links to the documents, and a web-based CMS tool to post the final reports to the website. It's not clear that I would be better served, or even could be served (due to the network effect), by an all-in-one tool-builder. Three special purpose tools, each of which can guide you effectively through its task, might be easier to deal with than one general-purpose tool, all user grousing aside.
This particular type of omni-tool could be described as a "Distributed Filemaker." Filemaker-like tools are definitely popular, but tend not to unseat other special purpose tooling. And they have a problem shared by all powerful data modeling tools: they provide you quite a bit of rope to hang yourself with. If you want to really improve in this space, I suggest you focus on providing a data modeling and coherency paradigm that is appealing to non-technical users, yet successful for managing long-term data that changes meaning over time. That would be profound, as poor data modeling is pretty much how all of these projects eventually crash on the rocks. Your users will not ask you for such a thing though, as they don't understand it.
Just from looking at the pics on the blog post, it looks much less simple or intuitive than writing a few lines of python. There's a lot of money to be made selling such things to enterprises that don't know better though.
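For comparison, here is a sketch of what such a "few lines of python" transformation might look like. The data and column names ("region", "amount") are made up for illustration; a real script would read from a file or database.

```python
import csv
import io
from collections import defaultdict

# Hypothetical inline data standing in for a CSV export.
data = io.StringIO("region,amount\neast,100\nwest,50\neast,25\n")

# Group amounts by region — the kind of throwaway aggregation
# these visual tools are usually sold to replace.
totals = defaultdict(float)
for row in csv.DictReader(data):
    totals[row["region"]] += float(row["amount"])

print(dict(totals))  # {'east': 125.0, 'west': 50.0}
```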
So let's get this right... they couldn't ship a stable IDE (reaching version 1.0) without re-writing it several times and inventing crazy tools to get around their platform's limits (LightTable runs on NodeJS?!?). And now they expect us to believe they can revolutionize computing, not just an IDE?
Sorry, but this project is way too ambitious and run by people with way too weak a track record to inspire confidence. Go back and finish what you started, and what you took money for, instead of taking more funding for an even bigger project.
Reminds me of half the Kickstarter games these days: "whelp that didn't work, good thing we don't have to give you your money back!".
The game is famous enough to create real confusion. In geek social circles especially, I'd guess a very high percentage of people immediately think of the game when they see "Eve".
I don't even play the game, and I was immediately confused by the HN headline. I thought it was going to be a blog post about the history of the game and its first version.
There is a Python library called Eve for creating REST endpoints based on your database schema. I thought that was what this was since that is also in version 0.
> Eve is our way of bringing the power of computation to everyone, not by making everyone a programmer but by finding a better way for us to interact with computers. On the surface, Eve is an environment a little like Excel that allows you to "program" simply by moving columns and rows around in tables. Under the covers it's a powerful database, a temporal logic language, and a flexible IDE that allows you to build anything from a simple website to complex algorithms.
Thanks for taking a read. Chris stated in the blog post that this release is basically "a database with an IDE". Our vision for Eve is that people will use it as a tool for thinking and then communicating the results of that thought process.
Take a look at Excel, for instance. Excel is the most widely used programming language among non-programmers simply because of its dead-simple programming abstraction: a grid of cells that can hold data and reference each other with formulae. History has shown anybody can get this.
People have taken this surprisingly far, but there are still drastic limitations to what you can build this way (how do you manage state, UI, external access to APIs and data?). Trying to make programming in its current form simpler is a dead-end; as a task, programming has built up too much incidental complexity over the years. Instead, we are approaching it from the Excel angle of trying a completely different abstraction, which (we hope) will attract the same kind of attention from non-programmers as Excel has.
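The grid-of-cells abstraction described above can be sketched in a few lines. This is an illustrative toy, not Excel's actual engine: cells hold either a plain value or a formula (here, a function of the sheet), and formulas are evaluated lazily so reads always reflect the latest inputs.

```python
class Sheet:
    """Toy spreadsheet: named cells holding values or formulas."""

    def __init__(self):
        self.cells = {}  # name -> value, or callable(sheet) for a formula

    def set(self, name, value):
        self.cells[name] = value

    def get(self, name):
        v = self.cells[name]
        # Formulas are re-evaluated on every read, so dependent cells
        # automatically see updated inputs, as in a spreadsheet.
        return v(self) if callable(v) else v

s = Sheet()
s.set("A1", 2)
s.set("A2", 3)
s.set("A3", lambda sh: sh.get("A1") + sh.get("A2"))  # =A1+A2
print(s.get("A3"))  # 5
s.set("A1", 10)
print(s.get("A3"))  # 13
```

A real spreadsheet engine tracks dependencies and recomputes incrementally rather than on every read, but the user-facing model is the same.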
It is good that we keep trying at this problem. Does it feel like, as an industry, we keep trying to solve this problem and never "quite" get it right? I have read about the old "CASE" tools and old VB approaches, for instance. Even Microsoft Access can now generate applications on Azure backed by SQL Azure (SQL Server for the cloud), and then there was some Intuit tool (can't remember the name). One could argue Excel itself was quite good at some of these apps. More recently, Dabble and Popfly seemed to be a thing, at least for a few months.
Every time something like this happens, "real developers" feel threatened, while business users love them for their side projects when the "real developers" are too busy to care.
Maybe these types of tools are always destined to come up again (since we reinvent the platforms all the time), and then a few users use them, while real programmers for the most part simply yawn once more.
Are we ever going to push programming to a level of maturity where we can use building blocks and be really productive, yet have the flexibility to create real, sophisticated applications?
Seems we never quite get there.
But it is positive to see that new generations of developers don't stop trying.
> what it seems like we need is something more akin to the original vision of Lotus Notes - an environment full of information where communicating that information to people or even other systems is a fundamental primitive.
Oh boy, here we go again. A more direct admission of not knowing what to build is hardly imaginable.