When I read the title I couldn't help but think, "did everyone forget about hard disks?"
I'm sure Tim Berners-Lee is much smarter than me, but I kind of feel there are some parallels between the idea of "owning" posts you made on a platform and the ludicrous idea of "owning" game items as NFTs on a blockchain. The latter promises interoperability that games would never deliver. I wonder about the former.
At the very least, I feel the major dealbreaker with this technology is that it isn't worth it for either party involved.
Right now, Facebook hosts all the posts and monetizes them with ads. So long as they are making money with ads, they have no reason to delete the posts they're hosting, as the posts are their money maker.
But what happens if Facebook no longer "owns" the posts?
So now your posts are in your "personal cloud", which means that unless they are encrypted any website or local app can display them, even without any ads. This means Facebook is no longer making money off the posts. Why would they accept this?
On the flip side, who is paying for the hosting? Facebook? It's no longer their servers hosting the content, so I don't think so. Is Facebook supposed to pay the cloud service for metered API access? Can a cloud service offer different rates to different companies? Is the user supposed to pay for their cloud storage? So you're going to make users pay money to use Facebook?
What happens if a post violates the ToS? Can Facebook delete a post in my cloud storage against my will? What happens if content that is legal where Facebook operates is illegal where the cloud servers operate?
Can I manually edit the data in my cloud storage the way I could edit a local file, so that Facebook has to treat every post as untrusted input?
What happens if my cloud storage provider closes my account? Do I just lose everything? Will I be able to back up my cloud to my hard disk and re-upload it to another cloud so Facebook can access it? How is Facebook going to handle a single user with two clouds that have different content?
I feel like this is a very complex thing and there are infinite questions that we can have about how this would be implemented in practice, while it's presented as simply "you own your data."
Why not just bookmark it in your web browser? Or create a Tumblr blog and make a new post for every cool article you find. You can set and edit the tags later if you want for searchability.
It is really easy to code on and supports lots of different platforms (including embedded and mobile devices). Sometimes established projects can be superseded by newer ones built with more modern tools. I get that feeling in this space, and it's huge what has been built already.
I feel like this is disingenuous. I have never used F-Droid, but it seems they only publish open source apps and they take the initiative in selecting them.
This isn't a good app store for the majority of app developers, since they wouldn't be able to publish there of their own accord.
It isn't an invite-only club. Anyone can submit an existing application[0], and an app author can provide a metadata pack to speed up the process. They have some requirements for acceptance, but it isn't a situation where a developer is just waiting around for a letter of invitation to arrive[1].
>Understanding the target audience for your product results in very different design decisions
This is an excuse. Just add an option to sort both ways. It isn't hard.
There is no target audience on this planet that benefits from fewer options or fewer features. Even if you had the features under an "advanced mode" UI, that's still better software than not having the feature in the first place.
Have people forgotten the 80/20 rule? Most features will be used by only a small slice of users, but that doesn't mean they're out of scope.
Sorry, I'm just kind of exhausted by software not being able to do the most obvious things because they don't align with some perfect vision of how the user should be.
> There is no target audience on this planet that benefits from fewer options or fewer features.
I'm currently involved in UI design and, to my frustration, adding more options or features seems to send a vocal minority of the user base into a foaming-at-the-mouth violent rage. It's like any change resets the entire contents of their brain, and it's our fault we're making things so confusing for everyone...
And let's not get started on how we're wasting time adding things that they don't personally need, and therefore no one could possibly need, ever. No, clearly by adding this sorting method, we must have directly stolen development time from the feature they want, which is a personal attack directed at them and every member of their family going three generations back.
> any change resets the entire contents of their brain
That's because it does. Consistency is incredibly important.
The problem isn't that you're adding a feature, the problem is that you're adding a feature in an obtrusive way. Add as many features as you like (while preserving performance), but keep the day-to-day UI as stable as you possibly can. Place entry points (buttons) for new features in menus first, and make sure they're both used frequently and by many users before moving them to a crowded toolbar (and then give good thought to where they belong on said toolbar/menu). Don't remove features unless they're truly problematic, and don't change the UI.
I have the same problem with Nemo. More specifically, I had made a small app that displayed the files of a directory in alphabetical order, and when I looked at the same directory in Nemo the order wasn't the same, because I hadn't implemented their smart algorithm.
I fail to see how this is a "problem". You implemented a sorting mechanism that was useful to your application, while Nemo implemented another which, as this thread demonstrates, seems to be much more useful and intuitive for the average user. This is also of course not specific to Nemo, as no 'modern' file manager on Linux sorts filenames like it's 1980 and all you can feasibly do is step through the bytes.
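For reference, the "smart" ordering most file managers use is some variant of natural sort, which compares runs of digits numerically instead of byte-by-byte. A minimal sketch in Python (the helper name `natural_key` is my own, not anything Nemo uses):

```python
import re

def natural_key(name: str):
    # Split the name into alternating non-digit / digit runs and compare
    # the digit runs as numbers, so "file10" sorts after "file2".
    return [int(tok) if tok.isdigit() else tok.lower()
            for tok in re.split(r"(\d+)", name)]

names = ["file10", "File1", "file2"]
print(sorted(names, key=natural_key))  # ['File1', 'file2', 'file10']
```

Plain byte-wise sorting would instead give `['File1', 'file10', 'file2']`, which is exactly the "1980" behavior described above.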
TypeScript requires a compiler to produce valid JavaScript. Python shoved types into Python 3 without breaking backwards compatibility, I think.
You would never have typing.TYPE_CHECKING to check whether type checking is being done in TypeScript, for example, because type hints can't break JavaScript code, something that can happen in Python when you have cyclic imports just to add types.
My love for python was critically hurt when I learned about typing.TYPE_CHECKING.
For those unaware, due to the dynamic nature of Python, you declare a variable type like this
    foo: Type
This might look like TypeScript, but it isn't, because "Type" is actually an object. In Python, classes and functions are first-class objects that you can pass around and assign to variables.
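A tiny illustration of that point (variable names are mine): a class can be assigned to a variable like any other value, and that variable then works as an annotation.

```python
# Classes are ordinary objects: they can be bound to names
# and the new name works as an annotation.
Number = int            # "int" is a class object, assigned like any value
x: Number = 41          # the annotation is just the expression "Number"

print(isinstance(Number, type))  # True: the class is itself an object of type "type"
print(x + 1)                     # 42
```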
The obvious problem with this is that you can only use as a type an object that, in "normal Python", would be available in the scope of that line, which means that you can't do this:
    def foo() -> Bar:
        return Bar()

    class Bar:
        pass
Because "Bar" is defined AFTER foo(), it isn't in scope when foo() is declared. To get around this, you use this weird string-like syntax:
    def foo() -> "Bar":
        return Bar()
This already looks ugly enough that it should make Pythonistas ask "Python... what are you doing?", but it gets worse.
A cyclic reference between two files works out of the box in statically typed languages like Java, and it works in Python when you aren't using type hints, because every object is the same "type" until it quacks like a duck. But that isn't going to work if you try to use type hints in Python, because you're going to end up with a cyclic import. More specifically, you don't need cyclic imports in Python normally because you don't need the types, but you HAVE to import the types to add type hints, which introduces cyclic imports JUST to add type hints. To get around this, the solution is to use this monstrosity:
    if typing.TYPE_CHECKING:
        import Foo from foo
And that's code that only "runs" when a static type checker is checking the types.
Nobody wants a Python 4, but this was such an incredibly convoluted way to add this feature, especially when you consider that it means every module now "over-imports" just to add type hints that it previously didn't have to.
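For completeness, the usual shape of the workaround looks something like the sketch below (`heavy_module` and `Foo` are placeholder names). The guarded import is seen only by the type checker and never executes at runtime, and the quoted annotation keeps the module importable anyway:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers such as mypy or pyright;
    # never executed at runtime, so it cannot create an import cycle.
    # "heavy_module" is a hypothetical module used for illustration.
    from heavy_module import Foo

def describe(item: "Foo") -> str:
    # The quoted annotation is never evaluated at runtime.
    return f"got {item!r}"

print(describe("x"))
```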
Every time I see it, it makes me think that if type checks are so important, maybe we shouldn't be programming in Python to begin with.
There's actually another issue with ForwardRefs. They don't work in the REPL. So this will work when run as a module:
    def foo() -> "Bar":
        return Bar()
But it will throw an error if copy-pasted into a REPL.
However, all of these issues should be fixed in 3.14 with PEP 649 and PEP 749:
> At compile time, if the definition of an object includes annotations, the Python compiler will write the expressions computing the annotations into its own function. When run, the function will return the annotations dict. The Python compiler then stores a reference to this function in __annotate__ on the object.
> This mechanism delays the evaluation of annotations expressions until the annotations are examined, which solves many circular reference problems.
Please ignore my first assertion that the behavior between REPL and module is different.
This would have been the case if the semantics of the original PEP 649 spec had been implemented. But instead, PEP 749 ensures that it is not [0]. My bad.
> that isn't going to work if you try to use type hints in python because you're going to end up with a cyclic import. More specifically, you don't need cyclic imports in Python normally because you don't need the types, but you HAVE to import the types to add type hints, which introduces cyclic imports JUST to add type hints.
Yes, `typing.TYPE_CHECKING` is there so that you can conditionally avoid imports that are only needed for type annotations. And yes, importing modules can have side effects and performance implications. And yes, I agree it's ugly as sin.
But Python does in fact allow for cyclic imports — as long as you're importing the modules themselves, rather than importing names `from` those modules. (By the way, the syntax is the other way around: `from ... import ...`.)
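A self-contained way to see this (writing two throwaway modules to a temp directory; the module names `mod_a`/`mod_b` are arbitrary): each imports the other at module level, and the cycle resolves because `import` binds the partially initialized module object rather than looking up names in it immediately.

```python
import sys
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())

# Two modules that import each other at module level.
(tmp / "mod_a.py").write_text(
    "import mod_b\n"
    "def greet():\n"
    "    return 'a-' + mod_b.suffix()\n"
)
(tmp / "mod_b.py").write_text(
    "import mod_a\n"  # cycle: fine, because we import the module itself
    "def suffix():\n"
    "    return 'b'\n"
)

sys.path.insert(0, str(tmp))
import mod_a

print(mod_a.greet())  # a-b
```

Had `mod_b` used `from mod_a import greet` instead, the import would fail, because `greet` doesn't exist yet while `mod_a` is only partially initialized.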
If the type is a class with methods, then this method doesn't work, though adding intermediate interface classes (possibly with generic types) might help in most cases. Python's static type system isn't quite at the same level as F#'s.
> Well, these complaints are unfounded.
"You're holding it wrong." I've also coded quite a bit of OCaml and it had the same limitation (which is where F# picked it up in the first place), and while the issue can be worked around, it still seemed to crop up at times. Rust, also with some virtual OCaml ancestry, went completely the opposite way.
My view is that while in principle it's a nice property that you can read and understand a piece of code by starting from the top and going to the bottom (and a REPL is going to do exactly that), in practice it's not the ultimate nice property to uphold.
I ran into some code recently where this pattern caused me so many headaches: class A has an attribute which is an instance of class B, and class B has a "parent" attribute (which points to the instance of class A that class B is an attribute of):
    class Foo:
        def __init__(self, bar):
            self.bar = bar

    class Bar:
        def __init__(self, foo):
            self.foo = foo
Obviously both called into each other to do $THINGS... Pure madness.
So my suggestion: Try not to have interdependent classes :D
Well, at times having a parent pointer is rather useful! E.g. a callback registration will be able to unregister itself from everywhere where it has been registered to, upon request. (One would want to use weak references in this case.)
> If you have a cyclic reference between two files,
Don't have cyclic references between two files.
It makes testing very difficult, because in order to test something in one file, you need to import the other one, even though it has nothing to do with the test.
It makes the code more difficult to read, because you're importing these two files in places where you only need one of them, and it's not immediately clear why you're importing the second one. And it's not very satisfying to learn that you're importing the second one not because you "need" it but because the circular import forces you to.
Every single time you have cyclic references, what you really have are two pieces of code that rely on a third piece of code, so take that third piece, separate it out, and have the first two pieces of code depend on the third piece.
Now things can be tested, imports can be made sanely, and life is much better.
Using the typical "Rust-killer" example: if you have a linked list where the List in list.py returns a Node type and Node in node.py takes a List in its constructor, you already have a cyclic reference.
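That List/Node cycle is a case where the TYPE_CHECKING guard from upthread actually fits: only one direction needs the import at runtime. A runnable sketch (writing the two modules to a temp directory; the names `llist`, `node`, `LinkedList` are mine):

```python
import sys
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())

# node.py needs LinkedList only for the annotation, so the import is guarded
# and the annotation is a string the checker resolves.
(tmp / "node.py").write_text(
    "from typing import TYPE_CHECKING\n"
    "if TYPE_CHECKING:\n"
    "    from llist import LinkedList\n"
    "\n"
    "class Node:\n"
    "    def __init__(self, value, owner: 'LinkedList'):\n"
    "        self.value = value\n"
    "        self.owner = owner\n"
)

# llist.py genuinely needs Node at runtime, so it imports it normally.
(tmp / "llist.py").write_text(
    "from node import Node\n"
    "\n"
    "class LinkedList:\n"
    "    def __init__(self):\n"
    "        self.head = None\n"
    "    def push(self, value):\n"
    "        self.head = Node(value, self)\n"
    "        return self.head\n"
)

sys.path.insert(0, str(tmp))
from llist import LinkedList

lst = LinkedList()
node = lst.push(1)
print(node.value, node.owner is lst)  # 1 True
```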
On the other hand, I tend to take it as a hint that I should look at my module structure, and see if I can avoid the cyclic import (even if before adding type hints there was no error, there still already was a "semantic dependency"...)
You're actually missing the benefit of this. It's actually a feature.
With Python, because types are part of the language itself, they can thus be programmable. You can create a function that takes in a type hint and returns a new type hint. This is legal Python. For example, below I create a function that dynamically returns a type that restricts a dictionary to have a specific key and value.
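The example code didn't survive here, but based on the description (and the `make_typed_dict` name mentioned downthread), it presumably looked something like this sketch using `typing.TypedDict`'s functional form; the specific names are my reconstruction:

```python
from typing import TypedDict

def make_typed_dict(key: str, value_type: type):
    # A plain Python function that builds and returns a new type hint:
    # a TypedDict restricted to a single required key with a given value type.
    return TypedDict("DynamicDict", {key: value_type})

NameRecord = make_typed_dict("name", str)
record: NameRecord = {"name": "Ada"}   # a checker could flag {"name": 1}
print(record)
```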
With this power, in theory you can create programs where types essentially "prove" your program correct, and in theory eliminate unit tests. Languages like Idris specialize in this. But it's not just rare/specialized languages that do this. TypeScript, believe it or not, has programmable types that are so powerful that writing functions that return types like the one above is actually VERY commonplace. I was a bit late to the game with TypeScript, but I was shocked to see that it was taking cutting-edge stuff from the typing world and making it popular among users.
In practice, using types to prove programs valid in place of testing is a bit too tedious compared with tests, so people don't go overboard with it. It is a much safer route than testing, but much harder. Additionally, as of now, the thing with Python is that it really depends on how powerful the type checker is whether or not it can enforce and execute type-level functions. It's certainly possible; it's just that nobody has done it yet.
I'd go further than this, actually. Python is potentially a more powerfully typed language than TS. In TS, types are basically another language tacked onto JavaScript. The two languages are totally different, and the typing language is very, very limited.
The thing with Python is that the types and the language ARE the SAME thing. They live in the same universe. You complained about this, but there's a lot of power in that, because types become Turing-complete and you can create a type that does anything, including proving your whole program correct.
Like I said, that power depends on the type checker. Someone needs to create a type checker that can recognize type-level functions, and so far it hasn't happened yet. But if you want to play with a language that does this, I believe that language is Idris.
And, as you heavily imply in your post, type checkers won't be able to cope with it, eliminating one of the main benefits of type hints. Neither will IDEs / language servers, eliminating the other main benefit.
I don't believe TypeScript's (or Idris's) type system works like you describe, though? Types aren't programmable with code like that (in the same universe, as you say), and TS is structurally typed, with type erasure (i.e. types are not available at runtime).
I am not that deeply familiar with the development of Python's typing, but it sounds fundamentally different from the languages you compare it to.
The powerful thing about these languages is that they can prove your program correct. With testing, you can never verify your program to be correct.
Testing is a statistical sampling technique. To verify a program as correct via tests, you have to test every possible input and output combination of your program, which is impractical. So instead people write tests for a subset of the possibilities, which ONLY verifies the program as correct for that subset. Think about it. If you have a function:
    def add(x: int, y: int) -> int
How would you verify this program is 100% correct? You have to test every possible combination of x, y and add(x, y). But instead you test like 3 or 4 possibilities in your unit tests, and this helps with the overall safety of the program because of statistical sampling. If a small sample of the logic is correct, it says something about the entire population of the logic.
Types on the other hand prove your program correct.
    def add(x: int, y: int) -> int:
        return x + y
If the above type checks, your program is proven correct for ALL possible inputs of those types. If those types are made more advanced by being programmable, then it becomes possible for type checking to prove your ENTIRE program correct.
Imagine:
    def add<A: addable < 4, B: addable < 4>(x: A, y: B) -> A + B:
        return x + y
With a type checker that can analyze the above, you can create an add function that at most can take an int that is < 4 and return an int that is < 8, thereby verifying even more correctness of your addition function.
Python, on the other hand, doesn't really have type checking. It has type hints. Those type hints can be defined in the same language space as Python. So a type checker must read Python to a limited extent in order to get the types. Python, at the same time, can also read those same types. It's just that Python doesn't do any type checking with the types, while the type checker doesn't do anything with the Python code other than typecheck it.
Right now, though, for most type checkers, if you create a function in Python that returns a type hint, the type checker is not powerful enough to execute that function to find the final type. But this could certainly be done if there were a will, because Idris has already done it.
The syntax is a monstrosity. You can also extract a proven OCaml program from Coq and Coq has a beautiful syntax.
If you insist on the same language for specifying types, some Lisp variants do that with a much nicer syntax.
Python people have been indoctrinated since ctypes into thinking that a monstrous type syntax is normal, and they reject anything else. In fact, Python type hints are basically stuck at the ctypes level, syntax-wise.
That's horrible. Nobody needs imperative metaprogramming for type hints. In fact, it would be absolute insanity for a type checker to check this, because it would mean opening a file in VS Code = executing arbitrary Python code. What stops me from deleting $HOME inside make_typed_dict?
TypeScript solves this with its own syntax that never gets executed by an interpreter, because types are stripped when TS is compiled to JS.
>VS code = executing arbitrary python code. What stops me from deleting $HOME inside make_typed_dict?
Easy: make IO calls illegal in the type checker. The type checker of course needs to execute code in a sandbox. It won't be the full Python language. Idris ALREADY does this.
Are there really productive projects which rely on types as a proofing system? I've always thought it added too much complexity to the code, but I'd love to see it working well somewhere. I love the idea of correctness by design.
Not to my knowledge; nothing relies strictly on a proofing system, because, like I said, it becomes hard to do. It could be useful for ultra-safe software, but for most cases the complexity isn't worth it.
But that doesn't mean it's not useful to have this capability as part of your type system. It just doesn't need to be fully utilized.
You don't need to program a type that proves everything correct. You can program types that make sure aspects of the program are MORE correct than plain old types would. TypeScript is a language that does this, and it is very common to find types in TypeScript that are more "proofy" than regular types in other languages.
TypeScript does this. Above there's a type that's only a couple of lines long that proves a string-reversal function reverses a string. I think even going that deep is overkill, but you can define things like objects that must contain a key of a specific string where the value is either a string or a number. And then you can create a function that dynamically specifies the value of the key in TS.
I think TS is a good example of a language that practically uses proof-based types. The syntax is terrible enough that it prevents people from going overboard with it, and the result is the most practical application of proof-based typing that I've seen. What TypeScript tells us is that proof-based typing need only be sprinkled throughout your code; it shouldn't take it over.
I want to say it (or something similar at least) was originally addressed by from __future__ import annotations back in Python 3.7/3.8 or thereabouts? I definitely remember having to use stringified types a while back but I haven't needed to for quite a while now.
It turns them into thunks (formerly strings) automatically, an important detail if you're inspecting annotations at run time because the performance hit of resolving the actual type can be significant.
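With the future import, the forward-reference example from upthread works without quotes, because annotations are not evaluated at definition time (under PEP 563 they are stored as strings until something asks for them):

```python
from __future__ import annotations  # PEP 563: lazy annotations

def foo() -> Bar:     # no quotes needed even though Bar is defined later
    return Bar()

class Bar:
    pass

print(type(foo()).__name__)           # Bar
print(foo.__annotations__["return"])  # stored as the string 'Bar'
```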
I prefer Google Lens because the voodoo it does is far more than just image search, and is genuinely useful to me. I use it to translate text in images multiple times a day (I live in a country where I'm not a native speaker). I've used it to identify birds, plants, and even furniture - it found me the local shop for a table in a cafe that I liked, which was pretty amazing.
I'm not a Google fan; in fact I actively try to choose alternatives where possible, but they do make some good products not matched by anyone else, and it's not useful to pretend that isn't the case. Lens is one such product, Maps is another... OK, maybe that's it, since I use LLMs for most translation tasks these days.