Meta doesn't do that. For one thing, "Patch Tuesday" is a Windows thing, and 0% of production traffic there is served from Windows. For another, they are constantly redeploying.
More likely that someone bungled a deploy of user auth. No doubt they are rolling back as we speak.
The local-first approach is the primary reason I use Obsidian. I trust that I can _depend_ on Obsidian because of this.
On the other hand, this has also caused some headaches around using it on mobile... but so far it has been a worthwhile tradeoff. Thanks for all the hard work!
Syncing bytes is easy; many solutions exist (and syncthing / syncthing-fork is good at it).
Syncing by merging changes and resolving possible conflicts is a much harder task. Theoretically, git has all the right bits, including pluggable diffing and merging. In practice, I haven't seen it seriously used in this capacity.
This is to say nothing of files you want on one node but not on another (heavy stuff lives on the server and laptop, but not on mobile, etc.).
This is why special-case syncing tools that know how to sync semantically are indispensable.
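To make "pluggable merging" concrete: git lets you register a custom merge driver per file type. Below is a minimal sketch of one for list-style notes (a grocery list, say) that does a three-way union merge of lines. The driver name, paths, and the naive set-based merge are all illustrative assumptions, not a production tool:

```python
#!/usr/bin/env python3
# Toy three-way merge driver for list-style notes (e.g. a grocery list).
# The driver name and paths are illustrative. Wiring it up looks like:
#   git config merge.notes-union.driver "python3 merge_notes.py %O %A %B"
#   echo '*.md merge=notes-union' >> .gitattributes
# Git invokes the driver with the ancestor (%O), current (%A), and other
# (%B) versions; the result must be written to the %A path; exit 0 = merged.
import sys

def read_lines(path):
    with open(path, encoding="utf-8") as f:
        return f.read().splitlines()

base_path, ours_path, theirs_path = sys.argv[1:4]
base, ours, theirs = map(read_lines, (base_path, ours_path, theirs_path))
base_s, ours_s, theirs_s = set(base), set(ours), set(theirs)

# Keep our lines unless the other side deliberately deleted them
# (present in the ancestor, absent in theirs)...
merged = [l for l in ours if not (l in base_s and l not in theirs_s)]
# ...then append lines the other side added.
merged += [l for l in theirs if l not in base_s and l not in ours_s]

with open(ours_path, "w", encoding="utf-8") as f:
    f.write("\n".join(merged) + "\n")
sys.exit(0)  # non-zero would tell git the merge conflicted
```

This treats the note as a set of lines, which is exactly the kind of semantic knowledge a generic byte-level syncer doesn't have.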
Most apps aren't built to use it, especially on mobile. Think of the use case of your grocery list--you want one tap, open the list, type type type, and done. Anything else--having to tap save, sync, write a commit message, etc... anything, is a fail in my opinion. Git is great to use behind the scenes but I don't want to see it in the UI or slow down my workflow.
Indeed, this can be done, but usually isn't. And when it is done, it looks like another proprietary syncing protocol.
The thing is that you should not expect a user to explicitly host a git repo somewhere for a grocery list app. Most apps are designed for users who are unwilling to do that, and who are actually ready to pay to avoid such technical hurdles.
OTOH I see a niche for an app geared towards more technical users, which would, among other things, allow you to point at a git / hg / whatever repo to use as the synchronization point.
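For what it's worth, the sync step of such an app could be little more than a loop around git itself. A rough sketch, assuming a local clone at a placeholder path, a configured upstream, and shelling out to the git CLI (a real app would need auth, offline handling, and real conflict resolution):

```python
import os
import subprocess

VAULT = os.path.expanduser("~/notes")  # assumption: a local clone of the sync repo

def git(*args):
    subprocess.run(["git", "-C", VAULT, *args], check=True)

def sync():
    """Run on save or on a timer: commit local edits, then reconcile."""
    git("add", "-A")
    # Commit only if something is actually staged (diff --cached --quiet
    # exits non-zero when there are staged changes).
    staged = subprocess.run(["git", "-C", VAULT, "diff", "--cached", "--quiet"])
    if staged.returncode != 0:
        git("commit", "-m", "auto-sync")
    git("pull", "--rebase")  # real conflict handling would hook in here
    git("push")
```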
Does it not ultimately have the same problem? I.e., when you open Obsidian, there's no guarantee the files are up to date, as Android may have killed the third-party sync program. And on iOS, there's no way for the sync program and Obsidian to share the same filesystem short of the Obsidian devs explicitly integrating it.
Android does have Content Providers [0]; basically, apps can provide a "filesystem" which isn't stored locally on your phone and acts like a network share. The caveat is that you need an internet connection.
Good question, because it is indeed the default behavior.
But you can always tweak settings so the 3rd-party sync app always runs in the background, and override the battery optimization setting for that particular app.
Syncthing works well on Android for photos, music, movies, and downloads. Not so much for notes. You'll end up with conflicts, and there aren't many great ways to merge changes on mobile.
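One mitigation that fits this workflow: Syncthing names conflict copies predictably (its documented `sync-conflict` convention), so you can at least round them up for review on a desktop, where merging is actually practical. A minimal sketch, with a placeholder vault path:

```python
# Locate Syncthing's conflict copies, which it names
# "<stem>.sync-conflict-<date>-<time>-<device><suffix>".
import pathlib

NOTES = pathlib.Path("~/notes").expanduser()  # placeholder vault path

for conflict in sorted(NOTES.rglob("*.sync-conflict-*")):
    # Reconstruct the original file name by stripping the conflict marker.
    stem = conflict.name.split(".sync-conflict-")[0]
    original = conflict.with_name(stem + conflict.suffix)
    print(f"{conflict}\n  conflicts with: {original}")
```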
Obsidian Sync is by no means cheap, but I've never used a better syncing service. I'm on my second year and can't think of a single issue I've had across laptops, desktops, an Android phone, and a Chromebook.
I can think of a number of other note-syncing services that are better -- probably even Evernote's. As a happily paying Obsidian Sync customer, I'll drop some reality so new people aren't caught off guard.
- Obsidian Sync is pretty slow.
- Obsidian Sync doesn't happen in the background, at present. That means, if you just made a bunch of updates in Obsidian, or you haven't opened the Obsidian mobile app in a while, you're in for a wait.
- Obsidian Sync occasionally has sync errors that require manual intervention.
That said, it's fine and the overall Obsidian experience makes it worth it (well, if you can swing a discounted price).
I use Microsoft Word’s multi-editing feature at work. The sync is essentially real-time (setting aside other opinions about Microsoft Office). You can see every change that your co-editors make as they make it. You can work on one file on two different devices at the same time. That is the kind of sync that I’d like.
More realistically, I used to use a custom sync setup with a WebDAV server I set up and GoodSync software. You could set it to sync on file change, and it was fast, with changes replicating in a few seconds.
As it is, the Obsidian sync takes a few minutes. And if you edit the file on another device before sync goes through, you’ll lose the changes from one device or the other.
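That sync-on-change setup is simple enough to sketch. Since WebDAV uploads are plain HTTP PUTs, a file watcher plus an HTTP client covers the happy path; the server URL, credentials, and folder below are placeholders, and this ignores deletes, renames, and the download direction entirely:

```python
import pathlib
import requests
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

NOTES = pathlib.Path("~/notes").expanduser()   # placeholder folder
DAV_BASE = "https://dav.example.com/notes/"    # placeholder WebDAV server
AUTH = ("user", "secret")                      # placeholder credentials

class UploadOnChange(FileSystemEventHandler):
    def on_modified(self, event):
        if event.is_directory:
            return
        path = pathlib.Path(event.src_path)
        rel = path.relative_to(NOTES).as_posix()
        # WebDAV has no special upload verb: PUT the file at its path.
        with open(path, "rb") as f:
            requests.put(DAV_BASE + rel, data=f, auth=AUTH)

observer = Observer()
observer.schedule(UploadOnChange(), str(NOTES), recursive=True)
observer.start()   # changes replicate within seconds, as described above
observer.join()
```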
> As it is, the Obsidian sync takes a few minutes. And if you edit the file on another device before sync goes through, you’ll lose the changes from one device or the other.
Clearly we have had very different experiences. I have mainly markdown notes, PDFs, and screenshots, and it syncs everything continuously as I work. As for "losing" the changes, I'll have to push back on that. You have full version history, so while you might have to look at an old version, you won't lose anything. There's certainly nothing unique to Obsidian with respect to conflict resolution. If version history isn't working, you should talk to the developers, because that's a serious bug.
> "As it is, the Obsidian sync takes a few minutes. And if you edit the file on another device before sync goes through, you’ll lose the changes from one device or the other."
This has been my biggest fear using Sync. So far I haven't had any issues, but I just get a "feeling" (maybe it's the lag between syncs) that this could definitely happen.
Can you explain in a little more detail how it might actually occur? Maybe so I can prevent it from happening.
Usually, I've had it happen when I try writing stuff on a device that's been offline and I haven't brought it online to pull in the latest changes. We essentially have a merge conflict. Thankfully, they're not a pain to resolve.
OK, got it. Thanks. So, to prevent it from happening in the first place, it'd make sense to give the recently sleeping device a moment or two to catch up.
It significantly improved for me a few months ago. The syncing seems to start much more quickly after I open the app. Not perfect but much better than it had been when I had to keep the app open by constantly touching my screen and hoping it would even start the syncing process.
I love Obsidian Sync as well but to be devil's advocate, it doesn't "just work" as a lot of people claim. It's still a bit rough around the edges. For example, it doesn't sync settings or starred files immediately. I've also noticed it dropping some text if I edit the same file on multiple devices simultaneously (or even in quick succession before sync is able to catch up). I'm sure these issues and more would exist with a 3rd party syncing solution but Obsidian sync still needs some work before it's perfect.
When I say simultaneously, I mean typing something in a file on my laptop, then putting that away and making more changes on my phone a few seconds later (sometimes the gap is even longer than that). Sync is pretty slow, especially since it doesn't sync in the background.
It depends on your workflow. I use git to sync my Obsidian vault. There are plugins to automate this, but doing it manually isn't that bad either. I use mobile mostly to read notes, and occasionally I'll write down a short line or two which I can sync over and edit and organize on desktop.
I went for the paid syncing because I want it to "just work" while still having the futureproof way of storing the data locally in an accessible way.
So far it has worked absolutely flawlessly. If I change a file, it changes on my connected devices in seconds. Not exactly like working on a shared Google Doc, but close enough that I would even use it as a hack to quickly share links between my mobile and my desktop.
So simple, and they combine the best of all eras: local-first, open, published formats, and pluggable/BYO multi-device sync/backup – cloud if you wish, but not required. It gives me hope for the future; I wish more software these days followed this model.
Caveat: not an obsidian user (although I am a big step closer after this)
Same! And that's why I'm now a DevOps engineer. In my role, DevOps primarily means automation and pipeline creation, working with teams to build and release their apps reliably and effectively. For me, it really scratches the itch of "setting things up".
Gosh I love dev ops. Smaller company so I have many hats. I saved a whole week of “clean up CI/CD and make the integration tests 3x faster” as a “treat” for the last week before vacation this year.
People like you are heroes in the workplace to me. I hate anything devops related because I feel like it takes time away from the stuff I'm trying to do, so I appreciate anyone who does it!
I am beyond astounded. I was able to run a Docker image, use the filesystem inside the container, and exit the container. Docker system commands work as expected (`docker ps` shows no containers, `docker ps -a` shows the exited container).
A few little things are weird (I can exec into a stopped container for example) but I was able to start another container and persist files.
Wild. This is unbelievable. Can anyone please explain to me why this isn't as wildly groundbreaking as it seems?
What I struggle with in judging how impressive something like this is: there's an awful lot of "here's the command" / "here's the output" examples and explanations for all this stuff out there -- in man pages, in tutorials, in bug reports, in Stack Overflow questions and answers -- that presumably went into the training data.
Obviously what's happening is much more complex, and impressive, than just spitting back the exact things it's seen, as it can include the specific context of the previous prompts in its responses, among other things, but I don't know that it's necessarily different in kind than the stuff people ask it to do in terms of "write X in the style of Y."
None of this is to say it's not impressive. I particularly have been struck by the amount of "instruction following" the model does, something exercised a lot by the prompts people are using in this thread and the article. I know OpenAI had an article out earlier this year about their efforts and results at that time specifically around training the models to follow instructions.
I've been playing with it since yesterday. I was able to ask it for output that literally had me crying with laughter (e.g. "Write a country song about Sansa Stark and Littlefinger" or "Write a sad song about McNuggets"). That scared me for a minute, because it's giving me what I want, mentally anyway, beyond anything else I've seen recently. I'd be worried it's addictive.

But it seems like it has an ability to enhance my own mind as well: I can ask it about things I'm thinking about, and it generates a certain amount of seemingly generic ideas, but I can expand on them or get more specific. I can take the ideas I want from it into my actual life. I've come up with several insights, recognized certain ways of thinking I've been stuck in, and even, based on its examples, realized things about generating creative ideas for myself.

Maybe I'm overreacting, but it's really something new. I haven't cared that much about AI, but now that I have access to it, it's another matter. In comparison, I also played around with DALL-E just now, but that isn't achieving anything special for me in the same way.
I'm genuinely confused why so many people are only just now learning of OpenAI/GPT-3 and its chat mode; I guess presentation truly is everything. Nothing here is particularly new, it's just a better model than before.
Statements like "the people haven't realized it yet" confuse me because "the people" is two groups: people in the know, and people not in the know. Everyone in the know realizes where this is headed and what the potential is.
Those not in the know simply lack the technical background to have followed the incremental developments up till now that led to this moment; for them it's a parlor trick, because even today they cannot grasp the potential of existing technology. I could similarly lament how people treat the Internet.
It's like with DALL-E 2 and Stable Diffusion: so many people just didn't understand how it was even possible, and some even went as far as calling it a hoax of some kind.
But for anyone paying attention, it's been easy to see the progression. I'm not even an ML person but I could give you a map from every paper to every other paper for how this has all been happening faster and faster, basically starting with AlexNet in 2012.
That said, this ChatGPT is different from GPT-3's first demos last year, or the Codex interface, in that it implements consistent memory and seems to have a much, much longer token-length capacity than before. This has a huge effect on what you can coax out of a chat with it. You can tell it to act a certain way and then continuously interact with that entity -- with GPT, you got the one prompt, but once you tried again with a new prompt, that memory was gone. You could attempt to feed the entire output back in as input, but the token length would eventually cut things off. Meanwhile, with ChatGPT, I just had a 20-minute conversation with a "girl from Reseda, CA" who's a barista and, like, totally is going to go on a keto diet like her sister, because I told it that is who it should act like, and that under all circumstances it should respond to my chat in that way.
BTW she says that "bangs are totally in style right now" and she really likes "exploring new hairstyles like ones from the 90's"
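For what it's worth, the "consistent memory" effect can be approximated over a plain completions API by replaying the whole transcript as the prompt each turn. A rough sketch, using the 2022-era openai completions endpoint; the model name is just an example, the persona line mirrors the comment above, and you still hit the token-length ceiling eventually:

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

# Seed the persona once; every later turn replays the full transcript.
history = ["System: You are a barista from Reseda, CA. Always stay in character."]

def chat(user_message):
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nAssistant:"
    resp = openai.Completion.create(
        model="text-davinci-003",  # example model
        prompt=prompt,
        max_tokens=200,
        stop=["User:"],            # stop before it invents our next turn
    )
    reply = resp.choices[0].text.strip()
    history.append(f"Assistant: {reply}")  # this is the "memory"
    return reply
```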
I feel very much at odds with this - it’s not going beyond a couple commands, and this is exactly what I’d expect a language model to be able to do today based on the past three years’ progression. It’s not actually executing anything, of course; the output it produces is quite literally a well-formed amalgamation of all the learned examples online, of which there are tons.
It’s something like novelty * length * complexity * accuracy that would impress me, and on that score this isn’t far beyond the simple tutorials or snippets you’d find online.
But isn't it just predicting text patterns? It doesn't really know about Docker, just that after running commands X,Y you usually get output Z (of course with the stateful AI magic to make things more stable/consistent).
I mean, not to veer too far into the philosophical side of this, but what does it actually mean to know or understand something?
Did you see the demo the other day, posted here, that used stylometric analysis to identify alt accounts? Most of the comments were some form of "holy shit, this is unbelievable," and the OP explained that he had used a very simple type of analysis to generate the matches.
We aren't quite as unique as we think was my takeaway from that. My takeaway from this, as well as the SD and DALL-E stuff, is that we're all just basically taking what we heard in the past, modifying it a teeny bit, and spitting it back out.
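"Taking what we heard in the past and spitting it back out" has a crude, concrete analogue: a bigram Markov model, which literally samples the next word from continuations seen in training. GPT's learned representations are incomparably richer, but this toy (corpus and names invented here) shows the shape of the next-token objective:

```python
import random
from collections import defaultdict

def train(text):
    """Map each word to the words observed to follow it."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=20):
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # sample an observed continuation
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(train(corpus), "the"))  # e.g. "the dog sat on the mat and ..."
```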
…but people are getting the mistaken impression that this is an actual system, running actual commands.
I can also emulate a docker container. I’ll just write down the commands you send me and respond with some believable crap.
…but no one is going to run their web server on me, because that’s stupid. I can’t respond hundreds of times a second or maintain the internal state required for that.
Neither can this model.
It’s good, and interesting, but it’s not running code; it’s predicting sentences, and when you’re running software it has to be accurate, fast, consistent, and maintain a large internal data state.
Trying to run docker in gpt is fun. Trying to use docker in gpt to do work is stupid.
It’s never going to work as well as actually running docker.
It’s just for fun.
Models that write code and then execute that code will be in every way superior to models that try to memorise the CLI API of applications.
It’s an almost pointless use of the technology.
GPT may have “learnt” Python; that’s actually interesting!
Docker is not interesting.
If I want to use the docker api, I can type `docker` on my computer and use it.
It's pretty sad that the thing that excites people the most about an amazing new language model is that it can do trivial command line actions, that you can do without the model.
Spending millions of dollars to produce a model that can do what you can already trivially do is very seriously not what openai just did.
> I can also emulate a docker container. I’ll just write down the commands you send me and respond with some believable crap.
Right. The thing that is impressive is that ChatGPT can do this effectively. This means that it has some "understanding" of how `pwd`, `ls`, `apt`, `docker`, etc. all work. In some sense, this is an AI that knows how to read code like a human instead of like a machine.
> In some sense, this is an AI that knows how to read code like a human instead of like a machine.
It's literally spitting out responses like a machine. Isn't that the opposite of what you wanted?
> The thing that is impressive is that ChatGPT can do this effectively.
? What is impressive about it?
Forget this is an AI model for a moment. Let's say I give you a black box, and you can type in shell commands and get results. Sometimes the results don't make sense.
Are you impressed?
I am not impressed.
I could implement the black box with an actual computer running an actual shell, and the results would be better. Why would I ever use an LLM for this?
It's like discovering that the large hadron collider can detect the sun. Yes, it can. Wow, that's interesting, I didn't realize it could do that. I can also look up at the sun, and see the sun. mmm... well, that was fun, but pointless.
There are so many other things GPT can do, this... it's just quite ridiculous people are so amazed by it.
It is not indicative of any of the other breakthrough functionality that's in this model.
It's impressive because if it can learn enough about how shell scripting works, how filesystems work, and can translate from human language, then we can feasibly stop learning to code (or at least outsource a lot of it). It's mostly not there yet, and I'm not sure how long it will take to actually be useful, but it's not insignificant that a language model can write code that works and manipulates filesystems.
I was prompting it along this line of thought earlier. What I found was that it doesn't seem like it can do anything novel, which is to be expected, but I can see myself working with it to discover novel things.
Sure, I agree there - but the point is it cannot understand code. It can try to describe it, but it isn't able to reason about the code. You won't be able to coax it to the correct answer.
"It’s never going to work as well as actually running X. It’s just for fun." You must realize that X was also built by some kind of neural networks, i.e. humans, and the only reason we can't run an entire Linux kernel "in our heads" is mostly due to hardware, i.e. brains, limitations. Although, I do remember Brian Kernighan saying in an interview how he was able to run entire C programs "in his head" faster than the 1980s CPUs.
The point is that the future programming language will probably be human language, used as an extremely high-level specification language, with models able to hallucinate/invent/develop entire technological stacks (from protocols to operating systems to applications) on the fly.
> what does it actually mean to know or understand something?
I think it means that you're able to apply the information to make predictions about the world. For example, you'll encounter something novel and be able to make accurate guesses about its behavior. Or, conversely, you will have a high likelihood of inventing something novel yourself, based on the information you acquired (rather than through brute force).
I think there is an element of it producing reasonable results because it is trained on largely seeing canned example output. In tutorials, the command that includes ‘hello world’ always outputs ‘hello world’, right? So it doesn’t take a genius to guess that <long blob of golfed code that includes the string ‘hello world’> should produce some output that includes ‘hello world’
Similarly in my explorations of this ‘pretend Linux’, it often produces whatever would be the most helpful output, rather than the correct output.
Yeah, and everyone who wants to succeed makes an effort to utilize as few humans as possible. I don’t think that will be different for AI, even though AI has the benefit that you don’t have to pay it.
All I can say is I told you so. Over and over and over again. But no one listened - worse, I was actively mocked. These language models will be AGI, and indeed, to a larger and larger extent, already are.
All the comments here are about drawbacks and limitations. The upvotes on the submission might be explained by the quality of the product, but the comments, not so much.
Meaning can also be inferred from context. Even in your example, the conversation context and follow-up statements could home in on the intended meaning.
Sure, maybe it would be better if everyone just wrote in a non-ambiguous way, but you're on an international forum where many people don't have a native understanding of the language (me included).
I understood what he meant immediately. I also don't agree with the comment, but that's another subject.
PyPI identifies a package as critical and asks the maintainer to enable 2FA... but allows them to simply delete the package to get around this requirement?
I dunno, I think if you publish a copy of your code to a registry then it would be both desirable and reasonable for that copy to be immutable. Allowing the deletion of published libraries can have huge downstream impacts and ultimately makes the registry less trustworthy.
Edit: to be clear, not trying to shame the author here - it sounds like they tried to avoid this situation: "what i didn't consider is that this would delete old versions. those are apparently now gone and yet it's apparently not possible for me to re-upload them. i don't think that's sensible behavior by pypi, but either way i'm sorry about that."
I think this is a bad design on PyPI's part though.
Apparently, once the 2FA requirement is actually implemented (this was just the announcement, which triggered all this), deleting a package will require 2FA as well.
I use Fira too, currently, but IBM just won for me, with JetBrains as the runner up. I'm actually surprised by that, and learned a lot from this game(?).
There's still a ways to go but folks are actively contributing.