I really like their approach to the detection. But I am worried that this is something the community can only use effectively once. There are too many ways to bypass this detection once you know how it works.
Same here; whatever tools I tried, I kept going back to my txt files. Now I use Cursor to edit these txt files and get some amazing autosuggestions given the rich context!
Feeling nostalgic about the days of building LFS in college.
Learning by building won't help you remember all the details, but many things make more sense after going through the process step by step. And it's fun.
I find it absurd that large companies now have higher stock prices and more cash than ever before, yet nearly every AI lab inside these companies is facing greater pressure than ever and is being asked to generate short-term profits. In the midst of AI's unprecedented boom, the research environment and atmosphere in industry seem to have worsened compared to the past.
Can anyone give some numbers for a more intuitive understanding of the advantages of GS? How large would the file/content be if it were a mesh? Could we get similar rendering FPS?
This reminds me of the trick of making recent text-to-image models generate highly realistic (but amateur-looking) photos by adding "IMG_XXXX" to the prompt. Although these videos have nearly zero views on YouTube, they may be part of the training data behind these models.
It’s also the default naming scheme for every digital SLR, phone camera, etc., many of which upload with the file name as the title to Flickr and other photo-sharing services, most of which have also been used as training data.
This looks great. I have been using self-hosted Nextcloud for a while, and it is pretty slow at loading image thumbnails in a large folder. I am curious about Ente’s performance.