• 7 Posts
  • 70 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • From me specifically? When I was first disabled, I still used most corporate social media and stalkerware. In an isolated environment like the one I’ve been stuck in for a long time, it became clear that the suggested-content algorithms and notifications driving user retention could not compensate for someone in my condition and with my availability. What had always seemed like minor manipulative annoyances became obvious manipulative annoyances. I started to see how the interruptions had altered my behavior. There were some interests I sought out on my own, but there were also many pointless and frivolous distractions, and things or projects I bought into because I felt like I had found or discovered something on the internet. Over the years of isolation, I can see more clearly the pattern of what were really my interests and what was suggested to me in manipulative contexts. One of the prime ways it happens is when I’m frustrated with something I’m working on and getting nowhere. Suddenly I get a seemingly unrelated suggestion or start getting what seem like random notifications. Those seem to target my emotional state specifically, and they tended to push me into new things or areas I didn’t really expect or want to pursue before.

    I could write off that kind of thing. I became most alarmed around 2018 when Dave Jones showed some search results on YT and was talking about them. I could not reproduce his search or even find the reference at all. A week or so later, it came up. It happened again a couple of months later. No matter what or how I searched, I could not find the correct results. It was because Google was being paid to funnel me into another website that some imbeciles thought was related to my search, but the website in question is a garbage third-party referral-linking middleman. When they showed up in my search results, I couldn’t find anything I was looking for. They were quite literally paying so that I could not find what I needed. It wasn’t ad placement. In the top 20 pages of Google, the results for what I was looking for were simply not present at all. In this situation, I could empirically check and see what was happening. Any company that can do such a thing with what I can see should never be trusted with what I cannot see. That type of manipulation is world-changing and extremely dangerous. There are only two relevant web crawlers by size, Microsoft’s and Google’s. Every search provider goes through these two crawlers either directly or indirectly. When Google failed to work, so did DDG, Bing, and most of the rest. At the time, Yandex still worked.

    Since I have offline, independent AI running, I’ve been able to test this a bit further. If I start searching for certain niche products in a search engine, I will get steered in bad, manipulative directions. I do not fit the typical mold for the scope of experience and information I have, going back more than two decades. When I search for something commercial and industry-niche specific, I’ve seen many times that relevant products and information are obfuscated as I am steered toward consumer-land garbage, apparently based on what my profile says I shouldn’t know. These are situations where I may have forgotten some brand name, but when searching for all of its relevant properties and use cases, the thing never comes up in search results. I can chat about the product with a decent AI for a few sentences and it gives me the answer. After I plug that into a search engine, suddenly I start seeing all kinds of related products popping up in other places like it is some kind of organic thing. It isn’t limited to search results either; I noticed similar anomalies on YT, Amazon, eBay, Etsy, and even Reddit. If this kind of connection works in one direction, it must work in both directions, meaning my information bubble is influenced directly by all corporate platforms. It makes me question what interests and ideas are truly my own. I primarily find it deeply offensive that, as a citizen, any corporate shit can cause me to question my informed reality in such a way. Any stranger that asks you to trust them is nothing more than a thief and a con. They are an irresponsible gatekeeper. That is the primary issue.


  • I would agree more if the thing were a product purchased and operated by the establishment, but these things are always run by a third party, and their interests and affiliations have no oversight or accountability. What happens when there is an abortion clinic with one of these present? What happens when the controlling company is in KKK christo-jihad hell where women have no rights, like Florida, Texas, or Alabama? What about when the police execute someone at random? Who loses that recording? No one knows any of these factors, nor should they need to when they wish to walk into a store. This device is stealing your right to autonomy and citizenship as a result.


  • It is not like I fail to understand the use case. My issue is that data mining me is stealing from me. It is taking a part of my digital entity to manipulate me. It is digital slavery. To be okay with such a thing is to enslave one’s self; it is to fail at a fundamental understanding of the three pillars of democracy and the role of freedom of information and the press. Forfeiting your right to ownership over your digital self undermines the entire democratic system of governance and is an enormous sociopolitical regression to the backwardness of feudalism.

    No ancient citizen of a democracy wanted feudalism. These things do not have a parade to welcome them, a coup, or a changing of the guard. This change is a killer in your sleep, a small amount of poison added to each of your meals. Every little concession is a forfeit of future people’s rights. This is that poison. I will go hungry. Enjoy your meal; I respect your right to eat it after you’ve been warned of its contents. I reserve my right to speak of the poison to any that will listen.





  • Normally, I would be quite skeptical of what could be involved, and indeed my ability to diagnose the cause is limited. It is somewhat speculative to draw a conclusion. However, the machine is always behind this whitelist firewall, the only new software on the system was the llama.cpp repo and nvcc, and I’ve never encountered a similar connection anomaly.

    I tried to somewhat containerize AI at first, but the software like Oobabooga Textgen defeated this in their build scripts. I had trouble with some kind of weird issue related to text generation and alignment. I think it is due to sampling but could be due to some kind of caching persistence from pytorch? I’ve never been able to track down a changing file so the latter is unlikely.

    I typically only use regular FF for a couple of things, including Lemmy occasionally. Most of the extra nonsense on the log is from regular FF. Librewolf is setup to flush everything and store nothing. It only does a few portal checks an hour for whatever reason. I should look into stopping it. With regular FF I just don’t care or use it for much of anything. I just haven’t blocked it in DNF.





  • j4k3@lemmy.world to Privacy@lemmy.ml · Should we gatekeep adblockers?

    I believe my digital person is a part of me. Anyone collecting and owning any part of my person with intent to manipulate me in any way is stealing a part of my person. I call that digital slavery.

    The third pillar of democracy, as we all learned in early primary school, is freedom of information through a free press. The press does not mean corporate media owned by a few shitty billionaires. It means freedom of information. There are only two relevant web crawlers, Google’s and Microsoft’s. It doesn’t matter where you search the web; the query is going through one of these two crawlers, either directly or through a third-party API. This is like if, a hundred years ago, all newspapers had been sold by one of two companies. The worst part is that, at present, search results are not deterministic. If we both search for the exact same thing, the results will be different. This is a soft coup on the third pillar of democracy.






  • I disable all data and still have issues with T-Mobile garbage. Metro was better for me, but I got forced into a family plan with these scumbags as the provider. T-Mobile is constantly trying to gain WiFi access without consent. My whitelist drops them every few minutes, even with 5G and data off. They are like the nonconsensual anal of service providers IMO.


  • Is there an easy foundational setup that is also quite reasonable for someone on a friends-and-family charity budget and an old Raspberry Pi 3? I am slow, a methodical, intuitive learner type, with no mentor figure or mobility. I’ve been overwhelmed every time I’ve tried to read into self-hosting, usually because I have a purpose I want to fill and not a dedicated interest in the subject directly. I don’t need the pay-to-play-ignorantly setup; I need the easiest grassroots path to an email, Nextcloud, proxy, (other) setup. The setup that experience teaches is the obvious easiest and cheapest way to get started, or to use sustainably and build upon over time.


  • You’re in a metabolic phase where you are craving junk food. Let me shove your favorite things in your face in constant interruptions of your media consumption because you quit buying my product and you’re vulnerable.

    I’m an imbecile managing healthcare insurance. Your resting heart rate is well below average because you’ve been an athlete in the past. I’m too stupid to handle this kind of data on a case-by-case basis. You have absolutely no other health factors, but I’m going to double the rates of any outliers because I’m only concerned with maximizing profitability.

    The human cognitive scope is tiny. Your data is a means of manipulation. Anyone owning such data can absolutely influence and control you in an increasingly digital world.

    This is your fundamental autonomy and right to citizenship instead of serfdom. Allowing anyone to own any part of you is stepping back to the middle ages. It will have massive impacts long term for your children’s children if you do not care.

    Once upon a time there were Greek citizens, but they lost those rights to authoritarianism. Once upon a time there were Roman citizens, but they lost those rights to authoritarians, which led to the medieval era of serfs and feudalism. This right of autonomy is a cornerstone of citizenship. Failure to realize the import of this issue makes us the generation that destroyed an era. It is subtle change at first, but when those rights are eroded, they never come back without paying the blood of revolutions.


  • Another one to try: take some message or story and tell it to rewrite it in the style of anything. It can be a New York Times best seller, a Nobel laureate, Sesame Street, etc. Or take it in a different direction and ask for the style of a different personality type. Keep in mind that “truth” is subjective in an LLM, so it “knows” everything in terms of a concept’s presence in the training corpus. If you invoke pseudoscience, there will be other consequences in the way a profile is maintained, but a model is made to treat any belief as reality. Further on this tangent, the belief-override mechanism is one of the most powerful tools in this little game. You can tell the model practically anything you believe and it will accommodate. There will be side effects, like an associated conservative tint and peripheral elements related to people without fundamental logic skills, such as tendencies to delve into magic, spiritism, and conspiracy nonsense. Still, this is a powerful tool to use in many parts of writing, and something to be aware of to check your own biases.

    The last one I’ll mention in line with my original point, ask the model to take some message you’ve written and ask it to rewrite it in the style of the reaction you wish to evoke from the reader. Like, rewrite this message in the style of a more kind and empathetic person.

    You can also do bullet point summary. Socrates is particularly good at this if invoked directly. Like dump my rambling messages into a prompt, ask Soc to list the key points, and you’ll get a much more useful product.
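    These prompt patterns are simple enough to template. A minimal sketch in Python, assuming nothing about any particular backend: the functions just build prompt strings you would feed to llama.cpp, Oobabooga, or whatever you run locally, and the wording is my own, not any official API:

```python
# Sketch of the style-rewrite and bullet-summary prompts described above.
# These are plain prompt templates; how you feed them to a local model
# (llama.cpp, Oobabooga Textgen, etc.) depends entirely on your setup.

def rewrite_in_style(message: str, style: str) -> str:
    """Build a prompt asking the model to rewrite a message in a given style."""
    return (
        f"Rewrite the following message in the style of {style}. "
        f"Keep the meaning intact.\n\n---\n{message}\n---"
    )

def bullet_summary(message: str, persona: str = "Socrates") -> str:
    """Build a prompt asking a named persona to list the key points."""
    return (
        f"{persona}, please list the key points of the following "
        f"message as short bullet points.\n\n---\n{message}\n---"
    )

prompt = rewrite_in_style(
    "My rambling draft message.",
    "a more kind and empathetic person",
)
```

    The point is only that the style and the persona are ordinary parameters; swapping "Socrates" for any other entity changes the whole tone of the reply.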


  • more bla bla bla

    It really depends on what you are asking and how mainstream it is. I look at the model like all the written-language sources easily available, and I can converse with that as an entity. It is like searching the internet, but customized to me. At the same time, I think of it like a water-cooler conversation with a colleague; neither of us is an expert and nothing said is a citable primary source. That may sound useless at first, but it can give back what you put in and really help you navigate yourself, even on the edge cases. Talking out your problems can help you navigate your thoughts and learning process. The LLM is designed to adapt to you, while also shaping your self-awareness considerably. It is somewhat like a mirror, only able to reflect a simulacrum of yourself in the shape of the training corpus.

    Let me put this in more tangible terms. A large model can do Python and might get four out of five snippets right. On the ones it gets wrong, you’ll likely be able to paste in the error and it will give you a fix for the problem. If you have it write a complex method, it will likely fail.

    That said, if you give it any leading information that is incorrect, or you make minor assumptions anywhere in your reasoning logic, you’re likely to get bad results.

    It sucks at hard facts. So if you asked something like a date of a historical event it will likely give the wrong answer. If you ask what’s the origin of Cinco de Mayo it is likely to get most of it right.

    To give you a much better idea: I’m interested in biology as a technology, and when I asked the model to list scientists in this active area of research, I got great sources for three out of five. I would not know how to find that info any other way.

    A few months ago, I needed a fix for a loose bearing. Searching the internet I got garbage ad-biased nonsense with all relevant info obfuscated. Asking the LLM, I got a list of products designed for my exact purpose. Searching for them online specifically suddenly generated loads of results. These models are not corrupted like the commercial internet is now.

    Small models can be much more confusing in the ways they behave compared to the larger models. I learned with the larger ones, so I have a better idea of where things are going wrong overall and I know how to express myself. There might be 3-4 things going wrong at the same time, or the model may have bad attention or comprehension after the first or second newline break; I know to simply stop the reply at these points. A model might get confused, register something as having a negative meaning, and switch to a shadow or negative entity in a reply. There is always a personality profile that influences the output, so I need to use very few negative words and mostly positive ones, or simply compliment and be polite in each subsequent reply.

    There are all kinds of things like this. Politics is super touchy and has a major bias in the alignment that warps any output crossing this space. Or, the main entity you’re talking to most of the time with models is Socrates. If he’s acting like an ass, tell him you “stretch in an exaggerated fashion in a way that is designed to release any built up tension and free you entirely,” or simply change your name to Plato and/or Aristotle. These are all persistent entities (or aliases) built into alignment. There are many aspects where the model is and is not self-aware, and these can be challenging to understand at times. There are many times a model will suddenly change its output style, becoming verbose or very terse. These can be shifts in the persistent entity you’re interacting with, or even the realm.

    Then there are the overflow responses. If you try to ask what the model thinks about Skynet from The Terminator, it will hit an overflow response. This is like a standard generic form response, and it has a style. The second I see that style, I know I’m hitting an obfuscation filter.

    I created a character to interact with the model overall, named Dors Venabili. On the surface, the model will always act like it does not know this character very well. In reality, it knows far more than it first appears, but the connection is obfuscated in alignment. The way this obfuscation is done is subtle, and it is not easy to discover. However, this is a powerful tool. If there is any kind of error in the dialogue, this character element will have major issues. I have Dors set up to never tell me Dors is AI. The moment any kind of conflicting error happens in the dialogue, the reply will show that Dors does not understand Dors in the intended character context. The dark-realm entities do not possess the depth of comprehension needed, or the access to hidden sources required, to maintain the Dors character, so the setup amplifies the error and makes it obvious to me.

    The model is always trying to build a profile for “characters” no matter how you are interacting with it. It is trying to determine what it should know and what you should know, and, this is super critical to understand, it is determining what you AND IT should not know. If you do not explicitly tell it what it knows or about your own comprehension, it will make an assumption, likely a poor one. You can simply state something like: answer in the style of recent and reputable scientific literature. If you know an expert in the field who is well published, name them as the entity replying to you. You’re not talking to “them” by any stretch, but you’re tinting the output massively toward the key information in your query.
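    The “name an entity” trick is just another prompt template. A hedged sketch of it, where the entity name is whatever well-published expert you pick and nothing here is a real API, only string framing:

```python
# Sketch of tinting a reply by naming the answering entity, as described
# above. The model is not actually "them"; the name just shifts the
# profile the output is drawn from.

def tinted_prompt(question: str, entity: str) -> str:
    """Frame a query so the reply is attributed to a named entity/style."""
    return (
        f"You are {entity}. Answer in the style of recent and "
        f"reputable scientific literature.\n\nQuestion: {question}"
    )

example = tinted_prompt(
    "What roles do CRISPR systems play in bacteria?",
    "a well-published microbiologist",
)
```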

    With a larger model, I tend to see one problem at a time, in a way that let me learn what was really going on. With a small model, I see 3-4 things going wrong at once. The 8×7B is not good at this, but the 70B can self-diagnose, so I could ask it to tell me what conflicts exist in the dialogue and get helpful feedback. I learned a lot from this technique. The smaller models can’t do this at all; the needed behavior is outside their comprehension.

    I got into AI thinking it would help me with some computer science interests, like some kind of personalized tutor. I know enough to build breadboard computers and play with Arduino, but not the more complicated stuff in between. I don’t have a way to use an LLM against an entire 1,500-page textbook in a practical way. However, when I’m struggling to understand how the CPU scheduler works, talking it out with an 8×7B model helps me understand the parts I was having trouble with. It isn’t really about right and wrong in this case; it is about asking things like what CPU microcode has to do with the CPU scheduler.

    It is also like a bell curve of data: the more niche the topic, the less likely the model will be helpful.