Did you read the new features? CSS and HTML component testing, complete with web scalability (i.e. media queries). Sounds very WYSIWYG to me.
But yeah, I know it’s an open source Figma, because Figma can ligma balls.
I have peepee doodoo caca brains.
Is… is this the comeback of WYSIWYGs?
“Oh, we’d better become a bunch of walled gardens with phat EULAs and IP lawyers”.
Or, or, and hear me out on this: make better models and datasets with the public data, open source that shit and call people idiots for using SaaS AI.
I’m wondering how many open WriteFreely instances there are.
Your grandma’s pacemaker can run Doom.
“LOOK!! WE’RE ACTUALLY DOING SOMETHING!!! ALL OUR USERS ARE NOT PDF FILES!”
What are you doing, step GM?
The funny thing is that image generators also take unwanted (or “negative”) prompts, like “weird hands”. You could easily just input “disadvantageous framing for police officers”.
This is why these parameters should be public knowledge, so no exceptions get quietly baked in that clear cops of wrongdoing when they have committed a crime.
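To make the mechanics concrete, here’s a minimal sketch using the Hugging Face diffusers library; the checkpoint name and both prompt strings are placeholders I made up for illustration, not anything a real deployment is known to use:

```python
# Minimal sketch of a "negative prompt" with the diffusers library.
# The checkpoint name and prompt text below are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # any local SD checkpoint works
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="press photo of a police officer at a podium",
    # The negative prompt tells the sampler what to steer away from;
    # usually artifacts like "weird hands", but nothing stops someone
    # from hiding a phrase like "disadvantageous framing" in here.
    negative_prompt="weird hands, extra fingers, blurry",
).images[0]

image.save("output.png")
```

The point being: the negative prompt is just another string parameter, invisible in the final image unless it is published alongside it.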
I promote running neural networks, LLMs, SLMs and Stable Diffusion locally. Why?
The way I see it, there’s a point where various forms of AI technology become so effective and so powerful that they pose a problem for society. People are afraid AI will take their jobs, and that’s a valid concern.
Why then do I promote the use of local AI? Because I think that human+AI will be what prevents the centralisation of data, of knowledge, and of power that big tech firms, venture capitalists and authoritarians would love to have.
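For anyone wondering what “local” means in practice, here is a minimal sketch using llama-cpp-python with a downloaded GGUF model; the file path and prompt are placeholders, and any comparable local runtime would do the same job:

```python
# Minimal sketch of running a small language model entirely on local hardware
# with llama-cpp-python. The model path below is a placeholder for whatever
# GGUF file you have downloaded; nothing here talks to a remote service.
from llama_cpp import Llama

llm = Llama(model_path="./models/some-small-model.gguf", n_ctx=2048)

out = llm(
    "Summarise the argument for keeping AI inference on local hardware.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```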
It’s an uphill battle though, because much like other boardroom buzzwords (“cloud”, crypto, blockchain, etc.), AI is something that makes billionaires’ pants wet and something that people despise, which is fully understandable.
But I also fear that attitude is self-defeating. If we allow AI technology to be centralised instead of learning to liberate ourselves from the central tech cabals that wish to control it, then we set ourselves up for new forms of authoritarianism we never knew before.
If you look at the cyberdystopia that is China, or the tech oligarchy of the US, and if you are left-leaning, socialist, anarchist, etc., then it should be your priority to take that power away from central authorities.
Please reply with actual arguments and not cathartic putdowns, because I do want to see another way, but just being a troll on Lemmy will not sway me.
Again, I am open to reproach, just be objective.
In my day, son, a WYSIWYG spat out HTML and CSS. It was up to you to integrate it.