I know I haven’t talked about this, but it’s better to say something now than later, especially now that I understand both sides.
People need to understand that you can't just get rid of the actual artists, animators, and writers. Sure, sometimes AI art looks good, even better than a human's, but a human can make art look ten times better.
Big companies are using AI in the wrong way. They're doing it to be lazy, and they know it's cheaper. But these big companies need to understand that they can't always just rely on AI.
This whole thing started because people started using AI. Y'all have to remember that AI is just a robot, not a human. Hire real artists and actual writers and animators.
It's not that AI itself is bad. It's the way it's being used that makes it bad; this is what happens when things are used the wrong way. Animators and artists lose their jobs, and the art community gets hurt, because AI art is going to take over while these lazy companies don't want to spend the money to actually hire artists, animators, and writers, and actually pay them.
So the true problem is the companies: they are lazy and act like they don't want to spend a pretty penny on getting actual artists. Now here's the real question: is AI art bad? Yes and no. Like I just explained, it can be used in good and bad ways.
Because at the end of the day, it all comes down to who uses it and how it's used.
It all comes down to the people and the decisions they make.
"AI learns to make art the exact same way people do" is both factually incorrect and rather skewed in general.
People learn from both exposure and self-reflection. But more important is the disconnect between imagination and one's ability to recreate that imagination: the feedback one gives oneself along the way, and the mistakes and amendments made as one goes. Yes, it is an iterative learning process, but that is roughly where the similarities end.
Diffusion models, meanwhile, learn rather differently during training. They don't learn how to move or how best to use the tools at hand. They do, however, learn how patterns in the data relate to the descriptions provided with the images containing those patterns, and from there they are forced to recreate the same patterns. (I won't go into the details; this comment would be too long.)
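To make the distinction concrete, here is a toy numpy sketch of the noise-prediction objective that diffusion training optimizes. This is my own illustration, not any real model's code; the `oracle` below simply plugs in the exact answer that a trained network learns to approximate from the noisy input alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def training_step(image, t):
    """One heavily simplified diffusion training step at noise level t (0 < t < 1)."""
    noise = rng.standard_normal(image.shape)             # sample random noise
    noisy = np.sqrt(1 - t) * image + np.sqrt(t) * noise  # corrupt the image
    # A real network must predict the noise from `noisy` alone; here we
    # substitute the exact answer (an "oracle") to show what is being learned.
    oracle = (noisy - np.sqrt(1 - t) * image) / np.sqrt(t)
    return np.mean((oracle - noise) ** 2)                # noise-prediction loss

image = rng.standard_normal((8, 8))
loss = training_step(image, t=0.5)
print(loss)  # essentially 0 for the oracle; a real network starts high and descends
```

The model never practices mark-making or tool use; it is rewarded solely for recovering the statistical patterns that were mixed away, which is the difference described above.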
But this difference generally doesn't matter and mostly fuels needless debate. The statement that "AI learns like humans" is mostly an attempt to paper over the stark difference.
The big, important difference is that AI learns fairly fast if given enough resources. (I.e., it becomes a question of "can you afford it?", so fairly non-democratic.)
And in terms of content production, humans frankly can't compete on volume. (And the dopamine-addicted people of today want volume, not quality.)
This brings us to the issue with current AI development: it isn't the technology, but rather the sourcing of training data.
Historically, when an up-and-coming creator got better, society rewarded them with an audience, the potential to build a career, and sufficient income to dedicate their life to their craft, to the benefit of their audience, since the creator now had more time and economic freedom for that craft.
These days, though, that is rapidly diminishing.
Now the "AI" field takes whatever content is available in a creator's portfolio and trains an "AI" to imitate it. Instead of the creator growing an audience and furthering their career, they just provide mediocre fuel to something they stand no chance of competing against; their career suffers, and the audience is left with another keyword for their content generators.
Is it a net benefit to society?
Likely not. Great works need time and effort to grow into what people consider classics.
I am well aware that neural networks / machine learning / "AI" are far from new. (I will keep "AI" in quotes, since the current usage covers far from the whole AI field; it's on par with saying that cooking is only about cupcakes. But this bracketed text is not important to this discussion.)
I am likewise aware that there are plenty of useful tools using NN/ML/"AI" solutions. But that is likewise not what I am against. (I, for one, don't see much issue with NN/ML-based analytics like content recognition, voice-to-text applications, or even audio filtering, to name but a few. That said, I do consider the "progress" in some fields a bit negligent and overhyped at times, where the "AI" figured out shortcuts to satisfying but in practice incorrect solutions. That is one of the big challenges of training these sorts of systems to begin with, but yet again completely off topic.)
But regardless.
You seem to agree on the main issue:
"And I know models exist that have "(so-and-so artist) style" tags... but that isn't a sin you can blame AI on. That's improperly/unethically trained AI. We don't like hand-tracing here either."
But just like you, I don't blame AI.
My main point is that for the models that currently exist, the sourcing of training data is largely unethical and completely disrespectful to content creators at large.
And I personally won't consider such unethically made NN/ML/"AI" solutions/tools, or the products created with them, as something I want to see used in society. (Now, if it weren't for the nuanced complexity, one could trivially argue that they should all be erased, content included. But in practice that is a bit excessive. The unethically trained models, yes; the content is harder...)
In regards to making the technology better:
Yes, there are ample ways to make better content-generation systems. But that is largely uninteresting to me personally, given the current unethical practices. Yet again, I do not blame the field at large for the unethical individuals taking part in it.
(Below is just an idea for how to adequately solve the ethical issues at hand. I know it is a hot take, and many see it as unrealistic. Though most people don't actually try to understand it, and instead respond with truly stupendous statements of the kind already outlined above...)
A potential solution to the ethical problem is to strictly require all content-generating models to provide a source and training manifest, and then require that an accredited third party validate the claim. List all sources publicly so that people can search for and find infractions where applicable.
Training steps and the exact approach to the system can, however, be kept private if desired, though shared with and validated by the third party. Or they can be shared publicly, in the case of more open-source projects, which can rely on community validation of the claim and on further work on the model with new branching source/training manifests. The training manifest would be the training iterations themselves; it can skip the billions of dead ends.
There are ample ways to compress the training manifest so that it won't need petabytes of disk space but instead ends up on par with the model itself in size. RNG algorithms need a seed, and from there the run is a deterministic tree to follow, i.e., a small dataset. Each branch is technically the size of the model itself, but we don't need to store that; we just need the path, since the RNG spits out the same sequence each time we generate it with a given seed.
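The determinism claim above is easy to demonstrate. The sketch below is my own illustration, not part of any existing manifest format; it shows that a recorded seed is enough to replay a pseudo-random sequence exactly, so a manifest can log seeds instead of outputs.

```python
import random

def draw(seed, n=5):
    """Replay the first n pseudo-random numbers produced by a given seed."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# The same seed always replays the same sequence...
assert draw(42) == draw(42)
# ...while a different seed diverges immediately.
assert draw(42) != draw(43)
print(draw(42))
```

A training manifest could therefore record something like a (seed, data-batch reference) pair per iteration and stay tiny compared to the data and model it describes.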
However, more nuanced training methods need more than just a seed per iteration, since some are more specific about which portions of the network they amend; but that is just extra data to add to the manifest, ballooning it in size "a bit"... And to anyone arguing that "a company can provide bogus seeds that give the same result and still use unethically sourced data!": they could then have just used the bogus seed with only the ethically sourced data from the start, and spent far, far less effort on making their model, so it is a loophole no one would use in practice... (And yes, I am oversimplifying how complex training can be at times. But I likewise haven't set limitations on the training manifest itself, only described the process. Honestly, the manifest is likely just a big script, a log of sorts from the training program itself: a replay.)
And of course, start with "nothing up my sleeve" numbers, or random numbers from an accredited generator like random.org, preferably through a source that can back up the claim and prove that it generated those specific numbers. Or one could use another dataset one can prove ownership over. The reason is that otherwise one could just start with an unethical model and polish it a bit.
And yes, of course this wouldn't work all that well in practice if only one country/region adopted such a policy. Apparently we live in a world where one can't be the responsible adult in the room if everyone else acts like children, even though history has proven time and time again that almost everyone eagerly joins the first responsible person to take action, provided that action is reasonable given the available data.
Then there are self-learning systems that keep training even as they work. These are, to my knowledge, rather rare in the content-generation world so far, since one doesn't really want one's "AI" system fumbling away in its own imagination and becoming incapable of doing what it did yesterday.
For non-content-generation systems, this whole manifest thing is honestly not important.
Then there is the in-between field where it is debatable. Is a speech-to-text system generating content? That is a philosophical debate. (Generally speaking, when one simplifies a dataset, one can be argued not to be creating content. Speech is more complex than text, so going the other way around would be content generation. This is, however, a simplified argument lacking nuance.)
And no, I don't know what short and concise is supposed to mean.
I agree, it's the use that really is the problem, and the companies exploiting it. There's already clothing being made with poor-quality prints of AI designs slapped on it, which is truly heartbreaking to see.
If AI is to be used, it should be used as a tool FOR artists, not a replacement.
Personally, I have little issue with the technology itself.
But its use, and the way the "AI" models are made to begin with, is a bigger issue.
Now, the below is about content generation in general: visual arts, text, sound, etc.
For content generation, I don't see it as particularly respectful for people or organizations to train these models on artwork/content that hasn't been authorized for such use by its original creator (or copyright holder).
Now, a lot of generative "AI" art development stems from the content-recognition and search-filtering fields. Those fields were generally considered okay by everyone, since they more or less help people find content. But building further on that to start generating new content is a bit of a stab in the back.
A lot of people have asked me to make art for them over the years. Not that I am a particularly skilled artist, but my lack of time makes it fairly hard to churn through the requests. I can only work so fast, and I also have other things to do in life, like a job to pay my rent.
But every time I ask these people making their trivial requests why they don't go and learn the basics themselves, the answer is always: "It's too much effort." Yes, making decent content takes years of practice and ample study. Rome wasn't built in a day, not even in a year. If you haven't spent 100 hours learning, then you haven't even tried to learn. (Unless it is a very simple topic, but art isn't simple.)
But somehow the AI fans of the world deem it morally acceptable to just take an artist's portfolio, built through years of dedication to their craft, and simply train an AI to imitate their style.
So the reward for becoming a good enough artist is that you now need to compete against AI-generated content imitating your style, which takes most of the people who would otherwise have wanted to commission you. In the worst case, your career is at its end.
I have used a few popular AI models to see which artists' work has been sucked in, and that list is long... And a fair few of those artists publicly state that they don't want their art used to train AI.
In short, if a model is trained on respectfully sourced material, then I have no issue, from an ethical standpoint, with the content it provides. So far I know of no such models. (I have seen a few "supposedly ethical" models, most often a LoRA built on top of Pony. That is like a pickpocket saying, "The last dollars in the million I earned were from honest work." I don't care if the gold flakes on the cake are real if the rest of the cake is made of mud.)
Then there is the way it is used.
This is honestly a bigger issue for society at large than the ethical debate around sourcing training data.
Here the issue is generally the far more trivial spread of misinformation and the ever-thinner line between satire and defamation, two things that will keep happening even if AI were far more heavily regulated.
In the end it is a question of what society you want to live in.
Do you want to live in a world where your own creations can be taken from you and used against you?
Or do you rather want to live in a world where your personal creativity is respected and you get to build your career in peace?
Do you enjoy seeing creativity fostered and cared for, projects allowed to flourish and provide content with deeper insight?
Or might you rather just have the next meme tickling your most basic senses to fuel that dopamine addiction?
Criticism about AI’s problems, like in the post by Countryballfurry, reflects a need for informed dialogue. Enrolling in an Artificial Intelligence (AI) Strategy Course https://www.iim-edu.org/managementc.....rategy-course/ helps users unpack complex topics such as AI ethics, surveillance, bias, and automation, fostering constructive conversations instead of fear-driven narratives.
This furry artist’s perspective on AI is refreshingly honest. Tech affects every community differently. Encouraging more inclusive dialogue aligns well with the principles of the Artificial Intelligence (AI) Strategy for Executives at https://www.iim.education/executive.....ategy-seminar/ website.