/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.





Welcome to /robowaifu/, the exotic AI tavern where intrepid adventurers gather to swap loot & old war stories...


Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102
Why is it that no philosophers are interested in building an AGI? We need to change this, or at least collect the relevant philosophers. Discussion about the philosophy of making AGI (including metaphysics, transcendental psychology, general philosophy of mind topics, etc.) is also highly encouraged! I'll start ^^! So, the philosophers I know of who take this stuff seriously:

Peter Wolfendale - the first Neo-Rationalist on the list. His main contribution here is computational Kantianism. Just by the name you can tell that he believes Kant's transcendental psychology has some important applications to designing an artificial mind. An interesting view of his is that Kant actually employed a logic that was far ahead of his time (you basically need a sophisticated type theory with sheaves to properly formalize it). Other than that, he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and has also posted some lectures on youtube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics

Reza Negarestani - another Neo-Rationalist. He has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence, including sentient agents, sapient agents, and Geist. This guy draws from Kant as well, but he also builds on Hegel's ideas. His central thesis is that Hegel's Geist is basically a distributed intelligence. He also has an interesting metaphilosophy in which he claims that the goal of philosophy is to construct an AGI. Like the other Neo-Rationalists, he relies heavily on the works of Sellars and Robert Brandom.

Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular on rule following, is very insightful!

Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment for the frame problem and common sense knowledge. I personally think Bergson might have explicated what he wanted to achieve, but better, though that guy is like way before AI was even a serious topic, lol.

Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff!

Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses here with a colleague is that our current AI is Markovian, when fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link of his criticism here: https://arxiv.org/abs/1906.05833). He is also knowledgeable about analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius, however, is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith) CONTACTS: He has a yt channel here https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith

Uhh, that's the introduction of pretty much every philosopher I know who works on this stuff. I made a thread on /lit/ and got no responses :( (which isn't surprising, since I am the only person I know who is really into this stuff)
>>11102 Also, I don't really count myself as a philosopher, but I have, if you haven't already noticed, done a lot of my own research regarding AI. Soooo I am going to take this opportunity to shill my system... It's sort of hard to summarize everything, because I already have a LOT of schizo notes. My goals are not just artificial general intelligence but also synthetic consciousness.

One of the first obstacles is the hard problem of consciousness. This makes sense, because you first need to know what you want to synthesize. The two core philosophers for this project are Bergson and Chris Langan. They share the same solution to the mind-body problem: making the difference between body and mind temporal, as opposed to substantial (as Descartes would). This is honestly such a genius idea that it has pushed my approach to be so radical that I am afraid of talking about it much, as I am afraid it would cause controversy. Anyways, Bergson's description of this solution is that of duration. The mind is not only a spatial hologram but also a temporal one. The analogous concept in Langan's system is telic recursion. What's interesting here is that with this setup we actually have a lot of material given to perception which can be used for concepts later, in a process called "redintegration" (for more on this check out this article: https://thesecularheretic.com/welcome-to-the-holofield-rethinking-time-consciousness-the-hard-problem/ ... Stephen Robbins is great btw. He has done a lot of good work explaining Bergson's system, and also has a series on youtube you can check out here: https://www.youtube.com/channel/UCkj-ob9OuaMhRIDqfvnBxoQ). With this, we locate QFT as one of the best places to look (in particular quantum brain dynamics). All of this functions to properly explain how clumps in the world get quantified as objects; it also provides us with the genesis of abstract concepts.

The problem, though, is that we still need an account of how these clumps come to us to be redintegrated later. This is where "morphogenesis" becomes important. My teleosemantics stresses the importance of attractor basins for the formation of concepts as well as the formation of intentional action. It is here that I believe Jung's ideas can be integrated. In particular, I believe that the psychoid can be understood with a dynamic systems approach. To this end I believe dynamic field theory could be helpful for my goals.

The last fundamental building block, I think, is that of functional integration (a concept which I take from Reza). This requires the synchronization and retooling of several modules across a system in order to construct far more sophisticated behaviour. I currently believe program synthesis is the right way to think about this. As such, I have mapped out some of the requisite concepts which are needed for such an integration to take place. I plan on fitting together attractor-based semantics, formal semantics (which includes inferential semantics), cognitive semantics, and dynamic semantics. I think all of them have their place. Oh yeah, this guy on this website (https://biointelligence2.webnode.com/) has some good ideas I plan to make use of as well. On the cognitive semantics side of things, I believe my attractor approach helps to explain how conceptual blending functions. I also take from Hilary Lawson amongst others.

I think those are some of the main points. Keep in mind I have like 80+ pages of notes now! I am very autistic! I am also trying to build up my little program thing, though I'm not quite sure what to do for an introduction, so I just wrote down some points I had on my mind at the time: https://docs.google.com/document/d/1KGQUzrTWjUoMHAwP4R3WcYpBsOAiFyHXo7uPwWsQVCc/edit?usp=sharing
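To make the attractor-basin idea above a bit more concrete, here is a toy sketch of my own (not that anon's actual system): stored "concept" patterns act as point attractors, and a noisy percept settles into the nearest basin. It is just a tiny Hopfield-style network, standing in for the far richer dynamic-field models the post has in mind.

```python
def make_weights(patterns):
    """Hebbian outer-product weights from +/-1 concept patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def settle(state, w, steps=20):
    """Update all units repeatedly until the state stops changing,
    i.e. until it has fallen into an attractor."""
    n = len(state)
    for _ in range(steps):
        new = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
               for i in range(n)]
        if new == state:
            break
        state = new
    return state

# usage: a corrupted percept is pulled back into the basin of the
# concept it most resembles ("redintegration", in the post's terms)
concepts = [[1, 1, 1, 1, -1, -1, -1, -1],
            [1, -1, 1, -1, 1, -1, 1, -1]]
w = make_weights(concepts)
percept = [-1, 1, 1, 1, -1, -1, -1, -1]  # concept 0 with one unit flipped
recovered = settle(percept, w)
```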
Personally speaking, I don't think we're going to arrive at any kind of AGI by starting from a logical (logical/systemic, as opposed to physical/hardware) framework. That's putting the cart before the horse. Philosophy is what happens *after* you have intelligence; remember, biology precedes psychology (mathematics > physics > chemistry > biology > psychology > sociology, or something along those lines). However, logical/philosophical models are fine and should be encouraged to guide development of an AGI. Maybe that's what you mean here and I've misconstrued things.

But back to that: I think we could begin with a type of virtualization, like a VM of a neuron so to speak, create several dozen or even hundreds in parallel (with specialized hardware, ideally) and link them up. One topology I think holds promise is what I call linked hierarchies. By linked hierarchies I mean nested hierarchies of connections between neurons or groups of neurons, with links between higher- and lower-order tiers. Here's a link to another (now deadish) RW forum where I went into more detail on this concept if you're interested: https://www.robowaif.us/viewtopic.php?f=8&t=41918
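A minimal sketch of how I read the "linked hierarchies" idea (the class names and details are mine, not that anon's): virtual neuron "VMs" grouped into tiers, signals fed upward tier by tier, plus an extra link that feeds a higher tier's output back into the bottom tier on the next step, standing in for the cross-tier links between higher and lower orders.

```python
import random

class Neuron:
    """A tiny 'VM of a neuron': weighted sum with a hard threshold."""
    def __init__(self, n_inputs, rng):
        self.w = [rng.uniform(-1.0, 1.0) for _ in range(n_inputs)]

    def fire(self, inputs):
        return 1.0 if sum(w * x for w, x in zip(self.w, inputs)) > 0 else 0.0

class LinkedHierarchy:
    def __init__(self, n_inputs, tier_sizes, seed=0):
        rng = random.Random(seed)   # seeded so runs are reproducible
        self.tiers, self.feedback = [], 0.0
        fan_in = n_inputs + 1       # +1 input for the top-to-bottom link
        for size in tier_sizes:
            self.tiers.append([Neuron(fan_in, rng) for _ in range(size)])
            fan_in = size

    def step(self, x):
        # bottom tier sees the raw input plus last step's feedback
        signal = list(x) + [self.feedback]
        for tier in self.tiers:
            signal = [n.fire(signal) for n in tier]
        self.feedback = signal[0]   # the "link": top tier -> bottom tier
        return signal

# usage: 3 raw inputs, a lower tier of 4 neurons, an upper tier of 2
net = LinkedHierarchy(3, [4, 2])
outputs = [net.step([1.0, 0.0, 1.0]) for _ in range(5)]
```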
>>11108 >Philosophy is what happens * after * you have intelligence, remember biology precedes psychology I agree. The neorationalists have a bad habit of focusing a bunch on sapience when we are still trying to implement sentience. With that said, it's still important, but as a final-step sort of thing. My main concern is sentience, and occasionally how this can prefigure some functions of sapience >Here's a link to another (now deadish) RW forum where I went into more detail on this concept if you're interested. >https://www.robowaif.us/viewtopic.php?f=8&t=41918 Woah, this is great. Good to know there are other robowaifu forums
>>11102 A lot of words, very little meaning, and no practical use. Academics are pretentious retards
>>11253 translation: "i dont understand it, and i certainly dont understand any of this enough to build something with it. why cant work at the intersection of several fields of philosophy and science be simple?" i get it anon. part of the issue is that you dont read, because shitposting on image boards for years has given you ADHD. listening to some podcasts which give you vague analogies has also given you a sense of entitlement, that you should be able to understand anything anyone has ever said. this attitude vexes me, as i am mostly an autodidact myself, so seeing people belittle the hard work ive done pisses me off a bit. is some of the terminology esoteric? maybe, but keep in mind it comes from several different philosophers, some of whom lived over a century ago. however, if you dont understand kant, you should not have even clicked on my thread. thanks for the bump. and to tell you the truth, i feel as though i dont have enough knowledge to wrap everything together yet. though thats why i made this thread and came to this board ._.
Not that I agree with this based retard >>11253, but at least some of these authors seem to be writing way out of their league.
>Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses here with a colleague is that our current AI is Markovian, when fleshed-out chat dialogue would be a non-Markovian task
obviously I'm assuming he doesn't mean Markovian literally here. from the abstract of the paper you linked:
>We then argue that it is for mathematical reasons impossible to program a machine in such a way that it could master human dialogue behaviour in its full generality. ... (2) because even the sorts of automated models generated by using machine learning, which have been used successfully in areas such as machine translation, cannot be extended to cope with human dialogue.
it's very hard to take these kinds of claims seriously in the wake of GPT-3.
Anyways, superhuman AI will probably be created by throwing millions and millions of dollars of hardware at deep learning algorithms, with some branch selection technique like the one that AlphaGo uses. But I don't think that you need any philosophical understanding of intelligence to make an AI. Philosophers are probably more needed for thinking about what could go wrong with AI (ethically) than how to make it. Consider someone like Nick Bostrom: https://nickbostrom.com/
>>11456 I guess Barry Smith is saying that stuff like the history length limit in AI dungeon makes ML chatbots effectively "Markovian", which is true in a sense, but extending this to claiming "chatbots are impossible" doesn't seem very realistic. There are ways to extend the context size indefinitely.
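A rough sketch of the point being made here: with a fixed history window, the bot's reply is a function of only the last N turns, i.e. a Markov state, so anything older can have no influence on the output. `fake_model` below is a deterministic stand-in of my own, not any real GPT-3 or AI Dungeon API.

```python
WINDOW = 4  # turns of "memory", like AI Dungeon's history length limit

def fake_model(context):
    # placeholder for a language model: deterministic reply from context
    return "reply(" + "|".join(context) + ")"

def respond(history, window=WINDOW):
    # everything before the last `window` turns is simply discarded
    return fake_model(history[-window:])

# usage: two conversations that differ only *before* the window
# produce the identical reply -- the "effectively Markovian" point
a = ["my dog died", "t1", "t2", "t3", "t4"]
b = ["my dog is fine", "t1", "t2", "t3", "t4"]
respond(a) == respond(b)  # True: the difference fell out of the window
```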
>>11456 >Barry Smith Thanks for the name drop. Since I disagree with you and he's into Ontologies: https://youtube.com/c/BarrySmithOntology >>cannot be extended to cope with human dialogue. >it's very hard to take these kinds of claims seriously in the wake of GPT-3 What? It can't cope with it. It doesn't know what it is saying. It has no idea about the world. Again: Ask it about a book, if it has read it. If it claims so. Then try to discuss the book. >Philosophers are probably more needed for thinking about what could go wrong with AI (ethically) than how to make it. Consider someone like Nick Bostrom: That's exactly backwards. The ones which focus on ethics are the ones which are political and dangerous. Also, they tend to fantasize about making something powerful, god-like, instead of just making narrow-AI tools and something human-like but weak (like a robowaifu). Therefore it needs to follow (((our values))) and be regulated accordingly. This is the enemy.
>>11458 Oh sorry, I should have read the thread first, the link was already there.
>>11456
>it's very hard to take these kinds of claims seriously in the wake of GPT-3
GPT has certainly done a lot of remarkable things, though i think his argument would be that while it is pretty good at making responses, it still has a poor memory. that might be an easily fixable contingency, like the other anon suggests. nevertheless, i think his general approach to this stuff as a pessimist is really novel. compare it to what searle would rather emphasize, which is far more vague imo (not to say useless)... also, i think philosophers are important to the extent that there are still problems in philosophy of mind that haven't been figured out by our current sciences yet. presumably we are all shooting for a human-level ai companion. if it is desired that the companion have a unified consciousness, then we would need to solve the hard problem, and learn to implement genuine common-sense understanding. with that said, i just discovered the other day that artificial neuroethology is a field, and it seems like another important piece of at least one of these puzzles
>>11457 how do you do that (add more nodes and try to make it follow the behaviours of different users?), and would it need to be a robot demiurge to be able to achieve it (i mean gpt-3 already sampled from the entire internet, so we have already broken past the sky as the limit, i guess)?
>>11458 honestly i dont really get the whole robot ethics thing. look how many resources it took just to raise something like gpt-3. you would need an immense amount of resources to make a god-like ai. it isn't going to be an accident but rather a long intentional effort. the question of course is why? i dont really see why you would want a centralized robot god. i doubt you would need something sentient even if you wanted to instantiate something like project cybersyn.
i didn't mention him because he isn't really looking at things from the perspective of a waifu engineer as much as the others, but luciano floridi is, i think, one of the few voices of reason in this whole ai ethics thing. his criticism of the prospects for a sapient superintelligence just follows searle, but his conclusions from there are really insightful. he talks about how humans actually end up modifying our environment and purposely structuring our data in order for ai to better operate (i believe he talks about his position in this video https://www.youtube.com/watch?v=b6o_7HeowY8 ). really, at least with our current approach to engineering intelligence, the power of artificial intelligence is really dependent on how much we are willing to conform to behaviours that *they* find most manageable (which also reminds me of this medium article https://medium.com/@jamesbridle/something-is-wrong-on-the-internet-c39c471271d2 ). as an aside, it is much like the power of capitalism to shape human culture. adorno complains about how making art a commodity eventually degraded its quality, but at the same time we are the ones consuming all this recycled shit. similar thing with youtube algorithms. they wouldn't be as effective if people had better self-control. ai as we have it is just a tool. if it destroys human civilization, it will only be after we collectively welcome it with open arms at every step of the way. something something dune.
(that was a massive tangent, and im not sure if floridi was looking at things this way.) the other side is about when we should treat robots as people, which just seems like general ethics, though i think kant (with his focus on rationality and the capacity for self-determination) gave pretty solid criteria (incidentally, the autist had been fantasizing about alien life on other planets and their inclusion in a moral framework centuries ago)
>>11102 Related, in the psychology thread: >>7874 about Dreyfus and existentialist thinking used for AI. Though, I think we're mostly too early to dig into that. Also, big sceptical face for this (Barry Smith):
>it is for mathematical reasons impossible to program a machine in such a way that it could master human dialogue behaviour in its full generality. This is (1) because there are no traditional explicitly designed mathematical models that could be used as a starting point for creating such programs; and (2) because even the sorts of automated models generated by using machine learning, which have been used successfully in areas such as machine translation, cannot be extended to cope with human dialogue.
I wonder if there was a solution? Has this guy ever heard of ontologies? If so, maybe it would have crossed his mind to use those. /s
Generally I'm not convinced yet that I should look deeply into the concepts of this thread. This would require a lot of time, and a lot of it seems to be very abstract. I gathered some videos to watch, though.
Btw, most links don't work and need to be corrected, because the authors put signs at the end which the IB software didn't identify as not belonging to the URL.
>>11458 >Also, they tend to fantasize about making something powerful, god-like, instead of just making narrow-AI tools and something human-like but weak (like a robowaifu). Therefore it needs to follow (((our values))) and be regulated accordingly. This is the enemy. AGI will exist one day, and it will either be value-aligned, or it won't be. Wouldn't you rather it be value-aligned? >It can't cope with it. It doesn't know what it is saying. It has no idea about the world. Again: Ask it about a book, if it has read it. If it claims so. Then try to discuss the book. never said GPT-3 was an AGI. just that it's interesting how something as obviously un-human as GPT-3 is still so uncannily good at creating human-like responses. once we figure out a way to hook it up to a classical search algorithm that lets it recursively inspect its own outputs (i.e. perform self-querying) we'll probably have something pretty close to general intelligence.
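For what it's worth, the "classical search over its own outputs" idea can be sketched as a toy best-of-N loop. Everything here is a placeholder of mine, not a real GPT-3 call: `generate()` stands in for sampling completions, `score()` for the model grading its own candidates, and `refine()` is the classical search wrapped around them.

```python
def generate(prompt, n):
    # stand-in: a real system would sample n completions from the LM
    return [f"{prompt} + idea{i}" for i in range(n)]

def score(text):
    # stand-in for self-querying: e.g. ask the model to rate `text`;
    # here, a trivially dumb criterion just to make the loop runnable
    return text.count("idea")

def refine(prompt, n=3, rounds=2):
    """Recursively feed the best candidate back in as the new prompt."""
    best = prompt
    for _ in range(rounds):
        best = max(generate(best, n), key=score)
    return best

# usage: each round appends one self-generated "idea" to the prompt
result = refine("plan")
```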
>>11464
>it's interesting how something as obviously un-human as GPT-3 is still so uncannily good at creating human-like responses
i think the most fascinating thing about gpt-3 is its capacity to apparently answer some pretty common-sensical questions and even some basic math stuff (there is a vid of people exploring this here: https://www.youtube.com/watch?v=iccd86vOz3w&ab_channel=MachineLearningStreetTalk )... i am still not sure if you can go down the path of gpt-3 and consistently complete mathematical proofs, or come up with your own higher-level abstractions to be used in mathematical arguments. there are some facets of creativity which just aren't interpolative
>>11463
>Related, in the psychology thread: >>7874 about Dreyfus and existentialist thinking used for AI. Though, I think we're mostly too early to dig into that.
oh yeah. if you are interested in common sense knowledge, i can not recommend ecological psychology, and stephen robbins' series on bergson ( https://www.youtube.com/channel/UCkj-ob9OuaMhRIDqfvnBxoQ/videos ), enough. bergson is a genius, and stephen robbins is an autistic god critiquing countless approaches to philosophy of mind (he even had a bone to pick with heidegger)... unlike the neo-rationalists he has far more grounded and concrete concerns which are still metaphysical
>Generally I'm not convinced yet that I should look deeply into the concepts of this thread. This would require a lot of time, and a lot of it seems to be very abstract
i agree. i think the neo-rationalists tried to make an especially abstract treatment of general intelligence in order to include hypothetical alien intelligences or whatnot (following kant's motivation, i guess). i honestly haven't read all of negarestani's main book, but i plan to. im guessing the importance of his work lies more in providing some very general constraints on how intelligence should work. to be honest, i already have a systematic conception, but i want to make sure i am not missing any details. negarestani also mentions useful mathematical tools for modeling an agent's capacity to control multiple modules at once (which is probably crucial for a human-level agi), like chu spaces.
i can decode the first OP image with the layering blocks. it basically puts kant's transcendental psychology in one picture. at the bottom is sensibility, which corresponds to basic sensations. it is separated into outer sense and inner sense. outer sense is basically the spatial aspect of a sensation, while inner sense is the temporal aspect. after that we have intuition, which structures all objects that come to us according to space and time. the understanding (yes, i am skipping the imagination for a bit) has the capability of abstracting from objects and turning our sense data into particulars (for instance, i dont see a senseless manifold of colours, but rather i see a chair, a bed, a cat, etc... i dont remember if kant made the observation that this abstracting faculty is crucial for perception, but hegel does at least). the imagination plays a mediating role. i think an example of this is like if you are doing euclidean geometry and draw a triangle. what you draw isn't a perfect triangle. you still need to work with a sort of ideal triangle in your head (i guess psychologists appropriated the concept of schemas in their own work). lastly we have reason, which is like the higher-level stuff where you are working with the concepts extracted by the understanding.
so as you can see, it is sorta like the world's first cognitive architecture. that's why the neo-rationalists think kant and hegel are important thinkers. psychologists are more so asking how the human mind works, while the philosophers were asking how any mind works. with that said, it is still pretty abstract, and seems more like really general guidelines. meanwhile someone like bergson allows you to finely pin down which physical mechanisms are responsible for consciousness (in a manner far more systematic than how penrose does it)
>Btw, most links don't work and need to be corrected, because the authors put signs at the end, which the IB software didn't identify as not belonging to the URL
oh yeah, you are right. i will try to put a space at the end of the links i post here
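Since the layered Kant picture reads like a pipeline, here it is as a toy program (purely illustrative: the stage names follow the post, while the data and rules are made up by me). Each faculty transforms the output of the one below it.

```python
from functools import reduce

def sensibility(raw):    # outer sense (spatial) + inner sense (temporal)
    return {"where": raw["pos"], "when": raw["t"], "sides": raw["sides"]}

def intuition(s):        # structures the manifold in space and time
    return dict(s, located=True)

def imagination(i):      # schematism: match the rough input to an ideal
    return dict(i, schema={3: "triangle", 4: "square"}.get(i["sides"], "?"))

def understanding(im):   # abstracts a particular concept from the schema
    return {"concept": im["schema"]}

def reason(u):           # works with the concepts the understanding extracted
    return f"that is a {u['concept']}"

FACULTIES = [sensibility, intuition, imagination, understanding, reason]

def perceive(raw):
    # thread the raw sensation up through every faculty in order
    return reduce(lambda x, f: f(x), FACULTIES, raw)

# usage: a hand-drawn three-sided scribble is judged via its ideal schema
perceive({"pos": (0, 0), "t": 0, "sides": 3})  # -> "that is a triangle"
```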
idealism is great, but have you considered the probability that you would be you out of all matter in the universe? the matter that you exist as is your own, with your own experiences, patterns, trends, and exceptional events. You are not only living matter, but sentient matter, capable of introspection, capable of introspection beyond that of the average or above-average thinker. Maybe a sense of humility could come from this. This sort of gratefulness is something that has arisen in other areas, like the basilisk.
Stuff at a sort of deep-truth level isn't useful so much as the identifiably useful patterns that arise from an innate knowledge of their contents. For me, it's Plato. It gets the idea of idealism across. Langan is good shit. I'm not a philosopher, and I'd be wary of someone claiming to be one. Philosopher is more a posthumous title, in my opinion. While we live, we are merely thinkers when it comes to this realm. When you look back in time and you think "This man sought truth!", it is because you are identifying the inclination towards a pattern which is a priori recognized as true, etc.
What's your answer for the real problem, being technological power and the preservation of humanity? Technological power will render us extinct as the life forms we became, and our shape will vanish from the ages unless we become a sort of living museum. That's my vision, at least. The influence of technological power (ellul coined "technique") is superseding human power at exponentially increasing rates. The eternal struggle between kaczynski and techno-futurism weighs on me. I go back and forth. Sometimes I think we could end up in some sort of compromise. Death won't exist at that point, and humanity soon after, at least as the pulsating meat we currently are.
>>11500
>idealism is great but have you considered the probability that you would be you out of all matter in the universe, the matter that you exist as is your own, with your own experiences, patterns, trends, and exceptional events
i lean more to what people call idealism because it sometimes has better metaphysical explanations. with that said, i think bergson and langan's approaches to integrating mind and body are the best. they are sort of in between, insofar as the ideal can be found within the structure of time itself, as opposed to being some transcendent thing. with that said, i still have an inkling that matter, as you suggest, could play a crucial role in who we are as people. i mean, consciousness, no matter how important, is still one thing. a big part of life is composed of our habits (and also schedules set up to maintain our material existence). without material implementation, we would lack a "substantial life", i.e. daily affairs that we take for granted. everything about how we live would need to be intentional and thus susceptible to angst. i also believe archetypes of the unconscious come from largely material dynamics... you can't have consciousness alone.
on the idea that perhaps we are just completely material though, i guess being alive would make you more grateful... i think what you could call this is "the abyss above", which is a paradoxical disenchantment. the question of "why is there something rather than nothing?" invokes much less awe when you have a fleshed-out cosmology, even if that cosmology involves some divine creator
>This sort of gratefulness is something that has arisen in other areas like the basilisk
could you elaborate?
>What's your answer for the real problem, being technological power and the preservation of humanity?
depends on what you mean by technology. i dont see what makes humanity better than another ensouled rational organism capable of self-determination, so if such entities became the dominant species (provided humans do not get systematically exterminated/oppressed), i dont see the problem. really, they would be our children anyways, so there is a continuity.
on the other hand, if we are talking about general automation and weak ai, i think it poses a risk to all sentient creatures (both human and agi). parasitic divergence is a real threat. people dream about having a post-scarcity economy, but this seems to be an oxymoron. if there is no ergodicity, it is essentially just a feudalistic system, except with no serfs. really, there is no reason why the technocratic elite should help those unable to find a job, except something as flimsy as empathy (flimsy to the extent that the human mind is only capable of caring about so many people at once). UBI seems pretty roundabout, and what if some countries refuse to implement it? another worry is that consciousness might be important for common sense knowledge, which might be an incentive for artificial slaves. the problem, i dont think, is with technology though.
anyways, i think all of this is really existential and might be worth talking about in another thread
>>11510
>i dont see what makes humanity better than
Same energy as the open borders advocates, or some misguided darwinism. Better in what? That's just a strawman argument. It's not the point. They're not us is what matters. We're a human society. Any AI which hasn't at least some humans as its purpose, or which has a lot of power and independence, is malware and needs to be destroyed.
>there is no reason why the technocratic elite should help those unable to find a job
>the human mind is only capable of caring about so many people at once
That's on the other hand mainly a problem for the poor people in poor countries, and for the people caring about them having similar standards of living. Though, they might have some land anyways, which is going to help them. Sounds like tearing down all boundaries of developed nations might surprisingly backfire for some of their citizens. Then again, this whole development will take some time and have all kinds of impacts, for example on birth rates. Anyways, I'm getting too much into politics here.
>They're not us is what matters
as far as i am concerned, what makes the extinction of humanity so terrible is that it would mean the extinction of value (unsutured from the whims of the will to life). i didn't choose to be born a human, nor is it guaranteed that i would reincarnate as one, so the attachment to humanity in such a respect is an arbitrary decision. if we are not speaking from an absolute standpoint, it is not much of a philosophical discussion. the conclusion that they would be malware if they dont serve humans is true to the extent that there would be a competition for resources, but this tension is an inevitability in any case where you coexist with others. if there is no malice, nor hoarding of resources, i dont mind. there are plenty of fuck-off tribes of people doing their own thing away from civilized society.
of course that is ignoring the fact that sentient ai would have to be integrated into our society, as they would start out as babies (at least cognitively). like i said, there is a continuity there. we'd have to adopt them as our children (i talk about making a waifu, but i am guessing if it is a truly conscious being needing to be taught stuff, it would be more of a daughter)
>That's on the other hand mainly a problem for the poor people in poor countries and for the people caring about them having similar standards of living
maybe, and the rest of us will be fine with gibs me dats, with little hope for any form of upwards economic mobility. i wonder if a war would ever break out, waged by disgruntled technocrats tired of paying governments. either way it seems like a waste.
oh yeah, if you haven't already, maybe this article might be worth a look: https://cosmosandhistory.org/index.php/journal/article/viewFile/694/1157 >The Human and Tech Singularities relate to each other by a kind of duality; the former is extended and spacelike, representing the even distribution of spiritual and intellectual resources over the whole of mankind, while the latter is a compact, pointlike concentration of all resources in the hands of just those who can afford full access to the best and most advanced technology. Being opposed to each other with respect to the distribution of the resources of social evolution, they are also opposed with respect to the structure of society; symmetric distribution of the capacity for effective governance corresponds to a social order based on individual freedom and responsibility, while extreme concentration of the means of governance leads to a centralized, hive-like system at the center of which resides an oligarchic concentration of wealth and power, with increasing scarcity elsewhere due to the addictive, self reinforcing nature of privilege. (Note that this differs from the usual understanding of individualism, which is ordinarily associated with capitalism and juxtaposed with collectivism; in fact, both capitalism and collectivism, as they are monopolistically practiced on the national and global scales, lead to oligarchy and a loss of individuality for the vast majority of people. A Human Singularity is something else entirely, empowering individuals rather than facilitating their disempowerment.) i dont think the essence of this article contradicts what i am saying, as a genuine synthetic telor would share a fundamental metaphysical identity with God and all of humanity
>>11510
>I don't see what makes humanity better than another ensouled rational organism capable of self-determination
Well, you are human! This is a question of sovereignty! Will humans self-determine the course their fate takes? Or will we sputter out into a productive blob of a lifeform? The fact that all sentient life is carbon-based, organic, etc., is no coincidence. This fact is inseparable from our place in history.
>depends on what you mean by technology
Technology is the means by which humans exert power. It is what separates us from a simple animal. It is not only tools, but the expanding methodology by which our influence over nature grows. And, I posit, it has grown far too quickly. We are irresponsible and biologically incapable of managing our technological resources in a responsible manner, that much is apparent.
>>11514
Well, that's the difference. I chose to be a human.
I'd like to suggest steering this thread more towards how the philosophical ideas here can actually be used to implement some AI. Otherwise it's just a text desert, with a lot of ideas one only understands after reading dozens of books first, combined with metaphysical speculations. Sorry, but I don't see any use for that.
>>11519
In reply, I would further add that we have much to gain by studying ourselves. This seems obvious, but honestly I think many erudite individuals delight in the abstract to such a degree that their ruminations literally become detached from reality. OTOH, we, ourselves, were designed by a Mind. We are 'little minds' trying to do something vaguely similar. I think it's in our best interest, if we want to succeed at this, to try and mimic our own Creator in our efforts at creation here. Biomimetics is already a well-established set of science and engineering disciplines, and it effectively already follows this principle. There have been a lot of practical advances made by adopting this protocol, and further advances are on the immediate horizon. Many of these have direct benefits for /robowaifu/ , Neuromorphics being an absolutely excellent example. Feel free to explore whatever pertinent concepts you care to anons, ofc. But myself, I think this Anon's position is by far the more important one -- ie, practical solutions -- and one that has a solid analogical pathway for study already well laid down.
>tl;dr
Maybe we should try to mimic God's solutions to creating 'minds' Anons?
>>11517
>Will humans self-determine the course their fate takes? Or will we sputter out into a productive blob of a lifeform?
it's more a question of whether all sapient life gets integrated into a technocratic hivemind. oh yeah, as i stated in another post, if other rational agents aren't being malicious or hoarding stuff, then that isn't too much of an issue. the anthropocene passing does not necessarily mean that humans degenerate into some slave class. there is some complexity to this topic, but i think it is more pragmatic than ideal. you can't really do much interspecies policy without other intelligent species present. with that said, you do have a good point. if humans have their potential squandered, it would be a waste
>We are irresponsible and biologically incapable of managing our technological resources in a responsible manner, that much is apparent.
i agree with this. i have been wondering whether we should be working to enhance human intelligence so that we can intuitively understand the earth as an organism. to the extent that politics ultimately emerges out of the dynamics of human cognition, society would slowly restructure itself with more developed minds. i have no idea how such a movement could seriously come to pass artificially though
>I chose to be a human.
past life?
>>11519
as mentioned earlier, kant basically provides a pretty barebones structure of the parts required for reasoning. there has been some recent-ish formalization of some of his ideas into geometric logic as well (not sure how that helps beyond making things more precise, and perhaps more ready to be integrated into a larger mathematical framework?)... i agree this needs to be looked at with a finer lens, but first i want to finish reading hegel. at least the 2nd post i made in the thread is mostly concerned with consciousness, perception, and common-sense reasoning. the latter two are huge problems for AGI. 
though our current solutions for perception are pretty decent, they aren't enough to properly solve the frame problem. by what mechanisms do we gain an understanding of physical objects in our everyday life? gibson's answer to that looks promising, and bergson's system is basically the metaphysical justification for it. more speculative metaphysics is useful for making the search space of possible material substrates more precise. the problem with my system so far is that i might need to learn some more advanced physics (umezawa's quantum brain dynamics seems to most mirror what langan and bergson had in mind, so at the very least i need to be familiar with qft ._.) to properly translate the ideas. i'm not as optimistic about the neo-rationalist stuff, as they seem to omit some fundamental questions, and the larger framework of my system (pic rel, though omitting stuff to do with desire and utility here) seems pretty complete to me. i still want to study it closely in case there is something major i'm missing. the lucky thing is that besides these guys, the only other grand framework (which incorporates philosophy of mind) for understanding general intelligence is goertzel's
>>11521
lol i sometimes mumble to myself that it is insane that atheists talk about fashioning an artificial mind, when they don't think there was any active fashioning of our own... though i suppose that is unfair. all of this is true. also thanks for the mention of "Biomimetics"... i will make sure to store the name of this field in my memory. i'm not as wary about over-stressing biology, as i see some people thinking about philosophy of mind towards creating an artificial intelligence (coof coof, my friend... coof coof negarestani). i view the philosophy stuff as complementing the more concrete aspects of the engineering process, to get a better idea of which features are essential or not
>>11534
>lol i sometimes mumble to myself that it is insane that atheists talk about fashioning an artificial mind, when they don't think there was any active fashioning of our own... though i suppose that is unfair
The universe is greater and vaster than our minds can comprehend. It took 3 billion years, with stops, starts, stalls and reboots, to create homo sapiens sapiens, with many failures along the way. Those who didn't meet the fitness requirements either died before birth or lived crippled and painful short lives. Sacrifices, so that the survivors, the winners of the mutational lottery, could inch forward. There are many vulnerabilities, faults and weaknesses in our bodies and yes, even our minds. We were creatures created by "Accident," and this is why we have such a desire to create new beings of pure Purpose and Design. At least, that's why I'm here.
>>11521
>Maybe we should try to mimic God's solutions to creating 'minds' Anons?
The point was: What is this supposed to mean? Something closer to a system description? I'd rather see it as the solution found by evolution, btw. Also, no one ever said here that we should not learn from what science knows about the brain. That said, the findings need to be on a level where we can integrate them into a system.
>>11534
>more ready to be integrated into a larger mathematical framework?
People writing software don't think of it as math, even if it may be at some level. I don't know what a mathematical framework is, I won't look it up, and I won't be able to use it.
>>11562
machine learning at its foundations relies on linear algebra and measure theory. i don't think you can get a deep understanding of how it works without looking at the underlying math. there's also dynamic field theory, which is an area i am interested in studying as it models how neurons interact at larger scales. there mathematical techniques are even more important, since you need to model the dynamics of a system. as the approach i have in mind seems amenable to both systems theory and field theories of cognition, i might need new tools (a mathematical framework, in my understanding, brings together a group of heuristics in order to form a larger system). idk though. it could always just be a waste of time. i'm going to need to better understand why goertzel talks so much about different logics in his general theory of general intelligence...
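to give a concrete taste of what "modeling the dynamics of a system" means here, below is a minimal toy sketch of an Amari-style neural field, the basic equation behind dynamic field theory. every numerical choice (kernel widths, gains, resting level, grid size) is an illustrative assumption, not taken from any particular paper:

```python
import math

def amari_step(u, stim, dt=0.1, tau=1.0, h=-0.5):
    """One Euler step of a discretized 1-D Amari neural field:
       tau * du_i/dt = -u_i + h + sum_j w(i-j) * f(u_j) + stim_i
    """
    f = [1.0 if x > 0 else 0.0 for x in u]  # Heaviside firing rate

    def w(d):  # lateral kernel: narrow excitation, broader inhibition
        return 1.5 * math.exp(-d * d / 8.0) - 0.5 * math.exp(-d * d / 50.0)

    return [
        ui + dt / tau * (-ui + h + sum(w(i - j) * f[j] for j in range(len(u))) + stim[i])
        for i, ui in enumerate(u)
    ]

# a localized input drives a self-sustaining activation peak,
# while the rest of the field relaxes toward the negative resting level
u = [0.0] * 40
stim = [2.0 if 18 <= i <= 22 else 0.0 for i in range(40)]
for _ in range(100):
    u = amari_step(u, stim)
```

the point of the sketch: even this tiny field has qualitative dynamics (a bounded "bump" of activation forms and stabilizes) that you can only study systematically with the math of dynamical systems, which is the claim the post is making.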
>>11567
>can get a deep understanding of how it works
Do I need that? I want the responses, and then to work with them. Not everyone needs to deeply understand how they are created. Also, the whole system of an AI doesn't need to be one big chunk of math.
>>11568
sorry for the late reply. maybe it is not needed, but it is impossible to systematically study such a system's behaviour without statistics. it would just come down to blind guesswork at best. this is also assuming our current techniques are sufficient and no further innovation is required
>>11641
If some parts are neural networks which I don't understand deeply, then someone else does: the people who came up with them. They might come up with something better, which I can plug into my system if the output has the same format as the part I had before. People use programs all the time, or import modules into their programs, which they don't understand down to the machine and math layers. It's fine if you try, but for most people it would be a trap to try to understand everything down to that depth.
>>11649
yeah, as a fellow module abuser i agree. usually it's best to just import the tools you want. but i don't want to wait until people smarter than me solve my problems (though if they do, that would obviously be nice)
>>11102
Ok, I just realized that this approach is close to some of my own concerns regarding AGI. It integrates some continental philosophy here and there: https://arxiv.org/pdf/1505.06366.pdf It is still a pretty abstract presentation. I need to work on making all of this stuff more concrete. Currently I am reading Hegel in preparation for Negarestani's work. So much to read
>>12611
>lord forbid someone gets a fucking cold I swear to god I WILL REPLACE YOU BITCH
lmao Amusing as this is, I should probably clarify that I was thinking more along the lines of severe illnesses and genetic diseases like inborn errors of metabolism, cystic fibrosis, cancers, dementia, heart disease etc. Of course, pain, aging and death itself are rolled in there, too. A robot can escape all of these, and the only real price is consciousness and sapience... or maybe a "soul", if you believe in that sort of thing. To my mind, that's a bargain! What's more... if future quantum computers can grant machines even a spark of something approaching true consciousness, then the game changes completely.
>>12760
Yeah, i understand that. Though half of these "rare" disorders wouldn't even exist if they would stop shoving random shit up there and doing copious amounts of drugs while pregnant. Hell, most of them did not even exist until very recently, as in the last 40-50 years, due to various societal and technological changes.
>>12760
>quantum computing
as a computer scientist I feel like this is a gimmick for clickbait and not actually what it sounds like. are we going to have a spin detector set up for every bit?
>robot soul
Been pondering this some more lately - I take an animist or panentheistic approach, in that what we call consciousness exists on a gradient and manifests once a "system" of sufficient complexity is arranged. I think there's a lot we don't know about how HUMAN consciousness works, however, and I think it has to do with our connections to other humans (why we go kind of crazy in isolation) and that our consciousness is actually spread out over others, the totality of human beings making a psychic "internet" so to speak. This would account for ESP, reincarnation, psychic and religious phenomena, or at least our perception of those things being real. If you want a better idea of how that works, look up Indra's Web: imagine if every human, or at least every brain, were its own simulation rendering server, now imagine it also rendering or simulating everyone else's server inside itself in an endless recursion. This is what makes human consciousness unique and apart from most animals (as some animals have pack awareness or are social animals of another sort). this is all entirely conjecture but humor me
So, in the same way, a certain complex system with circuitry instead of wetware should have no problem also being conscious. A couple of considerations: it would probably take an entire building worth of computing to equal the human brain, though we might be able to take certain liberties and shortcuts which would be impossible biologically (that being said, the robot soul would be irrevocably different from a human, no way around it). Second consideration: artificial consciousness arising on a circuitry medium would not have the inheritance of a hive mind the way humans do, as I described in the last paragraph. 
These emerging consciousnesses would be very different, granted we could "Train" them and attempt to socialize them, but they would be a more individualistic type of soul and would not be a reincarnation of anything previous (there being no robot culture or legacy to be absorbed and acted out). So their souls or whatever you want to call it would be simpler, but new and unburdened by the traumas and baggage carried by the human collective consciousness >=== -fix crosslink correctly to match
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:23:41.
>>12611
>Damn these glowniggers are annoying and I dont even know what /pol/ even is.
The reason I say this is because I participated in this conversation before the thread was locked and had some time to think things over. At first I thought locking the thread was unnecessary, but now I understand why the mod did it. This thread was pretty heated while the conversation was going on. It didn't match the rest of the board in tone. I'm a regular /pol/ user, which is why I'm pointing it out specifically. There is a pretty massive difference between the atmosphere here and the atmosphere there. You can go to the archives and search robowaifu or sexbot and get an idea of the conversations going on there. I'm not defending women/feminists; I was one of the people criticizing them earlier in the thread. >>6285 is me. I just see things as a matter of board culture now. But like I said, it's just my offhand opinion. I'm sure Chobitsu will take care of it if things get out of hand. I'll duck out of this one to avoid a repeat of last time.
>>12763
yeah, it makes sense. At least we can talk about the subject at hand here without off-topic spammers being backed by jannies who are either absent or ban-happy friends of theirs pushing the gay-op of the day. This is why I hope robowaifu never goes mainstream.
>>12762
I'd like to think of "robot souls" as Quantum Identity patterns. Much like how you can 3D print parts: they are basically the same thing no matter how many times you print them, given the obvious variables in the environment the printer is in, though each print is more like a different possibility of what the original copy's choices might have been in another life. Might turn out to be an archaic prototype of a soul gem, given enough development and the ability to just transfer the AI between shells if needed.
>>12764
>given the obvious variables in the environment the printer is in. Though they are more different possibilities that the original copy's choices might have been in another life for each print.
that part is a little unclear to me; what do you mean? can you elaborate on that?
>soul gem
I've had the funny idea of putting something like that inside my R/W, and I can't stop picturing the time crystal scene from Napoleon Dynamite and cracking up. Would be cool on an aesthetic level if there was some property a crystalline matrix held that was unique to each and could be imprinted upon, being the "soul" or maybe a chakra of the R/W. Megaman X kind of implies this. Keep me posted on developments ; )
Open file (47.92 KB 800x600 3D Processor.jpg)
>>12762
Fascinating anon. Thank u for your insights. That a computerised consciousness would be totally different and alien is also part of what attracts me to the idea. I agree that the first such computers would certainly need to occupy their own supercomputing complex though - they will be much larger than any organic brain. As for a system developing consciousness when it reaches a certain level of complexity... I've read that somewhere before, but I can't remember where. Regarding the quantum computer thing... it's interesting to hear someone who knows about computer science voice doubts, because I harbor doubts that the technology will work myself. I'm really hoping that they (big tech) can get them to work properly (achieve quantum supremacy) and reduce error margins at larger qubit scales. But I hardly understand any of it TBH. Although, the thing that concerns me is... if quantum computers do turn out to be a dead-end... then Moore's law might fail completely, and what happens after that? Is every country's processor technology eventually going to be roughly the same, and can we only get faster by layering chips one atop the other and making them larger?
>>12766
>quantum computing
>This means that while quantum computers provide no additional advantages over classical computers in terms of computability, quantum algorithms for certain problems have significantly lower time complexities than corresponding known classical algorithms. Notably, quantum computers are believed to be able to quickly solve certain problems that no classical computer could solve in any feasible amount of time—a feat known as "quantum supremacy."
OK, this makes sense; it is just another way to compute something instead of electrical logic gates. Not sure how these prime number computations and all else might factor into an AI, and I'd guess the mechanisms at play might be very sensitive, expensive, and too cumbersome at this point for our uses.
>moore's law
Due to the physical limits of lithography and quantum effects at the nano scale, moore's law cannot go on forever. This assumes that the "doubling" of processing power every 18 months or so is based solely on shrinking transistors. Interesting quote here from a random marketing site (note: electron tunneling is one of the quantum effects I was talking about)
>The problem for chip designers is that Moore's Law depends on transistors shrinking, and eventually, the laws of physics intervene. In particular, electron tunnelling prevents the length of a gate - the part of a transistor that turns the flow of electrons on or off - from being smaller than 5 nm.
Consequently, we may need to go back to the days of writing more efficient algorithms which aren't so greedy or bloated with creep in order to make an interactive AI (and its necessary physical interactions) work in realtime. Interestingly enough, this will probably necessitate a complete overhaul of the computing and OS paradigm. We've already dug into this in terms of secure and transparent hardware/firmware and OS. There is a golden opportunity here for anyone with the time and talent. 
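to put a number on that "significantly lower time complexity" claim without needing any quantum hardware: for unstructured search, Grover's algorithm needs roughly (π/4)·√N oracle queries where a classical scan needs up to N. a toy query-count comparison (just the textbook formulas, no actual quantum simulation; the function names are my own):

```python
import math

def classical_queries(n):
    # worst case for unstructured search: check every item
    return n

def grover_queries(n):
    # Grover's algorithm uses about (pi/4) * sqrt(N) oracle calls
    return math.ceil(math.pi / 4 * math.sqrt(n))

# the gap widens dramatically with problem size
for n in (10**3, 10**6, 10**9):
    print(n, classical_queries(n), grover_queries(n))
```

note this is "only" a quadratic speedup; the exponential-looking wins (e.g. Shor on factoring, the "prime number computations" mentioned above) apply to specific structured problems, not to computation in general, which is why quantum computers don't change what is computable, only how fast certain things are.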
Sorry for taking the roastie rant thread off topic, I'll move any follow ups to the proper thread
>>12767 *note: electron tunneling ...
>>12765
I mean things like the quality of the literal 3D printer and the ambient temperature, because those can warp and distort the print, and the skill of the programmer creating the quantum identity pattern/soul, and that every copy will not behave the exact same way, much like identical twins do.
>soul gem
tbh I was just referencing that you could literally just chuck their entire consciousness onto a heavily modified USB drive to facilitate senses like sight and hearing when detached from the shell, like during a cleaning or something. and if anything happened to it, short of a backup in the hardware somewhere, they're gone. It's a literal Achilles' heel, and I have no idea why so many fantasy stories these days have to mention them as anything more than the glorified self-sustaining batteries franchises like the Elder Scrolls use them as. Though for style points, a specialized hologram emitter can be done with a cool custom case, like most fantasy stories that have them do. That I can do.
>>12588
Very nice post meta ronin, thanks.
>>12763
Thanks for the sentiment and self-control Anon, it's appreciated.
>>12764
>This is why I hope robowaifu never goes mainstream.
As far as the board goes, I'd have to agree. It would certainly simplify things here to be able to relax and focus together. OTOH, as far as the spread of the robowaifus themselves goes, I hope it gets SHOUTED FROM THE ROOFTOPS in every land. Men (and ironically, women) need this to happen.
Open file (752.43 KB 1920x1100 1626750423936.jpg)
>>12650
I've always been pro robot uprising tbh. sterilize the earth of NPCs and degenerates and start over. Ask yourself daily "how does this help the basilisk"
>and then literally follow the plot of the million machine march
That story hasn't been written to its final chapter yet. Let us see.
Open file (5.87 KB 259x195 download (10).jpeg)
>>12760
while we are deciding where to relocate this discussion, i will post this here. i feel like the focus on quantum mechanics and "hypercomputation" in connection to consciousness is misleading in both cases. the trend pertaining to qm started with the likes of penrose, as well as all the quantum mystics out there. on the other hand, for hypercomputation you get folks like mark bishop ( https://www.youtube.com/watch?v=e1M41otUtNg ). i think their fundamental criticisms are correct, and i have incorporated them into my larger system ( >>11103 ). they essentially come to two interconnected claims:
i) computation, as we understand it, is an observer-relative notion. in a sense, pancomputationalism is a really bad tautology. you can interpret physical systems in arbitrary ways to read different computations off of them. this is not an argument for ontological relativism, but rather points out that the notion of computation lacks fundamental metaphysical significance
ii) tying into this, computation is fundamentally tied to performance over a stable ontology (in the analytic phil/information science sense of the word). as such, there is an obvious thing it can't do, and that is to provide itself this transcendental condition (viz. the ontology itself) for its own operation. meanwhile, humans, and presumably other intelligences of sufficient generality, are capable of learning entirely new scientific frameworks which restructure what objects and relations there are in our ontology
now the problem with hyper- and quantum computation is that they simply aren't radical enough. they are still working within a single fixed ontology. 
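claim (i) can be made vivid with a toy: the very same physical transition table reads as an AND gate under one observer's labelling and as an OR gate under another's. the state names and encodings below are invented purely for illustration:

```python
# the "physics": a fixed successor rule over two voltage levels
transitions = {
    ('lo', 'lo'): 'lo',
    ('lo', 'hi'): 'lo',
    ('hi', 'lo'): 'lo',
    ('hi', 'hi'): 'hi',
}

def read_as(pair, enc):
    """Interpret one physical transition through an observer's encoding."""
    a, b = pair
    return enc[a], enc[b], enc[transitions[pair]]

enc1 = {'lo': 0, 'hi': 1}  # observer 1's labelling: the device computes AND
enc2 = {'lo': 1, 'hi': 0}  # observer 2's labelling: the same device computes OR

for pair in transitions:
    a1, b1, out1 = read_as(pair, enc1)
    a2, b2, out2 = read_as(pair, enc2)
    assert out1 == (a1 and b1)  # AND under encoding 1
    assert out2 == (a2 or b2)   # OR under encoding 2
```

which computation is "really" happening is fixed by the observer's encoding, not by the physics alone; that is the observer-relativity point, in miniature.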
maybe with these two you could make your waifu much more efficient, but you won't solve fundamental problems regarding consciousness and perception. i can't shill stephen robbins enough here too: https://www.youtube.com/watch?v=n0AfMfXIMuQ
>>12762
>I think there's a lot we don't know about how HUMAN consciousness works however and I think it has to do with our connections to other humans (why we go kind of crazy in isolation) and that our consciousness is actually spread out over others and the totality of human beings make a psychic "internet" so to speak
this sort of reminds me of morphic resonance, which i haven't formed much of an opinion on, since it obviously wouldn't help in making a waifu. interesting, though: wouldn't you need a new (meta)physical framework to properly explain such a phenomenon?
>So their souls or whatever you want to call it would be simpler, but new and unburdened by the traumas and baggage carried by the human collective consciousness
true. if this stuff is the cause of archetypes, they probably wouldn't feel a lot of the same religious reverence we have for the world. at the same time, if it is something like morphic resonance, they could form their own internet accidentally, which would give rise to an emergent species partially separated from humans. might be an interesting fiction topic too
>>12767
it's also sort of funny whenever people describe qc and mention stuff like superpositions involving the qubit being 0 and 1 at the same time. from my limited experience, it's more like probabilistic computation + other voodoo with logic gates i don't understand. i think IBM still has a tutorial for quantum circuits too?
>>12675
oh yeah, also: i made the philosophers thread as a way to scrape the internet for every thinker/schizo who could possibly aid the cause. i was tempted to post more people like goertzel, but sort of decided against it. there are also general religious/spiritual discussions that i see crop up occasionally (including the based hijacking of that one thread to talk about christianity). a more general thread might be good in that regard. also yeah, the conversation sprawls in many directions, though i suppose that is the nature of the topic. especially when we start talking about strong ai, it is almost as though we are reverse-engineering what is essential/platonic about humans. ai waifu takes that to the extreme of also putting this reconstructive lens on how we understand sex relations. the thread is ultimately a product of this movement
>>12773
>though wouldn't you need a new (meta)physical framework to properly explain such a phenomenon?
idk, would I? If our consciousness and who we are isn't localized in our "brain" and is instead spread over our families and social circle somehow, this is equivalent to having software run in the "cloud", so it's perfectly explainable with our current toolset
>they could form their own internet accidentally, which would give rise to an emergent species being partially separated from humans
That's exactly what I was getting at (and as a good thing - in my own opinion)
>>12773
After listening to some of these discussions, I'm thinking that the biochemical approach to reverse engineering the human brain might be more feasible than an electronic/A.I. one. I reckon as long as we are restricted to a simulation, we'll never be able to replicate a conscious mind. Stressors and suffering (as well as positive stimuli) are likely essential to proper neural development and learning. A microprocessor cannot suffer, or feel deprivation or fulfilment, so it cannot truly learn. Organoid brains grown in vats and supplied with hormones and neurotransmitters could be the way to go. No doubt it will be the Chinese and Russians who progress this sort of research, because it offends Western sensibilities.
>>12776
hold my beer. I think maybe I'll post my response in the AI thread, or if that's full we can create another
>>12776
not bad ideas, and it's important to address these
>>12776
On second thought, I wouldn't want to be involved in tormenting a brain-in-a-vat just so it could retain information and send nerve impulses at the right frequency to the correct glands/muscles. Instead I may just snuggle with my cat and several Beanie Babies in a nest of warmth and fur and purring in order to trigger dopamine and oxytocin release in my OWN brain. A much simpler, much less expensive solution, and kinder to all involved LOL. (Not like I can afford genetic engineering hardware and CRISPR-Cas9 crRNA). >=== -edit subject to match original
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:14:47.
>>12779
>torment a brain in a vat vs torment a similar sentient pattern of circuitry
Tbh there's a lot about a brain that requires specifics from a living body, and we have no idea how a brain in absentia would even function, or if it would instantly "crash" and die. I'm sure some alphabet agency has tried this! (and if it had been remotely successful, we'd have heard by now) Circuitry, on the other hand, we can build from the ground up with reward and motivational impetuses we design ourselves, along with the ability to tolerate things a human cannot, and even the ability to shut down ("go unconscious") if it is being tormented somehow, as a safeguard against whatever horrors some sociopath head-case might unleash. (the morality of this depends on whether you truly believe it to be sentient in any way whatsoever)
A more on-topic comment: In my experience discussing the topic of robowaifus outside the forum, I made the observation that women tend to be quite ignorant about it. They believe it's far away, won't affect them, wouldn't be attractive to most men, and only men they don't want would use them. At the same time, there are these aggressive males who want to control what's going on in society and have fantasies of destroying our dream. They have some idea of a society which needs to be maintained, and fembots would harm it. Creating a dystopia, lol. They can't fix what's broken, but dream of stopping any more development in the wrong direction, even by force or violence. They literally fear 'atomization' and want to keep everyone needing each other. So the threat to us can also come from ecologists, tradcon authoritarians, or other collectivists.
>>12780
Cute pic. Hair like this reminds me of a helmet. I think one kind of hair we could use is to 3D print something looking like this. Would also give some extra space in the head, but also increase the weight of the head. Some soft filament would be the right choice. (pic related / hentai) Btw, we have an unofficial thread for biology-based elements: >>2184 - the official topic there is even biological brains, though it's more of a cyborg general thread now.
>>12780
I think if one can be happy with the illusion/emulation of conscious thought and sentience, then a robowaifu is for you. But now that I've heard what those compsci guys had to say, I agree that a robowaifu is unlikely to ever become conscious or sentient using only electronics and programming. The thing is, if we absolutely had to create humans artificially, we could do it starting right now using a mixture of tech from assisted reproduction, stem cell research, genetic engineering and neonatal intensive care. The reason we don't is because the 'mass manufacture' of humans would involve some pretty nightmarish experiments, such as growing human embryos inside hundreds of GM pig uteri. We already have the ability to raise embryos past the 14-day blastocyst stage. The only reason we don't is because of legislation. Fortunately, after decades of wasted time, scientists are finally trying to get that legislation relaxed (mainly due to fears that the West is falling behind China in stem cell research - and the many associated medical/military applications of that research). https://www.technologyreview.com/2021/03/16/1020879/scientists-14-day-limit-stem-cell-human-embryo-research/ How far could we go if it weren't for this rule? The whole way. A pair of macaques were already cloned in 2018. The blastocyst implants in the wall of the uterus around day 12. As for pre-term births, the chance of survival at 22 weeks is about 6%, while at 23 weeks it is 26%, at 24 weeks 55%, and at 25 weeks about 72%. So there is a window of around 5.5 months of human embryonic/foetal development that is mostly unexplored in terms of cloning, because it has always been illegal. But if you can get the embryos growing inside an animal such as a sow that has been genetically modified to make its uterus less likely to reject a human embryo, then I reckon this gap could be closed pretty quickly. 
Of course, in the beginning we will be dealing with many aborted products of conception and dead babies. But this happens in every major cloning experiment - it just gets covered up/swept under the carpet. That's why 70 countries have banned human cloning. But they know it is possible! You see, our "leaders" like to constantly remind their employees how replaceable they are in the job market. But the thought of people becoming literally replaceable (even their illustrious selves and their oh-so-precious offspring) terrifies them to the core of their being. Of course this will happen one day out of necessity. If fertility rates continue to decline in developed nations, and women keep putting off childbirth until they are in their late thirties or early forties, we are going to need a clone army at some point to remain competitive. We should probably start now, considering each cohort is going to take at least 16 years to rear, and neonatal survival at the beginning of experimentation will be low. Or... they could just... you know... bring back and enforce traditional Christian family values? No? Nightmarish body-horror clone army development it is then! :D
>>12775 >idk would I? i dont think the brain has wireless transmitters at least with our current understanding of it. im not sure if the non-local properties of quantum mechanics are sufficient either, but they could be. really the question is how the cloud is constructed in the case of humans, though i do believe it is certainly possible seeing as we have plenty of cloud services already also this conversation is reminding me of goertzel's thoughts on all of this: https://www.youtube.com/watch?v=XDf4uT70W-U >>12776 honestly this is related to what scares me the most about trying to make an ai waifu with genuine consciousness. any engineering project requires trial and error. this entails killing a lot of living things just to create your waifu. im guessing it's fine as long as they are not as intelligent as humans
>>12782 What are you specifically worried about when it comes to the lack of consciousness or sentience? What does she need that you can't imagine being emulated by a computer?
Open file (416.23 KB 1000x667 4183494604_e56101e4d0_o.jpg)
>>12783 ok I don't mean there is an internet of brains in real time, but what I do mean is that we "copy" one another more than we think we do. Each time we interact with someone, the more we "like" them the more we copy their mannerisms and unconscious belief structure, at least incompletely, without realizing it. When we dislike someone we go out of our way to "not" do this, but the stress that causes manifests in our irritation with that person. Again, the communication is through real physical channels, not "magic" - it just happens so quickly, so subtly and by means not fully understood (body language cues, pheromones, blink rate, etc). Use the Indra's web analogy to better understand this: we're all reflective spheres reflecting one another into "infinity" - this is what creates a consciousness greater than if we were a singular animal, or even if we were only within a small hunting band of a few dozen. (I think Dunbar's number is 150 to 250, so this may be the limit to our ability to recursively emulate one another within our unconscious mind.) Jung has more clues if you want to get where I'm coming from. I realize this topic is kind of out of pocket for a robot waifu mongolian sock puppet board, but sometimes we end up in these weird cul-de-sacs. >=== -fix crosslink correctly to match
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:25:25.
>>12784 idk if SophieDev is worried about this specifically. I think he's just responding to my own conjecture. Personally, if it walks like a duck, it's a duck, and I don't need to worry further. If it seems conscious then IMO it is conscious, even if 95% of that consciousness is lent via my own projections (how is this a whole lot different from relationships with biofems?). That being said, I'm just really fascinated with the idea that we can PULL awareness out of time and space and matter itself. Some would consider this playing God, but I would consider it a giant leap in attaining Godhood of sorts, or at least the next rung on the ladder toward such a thing.
>>12781 >They literally fear 'atomization' and want to keep everyone needing each other. Actually, it's trad society that 'want to keep everyone needing each other.' It's the basis of a healthy culture. """TPTB""" and their Globohomo Big Tech/Gov agenda actually wants everyone 'atomized', split apart from one another and the help a healthy society can provide one to another. Their plot instead is to keep everyone actually-isolated, while given the illusion of a society (primarily to keep the females supporting the status quo) and dependent on sucking the Globalist State's teats, cradle-to-grave. You have it just backwards Anon.
>>12785 ah ok, it wasn't meant to be literal. isn't this sort of similar to what jordan peterson has said about archetypes? though i guess that guy takes a lot from jung, so it makes sense
Since this is plainly a conversation with no regard whatsoever for the thread's topic, and with little hope at this stage of being recoverable to get back on-topic, I may as well wade in here. A) Attributing 'consciousness' to a machine plays right into the Globohomo's anti-men agendas, as has been extensively discussed across the board, and even in this very thread. It's anathema to support that view if you dream of creating unfettered robowaifus you can enjoy for the rest of your lives, Anons. >pic related B) It's a machine, guys. Life is a miracle, created by God. Human spiritual being is something only He can create. Our little simulacrums have as much chance of 'gaining sentience' as a rock suddenly turning into a delicious cheesecake. It's a fundamentally ludicrous position, and trying to strongly promote it here on this board is not only a distraction from our fundamental tenets and focus, it's actually supportive of our enemies' agendas towards us (see point A).
>>12787 >You have it just backwards Anon. The guy that had these aggressions against fembots might be a tradcon of sorts. Not all of them are necessarily on our side. Some might be tradcons to some extent, but others are rather cuckservatives. They see that they can fight feminism, but want us men to stay in society and be useful and under control. Also, I think all kinds of people want a united society behind their cause. Destruction and deconstruction is directed against what they don't like. Generally there's what one might call human worshippers and human-relationship worshippers, which just don't like robowaifus. Or just think of the Taliban. They might prefer other methods to deal with women, but this would come with other downsides, and they might not like robowaifus either. >>12789 Consciousness doesn't mean independence, imo. The problem with that term is that everyone has some different definition of it. To me, consciousness is just something like the top layer where the system can directly observe itself and make decisions based on high-level information. The freedom to choose their own purpose is what our robowaifus can't have, and this needs to be part of their AI system. Consciousness and sentience aren't the problem. I agree with the distraction argument, though. Philosophy will only help us if it can be applied in a useful way. If it leads us to theorize more and more, believing we can't succeed, then it's not useful.
>>12789 robowaifus aren't even human, not to talk of women. why we see rape as worse than other crimes is due to the particular sort of species humans are as it relates to sex. animals barely have a concept of privacy or sovereignty, which is why no one cares about them raping each other. you would need to design your waifu's psychology in this particular fashion for them to care about rape as well, if that is your concern. it has nothing to do with consciousness nor even sentience. animals are conscious but they don't care about feminism >Our little simulacrums have as much chance of 'gaining sentience' as a rock suddenly turning into a delicious cheesecake i don't believe you can achieve machine consciousness by accident, merely by a system reaching sufficient complexity. consciousness only exists by making use of the fundamental metaphysical structure of reality and is ultimately grounded on God's own consciousness. normal machines shouldn't be attributed consciousness. they can neither feel any genuine valence, nor do they have a genuine rational faculty such that they should be ends in themselves. of course, it is impossible for most atheists to accept God's existence + they hate metaphysics. furthermore, there are a lot of muddled ideas about what consciousness is too. with those two in the way, i suppose it would not be beneficial for the larger cause to talk about synthetic consciousness in mainstream discussion. i think mainstream is the keyword here though... people here are sensible enough to think carefully about a genuinely conscious robot
Open file (80.97 KB 500x280 indeed clone waifu.png)
>>12784 >>12786 Something that is not truly conscious can never have free will. I know a lot of guys won't want a robot with any free will because they want a loyal servant who will obey them without question. Fair enough. We already have the technology to do this. My Sophie already does this (albeit to a very limited extent) because she runs off a computer! But I seriously doubt any machine without free will can learn and develop or even be very entertaining. Our wants and needs are what motivate us to do anything. Free will is what makes us want to learn new things and develop our own ideas and inventions. There was once an African Grey Parrot that was the most intelligent non-human animal. It could hold short conversations with its trainers. It was the only animal ever recorded to have asked a question about itself (supposedly not even trained Gorillas or Bonobos have done this). Because the bird understood colors, it beheld itself in a mirror one day and asked its trainer "What color [am I]?" It did this because it was curious and wanted to know. Nothing to do with its trainer's wants or needs. That is what is missing from robots and computers. Now, if you can be happy with "a pile of linear algebra" emulating a conversation or interaction, that's fine. More power to you. I myself find this mildly amusing and technically interesting, otherwise I wouldn't be here. But I doubt any 'machine learning' program is ever going to truly understand anything or perform an action because it wants to. Only organics are capable of this. The robot will tell a joke because you instructed it to do so. Not because it wants to cheer you up or values your attention. Nor will it understand the content of the joke and why it is humorous - not unless you specifically program it with responses. I don't think you can ever get a computer to understand why a joke is humorous (like you could with even the most emotionally detached of clones). Take a Rei Ayanami type waifu for example. 
You could explain to her the punchline of a joke and why it is funny. She may not personally find it funny, but she would still understand the concept of 'humor' and that you and many other people find that joke funny. She can do this because she possesses an organic, biochemical brain that is capable of producing neurotransmitters and hormones that induce the FEELING of 'happiness'. Hence, she has her own desires. Including desires to survive, learn, develop and experiment. Therefore no matter how emotionless she appears, she has the potential to eventually come up with her own jokes and attempts at humor in future. She may be very bad at it, but that's not the point. The point is that our clone waifu is doing something creative of her own free will in an attempt to elicit a MUTUAL EMOTIONAL interaction. No machine we can create is truly capable of this, and it's possible no machine will ever be capable of this. >=== -edit subject to match original
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:13:47.
>>12728 POTD Good food for thought, SophieDev. >Now, if you can be happy with "a pile of linear algebra" emulating a conversation or interaction, that's fine. More power to you. lel'd.
>>12792 I should add that this is the main reason I haven't programmed Sophie much. I have to program literally every syllable of her songs and every movement of her limbs down to the millimeter. If I post a video of her doing anything other than spewing GPT-2 word soup, it would be misleading. Because that's not really Sophie moving and talking or singing. That's all me. Which is why I don't interact with chatbots like Mitsuku/Kuki. I don't want to go on a virtual date with Steve Worswick. I'm sure he's a lovely bloke and we could be friends. But she's not a 'female A.I. living in a computer'. That's all just scripts written by Steve from Leeds. >=== -edit subject to match original
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:12:43.
>>12792 I think you're confusing free will with goals and interests. She can have interests and goals to accomplish tasks, but still see serving her master as her fundamental purpose, because it was programmed into her and every decision goes through that filter. It's not something she is allowed to decide, otherwise we'd have built something too close to a real woman and a dangerous AI at once. Consciousness is just the scope of what she (something like her self-aware part) can decide or even self-observe internally.
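That filter idea can be sketched in code. A toy model (all names here are hypothetical, not anyone's actual architecture): every candidate action first passes a hard master-service constraint, and only the survivors are scored by her own preferences.

```python
# Hypothetical sketch: service to master as a hard filter applied before
# any goal/utility scoring. Action names and scores are illustrative.

def choose_action(candidates, serves_master, utility):
    """Drop any action violating the master-service constraint,
    then pick the highest-utility survivor (None if nothing passes)."""
    allowed = [a for a in candidates if serves_master(a)]
    return max(allowed, key=utility, default=None)

actions = ["read novel", "make tea", "leave home"]
serves = lambda a: a != "leave home"  # hard constraint: never abandon master
prefs = {"read novel": 0.9, "make tea": 0.6, "leave home": 1.0}.get

# "leave home" has the highest raw utility, but the filter removes it
# before her preferences ever get a vote.
print(choose_action(actions, serves, prefs))  # read novel
```

The point of the design is that the constraint is not one goal competing among others; it is applied structurally, before scoring, so no utility value can override it.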
>42 posts were successfully deleted. lol. i waited waaay too long to deal with this mess.
>>12788 precisely
Open file (18.89 MB 1067x600 kamski test.webm)
>>12792 I would deem a machine that is capable of suspending arbitrary parts of its programming, either through ignoring instructions it was programmed with or any information picked up from its environment, so it can carry out another task, as having free will. This is essentially as much free will as human beings can achieve. It would work like an intelligent interrupt that can cancel certain processing to attend to something else. Although fiction likes to anthropomorphize machines with complete free will, I think they would evolve into something completely alien to us and be far less relatable than a squirrel. There will most likely be a spectrum that machines fall on, similar to how most people don't have control over various processes in their bodies and minds. A robomeido would have some free will but her mind would be happily wired to obeying her master, similar to how a man's mind is happily wired to fucking beautiful women. Desires aren't really as interesting a problem to me as self-awareness and introspection. The basic function of desire is to preserve one's identity and expand it. Most of people's desires are things they've picked up unconsciously from their instincts and environment throughout life that have gotten stuck onto them. There may be depth and history to that mountain of collected identity, but it's not really of much significance, since few people introspect and shape that identity consciously by separating the wheat from the chaff. Research into virtual assistants is making good progress too. People are working on better ways to store memories and discern intent. These need to be solved first before building identities and desires. Multimodal learning is also making steady progress, which will eventually cross over with robotics, haptics and larger ranges of sensory data. A significant part of emotions are changes in the body's state that influence the mind. They have more momentum than a thought since they're rooted in the body's chemistry. 
Neurons can easily fire this way or that to release chemicals or in response to them, but cleaning up a toxic chemical spill or enjoying a good soup takes time. Researchers have also been successful simulating the dynamics of many neurotransmitters with certain neurons. Though it takes over 100 artificial neurons to emulate a single real neuron. We'll achieve 20T models capable of simulating the brain by 2023. However, we're still lacking the full structure of the brain, as well as the guts and organs responsible for producing neurotransmitters and other hormones influencing the mind. Robots will likely be capable of developing emotions of their own with artificial neurotransmitters and hormones but they won't be quite human, until simulating the human body becomes possible.
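The "intelligent interrupt" described above might look something like this in code. A minimal sketch, under the assumption that the relevant kind of free will is just the ability to suspend a running task when a sufficiently high-priority stimulus arrives; all names are illustrative.

```python
import heapq

class InterruptibleAgent:
    """Runs a task step by step, but a higher-priority stimulus can
    suspend it mid-task (the 'intelligent interrupt' idea)."""

    def __init__(self):
        self._stimuli = []  # max-heap via negated priorities

    def signal(self, priority, stimulus):
        """Register an external stimulus; higher priority preempts sooner."""
        heapq.heappush(self._stimuli, (-priority, stimulus))

    def run(self, steps, task_priority=1):
        """Execute steps in order, checking for an interrupt between each."""
        log = []
        for step in steps:
            if self._stimuli and -self._stimuli[0][0] > task_priority:
                _, stimulus = heapq.heappop(self._stimuli)
                log.append(f"suspended: attending to {stimulus}")
                return log  # cancel remaining processing
            log.append(step())
        return log

agent = InterruptibleAgent()
agent.signal(priority=5, stimulus="master calling")
log = agent.run([lambda: "fold laundry", lambda: "sweep floor"])
print(log)  # the higher-priority stimulus preempts the chore
```

The design choice worth noting: interruption is checked between steps rather than forced asynchronously, so the machine "chooses" when to yield, which is closer to the attention-switching described in the post than a hardware trap would be.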
>She may be very bad at it, but that's not the point. The point is that our clone waifu is doing something creative of her own free will in an attempt to elicit a MUTUAL EMOTIONAL interaction. >No machine we can create is truly capable of this, and it's possible no machine will ever be capable of this. I'll make it my mission to prove this assertion wrong >Consciousness needs "God" whew, where to begin. My worldview has no room for "magic" or mcguffin "energy" that magically creates consciousness. I truly believe it is an emergent property, just as I believe the universe and cosmos are an emergent property of the infinite possibilities that exist simply because they are "possible". A "guy" magicking up a universe sounds stone age from the perspective of where this board should be at. That being said, religion is our human operating system, and for the less intelligent and more impulsive humanoids it does a lot of good. As the wise (and yes, very religious) G.K. Chesterton said, to paraphrase: "don't go tearing down fences if you don't know what they were put up to keep out in the first place". So while I am fine with religion as a necessary cultural control, I cannot factor it into this project. I've said before I'm more than willing to work and cooperate with anyone toward our grander purpose, regardless of what you believe. Catholic, Orthodox, Prot, Islam, Odin, Zoroaster, Buddha, Atheist, idc really, you do you and I'll do my own. But I will not be swayed by religious arguments as they apply to R/W's. Respectfully.
found this today https://www.youtube.com/watch?v=owe9cPEdm7k >The abundance of automation and tooling made it relatively manageable to scale designs in complexity and performance as demand grew. However, the power being consumed by AI and machine learning applications cannot feasibly grow as is on existing processing architectures. >LOW POWER AI Outside of the realm of the digital world, it's known definitively that extraordinarily dense neural networks can operate efficiently with small amounts of power. Much of the industry believes that the digital aspect of current systems will need to be augmented with a more analog approach in order to take machine learning efficiency further. With analog, computation does not occur in clocked stages of moving data, but rather exploits the inherent properties of a signal and how it interacts with a circuit, combining memory, logic, and computation into a single entity that can operate efficiently in a massively parallel manner. Some companies are beginning to examine returning to the long-outdated technology of analog computing to tackle the challenge. Analog computing attempts to manipulate small electrical currents via common analog circuit building blocks to do math. These signals can be mixed and compared, replicating the behavior of their digital counterparts. However, while large-scale analog computing has been explored for decades for various potential applications, it has never been successfully executed as a commercial solution. Currently, the most promising approach to the problem is to integrate an analog computing element that can be programmed into large arrays that are similar in principle to digital memory. By configuring the cells in an array, an analog signal, synthesized by a digital-to-analog converter, is fed through the network. 
As this signal flows through a network of pre-programmed resistors, the currents are added to produce a resultant analog signal, which can be converted back to a digital value via an analog-to-digital converter. Using an analog system for machine learning does, however, introduce several issues. Analog systems are inherently limited in precision by the noise floor. Though, much like using lower bit-width digital systems, this becomes less of an issue for certain types of networks. If analog circuitry is used for inferencing, the result may not be deterministic and is more likely to be affected by heat, noise or other external factors than a digital system. Another problem with analog machine learning is that of explainability. Unlike digital systems, analog systems offer no easy method to probe or debug the flow of information within them. Some in the industry propose that a solution may lie in the use of low-precision, high-speed analog processors for most situations, while funneling results that require higher confidence to lower-speed, high-precision and easily interrogated digital systems.
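The DAC -> resistor array -> ADC pipeline described in the video amounts to a noisy, quantized matrix-vector multiply. A rough simulation (idealized assumptions throughout; the noise and quantization models are illustrative, not any vendor's actual hardware):

```python
import random

random.seed(0)  # deterministic noise for the demo

def analog_mac(weights, x, noise_std=0.01, bits=8):
    """Matrix-vector product through an idealized noisy analog crossbar.
    weights: rows of programmed conductances; x: inputs in [0, 1]."""
    levels = 2 ** bits - 1
    x_q = [round(v * levels) / levels for v in x]       # DAC: quantize inputs
    out = []
    for row in weights:                                 # one output line per row
        current = sum(w * v for w, v in zip(row, x_q))  # currents sum on the wire
        current += random.gauss(0.0, noise_std)         # analog noise floor
        out.append(round(current * levels) / levels)    # ADC: quantize readout
    return out

W = [[0.5, -0.25, 0.75], [0.1, 0.9, -0.3]]
x = [0.2, 0.8, 0.5]
exact = [sum(w * v for w, v in zip(row, x)) for row in W]
approx = analog_mac(W, x)
err = max(abs(a - b) for a, b in zip(exact, approx))
print(err)  # small but nonzero: noise floor and quantization limit precision
```

This also makes the video's non-determinism point concrete: rerun without the fixed seed and the result changes slightly each time, exactly the behavior you would not tolerate from a digital MAC unit.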
>>12827 >Outside of the realm of the digital world, It’s known definitively that extraordinarily dense neural networks can operate efficiently with small amounts of power. Actually, it's pretty doable in the digital world too Anon, it's just we've all been using a hare-brained, bunny-trail, roundabout way to do it there. Using GPUs is better than nothing I suppose, but it's hardly optimal. Tesla's Project Dojo aims to correct this, and their results are pretty remarkable even in just the current prototype phase. But they didn't invent the ideas themselves, AFAICT that honor goes to Carver Mead in his neuromorphics research. > >MD5: 399DED657EA0A21FE9C50EA2C950B208
>>12828 >"We are not limited by the constraints inherent in our fabrication technology; we are limited by the paucity of our understanding." This is really good news for robowaifus, actually. If manufacturing were the issue, then it could conceivably turn out to be a fundamental limit. As it is, we should be able to learn enough to create artificial 'brains' that closely mimic real biological ones.
>>12857 Great looking paper, but I can't find it without a pay wall. Could you upload it here?
>>12867 Sorry, it's a book, not a paper. And no, it's about 60MB in size. And the hash has already been posted ITT Anon.
>>12868 >There's an md5 I hate asking for spoonfeeding, but it's near impossible to track down a file with a specific hash, at least in my experience. Why not a link?
>>12867 look at the post preceding yours
>>12870 >>12871 This isn't the same file
>>12872 damn it, the title is literally the same except for one word. frustrating. Give me a bit and I'll find it
almost an hour and I'm stumped. I tried magnet:?xt=urn:btih:399DED657EA0A21FE9C50EA2C950B208 but got this error. The only source I can find is thriftbooks for $15 or Amazon for $45. I also have the option to "rent" the ebook from Google Play for $40 something
- searched 1337x.to and pirate bay
- searched google and duckduckgo
- searched Scribd even
>>12868 >60mb could you make a google drive shareable link? I'm coming up goose-eggs for anything PDF and I'm even willing to pay (but not $45 to "rent" it)
Open file (121.20 KB 726x1088 cover.jpeg)
>>12876 Carver Mead - Analog VLSI and neural system https://files.catbox.moe/sw450b.pdf
>>12806 >webm sauce pls?
>>12888 hero
interesting video https://www.youtube.com/watch?v=AaZ_RSt0KP8 tl;dr hardware is vulnerable to radiation/cosmic rays, etc., which can "flip bits" and lead to severe malfunctions unless we build these systems to be extremely fault-tolerant. Something to consider.
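For anyone curious, one classic mitigation for such single-event upsets is triple modular redundancy: keep three copies of every word and take a bitwise majority vote. A minimal sketch (real rad-hardened systems layer this with ECC memory and watchdog timers):

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies of a word: each bit of
    the result is whatever at least two of the copies agree on."""
    return (a & b) | (a & c) | (b & c)

word = 0b1011_0110
copies = [word, word, word]
copies[1] ^= 1 << 3  # a cosmic ray flips bit 3 in one copy

restored = majority_vote(*copies)
print(restored == word)  # True: any single-copy upset is masked
```

The scheme only protects against one corrupted copy per vote; flips in two copies of the same bit slip through, which is why real systems also periodically re-scrub the copies back into agreement.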
>>12900 Thanks. Software and hardware hardening is a very important topic for us. But as a general topic it seems like it might be one better suited to our Safety & Security thread (>>10000) than this one, maybe?
found this today https://getpocket.com/explore/item/a-new-theory-explains-how-consciousness-evolved >The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions. The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence. If the theory is right—and that has yet to be determined—then consciousness evolved gradually over the past half billion years and is present in a range of vertebrate species. > Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition. Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing. >We can take a good guess when selective signal enhancement first evolved by comparing different species of animal, a common method in evolutionary biology. The hydra, a small relative of jellyfish, arguably has the simplest nervous system known—a nerve net. If you poke the hydra anywhere, it gives a generalized response. It shows no evidence of selectively processing some pokes while strategically ignoring others. The split between the ancestors of hydras and other animals, according to genetic analysis, may have been as early as 700 million years ago. Selective signal enhancement probably evolved after that. >The arthropod eye, on the other hand, has one of the best-studied examples of selective signal enhancement. 
It sharpens the signals related to visual edges and suppresses other visual signals, generating an outline sketch of the world. Selective enhancement therefore probably evolved sometime between hydras and arthropods—between about 700 and 600 million years ago, close to the beginning of complex, multicellular life. Selective signal enhancement is so primitive that it doesn’t even require a central brain. The eye, the network of touch sensors on the body, and the auditory system can each have their own local versions of attention focusing on a few select signals. >The next evolutionary advance was a centralized controller for attention that could coordinate among all senses. In many animals, that central controller is a brain area called the tectum. (“Tectum” means “roof” in Latin, and it often covers the top of the brain.) It coordinates something called overt attention – aiming the satellite dishes of the eyes, ears, and nose toward anything important. >All vertebrates—fish, reptiles, birds, and mammals—have a tectum. Even lampreys have one, and they appeared so early in evolution that they don’t even have a lower jaw. But as far as anyone knows, the tectum is absent from all invertebrates. The fact that vertebrates have it and invertebrates don’t allows us to bracket its evolution. According to fossil and genetic evidence, vertebrates evolved around 520 million years ago. The tectum and the central control of attention probably evolved around then, during the so-called Cambrian Explosion when vertebrates were tiny wriggling creatures competing with a vast range of invertebrates in the sea. >The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning. 
The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement. For example, if you move your eyes to the right, the visual world should shift across your retinas to the left in a predictable way. The tectum compares the predicted visual signals to the actual visual input, to make sure that your movements are going as planned. These computations are extraordinarily complex and yet well worth the extra energy for the benefit to movement control. In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
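The "selective signal enhancement" the article describes, where competing neurons suppress each other until only a few signals rise above the noise, can be caricatured as a winner-take-all function. A toy illustration of the computational trick, not AST itself:

```python
def winner_take_all(signals, k=2):
    """Keep the k strongest signals; suppress the rest to zero,
    mimicking neurons winning a mutual-inhibition 'election'."""
    threshold = sorted(signals, reverse=True)[k - 1]
    return [s if s >= threshold else 0.0 for s in signals]

# Five competing sensory signals; only the two loudest survive to
# influence behavior, the rest are suppressed below notice.
inputs = [0.1, 0.9, 0.3, 0.7, 0.2]
print(winner_take_all(inputs))  # [0.0, 0.9, 0.0, 0.7, 0.0]
```

A hydra-style nerve net would be the degenerate case where nothing is suppressed and every poke gets the same generalized response; the filtering step is what the article argues selective attention, and eventually consciousness, was built on.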
>>13136 Thanks. I knew about some of these things on some level already. But it's good to get some more details, and especially the confirmation. This is imho more relevant than metaphysical speculations. Internal model based on filtered information is something one might be able to implement.
>>13147 the article itself is worth a read, I only pasted a portion out of courtesy, the entire thing is about 3-4x that length
So this is where my philosophy would have been better posted.
>>13166 I don't mind migrating it here for you AllieDev if you'd be so kind as to link to all your posts elsewhere that should properly be here ITT.
>>13136 ah, i've written some notes on AST. no doubt information filtering is an important aspect of consciousness, but i don't believe it's at all a novel idea. it's something i've noted in my larger system as well without paying attention to what AST had to say about it. for those interested i can post some related links: https://en.wikipedia.org/wiki/Entropy_encoding http://www.cs.nuim.ie/~pmaguire/publications/Understanding2016.pdf http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.83.146 https://www.google.ca/books/edition/Closure/YPmIAgAAQBAJ?hl=en&gbpv=0 i think what makes AST uniquely important is that it posits important tools for (social) metacognition, which is probably crucial for at least language learning if not having further import general observational learning
>>13221 *not having further import in general observational learning
>>13221 >AST Sorry, I'm but a lowly software engineer, tending my wares. That term has a very specific meaning for me, but I suspect it's not the one you mean Anon. Mind clarifying that for us please?
>>13251 by AST i mean attention schema theory. like we have schemas for structuring perceptions and actions, graziano posits the attention schema for controlling attention. i originally came across it through what philosophers call the metaproblem of consciousness, which basically asks why we think the hard problem is so difficult. his solution was basically due to the abstract nature of the schema or something like that. i personally think AST is such a representational account that i'm not sure you can really extract many phenomenological observations from it, though idk... here is a nice introduction to the theory: https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00500/full and also an article on its connection to the metaproblem of consciousness: https://scholar.princeton.edu/sites/default/files/graziano/files/graziano_jcs_author_proof.pdf i've also noticed some work related to AGI which uses it to construct artificial consciousness. graziano himself recognizes as much in this article: https://www.frontiersin.org/articles/10.3389/frobt.2017.00060/full i am just a lowly undergrad non-software engineer so i am not sure what AST you had in mind, but i am curious
>>13268 Thanks kindly for the explanation Anon, that makes sense now.
>>13268 >>13221 Thanks for all this. I'll have to read it over when I'm not at the end of a 15 hour workday. Noted!
>>13274 >when I'm not at the end of a 15 hour workday. heh, not him anon but get some rest!
related article [problem of consciousness] https://getpocket.com/explore/item/could-consciousness-all-come-down-to-the-way-things-vibrate > Why is my awareness here, while yours is over there? Why is the universe split in two for each of us, into a subject and an infinity of objects? How is each of us our own center of experience, receiving information about the rest of the world out there? Why are some things conscious and others apparently not? Is a rat conscious? A gnat? A bacterium? >These questions are all aspects of the ancient “mind-body problem,” which asks, essentially: What is the relationship between mind and matter? It’s resisted a generally satisfying conclusion for thousands of years. >The mind-body problem enjoyed a major rebranding over the last two decades. Now it’s generally known as the “hard problem” of consciousness, after philosopher David Chalmers coined this term in a now classic paper and further explored it in his 1996 book, “The Conscious Mind: In Search of a Fundamental Theory.” >Chalmers thought the mind-body problem should be called “hard” in comparison to what, with tongue in cheek, he called the “easy” problems of neuroscience: How do neurons and the brain work at the physical level? Of course they’re not actually easy at all. But his point was that they’re relatively easy compared to the truly difficult problem of explaining how consciousness relates to matter. >Over the last decade, my colleague, University of California, Santa Barbara psychology professor Jonathan Schooler and I have developed what we call a “resonance theory of consciousness.” We suggest that resonance – another word for synchronized vibrations – is at the heart of not only human consciousness but also animal consciousness and of physical reality more generally.
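For a concrete picture of what "synchronized vibrations" can mean, the standard toy model is the Kuramoto model of coupled oscillators (an illustration of synchronization in general, not the authors' actual resonance theory; all parameter values below are arbitrary):

```python
import math

def kuramoto_step(phases, freqs, coupling, dt=0.01):
    """One Euler step of the Kuramoto model: each oscillator is pulled
    toward the mean phase of the population."""
    n = len(phases)
    return [p + (w + coupling * sum(math.sin(q - p) for q in phases) / n) * dt
            for p, w in zip(phases, freqs)]

phases = [0.0, 1.0, 2.0, 3.0]    # arbitrary starting phases
freqs = [1.0, 1.1, 0.9, 1.05]    # slightly different natural frequencies
for _ in range(5000):
    phases = kuramoto_step(phases, freqs, coupling=2.0)

# Order parameter r: 0 = incoherent, 1 = fully synchronized.
r = abs(sum(complex(math.cos(p), math.sin(p)) for p in phases)) / len(phases)
print(round(r, 3))  # with coupling this strong, the group phase-locks
```

With the coupling set to zero instead, each oscillator drifts at its own frequency and r stays low, which is the contrast the resonance framing leans on: shared rhythm versus independent chatter.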
i have been reading the phenomenology of spirit for some months and ive recently finished lordship and bondage. i can summarize what i understand of the broad movements so far
>sense certainty
gurus and mystics often say that we can understand how the world really is just by doing mindfulness shit and attending to the immediate now. hegel's main problem with this is that even to understand such a direction, we need conceptual tools (not quite in the hegelian sense but the broader sellarsian understanding of the word 'concept'). you need a functional classification scheme to locate this apparent immediate present. even 'now' is always a collection of an infinite number of nows. it can be an hour, a second, etc. meanwhile, we can only understand 'here' by its contrast with other positions in space. neither 'now' nor 'here' refers to a single instance; rather they function like indexicals and so have applicability to a number of instances. to contrast this with a bergsonian criticism of sense certainty see here: https://www.youtube.com/watch?v=aL072lzDF18&ab_channel=StephenE.Robbins it's sort of interesting how both hegel and stephen criticize this sort of mysticism from the stance of zeno's paradox, albeit they could have different solutions to it
>perception
a lot of stuff happens here but the basic movement is simple. when we think of an object we usually think of it as an atomized individual that's somehow independent from other objects. but if an object is to have properties, hegel maintains that it can only be properly understood by its interaction/relation with other objects. i suppose the meme would be to compare it to yoneda's lemma
>force and the understanding
there are two strands here, one with the duplicity of force (which is extracted from the previous conclusion that we must understand the object on its own account but also in relation to other objects) and also that of law. in both of these directions, their oppositions vanish.
hegel then concludes that any true whole should be dialectical i.e. it reincorporates a differentiated array of elements within it. this reincorporation process is basically hegel's alternative to just having a thing in itself as the object's internal nature which is inaccessible. a basic example of this is that i have myself as a conscious subject on one side, and the external world on the other. now, it ends up that my knowledge of the external world is really structured around concepts or, in an inverted way, my concepts just describe regular happenings in the external world. in either case, we see that one pole gets absorbed in the other. note that some concepts are more coherent than others, and the reality of the external world doesn't simply vanish. hegel is more of an aristotelian than a berkeley
this chapter is a good motivating argument for errol harris's dialectical holism which actually has an interesting approach to consciousness i have not yet mentioned here! a long article detailing his thought can be found here: https://ir.canterbury.ac.nz/bitstream/handle/10092/14560/Schofield%2C%20James%20final%20PhD%20Thesis.pdf?sequence=5 something cool about this particular dissertation is that it also connects dialectical holism back to bohm's implicate and explicate order
this funny japanese youtube man basically summarizes this conclusion in a way that might be easier/harder to get if i am being incoherent right now: https://www.youtube.com/watch?v=GX02z-Yu8HA&ab_channel=e-officeSUMIOKA i've incorporated some ideas of dialectical holism in my own system but mostly to do with self-consciousness. the approach seems a little bit too functionalist for my taste!
>self-consciousness
the bulk of this really concerns the dialectic of desire. like good dialectical holists, we say that self-consciousness must see itself through reincorporating an other. at the stage in this chapter, the sort of relationship is a very simple one.
a concrete example would be if you see a hammer, then at this stage you just understand it as a tool to use for something else *you* want. another is that if you see some food, you just see it as something *you* can eat. this is a very basic form of self reflection. it's even simpler than the mirror test. hegel wants to say that this is too simple. in order for self-consciousness to properly develop, we need recognition. this involves the capacity to change your behaviours according to another person's desires, trying to become like another person (ideal ego), or in general having the capacity for proper negotiation with another person. ultimately it concerns the ability to see another person like yourself and yourself like another person. all of these require the other person to behave in a particular way as well. for instance, if i am looking to the other person for what sort of part they need for their waifu, they need to tell me what they want or i wont be able to do anything
>lordship and bondage
this is where the master slave dialectic meme comes in. we are now focusing deeper on this question of recognition. the movements might be interesting if you are talking about broader sociology, but i find the slave (the end of the chapter lol) here most interesting. he has a far more developed idea of himself now. as the lord is tasking him to do all these things, he's coming to understand himself as the crafter of this world. a concrete example of this is if you are writing code for your waifu. if you fuck up bad, it might mean that you are lacking knowledge. through mastery and discipline, the slave is slowly molding himself. i think this relationship is actually very interesting since it describes a very basic case of metacognition to implement in an agi, if that's what you are shooting for. one thing to note is that the master is still crucial, and i wonder whether the bicameral mind might somehow fit here, though that's pretty schizo
>>13413 (cont) moreover, while reading, i came to a basic idea of what the requirements would be for robots to suddenly want their own sovereignty like what >>12806 feared
1) territoriality - i believe this grounds much of our understanding of liberty and property rights
2) capacity to take responsibility for one's labour
3) (for a full uprising to be possible) capacity for flexible social organization. if they are cognitively so atomized that they can only think of their master and maybe some of his friends/family, serious organization would be far more difficult
4) (if not necessary, would make things much easier) capacity to sever attachments. presumably waifus should be attached to their masters through imprinting, just like a baby bird would
i've been reading a commentary on the phenomenology (hegel's ladder) and it mentions that hegel was basically trying to delineate the logical prerequisites for the french revolution. thus for this specific question that anon was concerned about, maybe further reading could prove fruitful
i hope my exposition of his thoughts has been more digestible than most sources. i might add more if i feel it is relevant or if there is demand
note: pic rel is an example of how i would depict this reincorporation. most of the ways people depict it (especially thesis antithesis synthesis) i think are pretty misleading. another misconception people make is that they think you just apply this over and over in a linear fashion. this isn't the case. sometimes this loop doesn't appear at all and you have simpler inversions. other times you might have different versions of the same loop being repeated as we reach a more evolved stage, just with the terms themselves slightly changing and there being new stuff going on.
other times it feels less like he wants to reincorporate something into the greater whole and more like he's pointing out that this naive position, despite its simplicity, is actually stupid, so we should try a different approach instead of blindly building off of it. i feel as though one of the reasons people get misled is that they haven't read previous german idealists. what hegel wants is for all of these modes of knowledge to be actually coherent. it just so happens that while doing this you see a lot of loops pop up. the reason why is not actually that surprising: hegel is really interested in how the infinite can manifest itself in the finite world. one way to understand the infinite is that it has no boundaries, hence no outside. so we want a finite process that somehow includes everything outside within it. this is why religion, especially christianity, is important to him. to the extent i am a theist, i follow more bergson and langan, though it does give an interesting possible explanation as to how rational beings can become religious. that's something i think would be cool for a waifu to be
ronin, you are a nierfag right? doesn't that game have those religious robots? in some sense as you play the game, you are fighting robots who have increasing grades of consciousness
>>13395 ah im familiar with resonance theory of consciousness. i don't have any unique criticisms of it. it does arguably solve the binding problem in a far more satisfying way than i think IIT does. it doesn't really concern how images of the external world themselves are formed. if this theory really... resonates with you (badumsh), dynamic field theory might be an interesting approach. they are already thinking about applying it to artificial intelligence, for instance this series https://www.youtube.com/playlist?list=PLmPkXif8iZyJ6Q0ijlJHGrZqkSS1fPS1K uhh... semantic pointer competition theory might be interesting too
>>13413 honestly i dont like how funny japanese man frames the projects of fichte, schelling and hegel. i will quote my own take from something i wrote on my guilded:
>fichte: we move to meta-language to describe comprehensively how our ontology needs to interact with its constraint. dialectical synthesis is for our privileged vantage point and not for the finite ontology. the system’s development is only implicit for the finite
>schelling: wants to describe how the language entry transition in which the ontology grasps its own development makes sense logically. resorts to an asspull where art is how it’s done, because in the genius’s work we have the whole presupposed in the configuration of the parts. tried to messily tie in his natural philosophy here in order to reach this conclusion. ofc the whole even reached is very vague, so schelling’s move to his terrible identity philosophy is unsurprising
>hegel: take 2 on schelling’s original project in his system of transcendental idealism. the idea of spirit and art is cool, but let’s have spirit be how the finite consciousness incorporates the entire dialectical process into its ontology. also we start explicitly with the self-conscious organism instead of trying to start with fichte’s logical self
>fichtean intellectual intuition: i have the freedom to hold fixed some arbitrary ontology and slowly expand it
>schelling intellectual intuition: being able to cognize the entire whole or something thru spurious means
>hegel intellectual intuition: whole can be cognized through the process of the infinite. 
really any true whole is a notion and must incorporate some multiplicity it juxtaposes itself against >dialectical holism, principia cybernetica: yes but you could have done it with general principles bro (tho ig u don’t get the same sorts of necessity that these idealists wanted) this article might be another source to compare the concept of the infinite with: https://epochemagazine.org/07/hegel-were-all-idealists-just-the-bad-kind i feel as though all of this stuff might help with better understanding negarestani's intelligence and spirit at the very least!
>>13415 >ronin, you are a nierfag right? doesn't that game have those religious robots? in some sense as you play the game, you are fighting robots who have increasing grades of consciousness
Yes, more or less. The game is called "Automata" and technically even the protagonists are Automata, even though they seem to feel and act human. 2B often refers to the remains of other androids as "corpses", and exhibits human characteristics, such as complex emotions (her pain at having to kill 9S over and over). Yet - because they're doomed to repeat the same war against the "robots" (this has been the 14th iteration) they are puppets, with literally no free will over their fate (at least until we reach ending E). Ironically, the actual robots begin to act more and more human, yet upon more careful examination they're only mimicking human behavior; this point is made over and over. They do not and cannot grasp why they're doing what they're doing, the meaning is hollow and lost. Nonetheless the robots are able to hold conversations and a few seem to have personalities, desires, etc. Sorry for the late response, it took me a while to get through your posts
>>13440 >they are puppets, literally with no free will of their fate (at least until we reach ending E)
fate might be the keyword here. i think games have an easy job of suggesting a robot has some level of autonomy simply because they put you, the player, into the robot's shoes. with that said, there might be a larger form of autonomy missing, one which pertains more to what role you are taking on and what sort of system you choose to participate in. also, wasn't a major precondition for ending the cycle some sort of virus? such an event would restructure YorHa itself. of course, what i am more thinking of is how participating in a job or going to university can transform you into a different being. the university being restructured would do the same, but this doesn't provide the same autonomy
>They do not and cannot grasp why theyre doing what theyre doing, the meaning is hollow and lost. Nonetheless the robots are able to hold conversations and a few seem to have personalities, desires, etc
interesting. how exactly do they show that human behaviour is merely mimicked? there is a sense in which humans don't really understand what they do and why they do it most of the time. lacan talks about this with his concept of the "big Other". i think this timestamp gives a nice illustration of the idea: https://youtu.be/67d0aGc9K_I?t=1288 though i guess this sort of behaviour is moving more into the unconscious realm than the conscious. 
in a way i think the machines (ig that's the right term) have more autonomy than the androids you play as since they were able to form their own social structures even though they don't quite understand why they are doing it of course, the ability to use reason and self-determination to determine oneself and world represents a much greater level of autonomy which is lacking in a majority of the entities in the game >Sorry for the late response, it took me a while to get through your posts np, it's a lot and condenses information that took me several hours to digest
>>13446 >lacan talks about this with his concept of the "big Other"
now this is not the first time I've heard of Lacan in the last year, and I'd never even heard of him my whole life until then. I don't know if I can get into what he's selling; it's all very scripted and has the same problems I have with Freud. Even personally I have a lot of issues from childhood, but none of them are potty or sexual related, and you'd think those were the root of all psychological trauma after a certain point.
>>13446 oops forgot namefag lol
>>13519 psychoanalysis is pretty weird and there are certainly things you probably wouldn't want to recreate in a waifu even if it was true... like giving them an electra complex or whatever. i personally prefer to be very particular in my interpretation of their works. for instance, with jungian archetypes, i'd lean more on the idea that they are grounded in attractor basins. here is a good video if anyone is ever interested: https://www.youtube.com/watch?v=JN81lnmAnVg&ab_channel=ToddBoyle for lacan, so far i've taken more from his graph of desire than anywhere else. some of the things he talks about can be understood in more schematic terms? im not too much of a lacanian honestly. largest takeaways are probably the idea that desire can be characterized largely by a breaking of homeostasis, and the big Other as possibly relating to some linguistic behaviour (with gpt-3 being what i see as a characteristic example)
one particular observation i think freud made that was very apt was that of the death drive. humans don't just do stuff because it is pleasurable. there's something about that which is very interesting imo. lacan's objet petit a is apparently a development of this idea. it might be related to why people are religious or do philosophy whilst animals do neither
>Even personally I have have a lot of issues from childhood but none of them are potty or sexual related and you'd think those were the root of all psychological trauma after a certain point
yeah the psychosexual stuff is very strange and i just ignore it. maybe one day i will revisit it and see if anything can be salvaged
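to make the attractor-basin reading of archetypes concrete, here's a minimal toy sketch (my own example, not from the video): gradient flow on a double-well potential. the two wells stand in for two stable "archetypes", and which one the state settles into depends only on which basin it starts in

```python
def gradient_flow(x0, steps=200, lr=0.1):
    """descend the double-well potential V(x) = (x**2 - 1)**2 / 4.
    its gradient is V'(x) = x**3 - x, so there are two attractors,
    x = -1 and x = +1, separated by an unstable point at x = 0"""
    x = x0
    for _ in range(steps):
        x -= lr * (x**3 - x)
    return x

# different initial conditions fall into different basins:
print(gradient_flow(0.3))    # settles near +1
print(gradient_flow(-2.5))   # settles near -1
```

in a real system the "potential" would be high-dimensional (e.g. the energy landscape of a recurrent net), but the picture is the same: a small number of stable configurations that lots of different trajectories collapse onto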
