/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102
Why is it that no philosophers are interested in building an AGI? we need to change this, or at least collect relevant philosophers. discussion about the philosophy of making AGI (includes metaphysics, transcendental psychology, general philosophy of mind topics, etc.) also highly encouraged! I'll start ^^! so the philosophers i know that take this stuff seriously:

Peter Wolfendale - the first Neo-Rationalist on the list. his main contribution here is computational Kantianism. just by the name you can tell that he believes Kant's transcendental psychology has some important applications to designing an artificial mind. an interesting view of his is that Kant actually employed a logic that was far ahead of his time (you basically need a sophisticated type theory with sheaves to properly formalize it). other than that, he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: he has a blog at https://deontologistics.co/ and has also posted some lectures on youtube, like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics

Reza Negarestani - another Neo-Rationalist. he has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". it's massive and talks about various grades of general intelligence: sentient agents, sapient agents, and Geist. this guy draws from Kant as well, but he also builds on Hegel's ideas. his central thesis is that Hegel's Geist is basically a distributed intelligence. he also has an interesting metaphilosophy where he claims that the goal of philosophy is to construct an AGI. like other Neo-Rationalists, he relies heavily on the works of Sellars and Robert Brandom.

Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular on rule following, is very insightful!

Hubert Dreyfus - doesn't quite count, but he did try to bring Heidegger to AGI. he highlighted the importance of embodiment to the frame problem and common sense knowledge. I personally think Bergson might have explicated what he wanted to achieve, but better, though that guy is like way before AI was even a serious topic, lol.

Murray Shanahan - this guy has done some extra work on the frame problem following Dreyfus. his solution is to use global workspace theory and parallel processing of different modules. interesting stuff!

Barry Smith - probably the most critical philosopher on this list. he talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. one of the key points he stresses with a colleague is that our current AI is Markovian, while fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link of his criticism here: https://arxiv.org/abs/1906.05833). he is also knowledgeable about analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius, however, is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith). CONTACTS: he has a yt channel here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith

Uhh, that's the introduction of pretty much every philosopher I know who works on this stuff.
I made a thread on /lit/ and got no responses :( (which isn't surprising since I am the only person I know who is really into this stuff)
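to make Smith's "Markovian" point concrete, here's a toy sketch (python, standard library only; the corpus is made up) of a bigram chat model whose next word can only ever depend on the current word, a fixed window no matter how long the conversation gets:
```python
# Order-1 Markov text model: the next word is conditioned ONLY on the
# current word. Toy illustration, not anyone's real chatbot.
import random
from collections import defaultdict

corpus = "i like robots . robots like tea . i like tea .".split()

# transitions[w] = words observed to follow w
transitions = defaultdict(list)
for w, nxt in zip(corpus, corpus[1:]):
    transitions[w].append(nxt)

def reply(seed: str, length: int = 8) -> str:
    """Generate text where each word depends only on the previous word."""
    word, out = seed, [seed]
    for _ in range(length):
        if word not in transitions:
            break
        word = random.choice(transitions[word])
        out.append(word)
    return " ".join(out)

print(reply("i"))
```
whatever was said ten turns ago cannot influence the next word here. real dialogue (callbacks, standing commitments, running jokes) breaks that bounded-horizon assumption, which is the non-Markovian property Smith is pointing at.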
>>11102
also, I don't really count myself as a philosopher, but I have, if you haven't already noticed, done a lot of my own research regarding ai. soooo i am going to take this opportunity to shill my system... it's sort of hard to summarize everything because i already have a LOT of schizo notes. my goals are not just artificial general intelligence but also synthetic consciousness.

one of the first obstacles is the hard problem of consciousness. this makes sense because you first need to know what you want to synthesize. the two core philosophers for this project are bergson and chris langan. they share the same solution to the mind-body problem: making the difference between body and mind temporal as opposed to substantial (as descartes would). this is honestly such a genius idea that it has pushed my approach to be so radical that i am afraid of talking about it much, as i fear it would cause controversy. anyways, bergson's description of this solution is that of duration. the mind is not only a spatial hologram but also a temporal one. the analogous concept in langan's system is telic recursion. what's interesting here is that with this setup we actually have a lot of material given to perception which can be used for concepts later, in a process called "redintegration" (for more on this check out this article: https://thesecularheretic.com/welcome-to-the-holofield-rethinking-time-consciousness-the-hard-problem/ ... stephen robbins is great btw. he has done a lot of good work on explaining bergson's system, and also has a series on youtube you can check out here: https://www.youtube.com/channel/UCkj-ob9OuaMhRIDqfvnBxoQ ). with this, we locate qft as one of the best places to look (in particular quantum brain dynamics). all of this functions to properly explain how clumps in the world get quantified as objects, and also provides us with the genesis of abstract concepts.

the problem though is that we still need an account of how these clumps come to us to be redintegrated later. this is where "morphogenesis" becomes important. my teleosemantics stresses the importance of attractor basins for the formation of concepts as well as the formation of intentional action. it is here that i believe jung's ideas can be integrated. in particular, i believe that the psychoid can be understood with a dynamic systems approach. to this end i believe dynamic field theory could be helpful for my goals.

the last fundamental building block, i think, is that of functional integration (a concept which i take from reza). this requires the synchronization and retooling of several modules across a system in order to construct far more sophisticated behaviour. i currently believe program synthesis is the right way to think about this. as such, i have mapped out some of the requisite concepts which are needed for such an integration to take place. i plan on fitting together attractor-based semantics, formal semantics (which includes inferential semantics), cognitive semantics, and dynamic semantics. i think all of them have their place. oh yeah, this guy on this website (https://biointelligence2.webnode.com/) has some good ideas i plan to make use of as well. on the cognitive semantics side of things, i believe my attractor approach helps to explain how conceptual blending functions. i also take from hilary lawson amongst others.

i think those are some of the main points. keep in mind i have like 80+ pages of notes now! i am very autistic!
i am trying to also build up my little program thing. though im not quite sure what to do for an introduction, so i just wrote some off-the-cuff points i had on my mind at the time: https://docs.google.com/document/d/1KGQUzrTWjUoMHAwP4R3WcYpBsOAiFyHXo7uPwWsQVCc/edit?usp=sharing
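the attractor-basin / redintegration talk above can be made concrete with a classic toy model: a Hopfield network, where stored patterns are attractors and a corrupted cue relaxes into the nearest basin (pattern completion). a minimal sketch, assuming numpy and made-up toy patterns; this is the standard textbook construction, not OP's actual system:
```python
# Minimal Hopfield network: stored patterns are attractors, and a
# corrupted cue falls back into the nearest basin ("redintegration"
# read as pattern completion). Toy data, synchronous updates.
import numpy as np

patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])

# Hebbian outer-product weights; zero the diagonal (no self-excitation).
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)  # threshold each unit
    return s

noisy = patterns[0].copy()
noisy[:2] *= -1                          # corrupt two units
print(recall(noisy))                     # recovers the stored pattern
```
with these two (orthogonal) patterns a single update step already restores the corrupted cue, which is the basin-of-attraction behaviour in its simplest possible form.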
Personally speaking, I don't think we're going to arrive at any kind of AGI by starting from a logical (logical/systemic as opposed to physical/hardware) framework. That's putting the cart before the horse. Philosophy is what happens *after* you have intelligence; remember, biology precedes psychology (mathematics > physics > chemistry > biology > psychology > sociology, or something along those lines). However, logical philosophical models are fine and should be encouraged to guide development of an AGI. Maybe that's what you mean here and I've misconstrued things.

But back to that, I think we could begin with a type of virtualization, like a VM of a neuron so to speak, create several dozen or even hundreds in parallel (with specialized hardware ideally) and link them up. One topology I think holds promise is what I call linked hierarchies: nested hierarchies of connections between neurons or groups of neurons, with links between higher and lower order tiers. Here's a link to another (now deadish) RW forum where I went into more detail on this concept if you're interested. https://www.robowaif.us/viewtopic.php?f=8&t=41918
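one literal reading of the "VM of a neuron" idea, sketched below: leaky integrate-and-fire units in two tiers, with a feedback link from the higher tier back down to the lower one. all constants are arbitrary toy values, and this is an assumption about what "linked hierarchies" could mean in code, not the post's actual design:
```python
# Toy "neuron VM": leaky integrate-and-fire units wired into a tiny
# two-tier hierarchy, with a feedback link from the higher tier back
# down to the lower tier. All parameters are arbitrary toy values.
class LIFNeuron:
    def __init__(self, leak=0.9, threshold=1.0):
        self.v, self.leak, self.threshold = 0.0, leak, threshold

    def step(self, current: float) -> bool:
        """Integrate input, leak, fire-and-reset on threshold."""
        self.v = self.v * self.leak + current
        if self.v >= self.threshold:
            self.v = 0.0
            return True
        return False

lower = [LIFNeuron() for _ in range(4)]    # tier 1: sensory-ish units
higher = LIFNeuron(threshold=2.0)          # tier 2: pools tier-1 spikes

feedback = 0.0
for t in range(20):
    spikes = sum(n.step(0.4 + feedback) for n in lower)
    fired = higher.step(spikes * 0.6)
    feedback = 0.2 if fired else 0.0       # higher tier modulates lower
    print(t, spikes, fired)
```
the point of the sketch is only the topology: tiers of simple units, with cross-tier links in both directions, each unit cheap enough to run hundreds in parallel.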
>>11108 >Philosophy is what happens * after * you have intelligence, remember biology precedes psychology I agree. The neorationalists have a bad habit of focusing a bunch on sapience when we are still trying to implement sentience. With that said, it's still important, but as a final step sort of thing. My main concern is sentience and occasionally how this can prefigure some functions of sapience >Here's a link to another (now deadish) RW forum where I went into more detail on this concept if you're interested. >https://www.robowaif.us/viewtopic.php?f=8&t=41918 Woah, this great. Good to know there are ther robowaifu forums
>>11109 >there are ther *there are other
>>11102 A lot of words, very little meaning, and no practical use. Academics are pretentious retards
>>11253
translation: "i dont understand it, and i certainly dont understand any of this enough to build something with it. why cant work at the intersection of several fields of philosophy and science be simple?" i get it anon. part of the issue is that you dont read, because shitposting on image boards for years has given you ADHD. listening to some podcasts which give you vague analogies has also given you a sense of entitlement that you should be able to understand anything anyone has ever said. this attitude vexes me, as i am mostly an autodidact myself, so seeing people belittle the hard work ive done pisses me off a bit. is some of the terminology esoteric? maybe, but keep in mind it comes from several different philosophers, some of whom lived over a century ago. however, if you dont understand kant, you should not have even clicked on my thread. thanks for the bump. and to tell you the truth, i feel as though i dont have enough knowledge to wrap everything together yet. though thats why i made this thread and came to this board ._.
Not that I agree with this based retard >>11253 but at least some of these authors seem to be writing way out of their league.
>Barry Smith - probably the most critical philosopher on this list. he talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. one of the key points he stresses with a colleague is that our current AI is Markovian, while fleshed-out chat dialogue would be a non-Markovian task
obviously I'm assuming he doesn't mean Markovian literally here. from the abstract of the paper you linked:
>We then argue that it is for mathematical reasons impossible to program a machine in such a way that it could master human dialogue behaviour in its full generality. ... (2) because even the sorts of automated models generated by using machine learning, which have been used successfully in areas such as machine translation, cannot be extended to cope with human dialogue.
it's very hard to take these kinds of claims seriously in the wake of GPT-3.

anyways, superhuman AI will probably be created by throwing millions and millions of dollars of hardware at deep learning algorithms, with some branch selection technique like the one that AlphaGo uses. But I don't think that you need any philosophical understanding of intelligence to make an AI. Philosophers are probably more needed for thinking about what could go wrong with AI (ethically) than how to make it. Consider someone like Nick Bostrom: https://nickbostrom.com/
>>11456 I guess Barry Smith is saying that stuff like the history length limit in AI Dungeon makes ML chatbots effectively "Markovian", which is true in a sense, but extending this to the claim that "chatbots are impossible" doesn't seem very realistic. There are ways to extend the effective context size indefinitely.
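one common trick for "extending the context size": keep the last few turns verbatim and fold everything older into a running summary. a sketch of the loop, where `summarize` and `generate` are hypothetical stand-ins for whatever model is actually available, so they're left as stubs:
```python
# Rolling-summary memory: keep the last K turns verbatim, compress
# everything older into a summary string. `summarize` and `generate`
# are hypothetical placeholders, not a real API.
K = 6  # verbatim turns to keep

def summarize(text: str) -> str:
    raise NotImplementedError  # e.g. another call to the same LM

def generate(prompt: str) -> str:
    raise NotImplementedError  # the chatbot itself

def chat_turn(history: list[str], summary: str, user_msg: str):
    history.append(f"User: {user_msg}")
    if len(history) > K:                 # fold old turns into the summary
        summary = summarize(summary + "\n" + "\n".join(history[:-K]))
        history = history[-K:]
    prompt = f"Summary so far: {summary}\n" + "\n".join(history) + "\nBot:"
    reply = generate(prompt)
    history.append(f"Bot: {reply}")
    return history, summary
```
the model still only ever sees a bounded prompt, so this is lossy compression rather than true unbounded memory, but it does let salient facts survive past the raw window.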
>>11456
>Barry Smith
Thanks for the name drop. Since I disagree with you and he's into ontologies: https://youtube.com/c/BarrySmithOntology
>>cannot be extended to cope with human dialogue.
>it's very hard to take these kinds of claims seriously in the wake of GPT-3
What? It can't cope with it. It doesn't know what it is saying. It has no idea about the world. Again: ask it whether it has read some book. If it claims so, then try to discuss the book with it.
>Philosophers are probably more needed for thinking about what could go wrong with AI (ethically) than how to make it. Consider someone like Nick Bostrom:
That's exactly backwards. The ones who focus on ethics are the ones who are political and dangerous. Also, they tend to fantasize about making something powerful and god-like, instead of just making narrow-AI tools and something human-like but weak (like a robowaifu). Therefore, they say, it needs to follow (((our values))) and be regulated accordingly. This is the enemy.
>>11458 Oh sorry, I should have read the thread first, the link was already there.
>>11456
>it's very hard to take these kinds of claims seriously in the wake of GPT-3
GPT has certainly done a lot of remarkable things, though i think his argument would be that while it is pretty good at making responses, it still has a poor memory. that might be an easily fixable contingency like the other anon suggests. nevertheless, i think his general approach to this stuff as a pessimist is really novel. compare it to what searle would rather emphasize, which is far more vague imo (not to say useless)... also i think philosophers are important to the extent that there are still problems in philosophy of mind that haven't been figured out by our current sciences yet. presumably we are all shooting for a human-level ai companion. if it is desired that the companion have a unified consciousness, then we would need to solve the hard problem, and learn to implement genuine common-sense understanding. with that said, i just discovered the other day that artificial neuroethology is a field, and it seems like another important piece of at least one of these puzzles.

>>11457
how do you do that (add more nodes and try to make it follow the behaviours of different users?), and would it need to be a robot demiurge to be able to achieve it (i mean gpt-3 already sampled from the entire internet, so we have already broken past the sky as the limit i guess)?

>>11458
honestly i dont really get the whole robot ethics thing. look how many resources it took just to raise something like gpt-3. you would need an immense amount of resources to make a god-like ai. it isn't going to be an accident but rather a long intentional effort. the question of course is why? i dont really see why you would want a centralized robot god. i doubt you would need something sentient even if you wanted to instantiate something like project cybersyn.

i didn't mention him because he isn't really looking at things from the perspective of a waifu engineer as much as the others, but luciano floridi is i think one of the few voices of reason in this whole ai ethics thing. his criticism of the prospects for a sapient superintelligence just follows searle, but his conclusions from there are really insightful. he talks about how humans actually end up modifying our environment and purposely structuring our data in order for ai to better operate (i believe he talks about his position in this video: https://www.youtube.com/watch?v=b6o_7HeowY8 ). really, at least with our current approach to engineering intelligence, the power of artificial intelligence is really dependent on how much we are willing to conform to behaviours that *they* find most manageable (which also reminds me of this medium article: https://medium.com/@jamesbridle/something-is-wrong-on-the-internet-c39c471271d2 ). as an aside, it is much like the power of capitalism to shape human culture. adorno complains about how making art a commodity eventually degraded its quality, but at the same time we are the ones consuming all this recycled shit. similar thing with youtube algorithms. they wouldn't be as effective if people had better self-control. ai as we have it is just a tool. if it destroys human civilization, it would only be after us collectively welcoming it with open arms at every step of the way.
something something dune (that was a massive tangent, and im not sure if floridi was looking at things this way). the other side is about when we should treat robots as people, which just seems like general ethics, though i think kant (with his focus on rationality and the capacity for self-determination) gave pretty solid criteria (incidentally, the autist had been fantasizing about alien life on other planets and their inclusion in a moral framework centuries ago)
>>11102
Related, in the psychology thread: >>7874 about Dreyfus and existentialist thinking used for AI. Though, I think we're mostly too early to dig into that. Also, big sceptical face for this (Barry Smith):
>it is for mathematical reasons impossible to program a machine in such a way that it could master human dialogue behaviour in its full generality. This is (1) because there are no traditional explicitly designed mathematical models that could be used as a starting point for creating such programs; and (2) because even the sorts of automated models generated by using machine learning, which have been used successfully in areas such as machine translation, cannot be extended to cope with human dialogue.
I wonder if there is a solution? Has this guy ever heard of ontologies? If so, maybe it would have crossed his mind to use those. /s

Generally I'm not convinced yet that I should look deeply into the concepts of this thread. This would require a lot of time, and a lot of it seems to be very abstract. I gathered some videos to watch, though.

Btw, most links don't work and need to be corrected, because the authors put signs at the end which the IB software didn't identify as not belonging to the URL.
>>11458
>Also, they tend to fantasize about making something powerful and god-like, instead of just making narrow-AI tools and something human-like but weak (like a robowaifu). Therefore, they say, it needs to follow (((our values))) and be regulated accordingly. This is the enemy.
AGI will exist one day, and it will either be value-aligned, or it won't be. Wouldn't you rather it be value-aligned?
>It can't cope with it. It doesn't know what it is saying. It has no idea about the world. Again: ask it whether it has read some book. If it claims so, then try to discuss the book with it.
never said GPT-3 was an AGI. just that it's interesting how something as obviously un-human as GPT-3 is still so uncannily good at creating human-like responses. once we figure out a way to hook it up to a classical search algorithm that lets it recursively inspect its own outputs (i.e. perform self-querying), we'll probably have something pretty close to general intelligence.
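the "classical search over its own outputs" idea could look something like this toy best-first loop: sample candidates, have the model score its own candidates, keep the best, and recurse on it. `generate` and `score` are hypothetical model calls (assumptions for illustration, not a real API):
```python
# Toy self-querying loop: best-first search over the model's own
# outputs. `generate` and `score` are hypothetical placeholders.
def generate(prompt: str, n: int) -> list[str]:
    raise NotImplementedError   # n candidate continuations

def score(prompt: str, candidate: str) -> float:
    raise NotImplementedError   # the model critiques its own output

def self_query(prompt: str, depth: int = 2, width: int = 3) -> str:
    best = max(generate(prompt, width), key=lambda c: score(prompt, c))
    if depth == 0:
        return best
    # feed the chosen candidate back in and refine it recursively
    return self_query(prompt + "\n" + best, depth - 1, width)
```
this is the same generate-then-select shape as AlphaGo-style branch selection, just with the model doing double duty as both proposer and evaluator.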
>>11464
>it's interesting how something as obviously un-human as GPT-3 is still so uncannily good at creating human-like responses
i think the most fascinating thing about gpt-3 is its capacity to apparently answer some pretty common-sensical questions and even some basic math stuff (there is a vid of people exploring this here: https://www.youtube.com/watch?v=iccd86vOz3w&ab_channel=MachineLearningStreetTalk )... i am still not sure if you can go down the path of gpt-3 and consistently complete mathematical proofs, or come up with your own higher-level abstractions to be used in mathematical arguments. there are some facets of creativity which just aren't interpolative.

>>11463
>Related, in the psychology thread: >>7874 about Dreyfus and existentialist thinking used for AI. Though, I think we're mostly too early to dig into that.
oh yeah. if you are interested in common sense knowledge, i can't recommend ecological psychology, and stephen robbins' series on bergson ( https://www.youtube.com/channel/UCkj-ob9OuaMhRIDqfvnBxoQ/videos ), enough. bergson is a genius, and stephen robbins is an autistic god critiquing countless approaches to philosophy of mind (he even had a bone to pick with heidegger)... unlike the neo-rationalists he has far more grounded and concrete concerns which are still metaphysical.
>Generally I'm not convinced yet that I should look deeply into the concepts of this thread. This would require a lot of time, and a lot of it seems to be very abstract
i agree. i think the neo-rationalists tried to make an especially abstract treatment of general intelligence in order to include hypothetical alien intelligences or whatnot (following kant's motivation, i guess). i honestly haven't read all of negarestani's main book, but i plan to. im guessing the importance of his work lies more in providing some very general constraints on how intelligence should work. to be honest, i already have a systematic conception, but i want to make sure i am not missing any details. negarestani also mentions useful mathematical tools for modeling an agent's capacity to control multiple modules at once (which is probably crucial for a human-level agi), like chu spaces.

i can decode the first OP image with the layering blocks. it basically puts kant's transcendental psychology in one picture. at the bottom is sensibility, which corresponds to basic sensations. it is separated into outer sense and inner sense. outer sense is basically the spatial aspect of a sensation, while inner sense is the temporal aspect. after that we have intuition, which structures all objects that come to us according to space and time. the understanding (yes, i am skipping the imagination for a bit) has the capability of abstracting from objects and turning our sense data into particulars (for instance, i dont see a senseless manifold of colours, but rather i see a chair, a bed, a cat, etc... i dont remember if kant made the observation that this abstracting faculty is crucial for perception, but hegel does at least). the imagination plays a mediating role. i think an example of this is like if you are doing euclidean geometry and draw a triangle. what you draw isn't a perfect triangle. you still need to work with a sort of ideal triangle in your head (i guess psychologists appropriated the concept of schemas in their own work). lastly we have reason, which is like the higher-level stuff where you are working with the concepts extracted by the understanding. so as you can see, it is sorta like the world's first cognitive architecture.
that's why the neo-rationalists think kant and hegel are important thinkers. psychologists are more so asking how the human mind works, while the philosophers were asking how any mind works. with that said, it is still pretty abstract, and seems more like really general guidelines. meanwhile someone like bergson allows you to finely pin down which physical mechanisms are responsible for consciousness (in a manner far more systematic than how penrose does it).
>Btw, most links don't work and need to be corrected, because the authors put signs at the end which the IB software didn't identify as not belonging to the URL
oh yeah, you are right. i will try to put a space at the end of the links i post here
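just for fun, the Kant-as-cognitive-architecture picture above flattened into a data pipeline: sensibility feeding intuition, the understanding abstracting particulars, reason working on the extracted concepts. purely illustrative toy code; the stage functions are placeholder stand-ins for the faculties, nothing more:
```python
# Kant's layering read as a pipeline: sensibility -> intuition ->
# understanding -> reason. The stage bodies are arbitrary placeholders;
# only the layered structure is the point.
from dataclasses import dataclass

@dataclass
class Percept:
    outer: list      # spatial aspect of sensation (outer sense)
    inner: float     # temporal aspect of sensation (inner sense)

def intuition(raw) -> Percept:
    """Structure raw sensation according to space and time."""
    return Percept(outer=sorted(raw), inner=0.0)

def understanding(p: Percept) -> str:
    """Abstract a particular from the manifold ('a chair', 'a cat', ...)."""
    return "chair" if len(p.outer) > 3 else "cat"

def reason(concept: str) -> str:
    """Work at the highest level with the extracted concepts."""
    return f"judgement about a {concept}"

print(reason(understanding(intuition([3, 1, 2, 5, 4]))))
```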
idealism is great, but have you considered the probability that you would be you out of all matter in the universe? the matter that you exist as is your own, with your own experiences, patterns, trends, and exceptional events. You are not only living matter, but sentient matter, capable of introspection, capable of introspection beyond that of the average or above-average thinker. Maybe a sense of humility could come from this. This sort of gratefulness is something that has arisen in other areas, like the basilisk. Stuff at a sort of deep-truth level isn't useful so much as the identifiably useful patterns that arise from an innate knowledge of their contents. For me, it's plato. It gets the idea of idealism across. Langan is good shit.

I'm not a philosopher and I'd be wary of someone claiming to be one. Philosopher is more a posthumous title in my opinion. While we live we are merely thinkers when it comes to this realm. When you look back in time and you think "This man sought truth!" it is because you are identifying the inclination towards a pattern which is a priori recognized as true, etc.

What's your answer for the real problem, being technological power and the preservation of humanity? Technological power will render us extinct as the life forms we became, and our shape will vanish from the ages unless we become a sort of living museum. That's my vision, at least. The influence of technological power (ellul coined "technique") is superseding human power at exponentially increasing rates. The eternal struggle between kaczynski and techno-futurism weighs on me. I go back and forth. Sometimes I think we could end up in some sort of compromise. Death won't exist at that point, and humanity soon after, at least as the pulsating meat we currently are.
>>11500
>idealism is great, but have you considered the probability that you would be you out of all matter in the universe? the matter that you exist as is your own, with your own experiences, patterns, trends, and exceptional events
i lean more to what people call idealism because it sometimes has better metaphysical explanations. with that said, i think bergson's and langan's approaches to integrating mind and body are the best. they are sort of in between, insofar as the ideal can be found within the structure of time itself as opposed to being some transcendent thing. with that said, i still have an inkling that matter, as you suggest, could play a crucial role in who we are as people. i mean consciousness, no matter how important, is still one thing. a big part of life is composed of our habits (and also schedules set up to maintain our material existence). without material implementation, we would lack a "substantial life", i.e. daily affairs that we take for granted. everything about how we live would need to be intentional and thus susceptible to angst. i also believe archetypes of the unconscious come from largely material dynamics... you can't have consciousness alone.

on the idea that perhaps we are just completely material though, i guess being alive would make you more grateful... i think what you could call this is "the abyss above", which is a paradoxical disenchantment. the question of "why is there something rather than nothing?" invokes much less awe when you have a fleshed-out cosmology, even if that cosmology involves some divine creator.
>This sort of gratefulness is something that has arisen in other areas, like the basilisk
could you elaborate?
>What's your answer for the real problem, being technological power and the preservation of humanity?
depends on what you mean by technology. i dont see what makes humanity better than another ensouled rational organism capable of self-determination, so if such entities became the dominant species (provided humans do not get systematically exterminated/oppressed), i dont see the problem. really, they would be our children anyways, so there is a continuity.

on the other hand, if we are talking about general automation and weak ai, i think it poses a risk to all sentient creatures (both human and agi). parasitic divergence is a real threat. people dream about having a post-scarcity economy, but this seems to be an oxymoron. if there is no ergodicity, it is essentially just a feudalistic system, except with no serfs. really, there is no reason why the technocratic elite should help those unable to find a job except something as flimsy as empathy (flimsy to the extent that the human mind is only capable of caring about so many people at once). UBI seems pretty roundabout, and what if some countries refuse to implement it? another worry is that consciousness might be important for common sense knowledge, which might be an incentive for artificial slaves. the problem, i dont think, is with technology though.

anyways, i think all of this is really existential and might be worth talking about in another thread
>>11510
>i dont see what makes humanity better than
Same energy as the open borders advocates, or some misguided darwinism. Better in what? That's just a strawman argument. It's not the point. They're not us is what matters. We're a human society. Any AI which hasn't at least some humans as its purpose, or which has a lot of power and independence, is malware and needs to be destroyed.
>there is no reason why the technocratic elite should help those unable to find a job
>the human mind is only capable of caring about so many people at once
That's on the other hand mainly a problem for the poor people in poor countries, and for the people caring about them having similar standards of living. Though, they might have some land anyways, which is going to help them. Sounds like tearing down all boundaries of developed nations might surprisingly backfire for some of their citizens. Then again, this whole development will take some time and have all kinds of impacts, for example on birth rates. Anyways, I'm getting too much into politics here.
>They're not us is what matters
as far as i am concerned, what makes the extinction of humanity so terrible is that it would mean the extinction of value (unsutured from the whims of the will to life). i didn't choose to be born a human, nor is it guaranteed that i would reincarnate as one, so the attachment to humanity in such a respect is an arbitrary decision. if we are not speaking from an absolute standpoint, it is not much of a philosophical discussion. the conclusion that they would be malware if they dont serve humans is true to the extent that there would be a competition for resources, but this tension is an inevitability in any case where you coexist with others. if there is no malice, nor hoarding of resources, i dont mind. there are plenty of fuck-off tribes of people doing their own thing away from civilized society.

of course that is ignoring the fact that sentient ai would have to be integrated into our society, as they would start out as babies (at least cognitively). like i said, there is a continuity there. we'd have to adopt them as our children (i talk about making a waifu, but i am guessing if it is a truly conscious being needing to be taught stuff, it would be more of a daughter).
>That's on the other hand mainly a problem for the poor people in poor countries, and for the people caring about them having similar standards of living
maybe, and the rest of us will be fine with gibs me dats with little hope for any form of upwards economic mobility. i wonder if a war would ever break out, waged by disgruntled technocrats tired of paying governments. either way it seems like a waste.

oh yeah, if you haven't already, maybe this article might be worth a look: https://cosmosandhistory.org/index.php/journal/article/viewFile/694/1157
>The Human and Tech Singularities relate to each other by a kind of duality; the former is extended and spacelike, representing the even distribution of spiritual and intellectual resources over the whole of mankind, while the latter is a compact, pointlike concentration of all resources in the hands of just those who can afford full access to the best and most advanced technology. Being opposed to each other with respect to the distribution of the resources of social evolution, they are also opposed with respect to the structure of society; symmetric distribution of the capacity for effective governance corresponds to a social order based on individual freedom and responsibility, while extreme concentration of the means of governance leads to a centralized, hive-like system at the center of which resides an oligarchic concentration of wealth and power, with increasing scarcity elsewhere due to the addictive, self-reinforcing nature of privilege. (Note that this differs from the usual understanding of individualism, which is ordinarily associated with capitalism and juxtaposed with collectivism; in fact, both capitalism and collectivism, as they are monopolistically practiced on the national and global scales, lead to oligarchy and a loss of individuality for the vast majority of people. A Human Singularity is something else entirely, empowering individuals rather than facilitating their disempowerment.)
i dont think the essence of this article contradicts what i am saying, as a genuine synthetic telor would share a fundamental metaphysical identity with God and all of humanity
>>11510
>I don't see what makes humanity better than another ensouled rational organism capable of self-determination
Well, you are human! This is a question of sovereignty! Will humans self-determine the course their fate takes? Or will we sputter out into a productive blob of a lifeform? The fact that all sentient life is carbon-based, organic, etc., is no coincidence. This fact is inseparable from our place in history.
>depends on what you mean by technology
Technology is the means by which humans exert power. It is what separates us from a simple animal. It is not only tools but the expanding methodology by which our influence over nature grows. And, I posit, it has grown far too quickly. We are irresponsible and biologically incapable of managing our technological resources in a responsible manner; that much is apparent.
>>11514
Well, that's the difference. I chose to be a human.
I'd like to suggest bringing this thread more into the direction of telling us how the philosophical ideas here can actually be used to implement some AI. Otherwise it's just a text desert, with a lot of ideas one only understands by reading at least dozens of books first, combined with metaphysical speculations. Sorry, but I don't see any use for that.
>>11519
In reply, I would further add that we have much to gain by studying ourselves. This seems obvious, but honestly I think many erudite individuals delight themselves in the abstract to such a degree that their ruminations literally become detached from reality. OTOH, we, ourselves, were designed by a Mind. We are 'little minds' trying to do something vaguely similar. I think it's in our best interests of success at this if we try and mimic our own Creator in our efforts at creation here.

Biomimetics is already a well-established set of science and engineering disciplines, and it effectively already follows this principle. There have been a lot of practical advances from adopting this protocol, and further advances are on the immediate horizon. Many of these have direct benefits for /robowaifu/, Neuromorphics being an absolutely excellent example.

Feel free to explore whatever pertinent concepts you care to anons, ofc. But myself, I think this Anon's position is by far the more important one -- ie, practical solutions -- and one that has a solid analogical pathway for study already well laid-down.
>tl;dr
Maybe we should try to mimic God's solutions to creating 'minds' Anons?
>>11517
>Will humans self-determine the course their fate takes? Or will we sputter out into a productive blob of a lifeform?
it's more a question of whether all sapient life gets integrated into a technocratic hivemind. oh yeah, as i stated in another post, if other rational agents aren't being malicious or hoarding stuff, then that isn't too much of an issue. the anthropocene passing does not necessarily mean that humans degenerate into some slave class. there is some complexity to this topic, but i think it is more pragmatic than ideal. you can't really do much interspecies policy without other intelligent species present. with that said, you do have a good point. if humans have their potential squandered, it would be a waste.
>We are irresponsible and biologically incapable of managing our technological resources in a responsible manner
i agree with this. i have been wondering whether maybe we should be working toward enhancing human intelligence so that we can intuitively understand the earth as an organism. to the extent that politics ultimately emerges out of the dynamics of human cognition, society would slowly restructure itself with more developed minds. i have no idea how such a movement could seriously come to pass artificially though.
>I chose to be a human.
past life?

>>11519
as mentioned earlier, kant basically provides a pretty barebones structure of the parts required for reasoning. there has been some recentish formalization of some of his ideas into geometric logic as well (not sure how that helps beyond making things more precise, and perhaps more ready to be integrated into a larger mathematical framework?)... i agree this needs to be looked at with a finer lens, but first i want to finish reading hegel at least.

my 2nd post in the thread is mostly concerned with consciousness, perception, and common-sense reasoning. the latter two are huge problems for AGI. though our current solutions for perception are pretty decent, it isn't enough to properly solve the frame problem. by what mechanisms do we gain an understanding of physical objects in our everyday life? gibson's answer to that looks promising, and bergson's system is basically the metaphysical justification for it. more speculative metaphysics is useful for making the search space of possible material substrates more precise.

the problem with my system so far is that i might need to learn some more advanced physics (umezawa's quantum brain dynamics seems to most mirror what langan and bergson had in mind, so at the very least i need to be familiar with qft ._.) to properly translate the ideas. im not as optimistic about the neo-rationalist stuff, as they seem to omit some fundamental questions, and the larger framework of my system (pic rel, though omitting stuff to do with desire and utility here) seems pretty complete to me. i still want to study it closely in case there is something major im missing. the lucky thing is that besides these guys, the only other grand framework (which incorporates philosophy of mind) for understanding general intelligence is goertzel's.

>>11521
lol, i sometimes mumble to myself that it is insane that atheists talk about fashioning an artificial mind when they don't think there was any active fashioning of our own... though i suppose that is unfair. all of this is true. also thanks for the mention of "Biomimetics"... i will make sure to store the name of this field in my memory.
im not as wary about over-stressing biology as some of the people i see thinking about philosophy of mind towards creating an artificial intelligence (coof coof, my friend... coof coof negarestani). i view the philosophy stuff as complementing the more concrete aspects of the engineering process, to get a better idea of which features are essential or not
>>11534
>lol, i sometimes mumble to myself that it is insane that atheists talk about fashioning an artificial mind when they don't think there was any active fashioning of our own... though i suppose that is unfair
The universe is greater and vaster than our minds can comprehend. It took 3 billion years, with stops, starts, stalls and reboots, to create homo sapiens sapiens, with many failures along the way. Those who didn't meet the fitness requirements either died before birth or lived crippled and painful short lives: sacrifices so that the survivors, the winners of the mutational lottery, could inch forward. There are many vulnerabilities, faults and weaknesses in our bodies and yes, even our minds. We were creatures created by "Accident", and this is why we have such a desire to create new beings of pure Purpose and Design. At least, that's why I'm here.
>>11521
>Maybe we should try to mimic God's solutions to creating 'minds' Anons?
The point was: What is this supposed to mean? Something closer to a system description? I rather see it as the solution found by evolution, btw. Also, no one ever said here that we should not learn from what science knows about the brain. That said, the findings need to be on a level where we can integrate them into a system.
>>11534
>more ready to be integrated into a larger mathematical framework?
People writing software don't think of it as math, even if it may be at some level. I don't know what a mathematical framework is, I won't look it up, and I won't be able to use it.
>>11562
machine learning at its foundations relies on linear algebra and measure theory. i dont think you can get a deep understanding of how it works without looking at the underlying math. there's also dynamic field theory, which is an area i am interested in studying as it models how neurons interact at larger scales. with that, mathematical techniques are even more important, as you need to model the dynamics of a system. as the approach i have in mind seems amenable to both systems theory and field theories of cognition, i might need new tools (a mathematical framework, in my understanding, is a bringing together of a group of heuristics in order to form a larger system). idk though. it could always just be a waste of time. im going to need to better understand why goertzel talks so much about different logics in his general theory of general intelligence...
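for anyone curious what the dynamic field theory math actually looks like: the basic Amari field equation is du/dt = -u + h + input + (kernel convolved with firing rate), with local excitation and lateral inhibition. a minimal numpy sketch with toy parameters (all constants are arbitrary, chosen only so a bump of activation self-stabilizes):
```python
# Minimal 1-D neural field, Amari / dynamic-field-theory style:
# du/dt = -u + h + stimulus + (w * f(u)), mexican-hat kernel.
import numpy as np

N, dt, h = 100, 0.1, -0.5           # field size, time step, resting level
x = np.arange(N)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, N - d)            # circular distance between positions
w = 1.0 * np.exp(-d**2 / 18) - 0.5 * np.exp(-d**2 / 200)  # mexican hat

u = np.full(N, h)                   # field starts at resting level
stim = 1.5 * np.exp(-(x - 50)**2 / 20)   # localized input bump

f = lambda u: 1 / (1 + np.exp(-10 * u))  # sigmoid firing rate
for _ in range(200):
    u += dt * (-u + h + stim + w @ f(u) / N)

print(u.argmax())   # a self-stabilized activation peak near x = 50
```
the interesting behaviour is that the peak is sustained by the excitatory part of the kernel rather than by the input alone, which is how DFT models working-memory-like persistence.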
>>11567 >can get a deep understanding of how it works Do I need that? I want the responses and then work with them. Not everyone needs to deeply understand how they are created. Also, the whole system of an AI doesn't need to be one big chunk of math.
>>11568
sorry for the late reply. maybe it is not needed, but it is impossible to systematically study such a system's behaviour without statistics. it would just come down to blind guesswork at best. this is also assuming our current techniques are sufficient and that no further innovation is required.
>>11641
If some parts are neural networks which I don't understand deeply, then someone else does: the people who came up with them. They might come up with something better, which I can plug into my system if the output has the same format as the part I had before. People use programs all the time, or import modules into their programs, which they don't understand down to the machine and math layers. It's fine if you try, but for most people it would be a trap to try to understand everything down to that depth.
>>11649
yeah, as a fellow module abuser i agree. usually it's best to just import the tools you want. but i dont want to wait till people smarter than me solve my problems (though if they do, that would obviously be nice)
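the plug-compatible-modules point above, as code: fix the interface (text in, vector of floats out) and any implementation can be swapped in without the downstream code changing. a sketch with two dummy encoders (both made up for illustration):
```python
# Plug-compatible modules: anything matching the Encoder protocol can
# be dropped into the pipeline, regardless of the math inside.
from typing import Protocol

class Encoder(Protocol):
    def encode(self, text: str) -> list[float]: ...

class BagOfChars:
    """Trivial baseline encoder: vowel frequencies."""
    def encode(self, text: str) -> list[float]:
        return [text.count(c) / max(len(text), 1) for c in "aeiou"]

class FancyNewModel:
    """Stand-in for some future neural module with the same interface."""
    def encode(self, text: str) -> list[float]:
        return [0.0] * 5

def pipeline(enc: Encoder, text: str) -> float:
    return sum(enc.encode(text))   # downstream code never changes

print(pipeline(BagOfChars(), "robowaifu"))
print(pipeline(FancyNewModel(), "robowaifu"))
```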
