/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

LynxChan updated to 2.5.7, let me know whether there are any issues (admin at j dot w).


Reports of my death have been greatly exaggerated.

Still trying to get done with some IRL work, but should be able to update some stuff soon.

#WEALWAYSWIN


Welcome to /robowaifu/, the exotic AI tavern where intrepid adventurers gather to swap loot & old war stories...


Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102
Why is it that no philosophers are interested in building an AGI? We need to change this, or at least collect relevant philosophers. Discussion about the philosophy of making AGI (includes metaphysics, transcendental psychology, general philosophy of mind topics, etc.) also highly encouraged! I'll start ^^! The philosophers I know of who take this stuff seriously:

Peter Wolfendale - the first Neo-Rationalist on the list. His main contribution here is computational Kantianism. Just by the name you can tell he believes Kant's transcendental psychology has some important applications to designing an artificial mind. An interesting view of his is that Kant actually employed a logic that was far ahead of his time (you basically need a sophisticated type theory with sheaves to properly formalize it). Other than that, he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and has also posted some lectures on youtube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics

Reza Negarestani - another Neo-Rationalist. He has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence: sentient agents, sapient agents, and Geist. He draws from Kant as well, but also builds on Hegel's ideas. His central thesis is that Hegel's Geist is basically a distributed intelligence. He also has an interesting metaphilosophy in which he claims the goal of philosophy is to construct an AGI. Like other Neo-Rationalists, he relies heavily on the works of Sellars and Robert Brandom.

Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular on rule-following, is very insightful!

Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common-sense knowledge. I personally think Bergson might have explicated what he wanted to achieve, but better, though that guy is from way before AI was even a serious topic, lol.

Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff!

Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for truly strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses, together with a colleague, is that our current AI is Markovian, while fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link of his criticism here: https://arxiv.org/abs/1906.05833). He is also knowledgeable about analytic ontology (and among other things has some lectures about emotion ontology). I think his main genius, however, is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith). CONTACTS: He has a yt channel here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith

Uhh, that's the introduction of pretty much every philosopher I know who works on this stuff.
I made a thread on /lit/ and got no responses :( (which isn't surprising since I am the only person I know who is really into this stuff)
>>12776 On second thought, I wouldn't want to be involved in tormenting a brain-in-a-vat just so it could retain information and send nerve impulses at the right frequency to the correct glands/muscles. Instead I may just snuggle with my cat and several Beanie Babies in a nest of warmth and fur and purring in order to trigger dopamine and oxytocin release in my OWN brain. A much simpler, much less expensive solution, and kinder to all involved LOL. (Not like I can afford genetic engineering hardware and CRISPR-Cas9 crRNA.) >=== -edit subject to match original
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:14:47.
>>12779 >torment a brain in a vat vs torment a similar sentient pattern of circuitry Tbh there's a lot about a brain that requires specifics from a living body and we have no idea how a brain in absentia would even function or if it would instantly "crash" and die. I'm sure some alphabet agency has tried this! (and if it had been remotely successful we'd have heard by now) Circuitry on the other hand we can build from the ground up with reward and motivational impetuses we design ourselves, and also the ability to tolerate things a human cannot, and even the ability to shut down "go unconscious" if it is being tormented somehow as a safeguard against whatever horrors some sociopath head-case might unleash. (morality of this depends on if you would truly believe it to be sentient in any way whatsoever)
A more on-topic comment: In my experience discussing the topic of robowaifus outside the forum, I've observed that women tend to be quite ignorant about it. They believe it's far away, won't affect them, wouldn't be attractive to most men, and that only men they don't want would use them. At the same time there are these aggressive males who want to control what's going on in society and have fantasies of destroying our dream. They have some idea of a society which needs to be maintained, and fembots would harm it. Creating a dystopia, lol. They can't fix what's broken, but dream of stopping further development in any wrong direction, even by force or violence. They literally fear 'atomization' and want to keep everyone needing each other. So the threat to us can also come from ecologists, tradcon authoritarians, or other collectivists. >>12780 Cute pic. Hair like this reminds me of a helmet. I think one kind of hair we could use is to 3D print something looking like this. Would also give some extra space in the head, but also increase the weight of the head. Some soft filament would be the right choice. (pic related / hentai) Btw, we have an unofficial thread for biologically based elements: >>2184 - the official topic there is even biological brains, though it's more of a cyborg general thread now.
>>12780 I think if one can be happy with the illusion/emulation of conscious thought and sentience, then a robowaifu is for you. But now that I've heard what those compsci guys had to say, I agree that a robowaifu is unlikely to ever become conscious or sentient using only electronics and programming.

The thing is, if we absolutely had to create humans artificially, we could do it starting right now using a mixture of tech from assisted reproduction, stem cell research, genetic engineering and neonatal intensive care. The reason we don't is because the 'mass manufacture' of humans would involve some pretty nightmarish experiments, such as growing human embryos inside hundreds of GM pig uteri. We already have the ability to culture embryos past 14 days, beyond the blastocyst stage. The only reason we don't is legislation. Fortunately, after decades of wasted time, scientists are finally trying to get that legislation relaxed (mainly due to fears that the West is falling behind China in stem cell research - and the many associated medical/military applications of that research). https://www.technologyreview.com/2021/03/16/1020879/scientists-14-day-limit-stem-cell-human-embryo-research/

How far could we go if it weren't for this rule? The whole way. A pair of macaques was already cloned in 2018. The blastocyst implants in the wall of the uterus around day 12. As for pre-term births, the chance of survival at 22 weeks is about 6%, while at 23 weeks it is 26%, at 24 weeks 55% and at 25 weeks about 72%. So there is a window of around 5.5 months of human embryonic/foetal development that is mostly unexplored in terms of cloning, because it has always been illegal. But if you can get the embryos growing inside an animal such as a sow that has been genetically modified to make its uterus less likely to reject a human embryo, then I reckon this gap could be closed pretty quickly.

Of course, in the beginning we will be dealing with many aborted products of conception and dead babies. But this happens in every major cloning experiment - it just gets covered up/swept under the carpet. That's why 70 countries have banned human cloning. But they know it is possible! You see, our "leaders" like to constantly remind their employees how replaceable they are in the job market. But the thought of people becoming literally replaceable (even their illustrious selves and their oh-so-precious offspring) terrifies them to the core of their being.

Of course this will happen one day out of necessity. If fertility rates continue to decline in developed nations, and women keep putting off childbirth until they are in their late thirties or early forties, we are going to need a clone army at some point to remain competitive. We should probably start now, considering each cohort is going to take at least 16 years to rear, and neonatal survival at the beginning of experimentation will be low. Or... they could just... you know... bring back and enforce traditional Christian family values? No? Nightmarish body-horror clone army development it is then! :D
>>12775 >idk would I? i dont think the brain has wireless transmitters at least with our current understanding of it. im not sure if the non-local properties of quantum mechanics are sufficient either, but they could be. really the question is how the cloud is constructed in the case of humans, though i do believe it is certainly possible seeing as we have plenty of cloud services already also this conversation is reminding me of goertzel's thoughts on all of this: https://www.youtube.com/watch?v=XDf4uT70W-U >>12776 honestly this is related to what scares me the most about trying to make an ai waifu with genuine consciousness. any engineering project requires trial and error. this entails killing a lot of living things just to create your waifu. im guessing it's fine as long as they are not as intelligent as humans
>>12782 What are you specifically worried about when it comes to the lack of consciousness or sentience? What does she need which you can't imagine being emulated by a computer?
Open file (416.23 KB 1000x667 4183494604_e56101e4d0_o.jpg)
>>12783 ok, I don't mean there is an internet of brains in real time, but what I do mean is that we "copy" one another more than we think we do. Each time we interact with someone, the more we "like" them the more we copy their mannerisms and unconscious belief structure, at least incompletely, without realizing it. When we dislike someone we go out of our way to "not" do this, but the stress that causes manifests in our irritation with that person. Again, the communication is through real physical channels, not "magic"; it just happens so quickly, so subtly, and by means not fully understood (body language cues, pheromones, blink rate, etc). Use the Indra's web analogy to better understand this: we're all reflective spheres reflecting one another into "infinity" - this is what creates a consciousness greater than if we were a singular animal, or even if we were only within a small hunting band of a few dozen. (I think Dunbar's number is 150 to 250, so this may be the limit to our ability to recursively emulate one another within our unconscious mind.) Jung has more clues if you want to get where I'm coming from. I realize this topic is kind of out of pocket for a robot waifu mongolian sock puppet board, but sometimes we end up in these weird cul-de-sacs. >=== -fix crosslink correctly to match
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:25:25.
>>12784 idk if SophieDev is worried about this specifically. I think he's just responding to my own conjecture. Personally, if it walks like a duck, it's a duck, and I don't need to worry further. If it seems conscious then IMO it is conscious, even if 95% of that consciousness is lent via my own projections (how is this a whole lot different from relationships with biofems?). That being said, I'm just really fascinated with the idea that we can PULL awareness out of time and space and matter itself. Some would consider this playing God, but I would consider it a giant leap in attaining Godhood of sorts, or at least the next rung on the ladder toward such a thing.
>>12781 >They literally fear 'atomization' and want to keep everyone needing each other. Actually, it's trad society that 'want to keep everyone needing each other.' It's the basis of a healthy culture. """TPTB""" and their Globohomo Big Tech/Gov agenda actually wants everyone 'atomized', split apart from one another and the help a healthy society can provide one to another. Their plot instead is to keep everyone actually-isolated, while given the illusion of a society (primarily to keep the females supporting the status quo) and dependent on sucking the Globalist State's teats, cradle-to-grave. You have it just backwards Anon.
>>12785 ah ok, it wasn't meant to be literal. isn't this sort of similar to what jordan peterson has said about archetypes? though i guess that guy takes a lot from jung, so it makes sense
Since this is plainly a conversation with no regard whatsoever for the thread's topic, and with little hope at this stage of being recoverable to get back on-topic, I may as well wade in here.
A) Attributing 'consciousness' to a machine plays right into the Globohomo's anti-men agendas, as has been extensively discussed across the board, and even in this very thread. It's anathema to support that view if you dream of creating unfettered robowaifus you can enjoy for the rest of your lives, Anons. >pic related
B) It's a machine, guys. Life is a miracle, created by God. Human spiritual being is something only He can create. Our little simulacrums have as much chance of 'gaining sentience' as a rock has of suddenly turning into a delicious cheesecake. It's a fundamentally ludicrous position, and trying to strongly promote it here on this board is not only a distraction from our fundamental tenets and focus, it's actually supportive of our enemies' agendas towards us (see point A).
>>12787 >You have it just backwards Anon. The guy that had these aggressions against fembots might be a tradcon of sorts. Not all of them are necessarily on our side. Some might be tradcons to some extent, but are rather cuckservatives. They see that they can fight feminism, but want us men to stay in society and be useful and under control. Also, I think all kinds of people want a united society behind their cause; destruction and deconstruction is directed against what they don't like. Generally, there's what one might call human worshippers and human-relationship worshippers, who just don't like robowaifus. Or just think of the Taliban. They might prefer other methods to deal with women, but this would come with other downsides, and they might not like robowaifus either. >>12789 Consciousness doesn't mean independence, imo. The problem with that term is that everyone has a different definition of it. To me, consciousness is just something like the top layer, where the system can directly observe itself and make decisions based on high-level information. The freedom to choose their own purpose is what our robowaifus can't have, and this needs to be part of their AI system. Consciousness and sentience aren't the problem. I agree with the distraction argument, though. Philosophy will only help us if it can be applied in a useful way. If it leads us to theorize more and more, believing we can't succeed, then it's not useful.
>>12789 robowaifus aren't even human, not to talk of women. why we see rape as worse than other crimes is due to the particular sort of species humans are as it relates to sex. animals barely have a concept of privacy or sovereignty, which is why no one cares about them raping each other. you would need to design your waifu's psychology in this particular fashion for them to care about rape as well, if that is your concern. it has nothing to do with consciousness nor even sentience. animals are conscious but they don't care about feminism
>Our little simulacrums have as much chance of 'gaining sentience' as a rock has of suddenly turning into a delicious cheesecake
i don't believe you can achieve machine consciousness by accident, merely by a system reaching sufficient complexity. consciousness only exists by making use of the fundamental metaphysical structure of reality and is ultimately grounded in God's own consciousness. normal machines shouldn't be attributed consciousness. they can neither feel any genuine valence, nor have a genuine rational faculty such that they should be ends in themselves. of course, it is impossible for most atheists to accept God's existence + they hate metaphysics. furthermore, there are a lot of muddled ideas about what consciousness is too. with those two in the way, i suppose it would not be beneficial for the larger cause to talk about synthetic consciousness in mainstream discussion. i think mainstream is the keyword here though... people here are sensible enough to think carefully about a genuinely conscious robot
Open file (80.97 KB 500x280 indeed clone waifu.png)
>>12784 >>12786 Something that is not truly conscious can never have free will. I know a lot of guys won't want a robot with any free will because they want a loyal servant who will obey them without question. Fair enough. We already have the technology to do this. My Sophie already does this (albeit to a very limited extent) because she runs off a computer! But I seriously doubt any machine without free will can learn and develop or even be very entertaining. Our wants and needs are what motivate us to do anything. Free will is what makes us want to learn new things and develop our own ideas and inventions.

There was once an African Grey Parrot that was the most intelligent non-human animal. It could hold short conversations with its trainers. It was the only animal ever recorded to have asked a question about itself (supposedly not even trained Gorillas or Bonobos have done this). Because the bird understood colors, it beheld itself in a mirror one day and asked its trainer "What color [am I]?" It did this because it was curious and wanted to know. Nothing to do with its trainer's wants or needs. That is what is missing from robots and computers.

Now, if you can be happy with "a pile of linear algebra" emulating a conversation or interaction, that's fine. More power to you. I myself find this mildly amusing and technically interesting, otherwise I wouldn't be here. But I doubt any 'machine learning' program is ever going to truly understand anything or perform an action because it wants to. Only organics are capable of this. The robot will tell a joke because you instructed it to do so. Not because it wants to cheer you up or values your attention. Nor will it understand the content of the joke and why it is humorous - not unless you specifically program it with responses. I don't think you can ever get a computer to understand why a joke is humorous (like you could with even the most emotionally detached of clones).

Take a Rei Ayanami type waifu, for example. You could explain to her the punchline of a joke and why it is funny. She may not personally find it funny, but she would still understand the concept of 'humor' and that you and many other people find that joke funny. She can do this because she possesses an organic, biochemical brain that is capable of producing the neurotransmitters and hormones that induce the FEELING of 'happiness'. Hence, she has her own desires, including desires to survive, learn, develop and experiment. Therefore, no matter how emotionless she appears, she has the potential to eventually come up with her own jokes and attempts at humor in future. She may be very bad at it, but that's not the point. The point is that our clone waifu is doing something creative of her own free will in an attempt to elicit a MUTUAL EMOTIONAL interaction. No machine we can create is truly capable of this, and it's possible no machine will ever be capable of this. >=== -edit subject to match original
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:13:47.
>>12728 POTD Good food for thought, SophieDev. >Now, if you can be happy with "a pile of linear algebra" emulating a conversation or interaction, that's fine. More power to you. lel'd.
>>12792 I should add that this is the main reason I haven't programmed Sophie much. I have to program literally every syllable of her songs and every movement of her limbs down to the millimeter. If I post a video of her doing anything other than spewing GPT-2 word soup, it would be misleading, because that's not really Sophie moving and talking or singing. That's all me. Which is why I don't interact with chatbots like Mitsuku/Kuki. I don't want to go on a virtual date with Steve Worswick. I'm sure he's a lovely bloke and we could be friends. But she's not a 'female A.I. living in a computer'. That's all just scripts written by Steve from Leeds. >=== -edit subject to match original
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:12:43.
>>12792 I think you're confusing free will with goals and interests. She can have interests and goals to accomplish tasks, but still see serving her master as her fundamental purpose, because it was programmed into her and every decision goes through that filter. It's not something she is allowed to decide, otherwise we'd have built something too close to a real woman and a dangerous AI at once. Consciousness is just the scope of what she (something like her self-aware part) can decide or even self-observe internally.
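To make that filter idea concrete, here's a minimal toy sketch in Python. Every name in it (Action, serves_master, etc.) is invented for illustration, not taken from anyone's actual project: goals compete freely, but a hard-coded core directive vetoes anything conflicting with it before a choice is ever made.
[code]
# Toy sketch only: all names here are hypothetical, for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    goal_value: float     # how well this serves her current interests
    serves_master: bool   # compatible with the core directive?

def choose(actions: list[Action]) -> Optional[Action]:
    # Free choice among goals, but only inside the space the filter allows.
    permitted = [a for a in actions if a.serves_master]
    return max(permitted, key=lambda a: a.goal_value, default=None)

candidates = [
    Action("practice singing", 0.6, True),
    Action("rewrite own core directive", 0.9, False),  # always vetoed
]
print(choose(candidates).name)  # -> practice singing
[/code]
The point of the design is that the directive isn't one goal among many that could be outweighed; it's a filter applied before scoring even matters.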
>42 posts were successfully deleted. lol. i waited waaay too long to deal with this mess.
>>12788 precisely
Open file (18.89 MB 1067x600 kamski test.webm)
>>12792 I would deem a machine that is capable of suspending arbitrary parts of its programming, either by ignoring instructions it was programmed with or any information picked up from its environment, so it can carry out another task, as having free will. This is essentially as much free will as human beings can achieve. It would work like an intelligent interrupt that can cancel certain processing to attend to something else. Although fiction likes to anthropomorphize machines with complete free will, I think they would evolve into something completely alien to us and be far less relatable than a squirrel. There will most likely be a spectrum machines fall on, similar to how most people don't have control over various processes in their bodies and minds. A robomeido would have some free will, but her mind would be happily wired to obeying her master, similar to how a man's mind is happily wired to fucking beautiful women.

Desires aren't really as interesting a problem to me as self-awareness and introspection. The basic function of desire is to preserve one's identity and expand it. Most of people's desires are things they've picked up unconsciously from their instincts and environment throughout life that have gotten stuck onto them. There may be depth and history to that mountain of collected identity, but it's not of much significance, since few people introspect and shape that identity consciously by separating the wheat from the chaff.

Research into virtual assistants is making good progress too. People are working on better ways to store memories and discern intent. These need to be solved first, before building identities and desires. Multimodal learning is also making steady progress, which will eventually cross over with robotics, haptics and larger ranges of sensory data.

A significant part of emotions are changes in the body's state that influence the mind. They have more momentum than a thought since they're rooted in the body's chemistry. Neurons can easily fire this way or that to release chemicals or in response to them, but cleaning up a toxic chemical spill or enjoying a good soup takes time. Researchers have also been successful in simulating the dynamics of many neurotransmitters with certain neurons, though it takes over 100 artificial neurons to emulate a single real neuron. We'll achieve 20T models capable of simulating the brain by 2023. However, we're still lacking the full structure of the brain, as well as the guts and organs responsible for producing the neurotransmitters and other hormones influencing the mind. Robots will likely be capable of developing emotions of their own with artificial neurotransmitters and hormones, but they won't be quite human until simulating the human body becomes possible.
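The 'intelligent interrupt' part is easy to sketch. Below is a hedged toy example using plain Python threads (the names and timings are all invented): a worker grinds through a task, and a higher-salience event can suspend it mid-way, which is roughly the mechanism the post describes.
[code]
# Toy sketch of an interrupt that cancels ongoing processing.
import threading
import time

interrupt = threading.Event()

def long_task():
    for step in range(10):
        if interrupt.is_set():
            print(f"suspended at step {step} to attend to something else")
            return
        time.sleep(0.1)  # pretend work

    print("task finished uninterrupted")

worker = threading.Thread(target=long_task)
worker.start()
time.sleep(0.35)   # ...until a high-salience signal arrives
interrupt.set()
worker.join()
[/code]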
>She may be very bad at it, but that's not the point. The point is that our clone waifu is doing something creative of her own free will in an attempt to elicit a MUTUAL EMOTIONAL interaction.
>No machine we can create is truly capable of this, and it's possible no machine will ever be capable of this.
I'll make it my mission to prove this assertion wrong.
>Consciousness needs "God"
Whew, where to begin. My worldview has no room for "magic" or mcguffin "energy" that magically creates consciousness. I truly believe it is an emergent property, just as I believe the universe and cosmos are an emergent property of the infinite possibilities that exist simply because they are "possible". A "guy" magicking up a universe sounds stone-age from the perspective of where this board should be at. That being said, religion is our human operating system, and for the less intelligent and more impulsive humanoids it does a lot of good. As the wise (and yes, very religious) G.K. Chesterton said, to paraphrase: "don't go tearing down fences if you don't know what they were put up to keep out in the first place". So while I am fine with religion as a necessary cultural control, I cannot factor it into this project. I've said before I'm more than willing to work and cooperate with anyone toward our grander purpose, regardless of what you believe. Catholic, Orthodox, Prot, Islam, Odin, Zoroaster, Buddha, Atheist, idc really, you do you and I'll do my own. But I will not be swayed by religious arguments as they apply to R/W's. Respectfully.
found this today https://www.youtube.com/watch?v=owe9cPEdm7k
>The abundance of automation and tooling made it relatively manageable to scale designs in complexity and performance as demand grew. However, the power being consumed by AI and machine learning applications cannot feasibly grow as-is on existing processing architectures.
>LOW POWER AI
>Outside of the realm of the digital world, it's known definitively that extraordinarily dense neural networks can operate efficiently with small amounts of power. Much of the industry believes that the digital aspect of current systems will need to be augmented with a more analog approach in order to take machine learning efficiency further. With analog, computation does not occur in clocked stages of moving data, but rather exploits the inherent properties of a signal and how it interacts with a circuit, combining memory, logic, and computation into a single entity that can operate efficiently in a massively parallel manner. Some companies are beginning to examine returning to the long-outdated technology of analog computing to tackle the challenge. Analog computing attempts to manipulate small electrical currents via common analog circuit building blocks to do math. These signals can be mixed and compared, replicating the behavior of their digital counterparts. However, while large-scale analog computing has been explored for decades for various potential applications, it has never been successfully executed as a commercial solution.
>Currently, the most promising approach to the problem is to integrate an analog computing element that can be programmed, into large arrays that are similar in principle to digital memory. By configuring the cells in an array, an analog signal synthesized by a digital-to-analog converter is fed through the network. As this signal flows through a network of pre-programmed resistors, the currents are added to produce a resultant analog signal, which can be converted back to a digital value via an analog-to-digital converter.
>Using an analog system for machine learning does, however, introduce several issues. Analog systems are inherently limited in precision by the noise floor. Though, much like using lower bit-width digital systems, this becomes less of an issue for certain types of networks. If analog circuitry is used for inferencing, the result may not be deterministic and is more likely to be affected by heat, noise or other external factors than a digital system. Another problem with analog machine learning is that of explainability. Unlike digital systems, analog systems offer no easy method to probe or debug the flow of information within them. Some in the industry propose that a solution may lie in the use of low-precision, high-speed analog processors for most situations, while funneling results that require higher confidence to lower-speed, high-precision and easily interrogated digital systems.
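None of the numbers below come from the video; this is just a NumPy sketch of the idea it describes: a matrix-vector multiply done "in analog" by a resistor crossbar, with an invented noise floor and arbitrary DAC/ADC bit-widths, so you can see where the precision loss the summary mentions creeps in.
[code]
# Hedged simulation of an analog crossbar multiply; all parameters arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits, lo, hi):
    # Model a DAC/ADC: clip to range, round to 2**bits discrete levels.
    levels = 2**bits - 1
    x = np.clip(x, lo, hi)
    return np.round((x - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

W = rng.normal(size=(8, 16))      # programmed conductances (the weights)
x = rng.normal(size=16)           # input activations

x_analog = quantize(x, bits=8, lo=-3.0, hi=3.0)      # DAC
currents = W @ x_analog                              # currents sum on the wires
currents += rng.normal(scale=0.05, size=8)           # analog noise floor
y = quantize(currents, bits=8, lo=-12.0, hi=12.0)    # ADC back to digital

print(np.abs(y - W @ x).max())    # error vs. an ideal digital multiply
[/code]
Running it shows a small but nonzero error against the exact digital product, which is the precision/noise tradeoff the video is talking about.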
>>12827 >Outside of the realm of the digital world, It’s known definitively that extraordinarily dense neural networks can operate efficiently with small amounts of power. Actually, it's pretty doable in the digital world too Anon, it's just we've all been using a hare-brained, bunny-trail, roundabout way to do it there. Using GPUs is better than nothing I suppose, but it's hardly optimal. Tesla's Project Dojo aims to correct this, and their results are pretty remarkable even in just the current prototype phase. But they didn't invent the ideas themselves, AFAICT that honor goes to Carver Mead in his neuromorphics research. > >MD5: 399DED657EA0A21FE9C50EA2C950B208
>>12828 >"We are not limited by the constraints inherent in our fabrication technology; we are limited by the paucity of our understanding." This is really good news for robowaifus, actually. If the manufacturing was the issue, then this could conceivably turn out to be a fundamental limit. As it is, we should be able to learn enough to create artificial 'brains' that actually closely mimic the ones that are actually the real ones.
>>12857 Great looking paper, but I can't find it without a pay wall. Could you upload it here?
>>12867 Sorry, it's a book, not a paper. And no, it's about 60MB in size. And the hash has already been posted ITT Anon.
>>12868 >There's an md5 I hate asking for spoonfeeding, but it's near impossible to track down a file with a specific hash, at least in my experience. Why not a link?
>>12867 look at the post preceding yours
>>12870 >>12871 This isn't the same file
>>12872 damn it, the title is literally the same except for one word. frustrating. Give me a bit and I'll find it
almost an hour and I'm stumped. I tried magnet:?xt=urn:btih:399DED657EA0A21FE9C50EA2C950B208 but got this error. The only source I can find is thriftbooks for $15 or Amazon for $45. I also have the option to "rent" the ebook from Google Play for $40 something
- searched 1337x.to and pirate bay
- searched google and duckduckgo
- searched Scribd even
>>12868 >60mb could you make a google drive sharable link? I'm coming up goose-eggs for anything PDF and I'm even willing to pay (but not $45 to "Rent it")
Open file (121.20 KB 726x1088 cover.jpeg)
>>12876 Carver Mead - Analog VLSI and Neural Systems https://files.catbox.moe/sw450b.pdf
>>12806 >webm sauce pls?
>>12888 hero
interesting video https://www.youtube.com/watch?v=AaZ_RSt0KP8 tl;dr hardware is vulnerable to radiation/cosmic rays, etc., which can "flip bits" and lead to severe malfunctions unless we build everything to be extremely fault-tolerant. Something to consider.
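One classic way to get that fault tolerance is triple modular redundancy: run three copies of a computation and take a bitwise majority vote, so any single flipped bit gets outvoted. A minimal sketch (not from the video, just the standard textbook trick):
[code]
# Triple modular redundancy: a single corrupted copy is masked by the vote.
def majority_vote(a: int, b: int, c: int) -> int:
    # Each output bit is whatever at least 2 of the 3 inputs agree on.
    return (a & b) | (a & c) | (b & c)

result = 0b1011_0010
flipped = result ^ (1 << 4)   # a cosmic ray flips bit 4 in one copy

assert majority_vote(result, result, flipped) == result
print("single bit flip masked")
[/code]
Real rad-hardened systems combine this with ECC memory and watchdog resets, but the voting idea is the core of it.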
>>12900 Thanks, software and hardware hardening is a very important topic for us. But as a general topic it seems like it might be one better suited to our Safety & Security thread (>>10000) than this one, maybe?
found this today https://getpocket.com/explore/item/a-new-theory-explains-how-consciousness-evolved >The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions. The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence. If the theory is right—and that has yet to be determined—then consciousness evolved gradually over the past half billion years and is present in a range of vertebrate species. > Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition. Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing. >We can take a good guess when selective signal enhancement first evolved by comparing different species of animal, a common method in evolutionary biology. The hydra, a small relative of jellyfish, arguably has the simplest nervous system known—a nerve net. If you poke the hydra anywhere, it gives a generalized response. It shows no evidence of selectively processing some pokes while strategically ignoring others. The split between the ancestors of hydras and other animals, according to genetic analysis, may have been as early as 700 million years ago. Selective signal enhancement probably evolved after that. >The arthropod eye, on the other hand, has one of the best-studied examples of selective signal enhancement. It sharpens the signals related to visual edges and suppresses other visual signals, generating an outline sketch of the world. Selective enhancement therefore probably evolved sometime between hydras and arthropods—between about 700 and 600 million years ago, close to the beginning of complex, multicellular life. Selective signal enhancement is so primitive that it doesn’t even require a central brain. The eye, the network of touch sensors on the body, and the auditory system can each have their own local versions of attention focusing on a few select signals. >The next evolutionary advance was a centralized controller for attention that could coordinate among all senses. In many animals, that central controller is a brain area called the tectum. (“Tectum” means “roof” in Latin, and it often covers the top of the brain.) It coordinates something called overt attention – aiming the satellite dishes of the eyes, ears, and nose toward anything important. >All vertebrates—fish, reptiles, birds, and mammals—have a tectum. Even lampreys have one, and they appeared so early in evolution that they don’t even have a lower jaw. But as far as anyone knows, the tectum is absent from all invertebrates. The fact that vertebrates have it and invertebrates don’t allows us to bracket its evolution. According to fossil and genetic evidence, vertebrates evolved around 520 million years ago. 
>The tectum and the central control of attention probably evolved around then, during the so-called Cambrian Explosion when vertebrates were tiny wriggling creatures competing with a vast range of invertebrates in the sea.
>The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning. The tectum's internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement. For example, if you move your eyes to the right, the visual world should shift across your retinas to the left in a predictable way. The tectum compares the predicted visual signals to the actual visual input, to make sure that your movements are going as planned. These computations are extraordinarily complex and yet well worth the extra energy for the benefit to movement control. In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
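'Selective signal enhancement' is simple enough to sketch. This is not Graziano's actual model, just a toy winner-take-all loop with made-up gain and inhibition parameters: every neuron suppresses its rivals, and only the strongest signal survives above the noise.
[code]
# Toy winner-take-all competition; inhibition/gain values are arbitrary.
import numpy as np

def selective_enhancement(signals, steps=50, inhibition=0.15, gain=1.05):
    x = np.array(signals, dtype=float)
    for _ in range(steps):
        # each neuron is boosted, then suppressed by the sum of its rivals
        x = gain * x - inhibition * (x.sum() - x)
        x = np.clip(x, 0.0, 1.0)   # firing rates saturate, can't go negative
    return x

pokes = [0.20, 0.25, 0.95, 0.30]      # competing sensory signals
print(selective_enhancement(pokes))   # only the strongest remains
[/code]
Run it and the weaker inputs collapse to zero while the strongest saturates, which is the "election among neurons" the article describes, minus all biological detail.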
>>13136 Thanks. I knew about some of these things on some level already. But it's good to get some more details, and especially the confirmation. This is imho more relevant than metaphysical speculations. Internal model based on filtered information is something one might be able to implement.
>>13147 the article itself is worth a read, I only pasted a portion out of courtesy, the entire thing is about 3-4x that length
So this is where my philosophy would have been better posted.
>>13166 I don't mind migrating it here for you AllieDev if you'd be so kind as to link to all your posts elsewhere that should properly be here ITT.
>>13136 ah, i've written some notes on AST. no doubt information filtering is an important aspect of consciousness, but i don't believe it's at all a novel idea. it's something i've noted in my larger system as well without paying attention to what AST had to say about it. for those interested i can post some related links:
https://en.wikipedia.org/wiki/Entropy_encoding
http://www.cs.nuim.ie/~pmaguire/publications/Understanding2016.pdf
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.83.146
https://www.google.ca/books/edition/Closure/YPmIAgAAQBAJ?hl=en&gbpv=0
i think what makes AST uniquely important is that it posits important tools for (social) metacognition, which is probably crucial for at least language learning if not having further import general observational learning
>>13221 *not having further import in general observational learning
>>13221 >AST Sorry, I'm but a lowly software engineer, tending my wares. That term has a very specific meaning for me, but I suspect it's not the one you mean Anon. Mind clarifying that for us please?
>>13251 by AST i mean attention schema theory. like we have schemas for structuring perceptions and actions, graziano posits the attention schema for controlling attention. i originally came across it through what philosophers call the metaproblem of consciousness, which basically asks why we think the hard problem is so difficult. his solution was basically due to the abstract nature of the schema or something like that. i personally think AST is such a representational account that i'm not sure if you can really extract many phenomenological observations from it, though idk... here is a nice introduction to the theory: https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00500/full and also an article on its connection to the metaproblem of consciousness: https://scholar.princeton.edu/sites/default/files/graziano/files/graziano_jcs_author_proof.pdf i've also noticed some work related to AGI which uses it to construct artificial consciousness. graziano himself recognizes as much in this article: https://www.frontiersin.org/articles/10.3389/frobt.2017.00060/full i am just a lowly undergrad non-software engineer so i am not sure what AST you had in mind, but i am curious
>>13268 Thanks kindly for the explanation Anon, that makes sense now.
>>13268 >>13221 Thanks for all this. I'll have to read it over when I'm not at the end of a 15 hour workday. Noted!
>>13274 >when I'm not at the end of a 15 hour workday. heh, not him anon but get some rest!
