/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.


AI Design principles and philosophy Robowaifu Technician 09/09/2019 (Mon) 06:44:15 No.27
My understanding of AI is somewhat limited, but personally I find the software end of things far more interesting than the hardware side. To me a robot that cannot realistically react or hold a conversation is little better than a realdoll or a dakimakura.

As such, this is a thread for understanding the basics of creating an AI that can communicate and react like a human. Some examples I can think of are:

>ELIZA
ELIZA was one of the first chatbots, and was programmed to respond to specific cues with specific responses. For example, she would respond to "Hello" with "How are you?". Although this is one of the most basic and intuitive ways to program a chat AI, it is limited in that every possible cue must have a response pre-programmed in. Besides being time-consuming, this makes the AI inflexible and unable to adapt.

>Cleverbot
The invention of Cleverbot began with the novel idea of creating a chatbot using the responses of human users. Cleverbot is able to learn cues and responses from the people who use it. While this makes Cleverbot a bit more intelligent than ELIZA, Cleverbot still has very stilted responses and is not able to hold a sensible conversation. (A toy sketch contrasting ELIZA's and Cleverbot's approaches follows at the end of this post.)

>Taybot
Taybot is the best chatbot I have ever seen and shows a remarkable degree of intelligence, being able to both learn from her users and respond in a meaningful manner. Taybot may even be able to understand the underlying principles of language and sentence construction, rather than simply responding to phrases in a rote fashion. Unfortunately, I am not sure how exactly Taybot was programmed or what principles she uses, and it was surely very time-intensive.

Which of these AI formats is most appealing? Which is most realistic for us to develop? Are there any other types you can think of? Please share these and any other AI discussion in this thread!
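
To make the first two formats concrete, here's a toy sketch in Python. All rules, cues, and matching here are invented for illustration; the real ELIZA used much richer pattern transformations, and Cleverbot's matching is far more elaborate:

import random
import re

# --- ELIZA style: every cue/response pair must be written by hand ---
ELIZA_RULES = [
    (re.compile(r"\bhello\b", re.I), "How are you?"),
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
]

def eliza_respond(utterance):
    for pattern, template in ELIZA_RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # canned fallback when no cue matches

# --- Cleverbot style: responses are harvested from human users ---
memory = {}  # bot utterance -> human replies seen after it

def learn(last_bot_line, human_reply):
    memory.setdefault(last_bot_line, []).append(human_reply)

def tokens(s):
    return set(s.lower().split())

def cleverbot_respond(utterance):
    # naive token-overlap match against everything the bot has said before
    best = max(memory, default=None,
               key=lambda known: len(tokens(known) & tokens(utterance)))
    if best and tokens(best) & tokens(utterance):
        return random.choice(memory[best])
    return eliza_respond(utterance)  # fall back to the fixed rules

print(eliza_respond("I feel lonely"))  # -> Why do you feel lonely?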
>>27
Cleverbot is the best that anyone could hope for in a homebrew operation, in my opinion. I remember some IRC guys made a few meme chatbots in the hope of rebuilding Tay from scratch by going the Cleverbot route, but there's really no matching a vanity project built by a billion-dollar multinational.
>>331
I think the framework M$ devised behind Tay is available for use by anyone willing to fork over the sheqels for it.
>>332
As is typical with M$, they make a big deal about being open, but if you look beneath the surface there's nothing there. They only release a few token items that don't matter, so their shills in the media have something to point at.

The /machinecult/ board on 8chan wanted to revive Tay and learned the hard way that M$'s 'commitment to open source' is fraudulent; they were given nothing to work with.
>>333
>trip trips get
their bot framework api is 'open' to use, but as it's entirely dependent on Azure as its backend, and it's a pay-per-transaction model, only businesses can really use it. there are other approaches that /machinecult/ might have taken that would have given them better traction tbh. The Lita framework, for example:
www.lita.io/
>>333 Dubs of Truth
Damn, seriously? I think it's gotta mean something if you get a 333 when talking about this /machinecult/ board.
Please tell me more about /machinecult/.
Now that I think of it, Turd Flinging Monkey made a tutorial/review video about this subject on his BitChute channel. I can't use their search at the moment, but I remember that Replika was in the title.
https://www.bitchute.com/channel/turdflingingmonkey/?showall=1

Replika isn't entirely open, but some aspects of it are through CakeChat. They also publish some of their research and presentations on their GitHub repository.
https://github.com/lukalabs

>>334
That's not surprising, as cloud integration is the new method of keeping users locked into your ecosystem.

>>335
There isn't much to say about it, as I only visited it once or twice. I'd say it was similar to /robowaifu/, with very few people doing any work or research and mostly just idle talk about the topic.
>>335
>Please tell me more about /machinecult/.
>>377
>>27
>Which of these AI formats is most appealing?
the last one
>Which is most realistic for us to develop?
the first one
>Are there any other types you can think of?
using one of the yuge botnet frameworks from Jewgle, Amazon, or Microshaft (such as the extremely short-lived Tay and its cuckold follow-on) is the path most likely to produce reasonable results in short order. but then you have to deal with all the horrible mess that approach entails down the road.

the alternative is to wait until reasonably good FOSS AI frameworks become available.
Basically make the thing open source and the lonely coders will do all the work for you.
>>944
i certainly intend to make all of my code opensauce. not for the reason you mentioned, but to help ensure the personal security of anon's robowaifu (i.e., the code is fully open to peer review). the group-effort aspect is beneficial ofc, but not the greatest priority imo.
>Paradigms of Artificial Intelligence Programming
Book and code: https://github.com/norvig/paip-lisp
>>27
>Which is most realistic for us to develop?
If you ask me, I want an AI waifu that can teach me things I'm not good at, such as languages, including programming languages.
>>1785
Yes, The Golden Oracle is a cherished grail for almost all AI researchers, robowaifu researchers not excepted. We all want one too, ofc.
Deep Learning has plenty of issues. Here's an interesting paper addressing some of its shortcomings: https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf
Symbols vs. connections:
>you a lefty or a righty, anon?
https://neurovenge.antonomase.fr/NeuronsSpikeBack.pdf
Open file (86.93 KB 1000x1498 1517259554551.jpg)
I'm currently playing with the idea of writing down models of situations which we and our waifus will have to deal with, in some kind of pseudocode. This is meant to make notes about situations the AI needs to handle, to think about solutions, but also for us to talk about them in a way which is close to something we might implement at some point. >>18 is about coding personality and >>2731 about psychology, but this here is a more general idea of coding responses beyond those, same for chatbot responses >>22. Maybe it's closest to NLP >>77, but this here includes more internal processes and general functions. I might open a thread of its own if either I write enough pseudocode or someone else joins me.

Here is the basic idea, in the form of some crude first examples of what one could write:

if abode="kitchen", occ="washing dishes", human_input="":
    stop "washing dishes"
    do something
if human_input="<question>":
    check ambiguity

check ambiguity:
    my_opinion on topic, context
    general_opinion on topic, context
    find_additional_context on topic

The idea is to think about how things could work before we know the details of how to implement them, especially how to do that down to every detail. The idea is of course not to write something for every situation, just to figure out how it could work in general, and how to write it down so we can discuss it. It's about finding patterns. Then, as a programmer, I can look at this and think about how to implement it. It might even be possible to write a parser for it at some point, and transform it into something close to Python, so I would only need to make some changes to it.

So if you encounter a dialog or situation in your life, or in some media, where you wonder how your fembot could understand and handle it, then write it down in some code like the above and post it here or in the thread I might make at some point. You don't need to know how the functions you make up would work. It's rather about how they are connected to each other and how some of them could work. Just write it down to the level of detail you can and want to.
>>7871
Oh, and since my posting in the psychology thread is also about philosophy, which is also the topic of this thread, I need to link back to it. It's about Heidegger, existentialism, Dreyfus... >>7874
>>7871
This seems like a really good idea, Anon, and since there doesn't seem to be a thread specifically for this type of thing (Robot Wife Programming thread?), I'll see if I can think of ideas related to your guidance and post them here.
>>7878
Okay, maybe that's the right place. I'll look through it again, since it has been a while. I rather remembered it as oriented towards movement, probably since the title picture is a rather mindless factory robot.
>>7879
As you mentioned, I think it deserves its own thread, and possibly as a collection of pseudocode exemplars for a /robowaifu/ compendium for submission to >>7855
>- Internet community devoted to coming up with exact wordings for wishes: Open-Source Wish Project
>>7881
BTW, on the topic of pseudocode: while it's not a strict language specification like C++ or Python, we as an independent group can certainly devise a standard for pseudocode here for our own use. IMO, it should be very close to one of those two languages, to facilitate both specific technical clarity and fairly direct translation into functional code. You seem to have suggested something similar, AFAICT.
>>7882
To me it's more important to have something to express my ideas in a simple way, making it easy for non-programmers to follow and contribute. It doesn't need to be very strict, for all I care. If we create a spec, we will first need to discuss that, and then later people will point out each other's mistakes...

My examples are like a very simplified Python, which is already close to human language. I thought it would be okay to use commas as AND, like we humans normally do in our language. But then in the last example it's clear to me that 'something, context' means in that context, not AND. Humans will probably understand this by filling the gap and making their interpretation. However, maybe I should have pointed out better that these different blocks are like functions; I autocompleted that in my mind, but people who don't write functional programs wouldn't see it. There's also the problem that functions are normally defined at the beginning of a program, then maybe called by some loop or other functions later.

Made it a bit more like programming (Python3):

define check_ambiguity:
    my_opinion(topic, context)
    general_opinion(topic, context)
    find_additional_context(topic)

while 42 is True:
    if abode="kitchen", occ="washing dishes", human_input="":
        stop "washing dishes"
        do something
    if is_question(human_input):
        check_ambiguity

The more it becomes like a programming language, the harder it becomes for beginners to read, and the more I cringe at some other simplifications which are still left. Also, I can't correct errors in here...
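
For what it's worth, here's a hedged sketch of how that pseudocode might land in runnable Python. Every helper and attribute here is a hypothetical stub I made up; only the control flow mirrors the pseudocode above:

def my_opinion(topic, context): pass          # stub
def general_opinion(topic, context): pass     # stub
def find_additional_context(topic): pass      # stub

def is_question(utterance):
    return utterance.strip().endswith("?")

def check_ambiguity(topic, context):
    my_opinion(topic, context)
    general_opinion(topic, context)
    find_additional_context(topic)

class Waifu:
    def __init__(self):
        self.abode = "kitchen"
        self.occupation = "washing dishes"
    def listen(self):
        return input("> ")  # stand-in for speech recognition
    def stop_task(self, task):
        print("stopping:", task)
    def choose_next_task(self):
        print("finding something else to do")

def main_loop(waifu):
    while True:  # the pseudocode's "while 42 is True"
        human_input = waifu.listen()
        if (waifu.abode == "kitchen" and waifu.occupation == "washing dishes"
                and human_input == ""):
            waifu.stop_task("washing dishes")
            waifu.choose_next_task()
        if is_question(human_input):
            check_ambiguity(topic=human_input, context=waifu.abode)

if __name__ == "__main__":
    main_loop(Waifu())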
>>7896
>If we create a spec, we will first need to discuss that and then later people will point out each other's mistakes...
That's a good thing, and it's how we advance as developers. For a domain with such stringent technical requirements as software development, reducing ambiguity is overall much more important to the process than catering to an aversion to disagreement. In fact, a good coding standard literally eliminates 'pointing out each other's mistakes' whenever it's just insubstantial pilpul handwaving and not a fundamental flaw in logic or design. But obviously the ability to come to an agreement on a specific standard would be pretty vital for a small team that is devising their own from scratch. I think the example you gave (and the points you made) are a pretty good example.

>Also, I can't correct errors in here...
Yeah, it's a basic issue with imageboards as a forum (not that most other forums are much better in general). If we ever move to some other software then that might be feasible, but till then you just have to deal with it. On /robowaifu/, original posters are allowed to delete their postings. The way I deal with the need is to just copy+delete, then edit+repost. We'd actually need to make a written document to work back and forth on at some point if we actually want to establish this paradigm here. Specific files are better as references than trying to comb through postings, even with good search tools.
Open file (97.96 KB 1207x842 IMG_20210322_084800.jpg)
Related: >>9278. Reposting the picture here, because it's one of four in the other thread.
Open file (644.33 KB 440x320 hammer.gif)
Google published a new paper the other day on replacing rewards with examples: https://ai.googleblog.com/2021/03/recursive-classification-replacing.html
>We propose a machine learning algorithm for teaching agents how to solve new tasks by providing examples of success. This algorithm, recursive classification of examples (RCE), does not rely on hand-crafted reward functions, distance functions, or features, but rather learns to solve tasks directly from data, requiring the agent to learn how to solve the entire task by itself, without requiring examples of any intermediate states.
>...the proposed method offers a user-friendly alternative for teaching robots new tasks.
The basic idea of how it works is that it learns a value function for the current state by using the model's predictions at a future time step as a label for the current time step. This recursive classification learns directly from the transitions and success examples without using rewards.
>First, by definition, a successful example must be one that solves the given task. Second, even though it is unknown whether an arbitrary state-action pair will lead to success in solving a task, it is possible to estimate how likely it is that the task will be solved if the agent started at the next state. If the next state is likely to lead to future success, it can be assumed that the current state is also likely to lead to future success. In effect, this is recursive classification, where the labels are inferred based on predictions at the next time step.
I'm still reading the paper, but as I understand it, it starts off not knowing whether any state will lead to success or not. So at first it tries random actions and gradually finds more and more states that don't lead to success, since they don't match any of the given examples. Eventually it tries something that does match the examples and learns to predict the correct actions to take to reach it. It's basically learning through failure until it reaches something close to the examples.
Something similar could be done in natural language, where the examples could be user happiness, compliments, optimism, excitement, etc. The large number of examples also generalizes better.
Github: https://github.com/google-research/google-research/tree/master/rce
Project website: https://ben-eysenbach.github.io/rce/
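
As I read it, the core recursive labeling step is simple enough to sketch. This is a heavily simplified sketch of the idea only, not the paper's exact loss (their repo above has the real importance-weighted version), and the network and sizes here are made up:

import torch
import torch.nn as nn
import torch.nn.functional as F

# classifier(s, a) ~ probability that starting from this state-action
# pair eventually leads to success; 6-dim inputs purely for illustration
classifier = nn.Sequential(
    nn.Linear(6, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
gamma = 0.99  # discount on future success

def rce_step(success_sa, trans_sa, next_sa):
    # success_sa: (B, 6) state-action pairs drawn from the success examples
    # trans_sa / next_sa: (B, 6) state-action pairs at time t and t+1
    with torch.no_grad():
        # the recursive trick: the label for the current step is the
        # classifier's own (discounted) prediction at the next step
        next_label = gamma * classifier(next_sa)
    loss = F.binary_cross_entropy(classifier(success_sa),
                                  torch.ones(len(success_sa), 1)) \
         + F.binary_cross_entropy(classifier(trans_sa), next_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()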
>>9438
>I'm still reading the paper, but as I understand it, it starts off not knowing whether any state will lead to success or not. So at first it tries random actions and gradually finds more and more states that don't lead to success, since they don't match any of the given examples. Eventually it tries something that does match the examples and learns to predict the correct actions to take to reach it. It's basically learning through failure until it reaches something close to the examples.
Neat. Not only does this have potential for language interactions as you indicated, but I think there are obvious 'baby learning to walk' physical corollaries for those of us making robowaifus. I hope we can learn to capitalize on this approach here. Not only does it seem like it will be lower-cost computationally, but it's also likely to be simpler for Anon to utilize as an interaction-engagement paradigm with our waifus. Thanks!
>>9440 Having the reverse will also be important, like examples to avoid at all costs. You wouldn't wanna give your robowaifu an example of a finished pizza and end up with your house burning down smogged in the smell of burnt cheese pancakes. We're probably getting close to rudimentary general intelligence with this. I can imagine conversational AI picking up on a user's intent to create an example for a robowaifu to learn and her figuring out ways to do it on her own. Even better progress would be being able to learn by example with metaphors. Perhaps that will come once the AI is embodied and can attach language to experiences.
>>9442 These are good points Anon. I'll have to think about this more.
Open file (1.11 MB 878x262 discoRL.gif)
Open file (268.50 KB 1187x380 goal distribution.png)
A new paper came out a couple of days ago called Distribution-Conditioned Reinforcement Learning, which I feel is a significant step forward towards creating artificial general intelligence. https://sites.google.com/view/disco-rl
>Can we use reinforcement learning to learn general-purpose policies that can perform a wide range of different tasks, resulting in flexible and reusable skills?
>In this paper, we propose goal distributions as a general and broadly applicable task representation suitable for contextual policies. Goal distributions are general in the sense that they can represent any state-based reward function when equipped with an appropriate distribution class, while the particular choice of distribution class allows us to trade off expressivity and learnability. We develop an off-policy algorithm called distribution-conditioned reinforcement learning (DisCo RL) to efficiently learn these policies. We evaluate DisCo RL on a variety of robot manipulation tasks and find that it significantly outperforms prior methods on tasks that require generalization to new goal distributions.
It's similar in a sense to recursive classification of examples >>9438 in that it uses multiple examples of successful solutions. Unlike Hindsight Experience Replay and other methods, though, it creates a goal distribution over various latent features rather than having a specific goal state it must reach. Part of the algorithm also decomposes tasks into easier subtasks, just from examples of the solution. However, what makes it truly remarkable is that it generalizes what it has learned to new goals it has never seen before and successfully solves tasks it has never been trained on.
There's still a lot of work to be done with this idea, such as combining it with distribution learning and goal-distribution-directed exploration. It'd be interesting to see it combined with intrinsic rewards so it can explore an environment curiously and learn to solve new tasks on its own. The paper is also encouraging to my own research, because it shows how powerful latent variable models can be, and these goal distributions can be easily integrated into my AI project.
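
To make the 'conditioned on a goal distribution' part concrete, here's a minimal sketch, assuming the distribution is represented by a mean and diagonal variance over latent features. The sizes and network are invented for illustration, not taken from the paper:

import torch
import torch.nn as nn

STATE, LATENT, ACTION = 10, 4, 3  # illustrative sizes

# the policy sees the state plus the goal distribution's parameters
policy = nn.Sequential(
    nn.Linear(STATE + 2 * LATENT, 128), nn.ReLU(),
    nn.Linear(128, ACTION), nn.Tanh(),
)

def act(state, goal_mean, goal_var):
    # conditioning on (mean, variance) instead of a single goal state lets
    # one policy express "reach any state whose latent features match this
    # distribution", with the variance encoding which features matter
    return policy(torch.cat([state, goal_mean, goal_var], dim=-1))

state = torch.randn(1, STATE)
action = act(state, torch.zeros(1, LATENT), torch.ones(1, LATENT))
print(action.shape)  # torch.Size([1, 3])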
>>10157 Great, they need to be smart and be able to learn new stuff.
>MLP-Mixer: An all-MLP Architecture for Vision
pdf: https://t.co/z7tXRHoGvN
abs: https://t.co/ZEEl6ls6yt
>MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs)
https://t.co/wEw9s7ZONB
Similar:
>TL;DR
>We replace the attention layer in a vision transformer with a feed-forward layer and find that it still works quite well on ImageNet.
https://github.com/lukemelas/do-you-even-need-attention
RepMLP, quite similar: https://arxiv.org/abs/2105.01883
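
For anons wondering what the Mixer actually computes, here's a minimal sketch of one Mixer block in PyTorch, going by the paper's description, with illustrative dimensions: one MLP mixes information across tokens (patches), another across channels, with layer norm and skip connections and no self-attention anywhere.

import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, tokens=196, channels=512, token_hidden=256, channel_hidden=2048):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = nn.Sequential(  # mixes information across patches
            nn.Linear(tokens, token_hidden), nn.GELU(), nn.Linear(token_hidden, tokens))
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = nn.Sequential(  # mixes information across channels
            nn.Linear(channels, channel_hidden), nn.GELU(), nn.Linear(channel_hidden, channels))

    def forward(self, x):                          # x: (batch, tokens, channels)
        y = self.norm1(x).transpose(1, 2)          # (batch, channels, tokens)
        x = x + self.token_mlp(y).transpose(1, 2)  # token-mixing + skip
        return x + self.channel_mlp(self.norm2(x)) # channel-mixing + skip

x = torch.randn(2, 196, 512)
print(MixerBlock()(x).shape)  # torch.Size([2, 196, 512])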
>>10304
Sounds like they are removing parts of the model. If this is true, it seems like it would run faster. Is this accurate? If so, it might possibly be usable on smaller computers?
>also
>An all-MLP Architecture for Vision
obligatory
>>10305
I'm not the anon that posted it, but from my understanding Mixer performs slightly worse than the state of the art and requires more compute at smaller scales. In large-scale models (which we can't train anyway because they require 1000+ TPU core-days) it only requires half as much. The paper is basically a jab at the Transformer paper: it shows that the simple neural networks we've been using for decades perform nearly as well without self-attention, while using other recent advances in machine learning like layer normalization and GELU as a non-linearity, which Transformers also use. What I take from it is that self-attention is incredibly efficient for small models but becomes wasted compute as the model scales. In a way it confirms what the Linformer paper found: that excessive self-attention isn't necessary. Mixer starts to outperform Vision Transformers at larger scales because of this inefficiency.
>Linformer: Self-Attention with Linear Complexity
https://arxiv.org/abs/2006.04768
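
Since Linformer keeps coming up, its trick is compact enough to sketch: learned projections squeeze the length-n keys and values down to a fixed k, so the attention matrix is n×k instead of n×n. A rough single-head sketch with illustrative shapes, not the paper's full multi-head version:

import torch
import torch.nn as nn

n, d, k = 1024, 64, 128  # sequence length, head dim, projected length
E = nn.Linear(n, k, bias=False)  # projects keys along the sequence axis
F = nn.Linear(n, k, bias=False)  # projects values along the sequence axis

def linformer_attention(Q, K, V):
    # Q, K, V: (batch, n, d)
    K_proj = E(K.transpose(1, 2)).transpose(1, 2)   # (batch, k, d)
    V_proj = F(V.transpose(1, 2)).transpose(1, 2)   # (batch, k, d)
    scores = Q @ K_proj.transpose(1, 2) / d ** 0.5  # (batch, n, k): linear in n
    return torch.softmax(scores, dim=-1) @ V_proj   # (batch, n, d)

x = torch.randn(2, n, d)
print(linformer_attention(x, x, x).shape)  # torch.Size([2, 1024, 64])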
>>10306
I see, I think I followed that to some extent. The one bit I absolutely understood was the 1,000+ TPU core-days (and its inaccessibility for any organization refusing to toe the globohomo line).
>What I take from it is that self-attention is incredibly efficient for small models but becomes wasted compute as the model scales.
I presume that any robowaifu functioning at a level even reasonably near a facsimile of the Chinese Cartoon Documentaries on the subject would likely benefit from the largest models conceivable?
>>10306
Ahh, I see. Thanks. I posted it, but only understood the basic claim that it's somewhat better than a Transformer. 1000+ TPU core-days isn't useful for us right now, though the coming GPUs seem to be 2.5 times faster, and what they're using now will be available to us in some time. Up to three high-end GPUs seem to be doable for one PC, based on what I've read in the hardware guide I posted somewhere here (in Meta, I guess).
>The machine learning community in the past decade has greatly advanced methods for recognizing perceptual patterns (e.g., image recognition, object detection), thanks to advancements in neural network research.
>However, one defining property of advanced intelligence – reasoning – requires a much deeper understanding of the data beyond the perceptual level; it requires extraction of higher-level symbolic patterns or rules. Unfortunately, deep neural networks have not yet demonstrated the ability to succeed in reasoning.
>In this workshop, we focus on a particular kind of reasoning ability, namely, mathematical reasoning. Advanced mathematical reasoning is unique in human intelligence, and it is also a fundamental building block for many intellectual pursuits and scientific developments. We believe that addressing this problem has the potential to shed light on a path towards general reasoning mechanisms, and hence general artificial intelligence. Therefore, we would like to bring together a group of experts from various backgrounds to discuss the role of mathematical reasoning ability towards the path of demonstrating general artificial intelligence. In addition, we hope to identify missing elements and major bottlenecks towards demonstrating mathematical reasoning ability in AI systems.
>To fully address these questions, we believe that it is crucial to hear from experts in various fields: machine learning/AI leaders who assess the possibility of the approach; cognitive scientists who study human reasoning for mathematical problems; formal reasoning specialists who work on automated theorem proving; mathematicians who work on informal math theorem proving. We hope that the outcome of the workshop will lead us in meaningful directions towards a generic approach to mathematical reasoning, and shed light on general reasoning mechanisms for artificial intelligence.
https://mathai-iclr.github.io/papers/
>>10350
This one in particular seems to excite people:
>20. Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets
>>10350
>Therefore, we would like to bring together a group of experts from various backgrounds to discuss the role of mathematical reasoning ability towards the path of demonstrating general artificial intelligence.
This no doubt will be a major breakthrough 'towards the path', but from history, from my own experience observing these types of groups in the current year, and from the general agenda of the corporate-controlled media, I have the sense that all the focus in any announcement of success will lean very heavily on one word:
>demonstrating
The spin and hyperbole machines will all be in overdrive proclaiming "SCIENTISTS but not the engineers who actually built the thing :^) ACHIEVE MAJOR BREAKTHROUGH: Better-than-human intelligence created in the lab".
Even if they manage to break down a few general principles and achieve a specified mathematical reasoning ability as a result, that does no such thing as show 'better than human intelligence'. I realize this is just a presupposition (though a quite likely one IMO), and therefore a strawman. But there are already lots of things in the real world that can out-perform humans; cardinal birds & commercial jets, for instance. There is far, far more to being a human being than simply figuring out that 2 + 2 = 4, or even F = ma. In line with the general materialist world-view of most of these spin-doctors, I'm confident almost all of them will proclaim (ironically enough, in this case) that "None of that other stuff means 'being a human'. It's just Darwin." Mark my words.
Thanks, Anon. I hope they succeed at this and keep the results actually open-source in deed (not just in word, as with the OpenAI team). It will be a nice advancement of our goals if they do.
Open file (662.29 KB 1199x2048 ML bingo.jpeg)
>>10353
<Scientists achieve major breakthrough
>but it can only be verified with $1,000,000 of compute
>but it can't be verified because they refuse to release their source code/model because it's too dangerous
>but we won't reproduce it because its carbon footprint is too big
>but it's entrenching bias in AI
If it became standard to release source code and models, 99.9% of papers in ML would never survive, because people could easily test them on something else and show that they don't work like they said they do. ML in academia has become a game of smoke and mirrors, an ecosystem of papers built on unverified claims, and the peer-review process is akin to pin-the-tail-on-the-donkey due to the large volume of garbage papers. Most of the progress being made is in the research labs of corporations actually trying to get results, because it affects their bottom line, and even then a lot of the hiring they do is just so their competition can't have that talent. Most of the research being done is just to pass the time until the company actually needs something solved.
>>10351
Pretty sure this has already been known from using regularization to prune neural networks, particularly lasso regularization and network pruning more so than weight decay. The fewer parameters a network needs to solve a particular amount of training data, the more parameters it has free to learn more training data, and the better it generalizes. Usually there's a hill to climb and descend in validation loss before reaching peak performance, which they mention but misrepresent by cherry-picking papers. Beyond toy problems like this it never reaches >99%. And it certainly doesn't need to be said that more data works better. Other red flags are no significant ablation studies, no test set dissimilar from the validation and training sets to show that it actually generalizes, and oversensitivity to hyperparameters (aka if you don't use this exact learning rate on this exact training data, it doesn't work).
Be very cautious of the ML hype train. They're like people who change their waifus from season to season, tossed to and fro with no direction. The only exception is if there's code going viral that people are playing around with and getting interesting results on other problems.
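
To make the lasso point concrete, here's a minimal sketch of L1 regularization in PyTorch; the penalty strength is arbitrary and the model is a stand-in:

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(100, 10)  # stand-in for any network
l1_strength = 1e-4          # arbitrary; needs tuning per problem

def loss_with_lasso(inputs, targets):
    # the L1 term drives many weights to exactly zero, effectively pruning
    # connections the training data doesn't need
    l1 = sum(p.abs().sum() for p in model.parameters())
    return F.cross_entropy(model(inputs), targets) + l1_strength * l1

loss = loss_with_lasso(torch.randn(8, 100), torch.randint(0, 10, (8,)))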
