/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Reports of my death have been greatly overestimated.

Still trying to get done with some IRL work, but should be able to update some stuff soon.

#WEALWAYSWIN


The Library of /robowaifu/ Card Catalogue Robowaifu Technician 11/26/2020 (Thu) 07:11:30 No.7143 [Reply] [Last]
Robowaifus are a big topic. They need a big library index! :^)
Note -This is a living document. Please contribute topical thread/post crosslinks!

Thread category quick-jumps
>>7150 AI / VIRTUAL_SIM / UX_ETC
>>7152 HARDWARE / MISC_ENGINEERING
>>7154 DESIGN-FOCUSED
>>7156 SOFTWARE_DEVELOPMENT / ETC
>>7159 BIO / CYBORG
>>7162 EDUCATION
>>7164 PERSONAL PROJECTS
>>7167 SOCIETY / PHILOSOPHY / ETC
>>7169 BUSINESS(-ISH)
>>7172 BOARD-ORIENTED
>>7174 MISCELLANEOUS


Edited last time by Chobitsu on 05/04/2021 (Tue) 12:23:07.
119 posts and 35 images omitted.
waifusearch> bicentennial man

THREAD SUBJECT                      POST LINK
Batteries & Power                   >>791    bicentennial man
Robowaifus in media                 >>8276   "
"                                   >>8280   "
"                                   >>9689   "
What happens to your robowaifu w    >>1536   "
"                                   >>4347   "
"                                   >>4382   "
"                                   >>4383   "

' bicentennial man ' = 8 results

Welcome to /robowaifu/ Anonymous 09/09/2019 (Mon) 00:33:54 No.3 [Reply]
Why Robowaifu? Most of the world's modern women have failed their men and their societies, feminism is rampant, and men around the world have been looking for a solution. History shows there are cultural and political solutions to this problem, but we believe that technology is the best way forward at present – specifically the technology of robotics. We are technologists, dreamers, hobbyists, geeks and robots looking forward to a day when any man can build the ideal companion he desires in his own home. However, we are not content to wait for the future; we are bringing that day forward. We are creating an active hobbyist scene of builders, programmers, artists, designers, and writers using the technology of today, not tomorrow. Join us!

NOTES & FRIENDS
> Notes:
-This is generally a SFW board, given our primarily engineering focus. On-topic NSFW content is OK, but please spoiler it.
-Our bunker is located at: https://anon.cafe/robowaifu/catalog.html Please make note of it.
> Friends:
-/clang/ - currently at https://8kun.top/clang/ - toaster-love NSFW. Metal clanging noises in the night.
-/monster/ - currently at https://smuglo.li/monster/ - bizarre NSFW. Respect the robot.
-/tech/ - currently at >>>/tech/ - installing Gentoo Anon? They'll fix you up.
-/britfeel/ - currently at https://anon.cafe/britfeel/ - some good lads. Go share a pint!
-/server/ - currently at https://anon.cafe/server/ - multi-board board. Eclectic thing of beauty.
-/f/ - currently at https://anon.cafe/f/res/4.html#4 - doing flashtech old-school.
-/kind/ - currently at https://kind.moe/kind/ - be excellent to each other.


Edited last time by Chobitsu on 04/12/2021 (Mon) 21:57:42.

Elfdroid Sophie Dev Thread 2 Robowaifu Enthusiast 03/26/2021 (Fri) 19:51:19 No.9216 [Reply] [Last]
The saga of the Elfdroid-pattern Robowaifu continues! Previous (1st) dev thread starts here >>4787

At the moment I am working to upgrade Sophie's eye mechanism with proper animatronics. I have confirmed that I'm able to build and program the original mechanism so that the eyes and eyelids move, but that was the easy part. Now I have to make this already 'compact' Nilheim Mechatronics design even more compact so that it can fit snugly inside Sophie's head. One big problem I can see coming is building in clearance between her eyeballs, eyelids and eye sockets so that everything can move fully and smoothly. I already had to use Vaseline on the eyeballs of the first attempt because the tolerances were so small. If the eyelids are recessed too deep into her face, then she looks like a lizard-creature with a nictitating membrane. But if the eyelids are pushed too far forward, then she just looks bug-eyed and ridiculous. There is a middle ground which still looks robotic and fairly inhuman, but not too bad (besides, a certain degree of the inhuman is what I'm aiming for, hence I made her an elf).

Links to file repositories below.
http://www.mediafire.com/folder/nz5yjfckzzivz/Robowaifu_Resources
https://drive.google.com/drive/folders/18MDF0BwI9tfg0EE4ELzV_8ogsR3MGyw1?usp=sharing
https://mega.nz/folder/lj5iUYYY#Zz5z6eBy7zKDmIdpZP2zNA


Edited last time by Chobitsu on 04/17/2021 (Sat) 01:17:14.
134 posts and 76 images omitted.
Open file (33.35 KB 600x600 screw.jpg)
>>10347 There's also some confusion when machine screws are mentioned.
>>10348 not him. >pic kek and helpful at the same time. nice one.
>>10323 Look forward to seeing your progress with dear Elfdroid Sophie, Anon.
Open file (299.61 KB 941x726 smug_robo.GIF)
>>10323 >but it's the upper eyelids that do most of the movement or "fluttering". Yes, that's a pretty good bit of observation, Anon. The upper lids are incredibly important to expressing emotions properly. Definitely deserves an inordinate amount of attention, that little detail.

Open file (158.51 KB 770x1000 $_10.jpeg)
I built a WaifuEngine; would people want to support me via Patreon? Em Elle E 05/11/2021 (Tue) 09:04:56 No.10361 [Reply]
Hey guys, I have been working on this project for a while now; beta is supposedly in August. Basically it's a Waifu Engine. I have seen that a lot of the hardware projects have not gone very far, and there are some software projects going on here that lack the Waifu Aesthetic, so I thought I would build a system to solve that issue. I have gotten offline speech synthesis (voice cloning) to work, as well as created my own offline bot framework. I am building this project to:
1. Eventually raise funds for gene therapy for my wife, who has Retinitis Pigmentosa and will sadly go blind otherwise. Who knows, it could help!
2. Leave my job at a place where I am wasting my talent away, doing boring ML Engineering work for a FAANG company.
3. Build an interactable companion for everyone, for these rough covid times.
This is more of a survey; I just wanted to gauge the interest. So would you support me via Patreon? What would it take for that support?


Edited last time by Chobitsu on 05/11/2021 (Tue) 10:11:40.
14 posts omitted.
>>10376 >explaining your desire, replying to one of his posts*
>>10378 Hey, good work Anon. It reminds me of the days when I used to play with OpenGL; the nostalgia factor is there!
>>10382 Thanks! I know it sounds immodest, and that's not intentional, but that bit of work is literally the simplest OpenGL-specific C++ code I've ever seen that goes all the way down to the shader programming and asset import/mesh construction level. I worked hard to simplify things within it, simply b/c that's literally the single best way to deal with complexity (>>9641). The very fact it's so simple is exactly why it runs so fast on such potato-tier hardware.
>>10385 Your approach makes sense: if it works on hardware that is not the greatest, then it will be able to be used everywhere. And yeah, I have seen complexity get pretty bad before. Where do you plan to take your work? What are the next steps?
>>10388 >where do you plan to take your work? what are the next steps?
Well, I'm the OP of that thread and outlined a couple of ideas there. I was chugging along with laying the groundwork of high-performance rendering and environmentals, and got asset import working, but I wanted us to have our own, independent skeleton system (remember, the goal is for it to be an actual simulator, not just an animation system -- think 'physics-linked-to-AI-training system'), and that led me down the bunny trail of having to learn linear algebra basics sufficient to devise my own FK/IK skeletal system. After a month or two, I picked up enough to probably go on with. But as there were other things calling for my time, I shelved it with the intention of picking it back up when the time seemed right.

Speech Synthesis general Robowaifu Technician 09/13/2019 (Fri) 11:25:07 No.199 [Reply] [Last]
We want our robowaifus to speak to us right?

en.wikipedia.org/wiki/Speech_synthesis
https://archive.is/xxMI4

research.spa.aalto.fi/publications/theses/lemmetty_mst/contents.html
https://archive.is/nQ6yt

The Tacotron project:

arxiv.org/abs/1703.10135
google.github.io/tacotron/
https://archive.is/PzKZd

No code is available yet; hopefully they will release it.

github.com/google/tacotron/tree/master/demos
https://archive.is/gfKpg
210 posts and 101 images omitted.
>>9179 Ahh, didn't know about that one, thanks Anon.
Open file (156.72 KB 555x419 overview.webm)
Open file (50.42 KB 445x554 conformer.png)
A novel voice converter that outperforms FastSpeech2 and generates speech faster. Although it doesn't do speech synthesis from text, it introduces a convolution-augmented Transformer (Conformer) that could easily be adapted into FastSpeech2 and FastPitch to improve the quality of synthesized speech. https://kan-bayashi.github.io/NonARSeq2SeqVC/
>>10159 Quality sounds excellent. Thanks Anon.
Hey, I am looking for the dev who did this work: https://gitlab.com/robowaifudev/waifusynth I am working on something similar, except with HiFi-GAN. I am looking for a collaborator on my project; it is explained more here: >>10377 The gist of it is, I am building a desktop wallpaper you can chat with. More on my thread.
>>9121 >>10383 >robowaifudev

ROBOWAIFU U Robowaifu Technician 09/15/2019 (Sun) 05:52:02 No.235 [Reply] [Last]
In this thread post links to books, videos, MOOCs, tutorials, forums, and general learning resources about creating robots (particularly humanoid robots), writing AI or other robotics related software, design or art software, electronics, makerspace training stuff or just about anything that's specifically an educational resource and also useful for anons learning how to build their own robowaifus. >tl;dr ITT we mek /robowaifu/ school.
Edited last time by Chobitsu on 05/11/2020 (Mon) 21:31:04.
78 posts and 48 images omitted.
Open file (341.64 KB 894x631 EM.png)
Expectation maximization is an iterative method to find maximum likelihood estimates of parameters in statistical models, where the model depends on unobserved latent variables.
https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm
How EM is useful solving mixture models: https://www.youtube.com/watch?v=REypj2sy_5U
How it works: https://www.youtube.com/watch?v=iQoXFmbXRJA
Longer lecture on EM algorithms for machine learning: https://www.youtube.com/watch?v=rVfZHWTwXSA
EM was applied to hindsight experience replay (which improves expectations of future states from past failures) to greatly improve the learning efficiency and performance, particularly in high-dimensional spaces: https://arxiv.org/abs/2006.07549
Hindsight Experience Replay: https://arxiv.org/abs/1707.01495
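As a concrete toy example of the E-step/M-step loop, here's a two-component 1-D Gaussian mixture fit in plain numpy (all data and initial values are made up for the demo):

import numpy as np

# Toy data: two overlapping 1-D Gaussian clusters
# (true params: 30/70 split, means -2 and 3, std-devs 1 and 1.5)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 700)])

w = np.array([0.5, 0.5])        # initial mixture weights
mu = np.array([-1.0, 1.0])      # initial means
sigma = np.array([1.0, 1.0])    # initial std-devs

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(100):
    # E-step: responsibility of each component for each point (the latent variable)
    r = w * gauss(x[:, None], mu, sigma)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters to maximize likelihood under those responsibilities
    n = r.sum(axis=0)
    w = n / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)

print(w, mu, sigma)  # converges toward the true parameters above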
I haven't come across a good article or post on pre-training neural networks, but I think it's a really important subject for anyone doing machine learning. Recently, when trying to convert the T5 model into an autoencoder, I made the mistake of forgetting to pre-train it on autoencoding before converting the hidden state into a variational autoencoder. Because of this the decoder was unable to decode anything useful, and it was getting random input from the untrained VAE, making it extraordinarily difficult to train.

After fixing this I also locked the parameters of the T5 encoder and decoder to further improve training efficiency, by training the VAE specifically on producing the same hidden-state output as its hidden-state input, so the decoder doesn't become skewed learning how to undo the VAE's inaccuracy. Once the VAE reaches a reasonable accuracy, I will optimize the whole model in tandem while retaining the VAE's consistency loss.

Pre-training is also really important for reinforcement learning. I can't remember the name of the paper right now, but there was an experiment that had an agent navigate a maze and collect items; finding a reward from a randomly initialized network is nearly impossible, so before throwing the agent into the main task they taught it auxiliary tasks, such as how to smoothly control looking around the screen and how to predict how the scene changes as it moves around. A similar paper to this was MERLIN (Memory, RL, and Inference Network), which was taught how to recognize objects, memorize them and control looking around before being thrown into the maze to search for different objects: https://arxiv.org/abs/1803.10760

For learning to happen efficiently, a network has to learn tasks in a structured and comprehensive way; otherwise it's like trying to learn calculus before knowing how to multiply or add. The problem has to be broken down into smaller, simpler problems that the network can learn to solve individually before tackling a more complicated one. Not only do they have to be broken down, but they need to be structured in a hierarchy, so the final task can be solved with as few skills as possible. The issue of pre-training, transfer learning and how to do it properly will become more important as machine learning tackles more and more complicated tasks. The subject itself could deserve its own thread one day, but for now just being aware of this will make your experiments a lot easier.
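For reference, the 'locking the parameters' step is just a requires_grad flip in PyTorch. A minimal sketch assuming the HuggingFace transformers T5 implementation; the VAE here is collapsed to a placeholder linear bottleneck, so treat that part as illustrative rather than the actual architecture described above:

import torch
from transformers import T5Model

model = T5Model.from_pretrained("t5-small")

# Lock the pretrained encoder/decoder so gradients only flow into the bottleneck
for p in model.parameters():
    p.requires_grad = False

# Placeholder for the real VAE: maps hidden states back to hidden states,
# trained to reproduce its input (the consistency objective described above)
bottleneck = torch.nn.Linear(model.config.d_model, model.config.d_model)
optimizer = torch.optim.Adam(bottleneck.parameters(), lr=1e-4)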
Open file (96.49 KB 356x305 roboticist4.jpg)
Open file (104.97 KB 716x199 roboticist1.jpg)
Open file (359.55 KB 500x601 roboticist9_0.jpg)
>Mark Tilden on “What is the best way to get a robotics education today?” https://robohub.org/mark-tilden-on-what-is-the-best-way-to-get-a-robotics-education-today/
Synthesis of asynchronous circuits >Abstract >The majority of integrated circuits today are synchronous: every part of the chip times its operation with reference to a single global clock. As circuits become larger and faster, it becomes progressively more difficult to coordinate all actions of the chip to the clock. Asynchronous circuits do not suffer from this problem, because they do not require global synchronization; they also offer other benefits, such as modularity, lower power and automatic adaptation to physical conditions. >The main disadvantage of asynchronous circuits is that techniques for their design are less well understood than for synchronous circuits, and there are few tools to help with the design process. This dissertation proposes an approach to the design of asynchronous modules, and a new synthesis tool which combines a number of novel ideas with existing methods for finite state machine synthesis. Connections between modules are assumed to have unbounded finite delays on all wires, but fundamental mode is used inside modules, rather than the pessimistic speed-independent or quasi-delay-insensitive models. Accurate technology-specific verification is performed to check that circuits work correctly. >Circuits are described using a language based upon the Signal Transition Graph, which is a well-known method for specifying asynchronous circuits. Concurrency reduction techniques are used to produce a large number of circuits that conform to a given specification. Circuits are verified using a bi-bounded simulation algorithm, and then performance estimations are obtained by a gate-level simulator utilising a new estimation of waveform slopes. Circuits can be ranked in terms of high speed, low power dissipation or small size, and then the best circuit for a particular task chosen. >Results are presented that show significant improvements over most circuits produced by other synthesis tools. Some circuits are twice as fast and dissipate half the power of equivalent speed-independent circuits. Examples of the specification language are provided which show that it is easier to use than current specification approaches. The price that must be paid for the improved performance is decreased reliability, technology dependence of the circuits produced, and increased runtime compared to other tools.

HOW TO SOLVE IT Robowaifu Technician 07/08/2020 (Wed) 06:50:51 No.4143 [Reply]
How do we eat this elephant, /robowaifu/? This is a yuge task obviously, but OTOH, we all know it's inevitable there will be robowaifus. It's simply a matter of time. For us (and for every other Anon) the only question is will we create them ourselves, or will we have to take what we're handed out by the GlobohomoBotnet(TM)(R)(C)?

In the interest of us achieving the former I'll present this checklist from George Pólya. Hopefully it can help us begin to break down the problem into bite-sized chunks and make forward progress.
>---
First. UNDERSTANDING THE PROBLEM
You have to understand the problem.
>What is the unknown? What are the data? What is the condition? Is it possible to satisfy the condition? Is the condition sufficient to determine the unknown? Or is it insufficient? Or redundant? Or contradictory?
>Draw a figure. Introduce suitable notation.
>Separate the various parts of the condition. Can you write them down?
Second.


Edited last time by Chobitsu on 07/08/2020 (Wed) 07:17:36.
40 posts and 6 images omitted.
>>10331 >zim I already know that program, thanks for the reminder. Now I'll look into it again, since it's still around. (I wasn't using it because I switched computers, my old disc is encrypted, and I don't remember the exact password; that's why I forgot about the program. One thing I want in the future is the script or the OS making a textfile with all programs installed, so one can easily recreate the same OS.)
>>10335 >One thing I want in the future, is the script or the OS making a textfile with all programs intalled. So one can easily recreate the same OS.) My apologies that I can't remember it Anon, but a few years back when I was still on Linux Mint, there was an explicit tool that would run through your program setups and system config, and then record that out to a re-installation script. The explicit intent was to quickly and simply allow an Anon to nuke (or lose) a box, but be able to reinstall everything fresh from scratch with practically no muss or fuss. Again, sorry I don't remember it, but again, it was available in the Linux Mint repos. (Therefore, possibly in the upstream Ubuntu / Debian ones).
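That tool aside, a bare-bones version of the textfile idea can be done by hand on Debian-family systems using dpkg's selection list; a small Python wrapper as a sketch (the filename is arbitrary):

import subprocess

# Dump the installed-package selections to a plain textfile. To restore later:
#   sudo dpkg --set-selections < packages.txt && sudo apt-get dselect-upgrade
with open("packages.txt", "w") as f:
    subprocess.run(["dpkg", "--get-selections"], stdout=f, check=True)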
>>10331 Wow that sounds amazing Anon, thanks.
Open file (68.08 KB 1182x763 wall of information.PNG)
N00b with zero practical experience in AI here, with a bit of an idea. I was going to put this in the AI design thread, but seeing as it's more a structural question than a nitty-gritty AI question, I thought it'd do here.

Say you have a chatbot-style AI developed. It can take in external information as text, and return information back to the user as text. Before the output text reaches the user, it's run through a script that checks for commands and, when it detects one, triggers an action that the robowaifu body carries out. These actions aren't manually completed by the AI; instead they are pre-scripted, or carried out by a dedicated movement AI. Is it possible to train the chatbot AI to consistently understand how to send out commands accurately? How do you incorporate that sort of thing into training data? And, in the other direction, is it possible to take a robowaifu's senses and pipe them into the chatbot's interface as text in the same manner? Pic related is a better way of explaining it.

Is this model feasible, or would an in/out system like this hamper training speed to a no-longer-viable degree? I know that there are obviously more steps in the chain than this (for one, an always-open microphone will confuse the AI into thinking you're always talking to it, so there has to be an "are you talking to me?" filter in the path), but given this rough draft, is such a model possible with the technology that the average anon has (barring the RW@home that other anons have suggested)?
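The 'script that checks for commands' part is cheap to prototype. A rough Python sketch, where the [[COMMAND]] token format and the action names are invented purely for illustration:

import re

# Hypothetical command vocabulary the movement system understands
ACTIONS = {"WAVE", "NOD", "WALK_FORWARD", "STOP"}

def filter_output(bot_text):
    """Split chatbot output into user-facing text and embedded action commands,
    e.g. "Sure! [[WAVE]] Hello there." -> ("Sure! Hello there.", ["WAVE"])."""
    commands = [c for c in re.findall(r"\[\[(\w+)\]\]", bot_text) if c in ACTIONS]
    clean = re.sub(r"\[\[\w+\]\]\s*", "", bot_text).strip()
    return clean, commands

# Senses could go the other way with the same trick: format a sensor event as
# text like "[[SENSE:TOUCH left_hand]]" and prepend it to the chatbot's input.
print(filter_output("Sure! [[WAVE]] Hello there."))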
>>10357 I'm not knowledgeable enough ATP to answer your AI-specific questions, but the
>And, in another way, is it possible to take a robowaifu's senses and pipe them into a chatbot's interface via text in the same manner?
question I can pretty confidently answer with a 'yes', since it really involves little more than sending properly-written signaling data to the display.
>diagram
I really like your diagram Anon. One minor suggestion: I'd say you could combine the two blocks containing 'Typo Correction' into a single 'Typo Correction/Error Checking' block that sits before the 'Text Analyzer' block.
>Is this model feasible, or would an in/out system like this hamper training speed to a no longer viable amount?
Yes, I think that's likely to be a reasonable approximation at this point, lad. It will take many, many more additions (and revisions) to flesh it out fully in the end, but you're certainly on the right track I'd say.
>is such a model possible with the technology that the average anon has
Since a general definition of 'average anon' is pretty much an impossibility, I'd suggest a rough, reasonably adequate target-user definition as being: an Anon who has one or two SBCs and some rechargeable batteries, dedicated specifically to his robowaifu's exclusive use. If it takes anything more than this hardware-wise to work out the AI/chat part of a robowaifu's systems, then that would basically exclude the (much higher numbers of) impoverished/low-income men around the world (>>10315, >>10319). I'd suggest that it be a fundamental goal here on /robowaifu/ that the AI/chat system be targeted specifically at the Raspberry Pi SBC. Not only would that be a good end-product goal to target, but it also has advantages for us as designers and developers as well. (>>4969)



AI Design principles and philosophy Robowaifu Technician 09/09/2019 (Mon) 06:44:15 No.27 [Reply]
My understanding of AI is somewhat limited, but personally I find the software end of things far more interesting than the hardware side. To me a robot that cannot realistically react or hold a conversation is little better than a realdoll or a dakimakura.

As such, this is a thread for understanding the basics of creating an AI that can communicate and react like a human. Some examples I can think of are:

>ELIZA
ELIZA was one of the first chatbots, and was programmed to respond to specific cues with specific responses. For example, she would respond to "Hello" with "How are you". Although this is one of the most basic and intuitive ways to program a chat AI, it is limited in that every possible cue must have a response pre-programmed in. Besides being time-consuming, this makes the AI inflexible and unadaptive.
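A toy sketch of that rote cue-to-response scheme (cues invented for the example); it makes the inflexibility obvious, since every cue must be typed in by hand:

# Every cue-response pair must be pre-programmed by hand
RULES = {
    "hello": "How are you?",
    "i feel": "Why do you feel that way?",
    "mother": "Tell me more about your family.",
}

def respond(message):
    msg = message.lower()
    for cue, reply in RULES.items():
        if cue in msg:
            return reply
    return "Please, go on."  # fallback when no cue matches

print(respond("Hello there"))  # -> "How are you?"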

>Cleverbot
The invention of Cleverbot began with the novel idea to create a chatbot using the responses of human users. Cleverbot is able to learn cues and responses from the people who use it. While this makes Cleverbot a bit more intelligent than ELIZA, Cleverbot still has very stilted responses and is not able to hold a sensible conversation.
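The learn-from-users idea can be caricatured in a few lines; this is only a sketch of the general approach, not Cleverbot's actual method, and it also hints at why the responses come out so stilted:

import random
from collections import defaultdict

learned = defaultdict(list)  # bot line -> replies humans have given to it

def observe(bot_line, user_reply):
    # The user's reply becomes a candidate response to what the bot said
    learned[bot_line.lower()].append(user_reply)

def respond(message):
    candidates = learned.get(message.lower())
    return random.choice(candidates) if candidates else "Tell me more."

observe("how are you?", "Fine, thanks!")
print(respond("How are you?"))  # -> "Fine, thanks!" (parroting past users)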

>Taybot
Taybot is the best chatbot I have ever seen and shows a remarkable degree of intelligence, being able to both learn from her users and respond in a meaningful manner. Taybot may even be able to understand the underlying principles of language and sentence construction, rather than simply responding to phrases in a rote fashion. Unfortunately, I am not sure how exactly Taybot was programmed or what principles she uses, and it was surely very time-intensive.

Which of these AI formats is most appealing? Which is most realistic for us to develop? Are there any other types you can think of? Please share these and any other AI discussion in this thread!
35 posts and 10 images omitted.
>>10306 Ahh, I see. Thanks. I posted it, but only understood the basic claims that it's somewhat better than a transformer. 1000+ GPU-days isn't useful for us right now, though the coming GPUs seem to be 2.5 times faster, and what they're using now will be available to us in some time. Up to three high-end GPUs seem to be doable for one PC, based on what I've read in the hardware guide I posted somewhere here (in the Meta thread, I guess).
>The machine learning community in the past decade has greatly advanced methods for recognizing perceptual patterns (e.g., image recognition, object detection), thanks to advancements in neural network research.
>However, one defining property of advanced intelligence – reasoning – requires a much deeper understanding of the data beyond the perceptual level; it requires extraction of higher-level symbolic patterns or rules. Unfortunately, deep neural networks have not yet demonstrated the ability to succeed in reasoning.
>In this workshop, we focus on a particular kind of reasoning ability, namely, mathematical reasoning. Advanced mathematical reasoning is unique in human intelligence, and it is also a fundamental building block for many intellectual pursuits and scientific developments. We believe that addressing this problem has the potential to shed light on a path towards general reasoning mechanisms, and hence general artificial intelligence. Therefore, we would like to bring together a group of experts from various backgrounds to discuss the role of mathematical reasoning ability towards the path of demonstrating general artificial intelligence. In addition, we hope to identify missing elements and major bottlenecks towards demonstrating mathematical reasoning ability in AI systems.
>To fully address these questions, we believe that it is crucial to hear from experts in various fields: machine learning/AI leaders who assess the possibility of the approach; cognitive scientists who study human reasoning for mathematical problems; formal reasoning specialists who work on automated theorem proving; mathematicians who work on informal math theorem proving. We hope that the outcome of the workshop will lead us in meaningful directions towards a generic approach to mathematical reasoning, and shed light on general reasoning mechanisms for artificial intelligence.
https://mathai-iclr.github.io/papers/
>>10350 This here in particular seems to excite people: >20. Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets
>>10350
>Therefore, we would like to bring together a group of experts from various backgrounds to discuss the role of mathematical reasoning ability towards the path of demonstrating general artificial intelligence.
This no doubt will be a major breakthrough 'towards the path', but I have the sense from history, from my own experience observing these type of groups' behavior in current year, and from the general agenda of the corporate-controlled media, that all the focus in any announcement of success with this will likely be promoting very heavily the one following word:
>demonstrating
The spin and hyperbole machines will all be in overdrive proclaiming "SCIENTISTS (but not the engineers who actually built the thing :^) ACHIEVE MAJOR BREAKTHROUGH: Better-than-human intelligence created in the lab". Even if they manage to break down a few general principles and manage a specified mathematical reasoning ability as a result -- it does no such thing as show 'better than human intelligence'. I realize this is just a presupposition (though a quite likely one IMO), and therefore a strawman. But there are already lots of things in the real world that can out-perform humans; cardinal birds & commercial jets for instance. But there is far, far more to being a human being than simply figuring out that 2 + 2 = 4, or even F = ma. In line with the general materialist world-view of most of these spin-doctors, I'm confident enough they almost all will proclaim (ironically enough, in this case) that "None of that other stuff means 'being a human'. It's just Darwin." Mark my words.
Thanks Anon. I hope they succeed at this and keep the results actually open-source in deed (not just in word, as with the OpenAI team). It will be a nice advancement of our goals if they do.
Open file (662.29 KB 1199x2048 ML bingo.jpeg)
>>10353
<Scientists achieve major breakthrough
>but it can only be verified with $1,000,000 of compute
>but it can't be verified because they refuse to release their source code/model because it's too dangerous
>but we won't reproduce it because its carbon footprint is too big
>but it's entrenching bias in AI
If it became standard to release source code and models, 99.9% of papers in ML would never survive, because people could easily test them on something else and show that they don't work as claimed. ML in academia has become a game of smoke and mirrors and an ecosystem of papers built on unverified claims, and the peer review process is akin to pin-the-tail-on-the-donkey due to the large volume of garbage papers. Most of the progress being made is in the research labs of corporations actually trying to get results because it affects their bottom line, and even then a lot of the hiring they do is just so their competition can't have that talent. Most of the research being done is just to pass the time until the company actually needs something solved.
>>10351 Pretty sure this has already been known from using regularization to prune neural networks, particularly lasso regularization and network pruning more so than weight decay. The fewer parameters a network needs to solve a particular amount of training data, the more parameters it has free to learn more training data, and the better it generalizes. Usually there's a hill to climb and descend in validation loss before reaching peak performance, which they mention but misrepresent by cherry-picking papers. Beyond toy problems like this it never reaches >99%. And it certainly doesn't need to be said that more data works better. Other red flags are no significant ablation studies, no test set dissimilar from the validation and training sets to show that it actually generalizes, and oversensitivity to hyperparameters (aka if you don't use this exact learning rate on this exact training data, it doesn't work).
Be very cautious of the ML hype train. They're like people who change their waifus from season to season, tossed to and fro with no direction. The only exception is if there's code going viral that people are playing around with and getting interesting results on other problems.

Open file (14.96 KB 280x280 wfu.jpg)
Beginners guide to AI, ML, DL. Beginner Anon 11/10/2020 (Tue) 07:12:47 No.6560 [Reply]
I already know we have a thread dedicated to books, videos, tutorials etc. But there are a lot of resources there, and as a beginner it is pretty confusing to find the correct route to learn ML/DL well enough to be able to contribute to the robowaifu project. That is why I thought we would need a thread like this.

Assuming that I only have basic programming in Python, dedication, and love for robowaifus, but no maths, no statistics, no physics, no college education, how can I get advanced enough to create AI waifus? I need a complete pathway directing me to my aim. I've seen that some of you guys recommended books about reinforcement learning and some general books, but can I really learn enough by just reading them? AI is a huge field, so it's pretty easy to get lost.

What I did so far was to buy a great non-English book about AI: philosophical discussions of it, general algorithms, problem-solving techniques, its history, limitations, game theory... But it's not a technical book. Because of that I also bought a few courses on this website called Udemy. They are about either Machine Learning or Deep Learning. I am hoping to learn basic algorithms through those books, but because I don't have the maths it is sometimes hard to understand the concepts. For example, even when learning linear regression, it is easy to use a Python library, but I can't understand how it exactly works because of the Calculus I lack. Because of that issue I have a hard time understanding algorithms.

>>5818 >>6550 Can those anons please help me? Which resources should I use in order to be able to produce robowaifus? If possible, you can even create a list of books/courses I need to follow one by one to be able to achieve that aim of mine. If not, I can send you the resources I got and you can help me put those in order. I also need some guidance about maths, as you can tell. Yesterday, after deciding and promising myself that I will give whatever it takes to build robowaifus, I bought 3 courses about linear algebra, calculus, and stats, but I'm not really good at them. I am waiting for your answers anons, thanks a lot!
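On the linear-regression point: the calculus a library hides is just two derivatives of the squared error, and the whole fitting loop fits in a dozen lines of numpy. A toy sketch with made-up data:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 100)   # true slope 3, intercept 2

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    y_hat = w * x + b
    # d(mean squared error)/dw and /db -- this is all the calculus there is
    dw = 2 * np.mean((y_hat - y) * x)
    db = 2 * np.mean(y_hat - y)
    w, b = w - lr * dw, b - lr * db

print(w, b)  # approaches 3 and 2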
44 posts and 91 images omitted.
Open file (220.57 KB 1199x540 IMG_20210331_191630.jpg)
Open file (52.91 KB 1404x794 IMG_20210331_191334.jpg)
>completely removing the background of a picture (robust PCA)
>PCA's main goal: dimensionality reduction.
>You can take a bunch of features that describe an object and, using PCA, come up with the list of those that matter the most.
>You can then throw away the rest without losing the essence of your object.
https://nitter.dark.fail/svpino/status/1377255703933501445
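As a concrete illustration of the dimensionality-reduction part, a minimal scikit-learn sketch (random data standing in for real features):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(200, 50)           # 200 objects, 50 features each
pca = PCA(n_components=5)             # keep only the 5 highest-variance directions
X_small = pca.fit_transform(X)        # shape (200, 5): the "essence" of each object
print(pca.explained_variance_ratio_)  # how much variance each kept component explains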
>>9375 That's pretty powerful. I imagine glowniggers are using this idea extensively for surveillance isolation. Not only would this work with a 'pre-prepared' empty background plate for extraction, but a separate system could conceivably create (and keep updated under varying lighting conditions, say) an 'empty' plate from a crowded scene simply by continuously sweeping the scene and finding areas that don't change much frame-to-frame. These blank sections can then all be stitched together to create the base plate to use during main extraction process. Make sense? Ofc, a robowaifu can use this exact same technique for good instead to stay alert to important changes in a scene, alerting her master to anything she sees that might be an impending threat, or even take action herself to intervene. Simplification is the key to both understanding, and to efficiency in visual processing and other areas.
Edited last time by Chobitsu on 05/10/2021 (Mon) 01:01:13.
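A crude stand-in for that stitching idea is median stacking (simpler than robust PCA proper): pixels that rarely change dominate the per-pixel median across a sweep, so the median stack approximates the 'empty' base plate, and large deviations from it flag scene changes worth attending to. A numpy sketch:

import numpy as np

def background_plate(frames):
    """frames: list of equally-sized grayscale frames sampled over time."""
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

def changed_mask(frame, plate, threshold=30):
    # True wherever the current frame differs enough from the empty plate
    return np.abs(frame.astype(int) - plate.astype(int)) > threshold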
Related: GPT-2 for beginners >>9371
Illustrated guide to transformers, a step by step introduction: https://youtu.be/4Bdc55j80l8
Edited last time by Chobitsu on 05/10/2021 (Mon) 00:59:53.

Humanoid Robot Projects Videos Robowaifu Technician 09/18/2019 (Wed) 04:02:08 No.374 [Reply] [Last]
I'd like to have a place to accumulate video links to the various humanoid – particularly gynoid – robotics projects that are out there. Whether they are commercial-scale or small-scale projects, if they involve humanoid robots, post them here. Bonus points if it's the work of a lone genius.

I'll start: Ricky Ma of Hong Kong created a stir by creating a gynoid that resembled Scarlett Johansson. It's an ongoing effort he calls an art project. I think it's pretty impressive, even if it can't walk yet.

https://www.invidio.us/watch?v=ZoSfq-jHSWw
55 posts and 12 images omitted.
>>5136 Self-made valves for artificial (hydraulic) muscles, by Automaton Robotics: https://youtu.be/USFIZjUE4Us Full playlist: https://youtube.com/playlist?list=PLwBMfPfJtwyYZIH_rtEjXBHnixbhgD6Tw
For those who really only want to try out something cheap, even without a 3D printer (foamboard), and therefore soon, here's some inspiration (from India or so, I guess):
https://youtu.be/r58zgVHeq5s
https://youtu.be/sqbxRnn8heU
https://drive.google.com/file/d/1Sa3j7YisYc7zQRS9hs_QfiiJxAgJfoZP/view
>>10315 >those manic drilling-sequences kek. Impoverished men deserve to have robowaifus too. To the extent possible, IMO we here on /robowaifu/ should strive to find ways to economize on the resources needed to construct our robowaifus, and then share everything we know freely ofc. It's one of the reasons I've been working towards designs with very inexpensive construction materials, such as drinking straws as structural members. The point is for literally thousands of men around the entire world to make literally millions of robowaifus of every form imaginable. While countries that don't have an abundance of affluent White female liberals won't have anywhere near the pozz and feminism that we here in the West do, these men are also burdened by it too, or soon will be, as the Globohomo tightens its death-grip on everyone. Men in these countries should be encouraged by us to learn the skills they need to do what they can with what they have. Thanks for finding & posting this here Anon, appreciated.
>>10179 Entertaining to watch, and that hand is literally killer.
I'm trying to make Armitage real. Anyone who wants to assist me is welcome.
