/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

ROBOWAIFU U Robowaifu Technician 09/15/2019 (Sun) 05:52:02 No.235
In this thread post links to books, videos, MOOCs, tutorials, forums, and general learning resources about creating robots (particularly humanoid robots), writing AI or other robotics related software, design or art software, electronics, makerspace training stuff or just about anything that's specifically an educational resource and also useful for anons learning how to build their own robowaifus. >tl;dr ITT we mek /robowaifu/ school.
Edited last time by Chobitsu on 05/11/2020 (Mon) 21:31:04.
>FOUNDATIONS OF COMPUTATIONAL AGENTS
Online textbook

https://artint.info/2e/html/ArtInt2e.html
General Sciences thread in our own /pdfs/.
>>>/pdfs/60
>>1759
>All books related to software, programming, and technology go here.
>>>/pdfs/22
Bumping this since we need more resources.
Open file (32.78 KB 385x499 vol1.jpg)
Open file (35.14 KB 385x499 vol2.jpg)
Open file (35.66 KB 405x500 vol3.jpg)
A kind Anon over on /tech/ let us all know that ACM is making (at least some portions of) their digital library available for free download. >>>/tech/2455 This is a rather surprising turn of events, and I would encourage all you researchers here on /robowaifu/ to get while the getting is good. Alright, here's a quite pertinent on-topic triplet to get this party started:
The Handbook of Multimodal-Multisensor Interfaces: Foundations, User Modeling, and Common Modality Combinations - Volume 1, April 2017 https://dl.acm.org/doi/book/10.1145/3015783
The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition - Volume 2, October 2018 https://dl.acm.org/doi/book/10.1145/3107990
The Handbook of Multimodal-Multisensor Interfaces: Language Processing, Software, Commercialization, and Emerging Directions - Volume 3, July 2019 https://dl.acm.org/doi/book/10.1145/3233795
Open file (27.07 KB 405x500 410T+Pt58jL.jpg)
Conversational UX Design: A Practitioner's Guide to the Natural Conversation Framework, April 2019 https://dl.acm.org/doi/book/10.1145/3304087
Artificial Intelligence: A Modern Approach >This is an introduction to the theory and practice of artificial intelligence. It uses an intelligent agent as the unifying theme throughout and covers areas that are sometimes underemphasized elsewhere. These include reasoning under uncertainty, learning, natural language, vision and robotics. The book also explains in detail some of the more recent ideas in the field, including simulated annealing, memory-bounded search, global ontologies, dynamic belief networks, neural nets, inductive logic programming, computational learning theory, and reinforcement learning.
Reinforcement Learning: An Introduction >Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. This second edition has been significantly expanded and updated, presenting new topics and updating coverage of other topics. >Like the first edition, this second edition focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case for which exact solutions can be found. Many algorithms presented in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning's relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
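To give a sense of how small the tabular algorithms in Part I are, here's a minimal sketch of Expected Sarsa (one of the algorithms new to the second edition) with an epsilon-greedy policy. This is my own toy version, not code from the book; the state and action counts are placeholders for whatever small MDP you plug in.

import numpy as np

n_states, n_actions = 16, 4          # placeholder sizes for a small gridworld
alpha, gamma, eps = 0.1, 0.99, 0.1   # step size, discount, exploration rate
Q = np.zeros((n_states, n_actions))

def policy_probs(s):
    # epsilon-greedy action distribution for state s
    p = np.full(n_actions, eps / n_actions)
    p[np.argmax(Q[s])] += 1.0 - eps
    return p

def expected_sarsa_update(s, a, r, s_next):
    # Expected Sarsa targets the expectation over the policy's next action,
    # rather than the max (Q-learning) or a single sampled action (Sarsa).
    target = r + gamma * np.dot(policy_probs(s_next), Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

Call expected_sarsa_update once per transition as the agent steps through the environment, and Q drifts toward the optimal action values.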
Text Data Management and Analysis - A Practical Introduction to Information Retrieval and Text Mining, June 2016 https://dl.acm.org/doi/book/10.1145/2915031
Open file (177.81 KB 1103x1360 71GpBnPuMCL.jpg)
Open file (260.68 KB 1103x1360 81ftddxsqvL.jpg)
The VR Book - Human-Centered Design for Virtual Reality, October 2015 https://dl.acm.org/doi/book/10.1145/2792790
A Framework for Scientific Discovery through Video Games, July 2014 https://dl.acm.org/doi/book/10.1145/2625848
Frontiers of Multimedia Research, December 2017 https://dl.acm.org/doi/book/10.1145/3122865
MIT 6.034 Artificial Intelligence, Fall 2010 >In these lectures, Prof. Patrick Winston introduces the 6.034 material from a conceptual, big-picture perspective. >Topics include reasoning, search, constraints, learning, representations, architectures, and probabilistic inference. https://www.youtube.com/playlist?list=PLUl4u3cNGP63gFHB6xb-kVBiQHYe_4hSi Really good introduction to artificial intelligence. It starts getting interesting in the 3rd lecture when he shows how to use goal trees to create AI that can explain why it performed an action and how to solve goals.
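To make the goal-tree idea concrete, here's a toy sketch of how a program can answer "why" and "how" questions about its own actions (my own Python example, not code from the course): each goal records its parent, so "why" is answered by walking up the tree and "how" by walking down.

class Goal:
    # Each goal keeps a link to its parent and its subgoals, so the
    # program can explain its behavior by walking the tree.
    def __init__(self, name, parent=None):
        self.name, self.parent, self.subgoals = name, parent, []

    def add(self, name):
        child = Goal(name, parent=self)
        self.subgoals.append(child)
        return child

    def why(self):
        # "Why did you do X?" -> cite the parent goal
        if self.parent is None:
            return f"'{self.name}' was my top-level goal."
        return f"I did '{self.name}' because I wanted to '{self.parent.name}'."

    def how(self):
        # "How did you do X?" -> cite the subgoals
        return f"I did '{self.name}' by: " + ", ".join(g.name for g in self.subgoals)

make_tea = Goal("make tea")
boil = make_tea.add("boil water")
boil.add("fill the kettle")
boil.add("switch the kettle on")
print(boil.why())  # I did 'boil water' because I wanted to 'make tea'.
print(boil.how())  # I did 'boil water' by: fill the kettle, switch the kettle on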
>>2371 > to create AI that can explain why it performed an action and how to solve goals. that sounds like it would be really valuable. thanks anon i'll get a copy.
>>2381 Most of those pages are empty. Jan Peters has done a lot of work though on merging AI with robotics: https://scholar.google.com/citations?hl=en&user=-kIVAcAAAAAJ&view_op=list_works
>>2383 my apologies then. i'll make time to methodically edit it. the couple of examples i did check first were fine.
>>2250 A reinforcement learning course by David Silver, who studied under Richard Sutton and was co-lead researcher on AlphaGo, the project that led to MuZero: https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLqYmG7hTraZBiG_XpjnPrSNw-1XQaM_gB
>>2449 Thanks Anon, grabbing a copy now.
>Deep Residual Learning for Image Recognition >Abstract >Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [41] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. >The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions 1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. https://github.com/FrancescoSaverioZuppichini/ResNet
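The core trick is small enough to show in a few lines. Here's a minimal residual block in PyTorch; this is my own toy sketch, not the code from the linked repo, and the channel count is arbitrary.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Learns a residual F(x) and outputs F(x) + x, so the block only has
    # to model the difference from identity; identity is the easy default.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the skip connection

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])

Because the skip connection passes gradients straight through, stacking dozens of these blocks stays trainable where a plain stack of conv layers would degrade.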
Open file (34.90 KB 1130x635 residual.png)
Open file (105.38 KB 640x360 event-based sensors.png)
Open file (27.37 KB 242x415 weber-fechner law.png)
>"What is Neuromorphic Event-based Computer Vision?," a Presentation from Ryad B. Benosman https://www.youtube.com/watch?v=dR8pff_MyL8 Sparse event-based processing is the future of AI. It requires orders of magnitude less power and processing time than conventional approaches, it's far more robust to a wide range of dynamic environments and provides a larger signal-to-noise ratio for training. Though this talk only covers computer vision and there isn't much research in this field yet, the same concept applies to everything else. OpenAI's GPT2 model for example wastes tons of time doing meaningless calculations when only a fraction of 1% of it contains useful processing for any particular given context. To take a gamedev analogy it's the difference between millions of objects checking for updates in a loop vs. using callbacks to only notify objects when something actually happens. Sparse models can also be combined with reinforcement learning and be rewarded for using less processing time so they not only find a solution but also an efficient one, which is another area needing much more research but most of the spiking neural network researchers aren't a fan of backpropagation. Some work creating differentiable spiking neural networks is being done with Spike Layer Error Reassignment in Time (SLAYER) which I think will become really important once neuromorphic computing takes off and becomes commonplace. Another important detail to keep in mind is that the brain processes events mainly by changes in magnitude. Our eyes are able to work in the dark of midnight and also in bright sunlight. We can discern the sound of soft cloth rubbing together and also roaring jet engines. There's evidence that the brain uses logarithmic coding schemes to do this. Further reading Logarithmic distributions prove that intrinsic learning is Hebbian: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5639933/ Sparse networks from scratch: https://timdettmers.com/2019/07/11/sparse-networks-from-scratch/ Sparse matrices in PyTorch: https://towardsdatascience.com/sparse-matrices-in-pytorch-be8ecaccae6 Pruning networks in PyTorch: https://pytorch.org/tutorials/intermediate/pruning_tutorial.html SLAYER for Pytorch: https://github.com/bamsumit/slayerPytorch
>>2526 > To take a gamedev analogy it's the difference between millions of objects checking for updates in a loop vs. using callbacks to only notify objects when something actually happens. That really makes a lot of sense, actually. Hopefully this can make our robowaifus a) able to do the right kinds of things at all, and b) do them efficiently on little SBC & microcontroller potato-boards. Thanks, grabbing a copy of the video now Anon.
>>2526 >"[with this system] you can process 100K kilohertz in realtime." What did he mean by this? What is 'process'? What kind of hardware will do this processing? Regardless, this is a very promising approach to perf optimization, a subject I think we're all very aware is a vital one to our success.
Open file (106.61 KB 1200x675 SpiNNaker.jpg)
>>2533 He's one of the researchers collaborating with SpiNNaker, a neuromorphic computing platform for spiking neural networks. According to a paper the ATIS can send data to a regular CPU to process or to neuromorphic hardware and it has a pixel update frequency up to 1 MHz. I'm not sure how the software works but spike data is usually binary and just reports the time of events or it can be floating point with an amplitude. https://arxiv.org/pdf/1912.01320.pdf >>2529 What excites me the most for the future is applying machine learning to compiler optimization. Having a program that could compile sparse tensor operations efficiently would blow GPUs out of the water, not because of any computational might but because they can be optimized in ways that dense matrices simply can't. We'll likely be able to run models that are much more powerful than GPT2 off a 50-cent 16-MHz chip of today. And for less than $50 someone will be able to put together a motionless doll that can hold a decent conversation and play a wicked game of Go by calling out moves. I estimate this will happen within the next 2-4 years. Sometimes I feel physically anxious we're only a few years away from people sleeping with plushies that can defeat human world Go champions and hold conversations better than most people, and it's just going to keep accelerating from there. The window of opportunity to learn all this shit and implement it into something is so small and it will shape how everything unfolds.
>>2558 >We'll likely be able to run models that are much more powerful than GPT2 off a 50-cent 16-MHz chip of today. That will be an actual breakthrough advance if it happens. As you indicate things are accelerating. Don't let it make you tense Anon. Just stay focused on the prize, we'll make it.
Archived Stephen Wolfram Science & Technology Q&A Livestream. Not strictly /robowaifu/-related, but the man is a literal genius and knows a ton of shit tbh. https://www.invidio.us/watch?v=pemBieAUqAw
https://4chan-science.fandom.com/wiki//sci/_Wiki There's a /sci/ wiki full of textbook recommendations on every subject, going from beginner to advanced, along with prerequisites for entire topics on some pages. My personal goal is being able to understand >Dayan and Abbott - Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (DO NOT attempt to read this unless you have taken Multivariable Calculus, Differential Equations, Linear Algebra, Electricity and Magnetism, and a Probability course that uses calculus.)
I started reading Information Theory: A Tutorial Introduction by James V. Stone, and so far it seems to be a decent book that explains everything in a simple way, giving you insight into how things work before you dive into the more complex mathematics of other textbooks. It also has a chapter on mutual information and a section on the Kullback-Leibler divergence that's commonly used for variational autoencoders. The book requires some basic understanding of probability, and some sections use differential equations and multivariable calculus, but those parts are presented with diagrams and are meant more to familiarize you with the concepts.
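For reference, the Kullback-Leibler divergence the book builds up to is just an expected log-ratio between two distributions. A minimal sketch in Python (my own toy example, not from the book):

import numpy as np

# KL(p || q) = sum_i p_i * log(p_i / q_i), in nats with the natural log.
def kl_divergence(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                      # terms with p_i = 0 contribute nothing
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_divergence(p, p))  # 0.0: zero only when the distributions match
print(kl_divergence(p, q))  # ~0.51 nats
print(kl_divergence(q, p))  # ~0.37 nats: note KL is asymmetric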
>>2970 Thanks Anon! Would you mind curating a small on-topic selection from the list here as well? I think it would make a nice addition here tbh.
>>2972 >It also has a chapter on mutual information and a section on Kullback-Leibler divergence that's commonly used for variational autoencoders. Excellent. I will be checking it out then. Your descriptions of the efficiencies being achieved via MIM have sparked a keen interest in the topics for me atm.
C++17 Standard Library Quick Reference by Peter Van Weert and Marc Gregoire. Concise, with a solid focus on the modern C++17 standard library. You won't find any C-style C++ here. 308pp. BTW, the link in the book to the publisher is still for the previous version. Here's the correct code files location. https://github.com/Apress/cpp17-standard-library-quick-ref
Edited last time by Chobitsu on 05/11/2020 (Mon) 22:32:16.
>>2973 https://pastebin.com/ENeAEKfZ Okay, I made it. I don't know 99% of this stuff myself, I just picked out what seemed useful and relevant. I hope there's no typos.
>>3004 Sorry, I can't get to cuckbin via tor. Privatebin?
>>3007 thank you very kindly Anon. much appreciated. :^)
Open file (49.22 KB 900x628 CAM_man.jpg)
>>3007 yea this is a really interesting list, i'll have a good time digging through this. >FUN FACT: Carver Mead, the pioneer of modern VLSI and many other breakthroughs, is also generally recognized as the Father of Neuromorphic Computing.
I'm compiling a thread at the moment of significant advancements in AI and found a recent article demonstrating why it's so important to keep up to date with progress:
>We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore’s Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.
https://openai.com/blog/ai-and-efficiency/
Think about that. By 2028 we will have algorithms 4000x more efficient than AlexNet (a doubling every 16 months compounds to roughly 4096x over the 16 years from 2012), so it's not so important what you learn but how quickly you can learn, and that you do so continuously. The incomprehensible learning methods taught to us in public school created by the globalists are not going to help us any in this exponential growth. If you ever feel bored, frustrated or lost with learning, you're either following their learning process or brainwashed by it somehow. You instinctively know it's not what you want to be doing. Follow that intuition and find what you truly wanna learn more than you wanna sleep or eat. They intentionally designed education in such a way as to confuse us, for the purpose of creating specialized workers who are only able to follow instructions within their own field and cannot think for themselves, lest they question the authority of those taught in special private schools who give them the orders and designs. They do not want people who are capable of working by themselves, especially not together outside their control.
This article explains how to approach learning and formulate knowledge into flashcards for accelerated learning: https://www.supermemo.com/en/archives1990-2015/articles/20rules It's a bit long, but it's worth every word for the amount of time it'll save you and how much it'll enhance your life. There have been dozens of people who have learned 2000+ kanji and Japanese to a conversational level in two months with sentence flashcards. This shit is powerful.
I recommend using Anki for creating flashcards. It automatically spaces the cards for optimal learning, and eventually you only see them every few months once you remember them. It's like power training for your memory, and it will save you a lot of time studying and from having to restudy topics you've forgotten from not touching them in so long.
Anki: https://apps.ankiweb.net/
sudo apt install anki
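If you're curious what the spacing actually looks like, here's a rough sketch of an SM-2 style update, the SuperMemo algorithm family that Anki's scheduler descends from (my own simplification; the constants follow the published SM-2 description, not Anki's exact tweaks):

# grade: self-assessed recall quality from 0 (blackout) to 5 (perfect).
def sm2_step(interval, ease, grade):
    if grade < 3:
        return 1, ease                  # failed recall: see the card again tomorrow
    # confident recall nudges the ease factor up, hesitant recall nudges it down
    ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    if interval == 0:
        interval = 1
    elif interval == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    return interval, ease

interval, ease = 0, 2.5
for grade in (5, 4, 5):                 # three successful reviews
    interval, ease = sm2_step(interval, ease, grade)
    print(interval, round(ease, 2))     # gaps grow: 1, 6, then 16 days

The point is the exponential growth of the gaps: each successful review pushes the card further out, so mature cards cost you almost nothing.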
>>3031 >I'm compiling a thread at the moment of significant advancements in AI Looking forward to this tbh.
>>3031 >Think about that. By 2028 we will have algorithms 4000x more efficient than AlexNet I'm assuming that prediction presupposes a consistent rate of advancement in algorithm design over that same period? Not to be a skeptic, but there was at least a physical basis behind Moore's Law. Does any such standard serve here behind this prediction? Insightful leaps are more like the proverbial bolt out of the blue, aren't they, typically speaking Anon? Regardless, it's quite exciting to see the progress happening. Seeing an objective classification of the progress certainly enhances that. Onward! :^)
>>3054 Yeah, algorithms will hit an entropy limit eventually. Entropy is the basis. I'm not a mathematician who can come up with a proof of the theoretical maximum efficiency to determine when that will be, but from my experience working with AI and the depth of my reading (which isn't even that deep compared to the massive volume of papers being published every day), the trend can easily continue for another 8 years. If you look at how rapidly accuracy is improving in natural language processing while the number of parameters needed drops by nearly 2 orders of magnitude (arXiv:2003.02645), the rate of progress being made is quite incredible, and these techniques haven't even been tried on other domains yet that will bring new insights and improvements. There's so much great research that isn't being utilized yet, like Kanerva machines (arXiv:1804.01756), sparse transformers (arXiv:1904.10509), mutual information machines (arXiv:1910.03175), generative teaching networks (arXiv:1912.07768), large scale memory with product keys (arXiv:2002.02385), neuromodulated machine learning (arXiv:2002.09571), neuromodulated plasticity (arXiv:2002.10585) and exploration (arXiv:2004.12919). Neural networks are extremely inefficient and in some cases their sparsity can be as low as 0.5% without losing any accuracy, which means 99.5% of the calculations are being wasted. On top of that, backpropagation is extremely slow, requiring the network to see the entire training set at least 10 times, while Hebbian learning and Bayesian update rules have shown the capacity to learn training examples in one shot and generalize to unseen training data. In the case of generative teaching networks, the learner networks don't even need to see the actual training data at all and actually outperform networks trained on the real training data. So we have a long way to go yet to make machine learning more optimal.
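As a toy illustration of the one-shot flavor of Hebbian learning (my own example, not from any of the papers above): store an association as an outer product of the two patterns and it can be recalled after a single presentation, with no gradient descent at all.

import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=100)   # input pattern
y = rng.choice([-1.0, 1.0], size=20)    # pattern to associate with it

# "Neurons that fire together wire together": one outer-product update.
W = np.outer(y, x) / x.size

recalled = np.sign(W @ x)               # one forward pass recovers y exactly
print(np.array_equal(recalled, y))      # True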
>>3057 >and these techniques haven't even been tried on other domains yet that will bring new insights and improvements. Yes. Coding designs contained inside DNA (there are at least six different levels of coding that have been identified thus far) are surely an area that should also benefit greatly from these advances, I'd expect. It wouldn't surprise me if in decades to come the payback into AI will be even larger in the 'other direction'. >Neural networks are extremely inefficient and in some cases their sparsity can be as low as 0.5% without losing any accuracy, which means 99.5% of the calculations are being wasted. Wow, that's quite a surprising statistic, actually. >In the case of generative teaching networks, the learner networks don't even need to see the actual training data at all and actually outperform networks trained on the real training data. Yea I kind of got that from the recent paper about the retro-game AI playing doom iirc. Which is pretty remarkable, actually. >So we have a long way to go yet to make machine learning more optimal. Haha, OK you've convinced me. >Soon you will have a 'living' doll in your room who can outperform every living player on your favorite vidya. And she can also cook, clean, act, sing, and dance. even those kinds of things... What a time to be alive! :^)
Open file (19.79 KB 474x265 donald_knuth.jpeg)
Open file (28.79 KB 500x431 big0321751043.jpg)
>ctrl+f "Knuth" >no results This simply will not do, /robowaifu/! https://www-cs-faculty.stanford.edu/~knuth/musings.html
>>3049 Not sure the best way to distill this into a thread but I've finished collecting 100+ research papers: https://gitlab.com/kokubunji/research-sandbox There's a few dozen more I'd like to add but I'm getting burnt out reading through my collection. I've covered most of the important stuff for AI relevant to robowaifus.
>>3177 Thanks, brother for all the hard work. Just cloned it. Get some rest.
>>3177 This list is amazing. I don't even know how you did this Anon.
>>3177 Very fitting jpg. Amazing work anon, this will be a great help.
Since we may have newfriends who don't know about it yet, there is a video-downloading tool called youtube-dl https://ytdl-org.github.io/youtube-dl/index.html You should always be keeping local copies of anything important to you. It can be removed w/o a single notice, as you should be well aware of by now. youtube-dl is very important in this regard and is pretty easy to use from the terminal. youtube-dl https://www.youtube.com/watch?v=pHNAwiUbOrc will download the best-quality copy of this hobbyist robot arm homemade video locally to your drive.
>>3177 any thoughts how you plan to make a thread out of this yet Anon? i face a similar more complex challenge making the RDD thread.
>>3220 Not yet. Busy taking my bank account out of the red. I'm probably gonna split it up into different topics and annotate the most important papers with prerequisites so people can figure them out without reading a thousand papers.
>>3221 >I'm probably gonna split it up into different topics and annotate the most important papers with prerequisites so people can figure them out without reading a thousand papers. Sounds like a good idea. Look forward to it.
>>3219 Thanks Anon, looks interesting.
Here's some kind of glossary for AI, Machine Learning, ... https://deepai.org/definitions
>>4616 That looks really helpful Anon, thank you.
Open file (168.73 KB 830x971 ClipboardImage.png)
Very approachable book explaining the internal workings of computers. Awful title, good read.
Here is a learning plan for getting into Deep Learning, which got some appreciation on Reddit: https://github.com/Emmanuel1118/Learn-ML-Basics - It also includes info about which math basics are needed first. However, I also want to point out that DL doesn't seem to be the best option in every case. Other ML approaches like boosting might be better for our use cases: https://youtu.be/MIPkK5ZAsms
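For anyone who wants to try the boosting route quickly, here's a minimal sketch with scikit-learn (my own example, not from the video; the built-in dataset is just a stand-in for whatever tabular data you have):

# Boosting fits many shallow trees, each one correcting the residual errors
# of the ensemble so far; on small tabular datasets it often beats a deep net.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=100, max_depth=3)
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))          # typically ~0.95+, trains in seconds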
Open file (229.05 KB 500x572 LeonardoDrawing.jpg)
'''Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs''' According to Alexander Stepanov (in the foreword to The Boost Graph Library, 2001), this man John Backus and this Turing Award lecture paper were inspirational to the design of the STL for C++. The STL underpins the current state of the art in generic programming, which must be highly expressive and composable but must also perform very fast. Indirectly, then, so does John Backus's FP system, and for that we can be grateful.
Open file (24.66 KB 192x358 Backus.jpg)
>>5191 Backus also invented FORTRAN (back when that was a first of its kind for programming portability), and is one of the smartest men in the entire history of computing. https://ethw.org/John_Backus
>>5173 Started watching this, it's pretty good.
Open file (650.07 KB 1030x720 thetimehathcome.mp4)
Anyone know some good tutorials for getting started in Godot with 3D? I just wanna have materials and animations load correctly and load a map with some basic collision checking to run around inside.
Open file (70.70 KB 895x331 b2_its_time.png)
>>5379 I haven't looked into Godot yet myself, but I'd like to at some point. Please let us know if you locate something good Anon. >that vid leld.
Open file (1.98 MB 1280x720 how2b.mp4)
>>5380 It's painful, but if I find anything good I'll post it here. Blender videos are 'how to do X in 2 minutes', but Godot videos are 30 minutes of mechanical keyboard ASMR and explaining that you should watch the previous video to understand everything, while they're high on helium. There seem to be errors with importing animations of FBX models in 3.2.2, but GLTF works mostly okay. I don't think it's too much of an issue because the mesh of this model has been absolutely destroyed by my naive tinkering and lazy weight painting. FBX corrupted my project somehow, but when I started fresh with GLTF all the textures load and everything works great. I accidentally merged all the vertices by distance and destroyed her face, but if anyone wants to play around with the 2B model I used, here you go: https://files.catbox.moe/1wamgg.glb Taken from a model I couldn't get to import into Blender correctly: https://sketchfab.com/3d-models/nierautomata-2b-cec89dce88cf4b3082c73c07ab5613e7 I'll fix it up another time or maybe find another model that's ready for animation.
Open file (24.65 KB 944x333 godot3_logo.png)
Found a great site for Godot tutorials and a channel that goes along with it. Text: https://kidscancode.org/godot_recipes/g101/ Videos: https://www.youtube.com/c/KidscancodeOrg/playlists
