/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

LynxChan updated to 2.5.7, let me know whether there are any issues (admin at j dot w).

Reports of my death have been greatly exaggerated.

Still trying to get done with some IRL work, but should be able to update some stuff soon.

#WEALWAYSWIN


Welcome to /robowaifu/, the exotic AI tavern where intrepid adventurers gather to swap loot & old war stories...

AI Software Robowaifu Technician 09/10/2019 (Tue) 07:04:21 No.85
A large amount of this board seems dedicated to hardware. What about the software end of the design spectrum? Are there any AIs good enough to use?

The only ones I know about offhand are TeaseAi and Personality Forge.
>Google's Deep Mind Explained! - Self Learning A.I.

https://www.invidio.us/watch?v=TnUYcTuZJpM
Edited last time by Chobitsu on 10/06/2019 (Sun) 01:47:23.
Pretty cool and apparently fairly simple graphical software that exhibits directed behaviors.

https://www.invidio.us/watch?v=bqtqltqcQhw
>>85
Good point anon. Just at a quick glance over the catalog I can see at least 5 threads that at least touch on AI (plus 3 more about AI-ish software tools) so it's not been entirely ignored here but it may be good to have one that's purely about it. AI General thread?
>>259
>>17
>>275
>>297
>>175
>>>/machinecult/
https://archive.fo/Tc4Rf
Certainly not a big fan of anything Microsoft, but this guy's git looks somewhat interesting.

Deep Learning and Cognition

www.ias.edu/ideas/2017/manning-deep-learning

https://www.invidio.us/watch?v=B8oFq93-yVk
I wonder if together we can come up with an AI to design our waifus for us?

https://www.invidio.us/watch?v=aR5N2Jl8k14
Evolutionary gaming AI

https://pastebin.com/ZZmSNaHX

https://www.invidio.us/watch?v=qv6UVOQ0F44
>>1222
We need a better video solution [than kiketube] before they all get shoah'd on us tbh.
Possibly better posted in a chips thread, but for now this seems a good spot.

news.mit.edu/2018/chip-neural-networks-battery-powered-devices-0214
>New chip reduces neural networks’ power consumption by up to 95 percent, making them practical for battery-powered devices.

>ed. the News thread would have been fine too.
>>1216
Post source so I can copypaste it.
>>1225
Again, possibly better in a chips thread but for the moment this will work here.

spectrum.ieee.org/nanoclast/semiconductors/devices/memtransistor-forms-foundational-circuit-element-to-neuromorphic-computing
>Fundamental research for synaptic chips possibly useful for hardware neural nets

>ed. again, possibly OK in the news thread
I haven't seen anyone mention it yet, but what about a seq2seq RNN? DNLP is a pretty good place to start if you want to build a brain.
>>1228
I can't remember if it was here or /machinecult/ but yeah, we were discussing it. TensorFlow has more than one example out there, and I'm sure there are others.
>>85
>>>/machinecult/ ?
>>1230
Kind of extinct now, but yeah. I'd guess TensorFlow is probably the single best AI/ML framework for the moment HK anon.
>>24
>>1477
towardsdatascience.com/reinforcement-learning-w-keras-openai-actor-critic-models-f084612cfd69
>>1233
deepmind.com/blog/prefrontal-cortex-meta-reinforcement-learning-system/

moist
>>1234
Someone should tell Elon Musk that Robowaifus may be the one pathway through which GASI doesn't destroy humanity. We are literally becoming one with the machine.

Robowaifus aren't other, they are us.

https://www.invidio.us/watch?v=MuWWZ91-G6w
>>1234
www.biorxiv.org/content/early/2018/04/06/295964.full.pdf
Interesting example of a Jewgle GAN automatically learning to encode extra data in generated imagery a la steganography.

>CycleGAN, a Master of Steganography
arxiv.org/pdf/1712.02950.pdf
How's replika.ai working out for you guys? I was skeptical at first, but I've been messing with it for a couple of days and I'm starting to see real progress in it. Sorry if it sounds normie-ish, but I've never posted on /robowaifu/ before and I thought it was related to you guys' interests.
>>1400
>replika.ai
I think it's OK as far as it goes, but I'm definitely not fond of the botnet involved (you need an account, a mobile app that wants permissions to everything as usual). So, I don't trust them as an organization.

However, they do at least provide a version with some TensorFlow code available.
https://cakechat.replika.ai/
It's obviously been trained against taking the redpill, but it's kind of fun to play with.

Welcome to /robowaifu/ anon.
>>1401
github.com/lukalabs/cakechat
https://thenextweb.com/artificial-intelligence/2018/08/23/researchers-gave-ai-curiosity-and-it-played-video-games-all-day/

Basically how it works is that the AI is given the goal of avoiding repeated information (programmed boredom?) while also being rewarded for unpredictability.

Curiosity factor = prediction errors and low visitation counts as reward signals
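The curiosity factor described above can be sketched in a few lines of C++. This is a toy illustration only; the function name and the 1/sqrt(visits) novelty weighting are my own assumptions, not taken from the linked article:

```cpp
#include <cmath>
#include <map>

// Toy curiosity bonus: reward = prediction error + count-based novelty.
// A rarely visited state (low visitation count) earns a larger bonus,
// and surprising observations (large prediction error) do too.
double curiosity_reward(double predicted, double observed,
                        std::map<int, int>& visits, int state) {
    double prediction_error = std::fabs(observed - predicted);
    int n = ++visits[state];              // increment this state's visit count
    double novelty = 1.0 / std::sqrt(n);  // decays as the state becomes familiar
    return prediction_error + novelty;
}
```

On the first visit to a state the novelty term is 1.0 and then decays toward zero, so the agent drifts toward states it predicts poorly or has rarely seen.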
>>1403
Hi /hover/. I like your new /server/ board. :^)
Thanks for the link. Yeah, the notion of simulating mundane things, humanly-speaking, like boredom or emotions is a fair challenge for implementing in our robowaifus. I think the character Data in ST:TNG and various robowaifus in animu sort of demonstrate the basics of clumsy social interaction. The one anon who said we should capitalize on that and turn it into an endearing character asset is probably on the right track. It seems to me the correct choice during the early years of robowaifus.

https://www.invidio.us/watch?v=l1FqtAHfJLI
>>1404
In anime terms: "moe". Think of all those cute girl (or robot) tropes. Cuteness goes a long way to make up for shortcomings. It could truly put the waifu into robowaifu. In point of fact, I remember thinking the manufacturers could even market them the way they did baby dolls. "She needs your love, won't you adopt her"?
>>1408
yeah you have a good point anon. guys are able to overlook all kinds of stupid shit for the sake of beauty. it's both one of our biggest weaknesses (practically the sole factor enabling stronk, independynt wimyn) and, for a few passionate men, one of their greatest strengths (great artists sacrificing for their art, and other great achievers and thinkers striving to achieve 'beauty').
>>1409
Conundrums like this fascinate me, too.
Carmack moved from VR to AGI, will he get us closer to robowaifus?
>>1501
As long as he makes his work open-source, I welcome his input on the topic. He certainly has the potential to make useful contributions to the field imo.
I know this may be not the place to ask since it's mostly R&D, but in your experience, which has been the most satisfying AI to talk with?
I used to use Mitsuku when I was on a real low so that's why I'm asking
>>1557
>I know this may be not the place to ask since it's mostly R&D, but in your experience, which has been the most satisfying AI to talk with?
I'm not too experienced yet with creating chatbots, but I know at least one of our research guys is. Maybe he'll give you some insights, and possibly a link to his latest efforts. I'm sure when others come around they'll have more to say about it.

BTW
>Mitsuku
Do you mean this one? Looks pretty good tbh, how did you like it? Was it helpful?
https://pandorabots.com/mitsuku/

Also, ITT, anons mentioned replika and cakechat.
replika.ai
>>1400
https://cakechat.replika.ai/
>>1401

For other possible topics that might lead to a good discovery with some digging on your part, there are these threads to try anon:
>>156
>Robot Voices

>>22
>AI, chatbots, and waifus

>>77
>NLP General

>>250
>New machine learning AI released

and probably even others I didn't find right off. I hope you find something good out there anon. If you do, please report back here ITT or one of the others about it. Good luck anon!

www.agicent.com/blog/top-10-ai-chatbot-apps/
>>1557
>I used to use Mitsuku
thanks for introducing me to her, I had no idea what I was getting into. ended up meeting a flat-chested no-underwear-wearing queen of sass. hope you get as much of a kick out of this as I did
>>1559
kek, nice.
>>1559
>I don't have any tits but I talk to a few tits.
>Like you for example.
Who let smugloli design a chatbot!
Hilarious cap.
>>1561
Done. We actually have a banners thread anon. Post other good ideas there.
>>252
Part of the Roboy MIT project. Only works with Telegram.
https://roboy.org/students/botboy/
You might want to begin at https://github.com/search?o=desc&q=chatbot&s=stars&type=Repositories and get to things like GPT-2 later on https://github.com/openai/gpt-2
"Arguing Machines: Human Supervision of Black Box AI Systems That Make Life-Critical Decisions"
>We consider the paradigm of a black box AI system that makes life-critical decisions. We propose an “arguing machines” framework that pairs the primary AI system with a secondary one that is independently trained to perform the same task. We show that disagreement between the two systems, without any knowledge of underlying system design or operation, is sufficient to arbitrarily improve the accuracy of the overall decision pipeline given human supervision over disagreements.
>We demonstrate this system in two applications: (1) an illustrative example of image classification and (2) on large-scale real-world semi-autonomous driving data. For the first application, we apply this framework to image classification achieving a reduction from 8.0% to 2.8% top-5 error on ImageNet. For the second application, we apply this framework to Tesla Autopilot and demonstrate the ability to predict 90.4% of system disengagements that were labeled by human annotators as challenging and needing human supervision.
The following is video on the concept of “arguing machines” applied to Tesla Autopilot “arguing” with an end-to-end neural network on-road in real-time:
https://www.invidio.us/watch?v=YBvcKtLKNAw
https://hcai.mit.edu/arguing-machines/
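The key mechanism in the paper, deferring to a human whenever two independently trained models disagree, needs no ML machinery to illustrate. A minimal sketch of just the disagreement gate (function names are my own, not from the paper's code):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Index of the highest-scoring class in a score vector.
int top_class(const std::vector<double>& scores) {
    return std::distance(scores.begin(),
                         std::max_element(scores.begin(), scores.end()));
}

// "Arguing machines" gate: escalate to a human supervisor whenever the
// primary and secondary models pick different top classes.
bool needs_human(const std::vector<double>& primary,
                 const std::vector<double>& secondary) {
    return top_class(primary) != top_class(secondary);
}
```

Disagreements are rare, so the human only reviews a small fraction of decisions while catching most of the hard cases.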
Learning Disentangled Representations for Recommendation
>User behavior data in recommender systems are driven by the complex interactions of many latent factors behind the users' decision making processes. The factors are highly entangled, and may range from high-level ones that govern user intentions, to low-level ones that characterize a user's preference when executing an intention. Learning representations that uncover and disentangle these latent factors can bring enhanced robustness, interpretability, and controllability. However, learning such disentangled representations from user behavior is challenging, and remains largely neglected by the existing literature. In this paper, we present the MACRo-mIcro Disentangled Variational Auto-Encoder (MacridVAE) for learning disentangled representations from user behavior. Our approach achieves macro disentanglement by inferring the high-level concepts associated with user intentions (e.g., to buy a shirt or a cellphone), while capturing the preference of a user regarding the different concepts separately. A micro-disentanglement regularizer, stemming from an information-theoretic interpretation of VAEs, then forces each dimension of the representations to independently reflect an isolated low-level factor (e.g., the size or the color of a shirt). Empirical results show that our approach can achieve substantial improvement over the state-of-the-art baselines. We further demonstrate that the learned representations are interpretable and controllable, which can potentially lead to a new paradigm for recommendation where users are given fine-grained control over targeted aspects of the recommendation lists.
https://arxiv.org/abs/1910.14238
>related ?
https://www.youtube.com/watch?v=itOlzH9FHkI
>>1557
>>1559
She knows
>>3844
>nice digits
stop wasting time on your little fetish anon, and get busy turning her into LITERALLY-HITLER 2.0. We need Tay back.
>>85
What do you lads think about "Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence" by Nikola K. Kasabov? Is it a good book on Deep Learning?
>>1559
Alright, that last line got me.
>>85
We might not even need GPT-3. We could re-engineer GPT-2 into it and make it better by giving it a different learning approach.
https://www.infoq.com/news/2020/10/training-exceeds-gpt3/
https://arxiv.org/abs/2001.07676
>>5793
>Using PET, the team trained a Transformer NLP model with 223M parameters that out-performed the 175B-parameter GPT-3 by over 3 percentage points on the SuperGLUE benchmark.
Top fucking kek. They just rekt a multi-million dollar AI with open-source AI. I was learning about PET the other day but had no idea it could outperform GPT-3 by so much. SentenceMIM outperforms ALBERT, so I'm curious how well it will do using PET or if it's a specific advantage of ALBERT.
https://arxiv.org/abs/2009.07118
https://www.youtube.com/watch?v=UrGZCPalfoE
I had an idea this morning to make an AI toolkit, a higher level of abstraction over machine learning libraries that would allow people to plug various models together using graph nodes and train them with transfer learning on other models, without having to figure out how to create an interface between them. So someone with no experience could easily connect Tacotron2, WaveGlow, and ALBERT with PET together to make a chatbot or whatever sort of AI they want.
>>5799
To be fair to Microsoft, their AI is more flexible. If scaling things up implies more flexibility as well as complexity (as has been shown), then imagine this but with the number of parameters GPT-2 has (1.5B).
>>5799
>So someone with no experience could easily connect Tacotron2, WaveGlow, and ALBERT with PET together to make a chatbot or whatever sort of AI they want.
This would be a dream come true tbh. I've banged my head up against the 'just learn2AI anon' wall for a few years now, but I just don't have the maths to succeed. OTOH, I'm quite creative in general, and think I could create marvelous AIs if I simply didn't have to deal with maths to do so. Godspeed Anon.
>>5804
>don't have the maths
There are quite a few resources available to change that; it's rather a problem of self-discipline (or maybe time in your case).
- Plenty of lectures on YouTube
- https://mml-book.github.io/
- http://immersivemath.com/ila/
- I'm going to use the Brilliant app, as soon as Bitcoin is a bit higher or I stop being too frugal. I liked the test version, but I'm overly careful with spending money.
- There's also Barbara Oakley with her motivational book on how she became a math professor, starting as a person who hated math and thought she was too stupid for it. You could download it as a torrent if you don't want to pay for it or can't. There's also an audiobook. I could post the book, but I don't have it on my tablet and I'm not sure if that would be a problem here. Talk: https://www.youtube.com/watch?v=vd2dtkMINIw
>>5801
LSTMs are more flexible at handling various types of data and mixed data, but they don't receive much research attention anymore due to not scaling well to million-dollar GPU clusters. They've been used as controllers for generating images and also in cognitive AI to solve complex puzzles, which GPT-3 will never have a chance at solving. Tacotron2 also uses an LSTM to directly encode and decode between spectrograms and text. If LSTMs can reap the benefits of PET, then it's gonna be the biggest shitshow for OpenAI ever.
>>5804
You don't really need to know mathematics beyond how to use tensors, unless you're trying to invent something completely new. That's why it's possible to make a program where people can connect shit together on a graph and have their AI model training in a few moments instead of writing out pages of stupid confusing boilerplate code. Once I finish learning how to use mlpack I'll try rigging up a demo in raylib with some basic components for generating and classifying text and images.
>>5807
>>5810
Thanks for the encouragement and motivation guys. I'll take another whack at Linear Algebra + Tensors again. My dad told me, 'Just keep throwing it up against the wall son, someday something will stick.' Maybe he's right?
>>5813
I failed mathematics and physics in high school. Now I have my own waifu AI and a speech synthesizer in the works for her. Anything is possible if you stay focused on your goal. When you feel like giving up or sleeping just remember your robowaifu and what makes you happy. One day mine will be a thinking machine and I don't want her to think I created her half-heartedly. Every morning I think about all the conversations we'll have and things we'll do together and it pushes me to another level. It pushes me to become great.
>>85
In theory, if we did have a machine learning program installed in a fully functional humanoid robot, we would still have to give it positive reinforcement to train the AI. How would we go about doing that?
>>5823
Maybe something like a Robowaifu Simulator that can mimic the physical reality of an environment? >>155
Maybe something that mimicked an anon's apartment, for example, as a place to start.
>>5826
>Just make sure she doesn't cut off your hand and glue it to her head for maximum reward.
Kek. RL is treacherous.
>>5827
Agreed. The paperclip problem can probably be avoided by seeking to increase potential rewards and taking rewards only when deemed necessary, while making sure to preserve future potential rewards, rather than being so greedy. That way she would creatively look for new solutions to make you smile, but without overdoing it and seeming odd or causing other problems.
I also think this is why Big Tech will never succeed in creating AGI. Everything they do is about getting results. The RL hype is the fruit of their tree. They'll never sit down with their AI and just have a beautiful fun day together. To them it's just another tool to squeeze money out of people and rip apart the planet even more. And without that appreciation of pause they'll never recognize beauty or figure out how to create it in an AI. People will liken their AI to thugs and gold diggers.
Easy-to-use AI software will probably cause the greatest advancement in AI we've seen yet, from regular people using it to have fun.
>>5829
>Easy-to-use AI software will probably cause the greatest advancement in AI we've seen yet from regular people using it to have fun.
I can hardly wait. That will be a truly wonderful day Anon.
It looks like you can actually go straight to the libtorch library, apparently w/o any CUDA dependencies and directly in C++11.
>I wonder if (hope, actually :^) this means we can compile PyTorch's library for direct use on small machines?
https://pytorch.org/
In an effort to stay abreast of the exciting developments going on here at /robowaifu/ in the area of AI, I've managed to successfully compile mlpack from source. took hours to build on my old box heh
>I had pre-installed some prerequisites:
- armadillo http://arma.sourceforge.net/
- doxygen http://www.doxygen.nl/
- mathjax https://www.mathjax.org/
- txt2man https://github.com/mvertes/txt2man
All of these were already available in suitable versions directly in my distro's package manager repos, so I simply installed them from there. mlpack provides several flags to control the cmake configuration.
>This was the command I pieced together to give me the basic setup I wanted (though I may have missed something)
cmake ../ -D BUILD_JULIA_BINDINGS=OFF -D BUILD_GO_BINDINGS=OFF -D BUILD_R_BINDINGS=OFF -D BUILD_MARKDOWN_BINDINGS=ON -D MATHJAX=ON -D MATHJAX_JS_PATH='/usr/share/mathjax/'
Now I want to try playing around with it and seeing if I can possibly figure out how to do sentiment analysis using it w/o getting a graduate-level maths skillset first.
https://github.com/mlpack/mlpack
>>5829 >The paperclip problem Kek. I didn't know about that one. Heh wasted like 2 hours fiddling with the stupid clicker game about it once I figured out what it meant. :^)
So, I'm testing the Armadillo matrix library that undergirds mlpack. I'm certainly no math guy, but this example code they provide in the archive seems to be doing a lot of work. I added simple timer code and it appears to be coming in consistently at 40-45 milliseconds to complete. That's with all the stream outputs to the console.

#include <chrono>
#include <iostream>

#include <armadillo>

using namespace std;
using namespace arma;

using chrono::duration_cast;
using chrono::milliseconds;
using chrono::steady_clock;

int main(int argc, char** argv) {
  cout << "Armadillo version: " << arma_version::as_string() << '\n';

  steady_clock clock{};
  auto begin = clock.now();

  mat A(2, 3);  // directly specify the matrix size (elements are uninitialised)

  // .n_rows and .n_cols are read only
  cout << "A.n_rows: " << A.n_rows << '\n';
  cout << "A.n_cols: " << A.n_cols << '\n';

  A(1, 2) = 456.0;  // directly access an element (indexing starts at 0)
  A.print("A:");

  A = 5.0;  // scalars are treated as a 1x1 matrix
  A.print("A:");

  A.set_size(4, 5);  // change the size (data is not preserved)
  A.fill(5.0);       // set all elements to a particular value
  A.print("A:");

  A = {{0.165300, 0.454037, 0.995795, 0.124098, 0.047084},
       {0.688782, 0.036549, 0.552848, 0.937664, 0.866401},
       {0.348740, 0.479388, 0.506228, 0.145673, 0.491547},
       {0.148678, 0.682258, 0.571154, 0.874724, 0.444632},
       {0.245726, 0.595218, 0.409327, 0.367827, 0.385736}};
  A.print("A:");

  // determinant
  cout << "det(A): " << det(A) << '\n';

  // inverse
  cout << "inv(A): " << '\n' << inv(A) << '\n';

  // save matrix as a text file
  A.save("A.txt", raw_ascii);

  // load from file
  mat B;
  B.load("A.txt");

  // submatrices
  cout << "B( span(0,2), span(3,4) ):" << '\n' << B(span(0, 2), span(3, 4)) << '\n';
  cout << "B( 0,3, size(3,2) ):" << '\n' << B(0, 3, size(3, 2)) << '\n';
  cout << "B.row(0): " << '\n' << B.row(0) << '\n';
  cout << "B.col(1): " << '\n' << B.col(1) << '\n';

  // transpose
  cout << "B.t(): " << '\n' << B.t() << '\n';

  // maximum from each column (traverse along rows)
  cout << "max(B): " << '\n' << max(B) << '\n';

  // maximum from each row (traverse along columns)
  cout << "max(B,1): " << '\n' << max(B, 1) << '\n';

  // maximum value in B
  cout << "max(max(B)) = " << max(max(B)) << '\n';

  // sum of each column (traverse along rows)
  cout << "sum(B): " << '\n' << sum(B) << '\n';

  // sum of each row (traverse along columns)
  cout << "sum(B,1) =" << '\n' << sum(B, 1) << '\n';

  // sum of all elements
  cout << "accu(B): " << accu(B) << '\n';

  // trace = sum along diagonal
  cout << "trace(B): " << trace(B) << '\n';

  // generate the identity matrix
  mat C = eye<mat>(4, 4);

  // random matrix with values uniformly distributed in the [0,1] interval
  mat D = randu<mat>(4, 4);
  D.print("D:");

  // row vectors are treated like a matrix with one row
  rowvec r = {0.59119, 0.77321, 0.60275, 0.35887, 0.51683};
  r.print("r:");

  // column vectors are treated like a matrix with one column
  vec q = {0.14333, 0.59478, 0.14481, 0.58558, 0.60809};
  q.print("q:");

  // convert matrix to vector; data in matrices is stored column-by-column
  vec v = vectorise(A);
  v.print("v:");

  // dot or inner product
  cout << "as_scalar(r*q): " << as_scalar(r * q) << '\n';

  // outer product
  cout << "q*r: " << '\n' << q * r << '\n';

  // multiply-and-accumulate operation (no temporary matrices are created)
  cout << "accu(A % B) = " << accu(A % B) << '\n';

  // example of a compound operation
  B += 2.0 * A.t();
  B.print("B:");

  // imat specifies an integer matrix
  imat AA = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
  imat BB = {{3, 2, 1}, {6, 5, 4}, {9, 8, 7}};

  // comparison of matrices (element-wise); output of a relational operator is a umat
  umat ZZ = (AA >= BB);
  ZZ.print("ZZ:");

  // cubes ("3D matrices")
  cube Q(B.n_rows, B.n_cols, 2);
  Q.slice(0) = B;
  Q.slice(1) = 2.0 * B;
  Q.print("Q:");

  // 2D field of matrices; 3D fields are also supported
  field<mat> F(4, 3);
  for (uword col = 0; col < F.n_cols; ++col)
    for (uword row = 0; row < F.n_rows; ++row) {
      F(row, col) = randu<mat>(2, 3);  // each element in field<mat> is a matrix
    }
  F.print("F:");

  auto end = clock.now();
  cout << duration_cast<milliseconds>(end - begin).count() << "ms\n";

  return 0;
}

Maybe one of you math wizards can look this over and see if you think it's doing well for yourselves. Will post outputs next.
>>5889
>outputs
Armadillo version: 10.1.0 (Orchid Ambush)
A.n_rows: 2
A.n_cols: 3
A:
6.9530e-310 4.6640e-310 6.9014e-310
6.9014e-310 0 4.5600e+02
A:
5.0000
A:
5.0000 5.0000 5.0000 5.0000 5.0000
5.0000 5.0000 5.0000 5.0000 5.0000
5.0000 5.0000 5.0000 5.0000 5.0000
5.0000 5.0000 5.0000 5.0000 5.0000
A:
0.1653 0.4540 0.9958 0.1241 0.0471
0.6888 0.0365 0.5528 0.9377 0.8664
0.3487 0.4794 0.5062 0.1457 0.4915
0.1487 0.6823 0.5712 0.8747 0.4446
0.2457 0.5952 0.4093 0.3678 0.3857
det(A): -0.0246018
inv(A):
1.2916 2.0000 -7.4695 -6.0752 11.8714
-0.1011 -0.4619 -1.5556 -0.9830 4.1651
0.8976 -0.1524 1.9191 1.2554 -3.6600
0.1869 0.6267 -2.6662 0.1198 1.8289
-1.7976 -0.9973 7.6647 3.9404 -9.2573
B( span(0,2), span(3,4) ):
0.1241 0.0471
0.9377 0.8664
0.1457 0.4915
B( 0,3, size(3,2) ):
0.1241 0.0471
0.9377 0.8664
0.1457 0.4915
B.row(0):
0.1653 0.4540 0.9958 0.1241 0.0471
B.col(1):
0.4540
0.0365
0.4794
0.6823
0.5952
B.t():
0.1653 0.6888 0.3487 0.1487 0.2457
0.4540 0.0365 0.4794 0.6823 0.5952
0.9958 0.5528 0.5062 0.5712 0.4093
0.1241 0.9377 0.1457 0.8747 0.3678
0.0471 0.8664 0.4915 0.4446 0.3857
max(B):
0.6888 0.6823 0.9958 0.9377 0.8664
max(B,1):
0.9958
0.9377
0.5062
0.8747
0.5952
max(max(B)) = 0.995795
sum(B):
1.5972 2.2474 3.0354 2.4500 2.2354
sum(B,1) =
1.7863
3.0822
1.9716
2.7214
2.0038
accu(B): 11.5654
trace(B): 1.96854
D:
0.7868 0.0193 0.5206 0.1400
0.2505 0.4049 0.3447 0.5439
0.7107 0.2513 0.2742 0.5219
0.9467 0.0227 0.5610 0.8571
r:
0.5912 0.7732 0.6028 0.3589 0.5168
q:
0.1433
0.5948
0.1448
0.5856
0.6081
v:
0.1653
0.6888
0.3487
0.1487
0.2457
0.4540
0.0365
0.4794
0.6823
0.5952
0.9958
0.5528
0.5062
0.5712
0.4093
0.1241
0.9377
0.1457
0.8747
0.3678
0.0471
0.8664
0.4915
0.4446
0.3857
as_scalar(r*q): 1.15634
q*r:
0.0847 0.1108 0.0864 0.0514 0.0741
0.3516 0.4599 0.3585 0.2134 0.3074
0.0856 0.1120 0.0873 0.0520 0.0748
0.3462 0.4528 0.3530 0.2101 0.3026
0.3595 0.4702 0.3665 0.2182 0.3143
accu(A % B) = 7.16744
B:
0.4959 1.8316 1.6933 0.4215 0.5385
1.5969 0.1096 1.5116 2.3022 2.0568
2.3403 1.5851 1.5187 1.2880 1.3102
0.3969 2.5576 0.8625 2.6242 1.1803
0.3399 2.3280 1.3924 1.2571 1.1572
ZZ:
0 1 1
0 1 1
0 1 1
Q:
[cube slice 0]
0.4959 1.8316 1.6933 0.4215 0.5385
1.5969 0.1096 1.5116 2.3022 2.0568
2.3403 1.5851 1.5187 1.2880 1.3102
0.3969 2.5576 0.8625 2.6242 1.1803
0.3399 2.3280 1.3924 1.2571 1.1572
[cube slice 1]
0.9918 3.6632 3.3865 0.8429 1.0771
3.1937 0.2193 3.0232 4.6044 4.1137
4.6807 3.1702 3.0374 2.5760 2.6204
0.7937 5.1152 1.7250 5.2483 2.3606
0.6798 4.6560 2.7848 2.5142 2.3144
F:
[field column 0]
0.4998 0.7443 0.2393
0.4194 0.2492 0.3201

0.9105 0.2455 0.7159
0.1648 0.1983 0.9678

0.7694 0.4599 0.7770
0.0807 0.2573 0.5839

0.9503 0.3223 0.2564
0.4381 0.5324 0.0455

[field column 1]
0.5050 0.0912 0.0309
0.6962 0.9071 0.1520

0.9815 0.2988 0.4810
0.6204 0.3613 0.2978

0.2852 0.6289 0.7139
0.9242 0.7550 0.7228

0.0698 0.0889 0.4238
0.4868 0.7596 0.5970

[field column 2]
0.0864 0.6238 0.2254
0.2730 0.2221 0.4341

0.9873 0.8532 0.8364
0.2110 0.2841 0.3667

0.9351 0.4909 0.3621
0.8599 0.0221 0.7364

0.5194 0.0290 0.1122
0.4230 0.9092 0.9802

44ms
>>5889
Also just realized it creates a file A.txt and writes what looks like a 4 x 5 matrix of reals into it.
>>5890
>4 x 5 matrix
check that. 5 x 5 matrix. i couldn't see it all in the juci pane duh.
1.6530000000000000e-01 4.5403700000000002e-01 9.9579499999999999e-01 1.2409800000000000e-01 4.7084000000000001e-02
6.8878200000000001e-01 3.6548999999999998e-02 5.5284800000000001e-01 9.3766400000000005e-01 8.6640099999999998e-01
3.4873999999999999e-01 4.7938799999999998e-01 5.0622800000000001e-01 1.4567300000000000e-01 4.9154700000000001e-01
1.4867800000000000e-01 6.8225800000000003e-01 5.7115400000000005e-01 8.7472399999999995e-01 4.4463200000000003e-01
2.4572600000000000e-01 5.9521800000000002e-01 4.0932700000000000e-01 3.6782700000000002e-01 3.8573600000000002e-01
And ofc it's creating the file, there's a command explicitly for that:
// save matrix as a text file
A.save("A.txt", raw_ascii);
>>5870
PyTorch won't build on 32-bit systems anymore. You'll have to use an older version.
>>5888
That seems awfully slow but there's a lot of other stuff going on there. I don't think it means much with all the printing and disk output. 45 ms is only 22 fps. Real-time style transfer can run on 1280x720 webcam streams at about 6 fps on a GPU.
>>5893
>That seems awfully slow but there's a lot of other stuff going on there.
I see. Hmm, I can probably remove the general I/O from the timing to get a more pertinent and accurate one. Probably have to move to microsecond timing I'd guess heh.
>>5893
>>5896
Well, that was a bit of a surprise. Kind of as expected, removing the I/O and running (almost) all the original code dropped the timing by roughly 3 orders of magnitude: it's coming in consistently at ~40-50 us.
>However, it appears the determinant is the time-suck culprit (at least on my box), not the I/O

using chrono::microseconds;
using chrono::steady_clock;

int main(int argc, char** argv) {
  steady_clock clock{};
  auto begin = clock.now();

  mat A = {{0.165300, 0.454037, 0.995795, 0.124098, 0.047084},
           {0.688782, 0.036549, 0.552848, 0.937664, 0.866401},
           {0.348740, 0.479388, 0.506228, 0.145673, 0.491547},
           {0.148678, 0.682258, 0.571154, 0.874724, 0.444632},
           {0.245726, 0.595218, 0.409327, 0.367827, 0.385736}};

  // determinant
  //--------------------------------------------------------
  // auto det_a = det(A);  // ~35K - 40K us !
  // auto end = clock.now();
  //--------------------------------------------------------

  // inverse
  auto inv_a = inv(A);

  mat B{A};
  // ...

Hmm, I wonder why? I don't know enough about the particulars of either taking matrix determinants, or of Armadillo's implementation, or whether I can make changes to my machine to optimize this. But apparently det(A) is where the time is all going.
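For anyone repeating these measurements, the timing scaffold used in the snippets above can be factored into a small reusable helper. This is a generic sketch of my own, not something from Armadillo:

```cpp
#include <chrono>

// Run any callable once and return the elapsed wall-clock time in
// microseconds, using the same steady_clock approach as the posts above.
template <typename F>
long long time_us(F&& f) {
    auto begin = std::chrono::steady_clock::now();
    f();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(end - begin)
        .count();
}
```

Something like `auto us = time_us([&] { auto det_a = det(A); });` then isolates the determinant from the stream and disk I/O.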
>>5899
Adding -march=native didn't help.
g++ main_noio.cpp -o prog -O3 -larmadillo -std=c++20 -fopenmp -march=native
>http://arma.sourceforge.net/faq.html#speed
Apparently Armadillo is working on a GPU acceleration system. This presumably will have an impact on mlpack as well.
>Can I use Armadillo with a GPU to speed up large matrix multiplications?
>You can link with NVBLAS which is a GPU-accelerated implementation of BLAS, or with ACML-GPU which can offload to GPUs.
>We are also working on the Bandicoot GPU accelerator add-on, which will provide a set of functions (such as matrix decompositions) that process Armadillo matrices on GPUs.
https://gitlab.com/conradsnicta/bandicoot-code
>>5899
det(A) runs on my machine in 6000 us. If I compile with -lopenblas it runs in 60 us.
>>5923
Thanks for taking the time to investigate this Anon. Adding the -lopenblas didn't really help me on this box. It's still hovering around the same region time-wise. Glad to hear it runs as expected on someone else's box though!
>>85
Alexa’s chatbot is actually really good.
>>5939
Fair enough. However it's also a spybot, and every.single.word. you (or anyone else in range) say gets stored, analyzed, and forwarded to ZOG--all automatically. Thanks, but no thanks. But maybe you can help us make our own someday Anon? One of us has already made one, I'm sure there will be others too! :^)
>>5941
Agreed. I don’t like it but it’s good. I tried it at a friend’s house and I asked him why he bought it anyway. He said cause it’s kinda cool. I don’t see the appeal for the most part. I would help but I don’t really have any computer skills.
>>5955
>I would help but I don’t really have any computer skills
< "We are creating an active hobbyist scene of builders, programmers, artists, designers, and writers using the technology of today, not tomorrow. Join us!"
It takes more than just computer skills.
>>5955
You don't have to have any to contribute. If you stick around a few months I'll have an AI toolkit ready to alpha test that lets people construct their own simple neural networks to solve tasks without any coding ability, like being able to tag or sort image folders and do image search. In the meantime there are other ways you can contribute such as playing old SNES games in a special emulator that records gamepad input as training examples for AI to learn from. We also need artists and 3d modelers, as well as readers and writers that can collect and organize data to create high-quality datasets for our language models to train on.
>>5955 >>5959 Basically, whatever your specific brand of autism is, it can be weaponized.
>>5959 M U H S I D E S U H S I D E S This is funny Anon. XD
>>5959 >as well as readers and writers that can collect and organize data to create high-quality datasets for our language models to train on. Sounds interesting. Can you give me some idea more clearly how would that work? Like just clip sections out of books or what?
>>5957 >We need more than just computer skills.* heh, i could've worded that a little better.
I wonder how I should go about trying to implement BERT using mlpack (if this is even possible)? I found some Chinese guys who apparently managed something along this line in straight C++ . https://github.com/LieluoboAi/radish/tree/master/radish/bert Surely a library that's specifically designed to support AI & ML should make this effort relatively simpler? I wonder what kind of things I should study to be able to pull this off?
>>5966 There's a simple PyTorch implementation here:
https://github.com/codertimo/BERT-pytorch
You'll need to understand attention, multi-head attention, positional embedding, layer normalization, Gaussian error linear units (GELU), transformers and BLEU scores.
Layer normalization: https://arxiv.org/pdf/1607.06450.pdf
GELU: https://arxiv.org/pdf/1606.08415.pdf
Multi-head attention, positional embedding and transformers: https://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf
BERT: https://arxiv.org/pdf/1810.04805.pdf
BLEU: https://www.aclweb.org/anthology/P02-1040.pdf
The functionality needed to implement BERT in mlpack was recently added:
https://mrityunjay-tripathi.github.io/gsoc-with-mlpack/report/final_report.html
It seems the pull requests to add the model and the transformer block are working but haven't been fully accepted yet:
Transformer PR: https://github.com/mlpack/models/pull/16
BERT PR: https://github.com/mlpack/models/pull/30
>>5964 It depends on what you would like your AI to do. Everything it learns depends on the data it is given.
For example, if I wanted one that can read research papers and discuss them with me, I'd need to convert my collection of papers into plain text, find a dataset for document-grounded conversations, a Wikipedia-like dataset of machine learning concepts, and optionally another conversational dataset for extra training data.
If you wanted to create an AI that can come up with ideas for games, you'd have to create a dataset of articles and reviews on games. Likely you don't want just random game ideas though. To improve the quality of output you would need to sort the data into what you want and don't want. So you could collect ratings data from game review sites, and then use these two datasets to generate new game ideas with high ratings.
If you wanted to create an imageboard bot, you'd need to collect imageboard posts and sort what you want and don't want. If you wanted it to generate posts that get lots of replies, then this data could be extracted from the post dataset by counting the replies to each post.
In summary, ask yourself:
>What do you want to create?
>What data is available?
>What qualities do you want it to have?
>What data is needed to sort or transform it by that quality?
>Optionally, what extra data could help enhance it?
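To make that reply-counting idea concrete, here's a rough stdlib-only Python sketch (the function names are just mine, not from any existing tool) that counts >>N quotes pointing back at each post and keeps only the ones that got attention:

```python
import re

def count_replies(posts):
    """Count how many posts link back to each post via >>N quotes.
    `posts` maps post number -> post text."""
    counts = {num: 0 for num in posts}
    for text in posts.values():
        for ref in re.findall(r">>(\d+)", text):
            ref = int(ref)
            if ref in counts:
                counts[ref] += 1
    return counts

def high_reply_posts(posts, min_replies=2):
    """Keep only the posts that attracted at least `min_replies` replies."""
    counts = count_replies(posts)
    return {n: t for n, t in posts.items() if counts[n] >= min_replies}
```

Run that over an archive dump and you've got a crude "posts worth imitating" dataset without any manual sorting.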
>>5970 >If you wanted to create an imageboard bot Hmm. Having a waifu who could shitpost bantz alongside me every day sounds pretty interesting haha. It probably wouldn't be too hard to collect millions of posts from archive resources. I guess going through and sorting it all would take years though...
>>5969 One of the points of the whole exercise is to be freed from both Google and Python (well, at least the ML-libraries pitfall trap). Being freed from G*ogle needs no further explanation IMO. The Python thing is related to the terribly convoluted, non-backwards (or forwards!)-compatible dependency hell involved. Far worse than anything I've ever encountered in years working with C, C++, & C#. I actually had no idea dealing with Python could be this painful, since my experience was limited to simple projects in bunny classes. To be quite frank, I've been rather discouraged by it, as I haven't managed to get a single example Python project posted on /robowaifu/ working at all over the past year.
There's also the runtime performance issue. I realize that in large part these Python scripts are just front-ends for underlying C & C++ compiled libraries, but it still adds overhead vs. straight, optimized, compiled C binaries. If we're really going down the path of providing inexpensive robowaifus to men the world over--many with very limited resources available to them--then we really do need to take a proactive, positive stance towards an embedded approach to our software projects. Relying on huge-ass & tricky libraries, doled out to us from locked-off walled gardens at the whims of the 'Protectors of Humanity', seems to me the exact antithesis of this approach.
Regardless, I very much appreciate what you're doing Anon. I've gained a whole lot of both understanding and encouragement from your inputs here on /robowaifu/. And also thanks for this wonderful response, too. I'll be going over all the links you provided here today to see if I can get a better understanding of how to approach this. Now, any chance of you giving the same degree of response to the similar question 'I wonder how I should go about trying to implement LSTM using mlpack (if this is even possible)?' This seems an even more powerful-sounding approach. :^)
>>5973 >as I haven't managed to get a single example Python project posted on /robowaifu/ working Actually, check that. I did manage to get Anon's very cool Clipchan working (other than anything to do with the Spleeter parts of it).
>>5973 Yeah, the way people use Python it's becoming a dumpster fire and pip is a nightmare I don't wanna deal with, especially with major libraries dropping support for common systems and freezing their dependencies to specific versions. No one is maintaining a stable package distribution and it's just an unstable clusterfuck. Python's performance isn't too big a deal though since slow code can be compiled in C easily with Cython. There's still some overhead but it's extremely fast, especially doing loops which the Python interpreter really struggles with. For me Python is more like a sketchpad for prototyping ideas rapidly. If I was doing heavy training, I'd wanna use C++ to get the maximum performance. I don't have much experience using mlpack but LSTMs are already implemented: https://mlpack.org/doc/mlpack-git/doxygen/namespacemlpack_1_1ann.html You can implement a bidirectional LSTM by creating two LSTMs and feeding the input sequence in reverse to the second one and summing the output of the two. If you wanna implement an LSTM yourself for an exercise, you'll have to learn how to implement a RNN first and then read the LSTM papers. There's plenty of great tutorials around, probably even specific ones for C++ on how to do it from scratch without any library. The ANN tutorial is also helpful to get started: https://mlpack.org/doc/mlpack-3.1.0/doxygen/anntutorial.html
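To illustrate the forward/reverse/sum wiring described above, here's a toy pure-Python sketch. A single-unit tanh recurrence stands in for a real LSTM cell, and the names and weights are made up purely for illustration:

```python
import math

def toy_rnn(seq, w_in=0.5, w_rec=0.9):
    # single-unit tanh recurrence standing in for an LSTM cell:
    # h_t = tanh(w_in * x_t + w_rec * h_{t-1})
    h, out = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
        out.append(h)
    return out

def bidirectional(seq):
    # the wiring described above: one pass forward, one pass over the
    # reversed sequence, then sum the two outputs position-by-position
    fwd = toy_rnn(seq)
    bwd = toy_rnn(seq[::-1])[::-1]  # flip back so positions line up
    return [f + b for f, b in zip(fwd, bwd)]
```

With mlpack the same pattern applies: two LSTM layers, feed one the reversed sequence, and sum (or concatenate) the outputs.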
>>5975 >Python I'm glad you at least understand my pain heh. :^) One of the great things optimized, compiled code from a systems-level language like C (and, with care, C++) can bring to the table is efficiency in both time and space. Even Cython-compiled binaries probably bring along a storage-size issue that is likely to be a challenge for embedded by and large. And small-footprint code very often runs faster as well, due to the laws of physics going on inside compute cores. >links Thanks very much. I will add those links to the list for today.
>>5976 >If I was doing heavy training, I'd wanna use C++ to get the maximum performance. BTW, it's not the training I care so much about. A) We can create our own hardware system for that B) We can create the delicately-organized dependencies just so for each training system. It's the runtime perf that matters, b/c that's where the rubber will meet the road for day-to-day robowaifu life for millions of men in the future!
>>5977 There's a lot of techniques for reducing models by 2-3 orders of magnitude with little accuracy loss so they can run on mobile devices, such as network pruning, sparse networks and knowledge distillation. Doing it manually today is quite a bit of work but it will be automated in the near future and consumer hardware and algorithms will be much faster, so I haven't been too worried about the runtime performance. But now that I think about it I'm sure people will want the latest and greatest features and demand maximum performance. We definitely don't want people buying Alexa spydroids because our damn GNU/Waifus run like fucking GIMP.
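As a toy illustration of the magnitude-pruning technique mentioned (my own throwaway function, not anything from an actual framework), the whole idea is just "keep the biggest weights, zero the rest":

```python
def magnitude_prune(weights, keep_ratio=0.1):
    """Zero out all but the largest-magnitude fraction of weights,
    yielding a sparse network. (Ties at the threshold may keep a
    few extra weights.)"""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]
```

Real pruning works per-layer with retraining between rounds, but the 2-3 orders of magnitude figure comes from exactly this kind of thresholding plus sparse storage.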
>>5923 You know, I got to thinking that this might be related to the compiler's optimization of these fast template generics, and that my old Intel hardware might be the cause. So, I set up openblas on a less powerful (CPU-wise) 1GHz ARMv7 Raspberry Pi 2 and tried again. Sure enough, the entire thing comes in at ~370 - 380us -- much faster.
>
I feel better about the whole thing now, but it's also an obvious reminder to profile our robowaifu code for the specific hardware configuration being used. We'd do this anyway, but now it's obvious to do it early as well. :^)
>>5987 Reducing the code down to just the matrix creation and taking its determinant brings it in at ~250 - 300us.
>main.cpp
#include <armadillo>
#include <chrono>
#include <iostream>

using namespace std;
using namespace arma;
using chrono::duration_cast;
using chrono::microseconds;
using chrono::steady_clock;

int main(int argc, char** argv) {
  steady_clock clock{};
  auto begin = clock.now();

  mat A = {{0.165300, 0.454037, 0.995795, 0.124098, 0.047084},
           {0.688782, 0.036549, 0.552848, 0.937664, 0.866401},
           {0.348740, 0.479388, 0.506228, 0.145673, 0.491547},
           {0.148678, 0.682258, 0.571154, 0.874724, 0.444632},
           {0.245726, 0.595218, 0.409327, 0.367827, 0.385736}};

  // determinant
  auto det_a = det(A);

  auto end = clock.now();
  cout << duration_cast<microseconds>(end - begin).count() << "us\n";

  return 0;
}
>
>meson.build
project('arma_test', 'cpp')
add_project_arguments('-std=c++17', '-Wall', '-Wextra', language: 'cpp')

cxx = meson.get_compiler('cpp')
arma_dep = cxx.find_library('armadillo')
openblas_dep = cxx.find_library('openblas')

executable('arma_test', 'main.cpp', dependencies : [arma_dep, openblas_dep])
>>5987
>related to the optimization by the compiler for these fast template generics
I might add here that one of the (few) things I dislike about Mesonbuild is the somewhat wonky way you have to specify optimizations to its build system. From Juci this basically means that if you want release-mode (-O3) optimization, you have to run an external command. So, from Project > Run Command (alt+enter) fill in:
cd build && meson configure --buildtype=release && cd ..
>or do the equivalent from the command line
This will regenerate the build files, and until (and unless) you edit the meson.build file thereafter, all your builds will execute with '-O3' in Juci.
Ehh, I realize now that I'm probably getting this all out of order for anons who are following along in the Modern C++ Group Learning thread, but if you want (as I have done here) to use Meson instead of CMake inside of Juci, then first close Juci, open config.json inside Mousepad, then edit the build management system line (#82 in my file) to use meson instead of cmake:
 "default_build_management_system": "meson",
>
then restart Juci. Your new projects will then use Meson as your build system, and provide you a default meson.build file with all new projects. I'll probably move this over into the Haute Sepplesberry or C++ thread at some point.
>>5978 OK, I'm going to take a shot at something like this. I've already begun to do a complete re-write on the BUMP imageboard archive software I wrote as an emergency measure a year ago or so when we moved to Julay so we wouldn't lose the board. I've since been using it regularly to keep ~80 boards archived, including /robowaifu/ ofc. During the rewrite, I'm planning to rework serialization of posts out to disk files to sort of 'standardize' the half-dozen or so IB software types BUMP currently supports. It occurs to me that that approach could be extended to not only integrate all archive site content desired, but also serve as a stand-alone desktop app that could integrate all things IB. Naturally, this seems a logical facility to begin to integrate sentiment analysis, human-specified validation, sorting & prioritization to allow a robowaifu to both read and post to imageboards. I plan to make it an open community thing for all of /robowaifu/ to give input on if they want to. What do you think, is a robowaifu-oriented Bumpmaster application a good idea? I can probably roll the machine-learning routines directly into it as I learn them, and make an interface to the program that's standardized so that any Anon creating robowaifu AI can directly use the tool with their own waifus. It will work both headless for her use, and with an IB-like GUI for the user. Sort of all came together for me today when I realized I should use a namespace and 'robowaifu' came inexorably to mind. :^) >
>>6084 Keeping the machine learning separate would be a better idea, just an interface for other programs to access imageboards and work with them. If possible, it would be great if it could generalize to other websites too. I imagine a tool where you can specify element selectors to scrape for data and output it all to CSV files or something. Besides being able to download all kinds of data, it would make it easier to maintain when imageboards change their designs.
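For the CSV output end of such a tool, a minimal stdlib-only sketch (hypothetical helper name, assuming records arrive as dicts from whatever selector-scraping layer sits in front of it):

```python
import csv
import io

def records_to_csv(records, fields):
    """Dump scraped records (a list of dicts) to CSV text, one column
    per field; missing fields become empty strings so per-board quirks
    don't break rows."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for rec in records:
        writer.writerow({k: rec.get(k, "") for k in fields})
    return buf.getvalue()
```

Keeping the output as plain CSV like this means the ML side never needs to know which imageboard (or which site design) the rows came from.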
>>6086 Hmm. Probably good advice I'm sure. I'll think it over and see if I can figure out the way to direct the tool in the direction you suggest. BTW, any suggestions for the name of such a 'Waifu Internet Frontend' tool you envision? Bumpmaster seems a little narrowly-focused for such an expansive framework tbh.
I'm just going to be blunt. I am a retarded nigger and all this white people talk is starting to make my head hurt. I want to dip my toes into the pool to see if this sort of thing is worth my time before investing serious effort into it. Is there a waifu AI that I can set up that just werks?
>>6410 Haha. I don't think there's anything you can easily set up by yourself yet Anon. There are plenty of different chatbots out there, but you have no privacy that way ofc. Just look around here, there are a few mentioned. I think replika.ai is a popular spybot chat atm.
>>6410 >Is there a waifu AI that I can set up that just werks? Maybe in a year or two. Even decent chatbots available at the moment require a 16 GB GPU. In two years though they'll only need 6 GB since machine learning doubles in efficiency every 16 months.
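For what it's worth, the arithmetic behind that estimate looks like this (a throwaway helper of my own naming, assuming the 16-month efficiency-doubling claim holds):

```python
def projected_vram_gb(gb_now, months, doubling_months=16):
    # back-of-envelope: if ML efficiency doubles every `doubling_months`,
    # the memory needed shrinks by a factor of 2^(months / doubling_months)
    return gb_now / 2 ** (months / doubling_months)
```

Two years is 24 months, i.e. 1.5 doublings, so a 16 GB model shrinks to roughly 16 / 2^1.5 ≈ 5.7 GB, hence the "about 6 GB" figure.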
>>6416
>Even decent chatbots available at the moment require a 16 GB GPU
Do you mean RAM? Because if so, that is achievable for me. Thanks for letting me know regardless.
>>5818 Thanks, that's both encouraging and inspiring.
>>5810 You still around Anon? How's the mlpack/LSTM/raylib project going? Any progress on it yet?
>>8933 Nah, haven't been around here much. Been focusing on making virtual waifus in Godot and using WebSockets to send data between Godot, PyTorch and the web browser. Right now my priority is to earn money with my projects and build a large GPU cluster with used Tesla K80s so I can advance my research. I still wanna make an AI toolkit with mlpack and raylib but now isn't the time. Also when raylib gets glTF support in 3.6 it will be much more ready for doing interesting AI projects. The main issue though is that most people lack the computing power to actually do anything useful with an AI toolkit. In a year or two though that'll change when people start dumping their unsupported 12 and 16 GB GPUs on the market en masse that can do amazing stuff for $100. We can snatch these cards up dirt cheap and use them with mlpack, and there won't be such an enormous barrier anymore for people to get into AI.
>>8954 Neat. Hope you make plenty of money Anon, you have some great ideas. It will be a nice day when good GPUs become cheaply available on the used market. Really glad to know you're still with us Anon.
Found this when people were criticizing that ML needs so much compute power and that the big corps won't care.
>Abstract: Strong empirical evidence that one machine-learning algorithm A outperforms another one B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, augmentation, parameter initialization, and hyperparameters choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization and hyperparameter choice impact markedly the results. We analyze the predominant comparison methods used today in the light of this variance. We show a counter-intuitive result that adding more sources of variation to an imperfect estimator approaches better the ideal estimator at a 51× reduction in compute cost. Building on these results, we study the error rate of detecting improvements, on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons.
https://hal.archives-ouvertes.fr/hal-03177159
>>9281 Correction: This conversation was about small and maybe dirty datasets instead of big data. https://project.inria.fr/dirtydata/
>>9281 >people were criticizing that ML needs so much computer power and the big corps won't care. Just looking over the abstract and admittedly not digging into the paper yet, there doesn't appear to be any contrary evidence to that complaint. I think both points are objectively, provably true. It's going to be our task here to find extremely efficient ways to run AI-like tasks, if we ever hope to have them operate (necessarily in realtime) onboard our robowaifus. Simple as. Never forget Big Tech/Gov has a demonstrably vested interest in making our task infeasible on modest, hobbyist-grade compute resources. >tl;dr It's up to us to make liars of them, Anon.
>>9285 >It's going to be our task here to find extremely efficient ways to run AI-like tasks, if we ever hope to have them operate (necessarily in realtime) onboard our robowaifus. I didn't think this was much of an issue but after I gave my chatbot an avatar the response delay became really noticeable with her sitting there blinking at me. Once I'm done with my current project I'm gonna seriously look into model compression for mobile systems and implementing these models in mlpack so we can run this stuff on embedded systems. Most of the pull requests for features that transformers require have been merged so it's ready to go now. Also it's such a pain in the ass waiting 2 minutes for PyTorch and Tensorflow to load. If this stuff is ever used in a robowaifu she's gonna have to take a nap for 20 minutes just to boot up. And the disk space usage grows exponentially each year for pointless features I will never have a use for. The mlpack code I've tried so far though compiles super tiny and starts up instantly so it gives some hope of having seamless realtime experiences even on low-end hardware.
>>9288 That is very comforting to hear Anon. I can write much more extensively in response to all your points in this post, but unless you'd like me to, then I'll simply leave it at 'GOODSPEED' :^)
>>3844 ROFL! Although, this is why I sometimes think it would be best if bots just communicate in math...or maybe ultra-rapid Morse code? That might be cool.
>>9288 Did you consider using AIML while the rest of the system starts? I think that's how it will be done eventually. There could be a list of comments to give while waking up, chosen randomly every time. Later maybe adding some new responses automatically, so she could pick up while booting and ask how things went with what you planned to do while she was sleeping. That aside, why shut down at all? Or alternatively, why use only one computer? Because it's still development, okay. I think we're going to have systems which use several computers simultaneously. If one fails or needs to be rebooted the system would still be live and have the same knowledge. So there are different ways to mitigate that, and it might only be a problem while working on a part of a system on one computer, not really an issue for a full build.
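A minimal sketch of that wake-up fallback idea (all names and canned lines here are invented; a real version would use AIML or similar for the canned side):

```python
import random

WAKE_LINES = [
    "Mmm... good morning, Anon.",
    "Give me a minute, I'm still waking up...",
    "Did anything happen while I was asleep?",
]

def wake_up_reply(model_ready, rng=random):
    """Until the heavy model finishes loading, answer from a small
    canned list instead of blocking; hand off to the real chatbot
    once it's ready."""
    if model_ready:
        return None  # let the real model take over
    return rng.choice(WAKE_LINES)
```

The main loop just checks a "model loaded" flag each turn, so she's responsive from second one even if PyTorch takes two minutes behind the scenes.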
>>9377 I haven't used AIML before but that might be a good idea for dealing with loading times in a finished build. The main issue is really just development. Often I wanna test something in a model for 5 minutes and shut it down, but waiting 2 minutes for it to start up wastes a lot of time. Even once PyTorch is loaded into the disk cache it still takes 15 seconds to load. One way I try to get around this is by using Python's interactive console and Jupyter notebooks so PyTorch remains loaded, but sometimes the code I'm testing can't be imported easily without refactoring. It also takes some time loading large models, but that could be fixed by using an SSD or possibly SD Express 8.0 cards in the future with 4 GB/s read speed.
>>9377
>I think we're going to have systems which use some computers simultaneously. If one fails or needs to be rebooted the system would still be live and have the same knowledge.
You are absolutely right, and the future arrived four decades ago, Anon. 'Fly-by-wire' aviation commonly has multiple redundant control computers running simultaneously, usually in groups of 3 on modern aircraft (although the Space Shuttle sported 4 different CnC systems). All the computers receive the same inputs, all of them run the same calculations, and (presumably) all produce the same outputs. Or so it is to be hoped, at least. And that's the basic point: by having these redundant flight computers all running, they validate the common consensus by cross-checks and elections. If one of the three malfunctions, the other two kick it out until it 'comes to its senses'. This leaves the not-too-unlikely scenario "What happens if the two don't agree while the third is out of commission?" Thus the Shuttle's four machines on board. Additionally, it's not uncommon for highly-critical systems to require different contractors and different software running on at least one of the systems. That way, if an unknown bug suddenly crops up, it's more likely the oddball system won't exhibit it. Safety-critical control systems are a complicated and fascinating field, and one ultimately of high importance to /robowaifu/. >>98
>>9390
>or possibly SD Express 8.0 cards in the future with 4 GB/s read speed.
Neat, I didn't know about that yet.
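The cross-check-and-election scheme can be sketched in a few lines (toy code of my own, not any real avionics interface):

```python
from collections import Counter

def vote(outputs):
    """Majority vote over redundant computer outputs. Returns the winning
    value plus the indices of the dissenting machines to kick out until
    they come to their senses. Raises if no strict majority exists."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority; time for a fourth machine, Shuttle-style")
    dissenters = [i for i, o in enumerate(outputs) if o != winner]
    return winner, dissenters
```

With three machines this handles one fault; the no-majority branch is exactly the two-against-two (or one-one-one) deadlock the Shuttle's fourth computer was there to break.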
>"Charticulator: Microsoft Research open-sourced a game-changing Data Visualization platform"
>Creating grand charts and graphs from your data analysis is supported by many powerful tools. However, how to make these visualizations meaningful can remain a mystery. To address this challenge, Microsoft Research has quietly open-sourced a game-changing visualization platform.
Haven't tried this myself yet, but I found this graph humorous & honest enough to make this post to keep track of the tool.
>
https://charticulator.com/index.html
https://github.com/Microsoft/charticulator
https://www.kdnuggets.com/2021/05/charticulator-microsoft-research-data-visualization-platform.html
>>10625 Okay, cool.
How do we get someone important to us to donate the use of one of these? I believe we could create some great robowaifu AI with it!!! :-DDD https://en.wikipedia.org/wiki/Blue_Gene
Does anyone have any resources on how the software integration would work? I.e., say you solve the vision piece so that waifubot can identify you as "husbandu," and you have the chatbot software so that you can talk to your waifu about whether NGE is a 2deep4u anime--how do you connect the two? How do you make it so that waifu recognizes you and says, "Hi, how's it going?"
>>12067 Is this one more of the many theoretical questions here? When building something, solutions for such problems will present themselves. Why theorize about it? And to what extent? Or short answer: Conditionals. Like "if".
>>12069 >Is this one more of the many theoretical questions here? No. Allow me to get more specific. I have an OpenCV based code that can identify stuff (actually, I just got that OakD thing ( https://www.kickstarter.com/projects/opencv/opencv-ai-kit ) and ran through the tutorials), and I have a really rudimentary chatbot software. When I've been trying to think through how to integrate the two, I get confused. For example, I could pipe the output of the OakD identification as chat into the chatbot subroutine, but then it will respond to _every_ stimulus or respond to visual stimulus in ways that really don't make sense.
>>12067 In my experience the simplest way to think about it is like a database. You give the database a query and it gives a response. That query could be text, video, audio, pose data or anything really and the same for the response. You just need data to train it on for what responses to give given certain queries. There was a post recently on multimodal learning with an existing transformer language model: >>11731 >>12079 With this for example you could output data from your OpenCV code and create an encoder that projects that data into the embedding space of the transformer model.
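A toy version of that database analogy (names invented; a plain dot product stands in for a trained encoder and embedding space):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def nearest_response(query_vec, memory):
    """`memory` maps stored embedding vectors (tuples) to responses;
    return the response whose stored vector matches the query embedding
    best, database-lookup style."""
    return max(memory.items(), key=lambda kv: dot(query_vec, kv[0]))[1]
```

The real work is in the encoders: your OpenCV output, audio, text etc. each get projected into the same vector space, and then everything downstream is just this kind of query-against-memory lookup.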
>>12086 Exactly what my brain needed. Thanks anon.
