Thursday, September 14, 2017

Moonwalking with Einstein

https://www.goodreads.com/book/show/6346975-moonwalking-with-einstein
So I finished this one, and I can say it's a nicely put together book about the subculture of memory athletes, along with some insights into the techniques they use the most to remember insane amounts of random numbers, or words, or poems.

A long time ago (around 2008) I read an O'Reilly book called Mind Hacks, which taught some techniques to 'overclock' one's memory, recall, or mental arithmetic. I remembered the "1 is a bun, 2 is a shoe" placeholders, but here I learned about PAO, the Major System and the Dominic System, to be able to remember random numbers (I now know my credit card by heart, after all those years).
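
The book explains the systems far better than I can, but just to make the digit-to-letter step concrete, here's a tiny shell sketch of the standard Dominic System mapping (the people and actions you attach to each pair of initials are the arbitrary-association part, so those are up to you):

```shell
#!/usr/bin/env bash
# Dominic System, step one: each digit maps to a letter
# (1=A, 2=B, 3=C, 4=D, 5=E, 6=S, 7=G, 8=H, 9=N, 0=O).
# Each pair of letters then becomes a person's initials (27 = BG = Bill Gates...).
letters=(O A B C D E S G H N)   # indexed by digit, 0..9

dominic_initials() {
  local number=$1 out="" d
  for (( i = 0; i < ${#number}; i++ )); do
    d=${number:i:1}
    out+=${letters[d]}
  done
  echo "$out"
}

dominic_initials 2717   # prints BGAG: two pairs of initials, so two people
```

From there, a 4-digit chunk becomes one person performing another person's action, and the loci do the rest.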

I'm now working on my own PAO list (I'm using a mixture of the Dominic System and arbitrary associations), and I hope to reach some decent proficiency at memorizing numbers and tasks using the method of loci.

A chapter on savants, mentioning Brainman (highly recommended), and references to Tony Buzan and random folklore from the 90's make it a very enjoyable, fast read.

Overall I liked the book quite a lot, it was fun to read, and I definitely got something from it. I'm afraid none of the techniques used by memory athletes will be very useful for retaining the kind of information I usually need to remember, like programming APIs, data from books (like memorization techniques, heh), or data that needs more context than just a random number.

Anyway, I'm already convinced that converting things to vivid images and placing them in a memory palace greatly improves recall.

Next stop, How to Develop a Perfect Memory. And I think that will be enough for this streak of self-improvement.


Monday, September 11, 2017

Xamarin on Linux

So the whole thing starts with wanting to try Xamarin, for a meetup I'm going to next Saturday.

The story goes in the following way:

  1. Download and install VirtualBox.
  2. Get some Windows CD/iso (Windows 7 in my case, to keep it the least bloated).
  3. Install Windows 7 in VirtualBox.
    1. Realize that you have no space left because Windows needs about 16Gb.
  4. Remove some files from my hard drive until I have 25Gb free. Not very happy about that.
  5. Install Windows 7 in VirtualBox with a 25Gb ".vdi" hard drive file.
  6. Try to store the vdi file on a usb drive so at least I have a quick backup to restore from.
  7. Realize I can't move 4Gb+ files to my usb drive because it's vfat (for Windows compatibility).
  8. /shrug and think "it will be fine". No backup.
  9. Get Visual Studio Community Edition.
  10. Try to install it....
  11. Wait.
  12. Wait.
  13. Wait.
  14. Install Xamarin & the Android SDK from the installer.
  15. Fuck, it needs 8Gb+, and I don't have them available.
  16. Try to increase the hdd volume. The instructions are scary. And for Windows hosts.
  17. Cold sweat.
  18. Share a folder with the guest OS.
  19. Can't install software there.
  20. Flip table.
  21. Realize I don't want to have anything to do with a system that requires 30Gb+ of bloat to write a single hello world, one that I can't ever move off my hard drive because it doesn't survive on a vfat drive (a format I use ONLY for Windows compatibility).
  22. Remove the whole crap.
  23. Go learn something useful and fun. It's not like there's a shortage of candidates (Rust, Clojure, Reverse Engineering, Machine Learning....).

Friday, September 1, 2017

Birds of a feather lisp together

Now that I have some time on my hands (and I already miss Lisp), I'm watching several old Lisp talks, and stumbled upon this event.

On December 3 and 4 of 2004, the Computer Science Department at Indiana University hosted this conference on the occasion of Dan's sixtieth birthday.

Guy Steele's talk is great, as always. Nothing surprising there. He talks about Dan's ideas and Dan himself (and gives the feeling that Dan is isomorphic to what Gilad Bracha says about Luca Cardelli here).

Gerald J Sussman's talk is also very nice, again, as usual.

And all the talks I saw have that lispy emotion that we love.

But the thing that struck me the most (which I appreciate even more after reading 'The Information') was when Guy Steele talked about the time he read a draft of GEB. It's also a warm, fuzzy feeling. Reading GEB leaves some trace in the reader forever. You never read a book the same way again, or look at reality the same way. The same happens with SICP (in how it makes you analyze processes).

Then I realized that in the picture from the event, the 3 of them appear: gls, gjs, and Douglas Hofstadter. Because obviously, they are "friends". And then came the realization that the authors of my 2 favourite books ever hang out sometimes.

I also remember some Alan Kay talk where he says something like: "a few hours ago I ran into Guy Steele in a corridor at this event and we talked about blablabla.....". 100% natural :)

Then, while researching a bit for this post, I browsed Wikipedia for Hofstadter. And found another "proof" that "above a certain level of smartness, there's just 1 degree of separation between any two people".

[...] he organized a larger symposium entitled "Spiritual Robots" at Stanford University, in which he moderated a panel consisting of Ray Kurzweil, Hans Moravec, Kevin Kelly, Ralph Merkle, Bill Joy, Frank Drake, John Holland and John Koza

Thursday, August 31, 2017

Everyone welcome Wilfred to the emacs hall of fame.

The recent emacs hall of fame: Magnars, Malabarba, abo-abo..... and now, we have Wilfred Hughes.

Thank you all for inspiring us, each one with different styles, influences and strategies. Kudos!


Thursday, August 17, 2017

grep for two words in the same file

Let's go for some more csv (or any text file) fixing, grepping, slicing and dicing.

The problem is simple:
Find a file (among a vast number of them) that contains 2 or more words, not necessarily on the same line.

That makes piping greps into other greps useless. The solution is quite easy, but it might not be obvious:

grep -l word1 **/*csv(.) | xargs grep -l word2 
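
By the way, the same trick chains to any number of words. A rough sketch of the generalization in plain bash (no zsh glob qualifiers here, and it will choke on file names with spaces):

```shell
#!/usr/bin/env bash
# Find files containing ALL the given words (anywhere in the file,
# not necessarily on the same line): narrow the list one word at a time.
files_with_all() {
  local word files
  files=$(grep -rl -- "$1" .)           # files containing the first word
  shift
  for word in "$@"; do
    [ -z "$files" ] && break            # no candidates left, stop early
    files=$(printf '%s\n' "$files" | xargs grep -l -- "$word")
  done
  printf '%s\n' "$files"
}

# usage: files_with_all word1 word2 word3
```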

Thanks for watching.

Wednesday, August 16, 2017

guerrilla csv and xlsx

I like to have a huge toolbox so that I can always find the right tool for any task. But I'm also a big fan of composability and orthogonality. So it's a bit like vim vs emacs, or small languages vs big languages, or Scheme vs CL, or Python vs Perl.

On the command line, I also like to find tools that compose. Although pipes and xargs are the way to compose commands, the interfaces have to be compatible, by using stdin/stdout or file names (<() comes to the rescue by helping with the plumbing).
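
A throwaway illustration of that plumbing (mine, unrelated to today's task): comm wants two sorted files, but with <() the "files" can just be other pipelines.

```shell
# comm expects two sorted *files*; <() fakes them from pipelines.
# -13 suppresses the lines unique to the first input and the lines
# common to both, leaving the lines that only appear in the second input.
comm -13 <(printf 'pear\napple\n' | sort) \
         <(printf 'apple\nbanana\n' | sort)
# prints: banana
```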

So today I had to count the appearances of a given word in different xlsx files. Each xlsx had many sheets, and we only wanted to count the appearances in column 9. It was kind of a checksum to make sure that all appearances of $KEYWORD were still there. So, the task is:

Aggregate the counts of appearances of 'keyword' in the ninth column across all sheets of each one of those Excel files. Get the sum per file.

Apparently, after 5 minutes of typing in a trance, this did the trick:

for i in **/*xlsx ; do echo $i ; csvfix write_dsv -f 9  <(xlsx2csv.py --all $i ) G 'keyword' WC ; done


We can't get much further with debugging this. The pity with these kinds of approaches is that they either solve your problem on the first shot, or it gets exponentially difficult to handle special cases or to add debugging info.
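
For what it's worth, if the data had started out as plain csv (no xlsx step), the column-9 counting could be approximated with awk alone, which is a bit easier to poke at. A naive sketch, assuming no quoted fields with embedded commas and a hypothetical 'keyword':

```shell
#!/usr/bin/env bash
shopt -s globstar   # make ** recurse, like zsh's default

# Per file: count rows whose 9th comma-separated field contains "keyword".
# Naive: -F, splits on every comma, so quoted fields with commas break it.
for f in **/*.csv; do
  printf '%s ' "$f"
  awk -F, '$9 ~ /keyword/ { n++ } END { print n+0 }' "$f"
done
```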

At least I got to compare the results themselves using vimdiff:

vimdiff <(csvfix write_dsv -f 9  <(xlsx2csv.py --all file1.xlsx ) G 'keyword') \
        <(csvfix write_dsv -f 9  <(xlsx2csv.py --all file2.xlsx ) G 'keyword')


This is totally not rocket science, but I love the feeling of power and accomplishment you get when these magic incantations work. You run that, you get the result, you use the result, and you throw the whole thing away.

And you keep doing what you were doing.  Or go write a post about that.