August 6th, 2005

balaclava

Tags

So I went through my memories on LJ and tagged everything, since I mostly used the memories feature as a means of tagging. One of the posts I saw reminded me that I was going to give theora the nickname "Pepper".

I'm not sure if I want to waste the rest of the day going through all of my entries and tagging them. Reading through some of the memories has actually kind of inspired me to do something creative, like programming or writing or something.
jotto

Jotto

So I went back to look at my "smart" Jotto AI and found a problem with my logic. I fixed it, and I have it running now. The smart AI takes a while to run, so it probably won't be finished until tomorrow morning. I'll be curious to see whether this fares any better than the other "smart" one or, more importantly, than the dumb one.
jotto

Jotto

Spent some time today thinking (and just recently talking) about ideas for further Jotto AI development. I ended up running my corrected AI algorithm on a smaller dictionary, and it appears to be quite a bit better than the dumb AI. The smart logic only turns on once the possible word list drops below a certain size; before that, it just mimics the dumb AI. Right now, that size is set at 1,000 in order to have the algorithm complete in a somewhat reasonable time. When the dictionary starts out at 12,000 words, it takes 3-4 guesses to get below 1,000; the smaller dictionary has only about 4,000 words, so it takes about 2 guesses. This leads me to believe that the algorithm might actually do really well without a size limit. Once I upgrade my computer, I might try running it there, since I assume it could be faster than jmac.org. I suppose I could also try translating the program into some other language (I'm guessing C) to speed up the execution time.
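For anyone curious about the shape of the thing, here's a minimal sketch in Python. This isn't my actual code, and the details are stand-ins: I'm assuming a scoring convention of letters-in-common counted with multiplicity, and assuming the "smart" heuristic is "pick the guess that minimizes the expected number of remaining candidates," which may or may not match what I actually wrote.

    from collections import Counter

    def jotto_score(guess, secret):
        # Letters in common, counted with multiplicity (one common
        # Jotto scoring convention -- an assumption here).
        g, s = Counter(guess), Counter(secret)
        return sum(min(g[c], s[c]) for c in g)

    def filter_candidates(candidates, guess, score):
        # Keep only the words that would have produced the observed score.
        return [w for w in candidates if jotto_score(guess, w) == score]

    def expected_remaining(guess, candidates):
        # Average number of candidates left after making this guess,
        # assuming each candidate is equally likely to be the secret.
        buckets = Counter(jotto_score(guess, w) for w in candidates)
        n = len(candidates)
        return sum(size * size for size in buckets.values()) / n

    SMART_THRESHOLD = 1000  # the size at which the smart logic turns on

    def next_guess(candidates):
        if len(candidates) >= SMART_THRESHOLD:
            return candidates[0]  # "dumb" mode: any consistent word
        # "Smart" mode: minimize expected remaining candidates. This is
        # O(n^2) in the candidate count, hence the size limit.
        return min(candidates, key=lambda g: expected_remaining(g, candidates))

The quadratic cost is why the threshold matters so much: with 12,000 candidates, choosing one guess this way is on the order of 144 million score computations, versus about a million once the list is down to 1,000.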

But the thinking I did today has the algorithm expanding into a neural network type of thing. I don't feel like going into the whole design right now, but the short version is that I'm imagining an algorithm with 6 parameters (and actually, those 6 parameters might take different values when the possible word list is at different sizes, so there could really be something like 30 parameters), and a neural network (if I understand neural networks correctly, as I've never actually implemented one) seems like a good way to find (close to) optimal values for those parameters. And then, of course, there's look-ahead, like the chess-playing programs use. Combine those two approaches, and we're talking some serious CPU cycles, I think. And lots of iterations in order to find the (close to) optimal parameter values.
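To make the parameter-tuning idea concrete, here's a rough sketch, reusing jotto_score and filter_candidates from the sketch above. The six features are made up, and plain random search stands in for the neural network; the point is just the outer loop of "propose parameters, play simulated games, keep the best."

    import random

    def word_features(word):
        # Six toy features standing in for whatever the real six
        # parameters would weight -- purely illustrative.
        distinct = set(word)
        return [
            len(distinct),                          # distinct letters
            sum(c in "aeiou" for c in distinct),    # distinct vowels
            sum(c in "etaoins" for c in distinct),  # very common letters
            sum(c in "jqxz" for c in distinct),     # rare letters
            len(word) - len(distinct),              # repeated letters
            1.0,                                    # bias term
        ]

    def choose_guess(params, candidates):
        # The tunable heuristic: a weighted sum of the features.
        def heuristic(word):
            return sum(w * f for w, f in zip(params, word_features(word)))
        return max(candidates, key=heuristic)

    def simulate_game(params, dictionary, secret):
        # Play one game and return how many guesses it took.
        candidates = list(dictionary)
        guesses = 0
        while True:
            guess = choose_guess(params, candidates)
            guesses += 1
            if guess == secret:
                return guesses
            score = jotto_score(guess, secret)
            # Drop the guessed word itself so anagram ties can't loop.
            candidates = [w for w in filter_candidates(candidates, guess, score)
                          if w != guess]

    def random_search(dictionary, iterations=100, trials=50):
        # Crude stand-in for a neural network (or any fancier optimizer):
        # sample random parameter vectors and keep whichever one plays
        # the shortest games on average.
        best_params, best_avg = None, float("inf")
        for _ in range(iterations):
            params = [random.uniform(-1, 1) for _ in range(6)]
            secrets = random.sample(dictionary, min(trials, len(dictionary)))
            avg = sum(simulate_game(params, dictionary, s)
                      for s in secrets) / len(secrets)
            if avg < best_avg:
                best_params, best_avg = params, avg
        return best_params

The per-size idea (30 parameters instead of 6) would just mean keeping a separate parameter vector for each band of candidate-list sizes and having choose_guess pick the vector matching the current size; look-ahead would then multiply the cost of every simulated game on top of that.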

I don't imagine it will come as much of a surprise that I'm currently re-reading Gödel, Escher, Bach. I have another Hofstadter book that I haven't read, Fluid Concepts and Creative Analogies, and I think I'm going to read that next. And then I'll probably get his Le Ton beau de Marot, which he talks about in the preface to the 20th-anniversary edition of GEB. And probably his collection Metamagical Themas after that. Yes, I tend to get obsessed with an author, but it's nice that this one will also get me thinking more about creative things (programming and writing) instead of just escaping into a story.
mathematics

Bongard problems

Speaking of Hofstadter, I remembered that I sent in some timing information to Harry Foundalis after reading about Bongard problems for the first time in GEB. I'm not sure exactly when that was, but my name shows up on his list of contributors (his site shows up on the third page of Google results when searching for my name). I don't know if I knew all or any of those people back when I first did this, but I actually know four of the other people on the list, one of whom may be reading this post.