Anki

Anki is a program which makes remembering things easy. Because it’s a lot more efficient than traditional study methods, you can either greatly decrease your time spent studying, or greatly increase the amount you learn.

Anyone who needs to remember things in their daily life can benefit from Anki. Since it is content-agnostic and supports images, audio, videos and scientific markup (via LaTeX), the possibilities are endless.
For example:

  • Learning a language
  • Studying for medical and law exams
  • Memorizing people’s names and faces
  • Brushing up on geography
  • Mastering long poems
  • Even practicing guitar chords!
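Anki's efficiency comes from spaced repetition: its scheduler is derived from the SuperMemo SM-2 algorithm. The sketch below is a simplified SM-2-style update for illustration, not Anki's actual implementation; the quality ratings and starting ease are the standard SM-2 conventions.

```python
def next_review(interval_days, ease, quality):
    """Simplified SM-2-style update. quality is a 0-5 self-rating of recall.

    Returns (new_interval_days, new_ease). A sketch of the algorithm
    Anki's scheduler is derived from, not Anki's exact code.
    """
    if quality < 3:              # failed recall: relearn from the start
        return 1, ease
    # the ease factor drifts with how easy the recall felt
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days < 1:
        return 1, ease
    if interval_days == 1:
        return 6, ease
    return round(interval_days * ease), ease

# Each successful review pushes the card further into the future.
interval, ease = 0, 2.5
schedule = []
for q in [5, 5, 4, 5]:
    interval, ease = next_review(interval, ease, q)
    schedule.append(interval)
print(schedule)   # intervals grow roughly geometrically: [1, 6, 16, 45]
```

The geometric growth of the intervals is what lets a few minutes of daily review maintain thousands of cards.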

Tor

Tor is free software for enabling anonymous communication. The name is an acronym derived from the original software project name, The Onion Router;[7] however, the correct spelling is “Tor”, capitalizing only the first letter.[8] Tor directs Internet traffic through a free, worldwide volunteer network consisting of more than seven thousand relays[9] to conceal a user’s location and usage from anyone conducting network surveillance or traffic analysis. Using Tor makes it more difficult to trace Internet activity back to the user: this includes “visits to Web sites, online posts, instant messages, and other communication forms”.[10] Tor’s use is intended to protect the personal privacy of users, as well as their freedom and ability to conduct confidential communication, by keeping their Internet activities from being monitored.

Onion routing is implemented by encryption in the application layer of a communication protocol stack, nested like the layers of an onion. Tor encrypts the data, including the destination IP address, multiple times and sends it through a virtual circuit comprising successive, randomly selected Tor relays. Each relay decrypts a layer of encryption to reveal only the next relay in the circuit in order to pass the remaining encrypted data on to it. The final relay decrypts the innermost layer of encryption and sends the original data to its destination without revealing, or even knowing, the source IP address. Because the routing of the communication is partly concealed at every hop in the Tor circuit, this method eliminates any single point at which the communicating peers can be determined through network surveillance that relies upon knowing its source and destination.
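The layering can be illustrated with a toy script. Everything here is a stand-in: the relay names are invented and the XOR “encryption” is a placeholder for the per-hop public-key cryptography real Tor uses. The point is only the structure — each relay peels one layer and learns nothing but the next hop.

```python
import json

def xor(data: bytes, key: bytes) -> bytes:
    # toy symmetric cipher; xor(xor(x, k), k) == x
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(message, circuit, destination):
    """Wrap the message in one layer per relay, innermost layer first."""
    payload, next_hop = message.encode(), destination
    # wrap from the exit relay inward, so the first hop's layer is outermost
    for name, key in reversed(circuit):
        inner = json.dumps({"next": next_hop, "data": payload.hex()})
        payload, next_hop = xor(inner.encode(), key), name
    return payload

def peel(payload, key):
    """What one relay does: remove its layer, learn only the next hop."""
    layer = json.loads(xor(payload, key))
    return layer["next"], bytes.fromhex(layer["data"])

circuit = [("guard", b"k1"), ("middle", b"k2"), ("exit", b"k3")]
payload = build_onion("hello", circuit, "example.com")
hops = []
for _, key in circuit:
    next_hop, payload = peel(payload, key)
    hops.append(next_hop)
print(hops, payload.decode())
```

Note that the guard relay learns only “middle”, the middle relay only “exit”, and only the exit relay sees the destination and plaintext — no single hop sees both ends.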

An adversary might try to de-anonymize the user by some means. One way this may be achieved is by exploiting vulnerable software on the user’s computer.[11] The NSA has a technique that targets outdated Firefox browsers codenamed EgotisticalGiraffe,[12] and targets Tor users in general for close monitoring under its XKeyscore program.[13] Attacks against Tor are an active area of academic research,[14][15] which is welcomed by the Tor Project itself.[16]

Leonardo Sticks

Leonardo da Vinci (1452-1519) invented a great many machines to do an extraordinary number of things. On two pages of sketches Leonardo described a roofing system for spanning large areas without internal support. He shows wooden beams laced together in a particular way so that they are self-supporting, and says this idea can be used to cover a space without internal support, quickly and simply without complicated joints or special tools. The structures are shallow domes that are built starting from a center, supporting themselves on the ends of new sticks added to the edges. Leonardo says that the beams should be tied together with ropes and covered with strips of woven wool. He probably had in mind a shady cover for a space like a marketplace or military camp. There is no record that any of them were ever built.
In 1989 Dutch sculptor Rinus Roelofs was working on ways to divide a sphere and found a system that was simple and elegant. He recognized that in addition to dividing a sphere into solid pieces, he could also make the joints of that division into wooden sticks that interlaced to form the sphere. He invented sticks with two notches to help with alignment. They didn’t need to be tied together as Leonardo’s beams did: their weight alone held them in place.

The voting paradox

The voting paradox (also known as Condorcet’s paradox or the paradox of voting) is a situation noted by the Marquis de Condorcet in the late 18th century, in which collective preferences can be cyclic (i.e., not transitive), even if the preferences of individual voters are not cyclic. This is paradoxical, because it means that majority wishes can be in conflict with each other. When this occurs, it is because the conflicting majorities are each made up of different groups of individuals.

Thus an expectation that transitivity on the part of all individuals’ preferences should result in transitivity of societal preferences is an example of a fallacy of composition.
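The classic three-voter example can be checked directly. The candidate names and ballots below are the standard textbook illustration of Condorcet's cycle.

```python
# Condorcet's classic three-voter example: every individual ranking
# is transitive, yet the pairwise majorities form a cycle A > B > C > A.
ballots = [
    ["A", "B", "C"],   # voter 1 prefers A > B > C
    ["B", "C", "A"],   # voter 2 prefers B > C > A
    ["C", "A", "B"],   # voter 3 prefers C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

cycle = [("A", "B"), ("B", "C"), ("C", "A")]
results = {pair: majority_prefers(*pair) for pair in cycle}
print(results)   # every pairwise contest is won 2-1, so the cycle closes
```

Each majority is 2-1, but the three majorities are made up of three different pairs of voters — which is exactly why the collective preference can cycle while no individual's does.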

Deep learning

Deep learning (deep machine learning, or deep structured learning, or hierarchical learning, or sometimes DL) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers, with complex structures or otherwise, composed of multiple non-linear transformations.[1][2][3][4][5]
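“Multiple processing layers composed of non-linear transformations” can be sketched in a few lines. The layer sizes and the tanh non-linearity below are arbitrary illustrative choices, and the weights are random — a real network would learn them from data.

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One processing layer: an affine map followed by a non-linearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def random_layer(n_in, n_out):
    # untrained, randomly initialized weights (for illustration only)
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

x = [0.5, -1.2, 3.0]              # raw input, e.g. pixel intensities
sizes = [(3, 4), (4, 4), (4, 2)]  # each layer re-represents its input
for n_in, n_out in sizes:
    w, b = random_layer(n_in, n_out)
    x = layer(x, w, b)            # successive non-linear transformations
print(len(x), all(-1.0 <= v <= 1.0 for v in x))
```

Stacking such layers is what turns raw intensity values into progressively more abstract representations (edges, regions, and so on), which is the representation-learning view described below.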

Deep learning is part of a broader family of machine learning methods based on learning representations of data. An observation (e.g., an image) can be represented in many ways, such as a vector of intensity values per pixel, or in a more abstract way as a set of edges, regions of particular shape, etc. Some representations make it easier to learn tasks (e.g., face recognition or facial expression recognition[6]) from examples. One of the promises of deep learning is replacing handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.[7]

Research in this area attempts to make better representations and create models to learn these representations from large-scale unlabeled data. Some of the representations are inspired by advances in neuroscience and are loosely based on interpretation of information processing and communication patterns in a nervous system, such as neural coding which attempts to define a relationship between various stimuli and associated neuronal responses in the brain.[8]

Various deep learning architectures such as deep neural networks, convolutional deep neural networks, deep belief networks and recurrent neural networks have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks.

Alternatively, deep learning has been characterized as a buzzword, or a rebranding of neural networks.[9][10]

OpenBUGS

BUGS is a software package for performing Bayesian inference Using Gibbs Sampling. The user specifies a statistical model, of (almost) arbitrary complexity, by simply stating the relationships between related variables. The software includes an ‘expert system’, which determines an appropriate MCMC (Markov chain Monte Carlo) scheme (based on the Gibbs sampler) for analysing the specified model. The user then controls the execution of the scheme and is free to choose from a wide range of output types.
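For the simplest models, the kind of Gibbs scheme BUGS constructs automatically can be written by hand. The sketch below samples a bivariate normal with correlation rho, where each full conditional is itself normal — illustrative Python, not BUGS model code, and the rho value and sample counts are arbitrary.

```python
import random

random.seed(1)

# Gibbs sampling for a standard bivariate normal with correlation rho:
# the full conditionals are x|y ~ N(rho*y, 1-rho^2) and symmetrically
# for y|x, so each step is a single normal draw.
rho = 0.8
cond_sd = (1 - rho**2) ** 0.5
x, y = 0.0, 0.0                      # arbitrary starting point
samples = []
for _ in range(20000):
    x = random.gauss(rho * y, cond_sd)   # draw x given current y
    y = random.gauss(rho * x, cond_sd)   # draw y given new x
    samples.append((x, y))

burn = samples[5000:]                # discard burn-in
mean_x = sum(s[0] for s in burn) / len(burn)
mean_xy = sum(s[0] * s[1] for s in burn) / len(burn)
print(round(mean_x, 1), round(mean_xy, 1))   # should approach 0 and rho
```

The “expert system” mentioned above is doing exactly this kind of derivation — working out each variable's full conditional from the stated relationships — but for models far too complex to derive by hand.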

Versions…

There are two main versions of BUGS, namely WinBUGS and OpenBUGS. This site is dedicated to OpenBUGS, an open-source version of the package, on which all future development work will be focused. OpenBUGS, therefore, represents the future of the BUGS project. WinBUGS, on the other hand, is an established and stable, stand-alone version of the software, which will remain available but not be developed further. The latest versions of OpenBUGS (from v3.0.7 onwards) have been designed to be at least as efficient and reliable as WinBUGS over a wide range of test applications. See the WinBUGS page for more information. OpenBUGS runs on x86 machines with MS Windows, Unix/Linux or Macintosh (using Wine).

Note that software exists to run OpenBUGS (and analyse its output) from within both R and SAS, amongst others.

For additional details on the differences between OpenBUGS and WinBUGS see the OpenVsWin manual page.

Go engine

The challenge is daunting. In 1994, machines took the checkers crown, when a program called Chinook beat the top human. Then, three years later, they topped the chess world, IBM’s Deep Blue supercomputer besting world champion Garry Kasparov. Now, computers match or surpass top humans in a wide variety of games: Othello, Scrabble, backgammon, poker, even Jeopardy. But not Go. It’s the one classic game where wetware still dominates hardware.

An interview with Martin Müller

David Ormerod: To start with please tell us a bit about yourself and your research interests. How did you learn of Go and how did you become involved in computer Go?

Martin Müller: I am a professor in the Department of Computing Science at the University of Alberta in Edmonton, Canada.

My research interests are in heuristic search, studying how to solve large, complex problems using computer searches.

The main application areas studied in my research group are games such as Go, and automated planning.

In recent years, Monte Carlo search methods have been our main focus – both for games and for planning. As part of my game-related activities, I am the leader of the team developing the open source software Fuego, which was the first program to defeat a top professional in an even game on 9×9.

I learned Go when I was 15 years old and played a lot in my teens and early twenties. I am a 5, 6 or 7 dan amateur player, depending on the country. My biggest success was probably taking 2nd place at the US Go Congress open tournament in 1998.

I became interested in computer Go as an undergraduate in my home country of Austria, through my supervisor. This was around 1985. I have stayed with the topic ever since, doing a Diploma thesis, a PhD and a few postdocs, before getting my current job.

What’s Monte Carlo?

Most people with any interest at all in computer Go know that the strongest programs these days use a ‘Monte Carlo’ algorithm, but many people don’t know much more about it than that.

Could you briefly explain where the term Monte Carlo came from and what it means in this context?

The term Monte Carlo refers to an affluent suburb of Monaco which is famous for its Casino. Monte Carlo methods use statistics collected from randomized simulations as a way to analyze complex systems which are too hard to ‘solve’ by other means.

They were first developed for nuclear physics and atomic bomb research in the 1940s. Nowadays they are very widely used, but their application to games such as Go took off just a few years ago.

Now that computers are powerful enough, Monte Carlo methods are used across a wide variety of disciplines.

For example, I’ve used them at work to help with risk analysis. It’s often difficult to explain to people why this approach works though, because it seems so counterintuitive at first.

Do you have a good analogy to explain how a large enough number of random simulations can provide a useful answer to a question?

Statistical sampling, which is at the core of Monte Carlo methods, is a very powerful technique.

For example, think about opinion polls. Any single random person who you ask about their opinion may be completely crazy, but if you ask one thousand people, who are carefully selected to represent the overall population, then you get quite a good idea of the general mood and can use that to make informed decisions.

This is why we keep getting more and more of those pesky phone calls doing surveys at dinner time!
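The sampling idea carries over directly to numerical questions. A textbook illustration (my addition, not from the interview) estimates pi by asking many random points whether they land inside a quarter circle — each point is one “poll respondent”:

```python
import random

random.seed(42)

def estimate_pi(n_samples):
    """Monte Carlo estimate of pi: the fraction of uniform random
    points in the unit square that fall inside the quarter circle
    of radius 1 approaches pi/4."""
    inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                 for _ in range(n_samples))
    return 4 * inside / n_samples

for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))   # estimates tighten as the sample grows
```

As with the opinion poll, no individual sample is informative, but the aggregate converges — and the error shrinks like one over the square root of the sample size.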

How computer Go programs improved

It’s been more than five years since UCT (an extension of Monte Carlo search) was first applied to Go, but the strongest programs were still at the kyu level not that long ago (at least on 19×19 boards).

In contrast, the strongest programs these days are dan level and they seem relatively sharp, even in tactical situations.

To what extent do they make use of heuristics for shape, tesuji, life and death, the opening and so on?

Many programs use learned local patterns such as 3×3 for simple shape, and they modify the playouts to avoid some bad tactical moves.

Also, when there is a single important fight going on, the full board search will be able to analyze it quite deeply, and do well in the tactics. The problems start when there are several fights going on at the same time.

For the opening, some programs simply use large scale patterns to imitate popular openings played by human experts. But usually those are not absolute rules. These moves simply get a bonus, but the search can override them. So it is better than the hard coded ‘expert systems’ of the 1980s.
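The search referred to here is UCT: Monte Carlo tree search that chooses which move to simulate next with the UCB1 rule, trading the observed win rate of a move against an exploration bonus for under-sampled moves. A minimal sketch of the selection step — the move names, statistics, and exploration constant below are made up for illustration:

```python
import math

def ucb1(wins, visits, parent_visits, c=1.4):
    """UCB1 score: exploitation term (win rate) plus exploration bonus."""
    if visits == 0:
        return float("inf")   # always try unvisited moves first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# (wins, visits) for each candidate move at one tree node
children = {"D4": (60, 100), "Q16": (30, 40), "K10": (0, 0)}
parent_visits = sum(v for _, v in children.values())
choice = max(children, key=lambda m: ucb1(*children[m], parent_visits))
print(choice)   # the unvisited move is selected first
```

This is also where the pattern bonuses mentioned above fit in: a prior from a learned pattern can inflate a move's initial score, but once enough playouts accumulate, the observed win rate dominates and the search can override the pattern.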

What other changes and improvements have helped computers get to their current mid-dan level on larger boards since then?

I think many factors are involved. Better patterns and rules as above, better search, better parallel scaling, several years of testing, debugging and tuning the programs, and better hardware all help.

What are the pros and cons of combining a knowledge based approach with a Monte Carlo approach?

Crazy Stone is a program that plays the game of Go (Weiqi, Baduk), by Rémi Coulom.

It is one of the first computer Go programs to utilize a modern variant of the Monte-Carlo tree search. It is part of the Computer Go effort. In January 2012 Crazy Stone was rated as 5 dan on KGS, in March 2014 as 6 dan.

Coulom began writing Crazy Stone in July 2005, and at the outset incorporated the Monte Carlo algorithm in its design. Early versions were initially available to download as freeware from his website, although they are no longer offered.[2] Pattern recognition and searching were added in 2006, and later that year Crazy Stone took part in its first tournament, winning a gold medal in the 9×9 competition at the 11th Computer Olympiad.[2] Coulom subsequently entered the program into the 12th Computer Olympiad the following year, winning bronze in the 9×9 and silver in the 19×19 competitions.

However, Crazy Stone’s most significant accomplishment was to defeat Kaori Aoba, a professional Japanese 4 dan, in an 8-stone handicap match in 2008. In doing so, the engine became the first to officially defeat an active professional in Japan with a handicap of less than nine stones. Three months later, on 12 December 2008, Crazy Stone defeated Aoba again in a 7-stone match.[3]

In March 2013, Crazy Stone beat Yoshio Ishida, Japanese honorary 9-dan, in a 19×19 game with four handicap stones.[4]

On March 21, 2014, at the second annual Densei-sen competition, Crazy Stone defeated Norimoto Yoda, Japanese professional 9-dan, in a 19×19 game with four handicap stones by a margin of 2.5 points.

Crazy Stone computer Go program defeats Ishida Yoshio 9 dan with 4 stones

Crazy Stone, a computer Go program by Rémi Coulom, defeated Ishida Yoshio 9p with a four stone handicap, as part of the inaugural Densei-sen at the 6th Computer Go UEC Cup in Japan (March 20, 2013).

The Computer vs the computer

It was an ironic showdown between the computer and ‘The Computer’.

Ishida was nicknamed ‘The Computer’ in his prime, because of the accuracy of his counting and endgame skills.

Ishida Yoshio

Born in 1948, Ishida is now 64 years old.

However, back in the 70s, Ishida won the prestigious Honinbo title for an impressive five consecutive years, making him one of the top players of that era.

After the game, Ishida said that he thought the program was a ‘genius’ and marvelled at the calmness and flexibility of its moves.


Zen is a strong Go engine by the Japanese programmer Yoji Ojima (cluster parallelism added by Hideki Kato). On KGS, several bots running the engine maintain ranks between 3d and 5d: Zen19, Zen19b, Zen19D and Zen19n. Zen was the first bot to hold a KGS 3d rating for more than 20 rated games in a row, and a blitz version appeared to be holding a 5 dan rating in 2011. It was also the first to hold 2d and 1d ratings for more than 20 games. Hardware used to run Zen19 on KGS: Mac Pro 8 core, Xeon 2.26 GHz.

It won the 2009 Computer Olympiad in Pamplona, Spain, running on the slowest hardware among the competitors. It also won the 2011 Olympiad in Tilburg.

Zen was released commercially under the name Tencho no Igo Zenith Go on September 18, 2009. Version 2 was released on August 27, 2010, and version 3 on September 30, 2011. Website for the software (Japanese): http://soft.mycom.co.jp/pcigo/tencho3/index.html

See latest go software updates for current version information.


In 2011, several different experiments of Zen started playing on KGS:

Name   | Rating | Time                               | Hardware
Zen19N | 4D     | 20 minutes + 30 seconds byo-yomi   | Mac Pro 8 cores, Xeon 2.26 GHz
Zen19B | 5D     | 15 seconds per move                | Mac Pro 8 cores, Xeon 2.26 GHz
Zen19D | 6D     | 15 seconds per move                | Mini-cluster of 6 PCs
Zen19S | 5D     | 20 minutes + 30 seconds byo-yomi   | Mini-cluster of 6 PCs
Zen19  | 5D     | 15 seconds per move                |

The only version active in 2014 has been Zen19S.