<html> <head>
<title>Tom Minka</title>
</head>
<body>
<p>
<a href="pictures/Tom xmas.jpg"><img src="pictures/Tom xmas 25.jpg" alt="Tom Minka" align=left></a>
</p>
<h1 align=center><span class="bighead">Tom Minka</span></h1>
<h3 align=center>PhD, Computer Science</h3>
<h3 align=center><a href="https://www.microsoft.com/en-us/research/theme/machine-intelligence/">Machine Intelligence Group</a></h3>
<h3 align=center><a href="http://research.microsoft.com/aboutmsr/labs/cambridge/">Microsoft Research (Cambridge, UK)</a></h3>
<p>
Hi! I work in the field of Bayesian statistical inference, and I develop
efficient algorithms for use in machine learning, computer vision, text
retrieval, and data mining. My goal is to make Bayesian inference a
standard tool for processing information.
</p>
<p>
To make Bayesian inference easier to understand, I've written <a
href="papers/">papers</a> which illustrate Bayesian methods on important
problems in machine learning, computer vision, and text retrieval.
</p>
<p>
What makes Bayesian inference special is that it takes into account
all possible states of nature, not just the one that is the most
likely. At first glance, this seems to require a lot of computation.
I've addressed this issue by developing new computational methods,
including the <a href="papers/ep/">Expectation Propagation</a>
algorithm. With this algorithm, you can obtain the benefits of
Bayesian inference at a typically small additional cost relative to
non-Bayesian methods.
</p>
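<p>
To see concretely what "all possible states of nature" means, here is a toy
example of my own (not drawn from any of the papers above): predicting a coin's
next flip by averaging over every possible bias, weighted by its posterior
probability, versus plugging in only the single most likely bias.
</p>
<pre>
# Toy beta-binomial example: Bayesian averaging vs. a single best guess.
# Observe 2 heads in 3 flips; predict the probability the next flip is heads.
from math import comb

# Uniform prior over the coin's bias theta, discretized on a grid.
grid = [i / 100 for i in range(1, 100)]
heads, flips = 2, 3

# Binomial likelihood of the data at each candidate theta.
lik = [comb(flips, heads) * t**heads * (1 - t)**(flips - heads) for t in grid]
norm = sum(lik)
posterior = [l / norm for l in lik]

# Bayesian prediction: average over ALL states of nature (every theta),
# weighted by how probable each one is given the data.
bayes_pred = sum(p * t for p, t in zip(posterior, grid))

# Plug-in prediction: commit to the single most likely theta.
map_theta = grid[lik.index(max(lik))]

print(f"Bayesian (posterior mean):   {bayes_pred:.3f}")  # about 0.6
print(f"Plug-in (most likely theta): {map_theta:.3f}")   # about 2/3
</pre>
<p>
The two answers differ (3/5 versus 2/3) because the Bayesian prediction hedges
against the biases the three flips have not ruled out; with so little data,
that hedging matters.
</p>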
<p>
Bayesian inference also requires good models. I've taken two different
approaches to this. The first is to <a href="papers/viz.html">visualize
data</a> in order to determine an appropriate model. I have developed
step-by-step methods for visualizing data, taught in my classes at CMU.
My second approach is to analyze successful non-Bayesian methods in
computer vision and text retrieval, and determine what model assumptions
would lead to those methods. This "reverse-engineering" process is usually
quite instructive, and by improving the recovered models you can improve on
the original methods' results.
</p>
<p>
My main project these days is Bayesian inference for matchmaking for online multiplayer games.
See <a href="https://www.microsoft.com/en-us/research/project/truematch/">TrueMatch</a>
and <a href="https://www.microsoft.com/en-us/research/project/trueskill-ranking-system/">TrueSkill</a>.
</p>
<p>
I also work on <a href="http://dotnet.github.io/infer">Infer.NET</a>, a software library for inference in graphical models.
Most of my research in message-passing algorithms goes into it.
</p>
<h2><a href="papers/">Research papers</a></h2>
<h2>Software</h2>
<ul>
<li><a href="software/matlab.html">Tips on accelerating MATLAB</a>
<li><a href="https://github.com/tminka/lightspeed/">lightspeed</a> MATLAB toolbox
<li><a href="https://github.com/tminka/fastfit/">fastfit</a> toolbox for fitting Dirichlet distributions
<li><a href="https://github.com/tminka/bpm/">Bayes Point Machines</a>
<li><a href="http://www.media.mit.edu/~tpminka/software/maps/">Drawing maps</a>
<li><a href="http://www.media.mit.edu/~tpminka/software/compoisson/">Generalized Poissons</a>
</ul>
Software associated with a paper can be found on its abstract page.
<h2>Teaching and tutorials</h2>
<ul>
<li>
A <a href="http://www.media.mit.edu/~tpminka/statlearn/glossary/">Statistical Learning/Pattern
Recognition Glossary</a>
<li>
<a href="http://www.media.mit.edu/~tpminka/courses/36-350/">Data Mining</a>, at <a href="http://www.cmu.edu/">CMU</a>, Fall 2003,
<a href="http://www.media.mit.edu/~tpminka/courses/36-350.2002/">2002</a>, and
<a href="http://www.media.mit.edu/~tpminka/courses/36-350.2001/">2001</a>
<li>
<a href="http://www.media.mit.edu/~tpminka/courses/36-315/">Statistical Graphics and Visualization</a>,
at CMU, Spring 2003 and
<a href="http://www.media.mit.edu/~tpminka/courses/36-315.2002/">2002</a>
<li>
<a href="http://www.media.mit.edu/~tpminka/courses/10-602/">Statistical Approaches to Learning
and Discovery</a>, at CMU, Spring 2001
<li>
<a
href="http://vismod.www.media.mit.edu/vismod/classes/mas622-98/">Pattern
Recognition</a>, at MIT, Spring 1998
<li>
<a href="http://www.media.mit.edu/~tpminka/statlearn/">Statistical Learning Reading Group</a>, at MIT,
1997
<li>
<a href="http://www.media.mit.edu/~tpminka/patterns/">Software Patterns</a>, at MIT, January 1997
<li>
<a href="http://www.media.mit.edu/~tpminka/PLE/">Programming Language Exploration</a>, at MIT, January 1996
</ul>
Also see <a href="http://www.media.mit.edu/~tpminka/">my MIT page</a>.
<p>
<a href="minka-cv.pdf">My Curriculum Vitae</a>
<p>
<a href="http://msrweb/people/minka/default.aspx">My internal web site</a>
<hr>
<address>
Tom Minka,
email: (my last name) -at- microsoft.com
</address>
</body> </html>