Hi, I’m Mike Deigan. I do philosophy.

Currently I’m the Mellon Postdoctoral Fellow in Philosophy at Rutgers. I did a PhD at Yale, BPhil at Oxford, and BA at UNC Chapel Hill. Here’s my cv. Here’s my teaching page. And here’s my blog.

I would welcome comments and questions about any of the papers on this page. You can email me at

Publications
Offsetting Harm
(forthcoming*) Oxford Studies in Normative Ethics
*pending vetting process
pdf
To make sense of the permissibility of offsetting, the standard deontological constraint against doing harm should be replaced with a constraint against unoffset increases in harm.
A Plea for Inexact Truthmaking
(2019) Linguistics and Philosophy
pdf | official version | poster
Contra Kit Fine, truthmaker semantics should define exact truthmaking in terms of inexact truthmaking, rather than vice versa.
Counterfactual Donkeys Don’t Get High
(2018) Proceedings of Sinn und Bedeutung 22
pdf | official version | slides
Universal entailments of counterfactual donkey sentences aren’t as universal as the literature assumes. This makes deriving them from a contextually provided similarity ordering more attractive than deriving them from a semantically encoded assignment sensitivity.
Counterfactual Double Lives
(2017) Proceedings of the 21st Amsterdam Colloquium
pdf | official version | slides
Counteridenticals (“If I were you...”) and related, more ordinary counterfactuals show that we should reject the Kripke-Kaplan orthodoxy about indexicals. Lewisian counterpart theory looks like the way to go.
In Progress
Stupefying [R&R]
pdf
S stupefies A when A accepts S’s assertion without understanding it. I argue that stupefying is an important means to both good ends (cooperative, jointly rational inquiry) and bad ones (manipulation), in ways that current models of conversation do not account for.
Questions Should Have Answers
pdf
I propose and defend a norm that links wondering, belief, and abilities to conceive: for any question one wonders, one must be able to conceive potential answers to it that one doesn’t reject. Sometimes the best way to avoid violating this norm is to revise your beliefs, but sometimes it’s better to revise your concepts so that you are unable to wonder the question, or so that you can conceive a new answer.
Don’t Trust Fodor’s Guide in Monte Carlo
pdf
Actually 𝜑-ing doesn’t imply ability to 𝜑 when the right kind of randomness is involved. This kind of randomness is involved in the sampling methods that many cognitive scientists think we actually use in learning. So, contra Fodor, we can learn concepts by hypothesis testing without circularity.
Bad Concepts, Bilateral Contents
pdf | slides
We can take concepts to be inconsistent without going in for inferentialism. Bilateralism is all that’s needed. But a puzzle remains: what’s defective about inconsistent concepts?

Having a Concept Has a Cost
Email me for draft | handout
There is epistemic value in understanding others “from the inside”. This means that merely possessing a concept has an epistemic cost, even when we idealize away our computational limitations.
Conceptual Quarantine
Email me for draft | handout | slides
A certain kind of conceptual fragmentation is epistemically ideal.
A Normative Account of Epistemic Emotions
with Juan S. Piñeros Glasscock
Email me for draft
The category of epistemic emotions should be defined in normative rather than instrumental terms.
Partiality and Objective Value
Email me for draft | slides
We can hold both (i) that there is a rational requirement to prefer outcomes that are objectively best and (ii) that one can rationally prefer in ways that are partial towards oneself and one’s loved ones. But to do so, we can’t just gerrymander the objective betterness ordering over worlds. We need to rethink what it is an ordering of.