PhD Scholar,
Dept. of Comp. Sci. & Engg.

IIT Delhi

410, SIT Building,
IIT Delhi

Delhi - 110016, India.

Phone: +91-8826859252

Email: happy.mittal@cse.iitd.ac.in

I am a PhD scholar in the Department of Computer Science and Engineering at the Indian Institute of Technology Delhi, working with Dr. Parag Singla. I joined in July 2013. I completed my M.Tech in Computer Science at IIT Delhi in June 2013. Before that, I completed my B.Tech in Information Technology at YMCA University of Science and Technology, Faridabad, in 2011.

I work in the area of statistical relational learning. My thesis topic is **"Exploiting symmetries in Probabilistic Graphical Models"**.

I am also part of Data Analytics & Intelligence Research (DAIR), a fledgling research group at IITD-CSE focused on integrating various fields of data science, such as machine learning, data management, and data mining, towards the goal of building intelligent software systems.

Currently, I am doing an internship with Kristian Kersting at the Technical University of Dortmund, where I am working on learning tractable deep neural networks and learning more interpretable object embeddings.

Please find my Curriculum Vitae here.

My primary research area is **"Exploiting symmetries in Probabilistic Graphical Models"**. Markov Logic is a statistical relational language that expresses both relationships between entities and uncertainty in those relationships in a unified way. Relations between entities (such as Friends) are expressed as first-order logic formulas, and uncertainty is specified by associating a weight with each formula. First-order formulas naturally induce symmetries among objects (through their quantifiers), and we exploit these symmetries to make inference tractable. This technique, called lifted inference, has recently gained much popularity in the SRL (statistical relational learning) community. Currently, we are also exploiting symmetry in learning algorithms.
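To make the semantics concrete, here is a minimal Python sketch of a Markov Logic Network with a single weighted formula over a toy domain. The predicate names (`Smokes`, `Cancer`), the weight, and the domain are all illustrative assumptions, not part of any actual system; the point is that a world's weight depends only on *how many* groundings are satisfied, not *which* ones, which is exactly the symmetry lifted inference exploits.

```python
import itertools
import math

# One weighted first-order formula:  w : Smokes(x) -> Cancer(x)
# grounded over a small domain of constants (toy example).
w = 1.5
domain = ["Alice", "Bob", "Carol"]

def ground_formula_satisfied(world, person):
    # The grounding Smokes(p) -> Cancer(p) holds unless
    # Smokes(p) is true and Cancer(p) is false.
    return (not world[("Smokes", person)]) or world[("Cancer", person)]

def unnormalized_weight(world):
    # MLN semantics: exp(w * n), where n = number of satisfied groundings.
    n = sum(ground_formula_satisfied(world, p) for p in domain)
    return math.exp(w * n)

# Enumerate all worlds (truth assignments to the 6 ground atoms).
atoms = [(pred, p) for pred in ("Smokes", "Cancer") for p in domain]
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
Z = sum(unnormalized_weight(wld) for wld in worlds)

# Symmetry: permuting constants leaves the distribution unchanged.
# A world where only Alice smokes has the same weight as one where
# only Bob smokes -- lifted inference reasons over such orbits at once.
w1 = {a: False for a in atoms}; w1[("Smokes", "Alice")] = True
w2 = {a: False for a in atoms}; w2[("Smokes", "Bob")] = True
assert unnormalized_weight(w1) == unnormalized_weight(w2)
```

Because the weight of a world is a function only of the *count* of satisfied groundings, a lifted algorithm can sum over counts (here, 0 to 3 violations) instead of over all 64 worlds.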

- Fine Grained Weight Learning in Markov Logic Networks. Happy Mittal, Shubhankar Suman Singh, Vibhav Gogate and Parag Singla. Sixth International Workshop on Statistical Relational AI (StarAI 2016), New York, USA. Poster
- Analysis and characterisation of comparison shopping behaviour in the mobile handset domain. Mona Gupta, Happy Mittal, Parag Singla and Amitabha Bagchi. Electronic Commerce Research, May 2016.
- Lifted Inference Rules With Constraints. Happy Mittal, Anuj Mahajan, Vibhav Gogate and Parag Singla. Twenty-Ninth Annual Conference on Neural Information Processing Systems (NIPS), 2015. Montreal, Canada. Supplementary material, Slides, Poster
- New Rules for Domain Independent Lifted MAP Inference. Happy Mittal, Prasoon Goyal, Vibhav Gogate and Parag Singla. Twenty-Eighth Annual Conference on Neural Information Processing Systems (NIPS), 2014. Montreal, Canada. Supplementary material, Slides, Poster
- Characterizing Comparison Shopping Behavior: A Case Study. Mona Jain, Happy Mittal, Parag Singla and Amitabha Bagchi. Workshop on Big Data Customer Analytics (BDCA), 2014. *Co-located with ICDE-14*. Chicago, IL, USA.

- Markov Logic (Domingos et al.)
- Lifted MAP Inference for Markov Logic Networks (Sarkhel et al., AISTATS 2014)
- Understanding Belief Propagation and its Generalizations (Yedidia, 2001)
- Lifted First-Order Belief Propagation (Singla and Domingos, 2008)
- Counting Belief Propagation (Kersting et al., 2009)
- Approximate Lifted Belief Propagation (Singla et al., 2014)
- Belief Propagation and Revision in Networks with Loops
- A Simple Approach to Bayesian Network Computations
- Exploiting Symmetry in Lifted CSPs
- First-Order Probabilistic Inference (David Poole, 2003)
- Lifted First-Order Probabilistic Inference (Braz et al., 2005)
- Lifted Inference Seen from the Other Side: The Tractable Features (Jha et al., NIPS 2010)
- On the Optimality of Solutions of the Max-Product Belief Propagation Algorithm in Arbitrary Graphs
- Maximum Weight Matching via Max-Product Belief Propagation
- Turbo Decoding as an Instance of Pearl's "Belief Propagation" Algorithm
- On the Completeness of First-Order Knowledge Compilation for Lifted Probabilistic Inference
- Probabilistic Theorem Proving (Gogate and Domingos, 2011)
- Integration of SAT and CP Techniques
- Tractability through Exchangeability: A New Perspective on Efficient Probabilistic Inference
- Computing Query Probability with Incidence Algebras (Dalvi et al., 2010)
- Lifted Variable Elimination: Decoupling the Operators from the Constraint Language (Taghipour et al., 2013)
- Lifted Variable Elimination with Arbitrary Constraints (Taghipour et al., 2012)
- Generalized Counting for Lifted Variable Elimination (Taghipour and Davis, 2012)
- Lifted Inference and Learning in Statistical Relational Models (Guy Van den Broeck, PhD thesis, 2013)
- Completeness Results for Lifted Variable Elimination (Taghipour et al., AISTATS 2013)
- Lifted Probabilistic Inference with Counting Formulas (Milch et al., AAAI 2008)