Happy Mittal

PhD Scholar, Dept. of Comp. Sci. & Engg.
IIT Delhi

Work Location:

410, SIT Building, IIT Delhi
Delhi - 110016, India.
Phone: +91-8826859252
Email: happy.mittal@cse.iitd.ac.in

About me

I am a PhD scholar in the Department of Computer Science and Engineering at the Indian Institute of Technology (IIT) Delhi, working with Dr. Parag Singla. I joined in July 2013. I completed my M.Tech in Computer Science at IIT Delhi in June 2013. Before that, I completed my B.Tech in Information Technology at YMCA University of Science and Technology, Faridabad, in 2011.

I work in the area of statistical relational learning (SRL). My thesis topic is "Exploiting Symmetries in Probabilistic Graphical Models".

I am also a part of the Data Analytics & Intelligence Research (DAIR) group, a fledgling research group at IITD-CSE focused on integrating various fields of data science, such as machine learning, data management, and data mining, toward the goal of building intelligent software systems.

Currently, I am doing an internship with Kristian Kersting at the Technical University of Dortmund, where I am working on learning tractable deep neural networks and learning more interpretable object embeddings.

Please find my Curriculum Vitae here.

About My Research

My primary research area is "Exploiting Symmetries in Probabilistic Graphical Models". Markov Logic is a statistical relational language that expresses both relationships between entities and uncertainty in those relationships in a unified way. Relations between entities (such as Friends) are expressed as first-order logic formulas, and uncertainty is specified by associating a weight with each formula. First-order formulas naturally exhibit symmetry among objects (through their quantifiers), and we exploit these symmetries to make inference tractable. This technique, called lifted inference, has recently gained much popularity in the SRL (statistical relational learning) community. Currently, we are also exploiting symmetry in learning algorithms.
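For concreteness, here is a minimal brute-force sketch of Markov Logic semantics in Python. The two-person domain, the single Friends/Smokes formula, and the weight are purely illustrative (they are not taken from any of my papers): a world assigns truth values to all ground atoms, and its probability is proportional to exp(weight × number of true ground formulas).

```python
from itertools import product
import math

# Toy MLN over domain {A, B} with one weighted first-order formula:
#   1.5 : Smokes(x) ^ Friends(x, y) => Smokes(y)
# (domain, formula, and weight are illustrative assumptions)
PEOPLE = ["A", "B"]
W = 1.5

# Ground atoms: Smokes(p) for each person, Friends(p, q) for each pair.
ATOMS = [("Smokes", (p,)) for p in PEOPLE] + \
        [("Friends", (p, q)) for p in PEOPLE for q in PEOPLE]

def n_true_groundings(world):
    """Count true groundings of Smokes(x) ^ Friends(x, y) => Smokes(y)."""
    n = 0
    for x, y in product(PEOPLE, repeat=2):
        antecedent = world[("Smokes", (x,))] and world[("Friends", (x, y))]
        n += (not antecedent) or world[("Smokes", (y,))]
    return n

# Enumerate all 2^6 worlds and compute the partition function Z.
worlds = [dict(zip(ATOMS, vals))
          for vals in product([False, True], repeat=len(ATOMS))]
Z = sum(math.exp(W * n_true_groundings(w)) for w in worlds)

def prob(world):
    """Probability of a single world under the MLN distribution."""
    return math.exp(W * n_true_groundings(world)) / Z

# Marginals by summing world probabilities (the naive, propositional way).
p_smokes_a = sum(prob(w) for w in worlds if w[("Smokes", ("A",))])
p_smokes_b = sum(prob(w) for w in worlds if w[("Smokes", ("B",))])
print(f"P(Smokes(A)) = {p_smokes_a:.4f}, P(Smokes(B)) = {p_smokes_b:.4f}")
```

Note that Smokes(A) and Smokes(B) necessarily get the same marginal: the model is invariant under permuting the domain objects. This is precisely the kind of symmetry lifted inference exploits, reasoning over equivalence classes of worlds instead of enumerating all of them as this sketch does.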

Publications

[2016]

[2015]

[2014]

Some Important Links

Some Useful Papers

  1. Markov Logic (Domingos et al.)
  2. Lifted MAP Inference for Markov Logic Networks (Sarkhel et al. [AISTATS 2014])
  3. Understanding Belief Propagation and its Generalizations (Yedidia [2001])
  4. Lifted First-Order Belief Propagation (Singla and Domingos [2008])
  5. Counting Belief Propagation (Kersting et al. [2009])
  6. Approximate Lifted Belief Propagation (Singla et al. [2014])
  7. Belief Propagation and Revision in Networks with Loops
  8. A Simple Approach to Bayesian Network Computations
  9. Exploiting Symmetry in Lifted CSPs
  10. First-Order Probabilistic Inference (David Poole [2003])
  11. Lifted First-Order Probabilistic Inference (Braz et al. [2005])
  12. Lifted Inference Seen from the Other Side: The Tractable Features (Jha et al. [NIPS 2010])
  13. On the Optimality of Solutions of the Max-Product Belief Propagation Algorithm in Arbitrary Graphs
  14. Maximum Weight Matching via Max-Product Belief Propagation
  15. Turbo Decoding as an Instance of Pearl's "Belief Propagation" Algorithm
  16. On the Completeness of First-Order Knowledge Compilation for Lifted Probabilistic Inference
  17. Probabilistic Theorem Proving (Gogate and Domingos [2011])
  18. Integration of SAT and CP Techniques
  19. Tractability through Exchangeability: A New Perspective on Efficient Probabilistic Inference
  20. Computing Query Probability with Incidence Algebras (Dalvi et al. [2010])
  21. Lifted Variable Elimination: Decoupling the Operators from the Constraint Language (Taghipour et al. [2013])
  22. Lifted Variable Elimination with Arbitrary Constraints (Taghipour et al. [2012])
  23. Generalized Counting for Lifted Variable Elimination (Taghipour and Davis [2012])
  24. Lifted Inference and Learning in Statistical Relational Models (Guy Van den Broeck, PhD thesis [2013])
  25. Completeness Results for Lifted Variable Elimination (Taghipour et al. [AISTATS 2013])
  26. Lifted Probabilistic Inference with Counting Formulas (Milch et al. [AAAI 2008])