I will talk about two problems in semantic inference. First, I will describe a method for parsing natural language questions into logical forms that can be mapped to information stored in structured knowledge bases. The method relies on a deterministic mapping of syntactic dependency trees to logical forms, which can in turn be used for knowledge base inference. Our approach is inherently multilingual, relying only on automatic dependency parses. The second problem I will focus on is natural language inference (NLI). Here, the goal is to determine whether two sentences entail each other, contradict each other, or have no relationship. I will present a new "decomposable neural attention model" that is easily parallelizable on modern parallel hardware such as GPUs and reaches state-of-the-art results on a recent NLI dataset, while using almost an order of magnitude fewer model parameters than previous work.
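
As a rough illustration of what a deterministic dependency-to-logical-form mapping can look like, the sketch below converts a toy dependency parse of a question into a simple predicate-argument form. The parse encoding, the relation labels handled, and the wh-word treatment are assumptions made for this example only, not the method presented in the talk.

```python
# A minimal, illustrative sketch (not the talk's actual system) of
# deterministically mapping a dependency parse to a logical form.
# The toy parse format and predicate naming are assumptions.

def parse_to_logical_form(tokens, edges, root):
    """tokens: list of words; edges: {dependent_index: (head_index, relation)};
    root: index of the root verb. Returns a simple predicate-argument form."""
    predicate = tokens[root]
    subj = obj = None
    for dep, (head, rel) in edges.items():
        if head == root and rel == "nsubj":
            subj = tokens[dep]
        elif head == root and rel in ("dobj", "obj"):
            obj = tokens[dep]
    # Wh-words become variables to be filled by knowledge base inference.
    subj = "x" if subj and subj.lower() in {"who", "what"} else subj
    obj = "x" if obj and obj.lower() in {"who", "what"} else obj
    return f"{predicate}({subj}, {obj})"

# "Who directed Inception?" -> directed(x, Inception)
tokens = ["Who", "directed", "Inception"]
edges = {0: (1, "nsubj"), 2: (1, "dobj")}
print(parse_to_logical_form(tokens, edges, root=1))
```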
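To make the decomposition behind the NLI model concrete, here is a hedged sketch of an attend-compare-aggregate structure such a model might follow. The dimensions, the shared one-layer ReLU networks, and the random weights are stand-ins chosen for illustration; the key point the sketch shows is that alignment scores and per-token comparisons factor into independent operations, so each step reduces to a few matrix multiplications that parallelize well on GPUs.

```python
# An illustrative attend-compare-aggregate sketch in NumPy, with
# randomly initialized feed-forward layers standing in for trained
# parameters. Shapes and network choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, h, n_classes = 8, 16, 3   # embedding dim, hidden dim, {entail, contradict, neutral}

def ff(x, w):                # one-layer ReLU feed-forward network
    return np.maximum(x @ w, 0.0)

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

W_f = rng.normal(size=(d, h))
W_g = rng.normal(size=(2 * d, h))
W_h = rng.normal(size=(2 * h, n_classes))

def decomposable_attention(a, b):
    """a: (len_a, d) premise embeddings; b: (len_b, d) hypothesis embeddings."""
    # Attend: alignment scores decomposed as F(a) @ F(b)^T, so every
    # score is independent and the whole step is a single matmul.
    e = ff(a, W_f) @ ff(b, W_f).T                  # (len_a, len_b)
    beta = softmax(e, axis=1) @ b                  # soft alignment of b to each a_i
    alpha = softmax(e, axis=0).T @ a               # soft alignment of a to each b_j
    # Compare: each token against its aligned counterpart, independently per token.
    v1 = ff(np.concatenate([a, beta], axis=1), W_g)
    v2 = ff(np.concatenate([b, alpha], axis=1), W_g)
    # Aggregate: order-free sums, then a small classifier over both sentences.
    return np.concatenate([v1.sum(0), v2.sum(0)]) @ W_h

a = rng.normal(size=(5, d))   # toy premise, 5 tokens
b = rng.normal(size=(4, d))   # toy hypothesis, 4 tokens
print(decomposable_attention(a, b))   # unnormalized scores for the 3 classes
```

Because the parameters live in a handful of small feed-forward layers rather than in recurrent sentence encoders, a model of this shape can stay far smaller than LSTM-based alternatives, which is consistent with the parameter savings mentioned above.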