Three decades ago, the confluence of learning, algorithms, and complexity led to the definition of precise mathematical frameworks for studying computational learning, followed by the development of powerful learning algorithms such as boosting and support vector machines. In today's big-data era, as we design new algorithms to learn from massive amounts of data, there is a need to revisit some of the fundamental questions at this confluence: What are the right models of computational complexity to consider? What tradeoffs between computational and statistical complexity must be taken into account? What new learning models might be appropriate, and how can one design provably good algorithms under these models? This symposium will bring together leading researchers in the areas of learning, algorithms, and complexity to present and discuss the latest developments in these exciting areas, with the twin goals of brainstorming new research directions and encouraging the next generation of mathematical and computer scientists in India to think across boundaries. The target audience will consist mostly of graduate students in computer science, and the program will be a mix of introductory, tutorial-style lectures and advanced research talks.