11/30/24: LB4TL: A Smooth Semantics for Temporal Logic to Train Neural Feedback Controllers with Navid Hashemi

Update: 2024-12-01

Description

Navid Hashemi recently defended his PhD at USC and is about to begin a post-doc at Vanderbilt. His research focuses on the intersection of Artificial Intelligence and Temporal Logics, with applications in Formal Verification of Learning-Enabled Systems and Neurosymbolic Reinforcement Learning. Today Navid joined us for a really exciting presentation about his work on metrizable logics for reinforcement learning, and a technique for verification thereof based on over-approximating the reachable sets of ReLU networks.



Max von Hippel