Automated detection of social interactions in the natural environment has led to promising advances in organizational behavior, consumer behavior, and behavioral health. Progress, however, has been limited because the primary means of assessing social interactions today (i.e., audio recording) suffers from several issues in field usage, such as microphone occlusion, lack of speaker specificity, and high energy drain, in addition to significant privacy concerns. In this paper, we present mConverse, a new mobile-based system that infers conversation episodes from respiration measurements collected in the field from an unobtrusively wearable respiratory inductive plethysmograph (RIP) band worn around the user's chest. The measurements are wirelessly transmitted to a mobile phone, where a novel machine learning model uses them to determine whether the wearer is speaking, listening, or quiet. Our model incorporates several innovations to address issues that naturally arise in the noisy field environment, such as confounding events, poor data quality due to sensor loosening and detachment, and losses in the wireless channel. Our basic model achieves 83% accuracy on this three-class classification. We then formulate a Hidden Markov Model that further improves accuracy to 87%. Finally, we apply our model to data collected from 22 subjects who each wore the sensor for two full days in the field to observe conversation behavior in daily life, and we find that people spend 25% of their day in conversations.
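The HMM stage described above can be thought of as temporal smoothing over the base classifier's frame-level outputs. The following is a minimal, hypothetical sketch of such smoothing via Viterbi decoding over the three states (speaking, listening, quiet); the transition matrix, initial distribution, and observation probabilities here are illustrative stand-ins, not the paper's trained parameters.

```python
import numpy as np

# Illustrative states for conversation inference (per the abstract).
STATES = ["speaking", "listening", "quiet"]

def viterbi(obs_probs, trans, init):
    """Viterbi decoding sketch.

    obs_probs: (T, 3) per-frame class probabilities from a base classifier
    trans:     (3, 3) state-transition matrix (row = from, column = to)
    init:      (3,)   initial state distribution
    Returns the most likely (MAP) state sequence as state names.
    All parameter values are assumptions for illustration only.
    """
    T = len(obs_probs)
    log_delta = np.log(init) + np.log(obs_probs[0])
    back = np.zeros((T, 3), dtype=int)
    for t in range(1, T):
        # scores[i, j]: best log-probability of being in state j at time t
        # having come from state i at time t-1.
        scores = log_delta[:, None] + np.log(trans)
        back[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(obs_probs[t])
    # Backtrace from the best final state.
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [STATES[s] for s in reversed(path)]
```

With self-transitions weighted heavily, an isolated noisy frame (e.g., a brief misclassification of "speaking" as "quiet") is overridden by its temporal context, which is one plausible mechanism for the accuracy gain the abstract reports.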