This project investigates how to use the Android platform to develop an application for indeterminate music generation and improvisation, taking advantage of the phone's mobility, ubiquity and accessibility. It mainly explores the possibilities of making chance music through the sensors embedded in the phone, as well as through operations on the touch interface, driven by algorithmic and logical programming. On this basis, it outlines some possible approaches to collective performance. The underlying concept is that anyone should be able to participate in electronic music activity via this app; the intention is therefore to make the music production software as accessible as possible. For instance, performers who do not want to rely excessively on inflexible traditional interfaces, or users without much music-theory knowledge, may find it convenient and useful. Generally speaking, the aim of the study is to describe how such a mobile musical system is developed, to point out the important decisions made when designing the application, to identify the challenges in implementing sensors, and to offer ideas for interactive musical work. Because this software is developed on MIT App Inventor, a programming platform whose coding blocks are based on Java, the main research question is how to program a musical system with an unpredictable sonic atmosphere using a limited audio engine and very basic media programming components. In other words, I deliberately constrain myself to composing on the smartphone through fundamental but simple functionalities. A further challenge is how to redesign the sound samples and the app interface so as to give users a distinctive way of experiencing the application, rather than rigidly pressing conventional buttons on the screen. With these contributions, possible ways to realize interactive mobile phone music are proposed.
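The core mechanism of chance music described above, letting randomness decide which prepared sample plays next, can be sketched in plain Java. This is only an illustrative analogue: the actual app is assembled from MIT App Inventor blocks, and the class and method names below (`ChanceSampler`, `next`) are hypothetical, not taken from the app.

```java
import java.util.Random;

// Hypothetical sketch of chance-based sample selection.
// The real app expresses this logic as App Inventor blocks;
// this plain-Java version only demonstrates the principle.
public class ChanceSampler {
    private final String[] samples; // file names of pre-made samples
    private final Random random;    // source of indeterminacy

    public ChanceSampler(String[] samples, long seed) {
        this.samples = samples;
        this.random = new Random(seed); // seeded here only so tests are repeatable
    }

    // Pick the next sample by chance, so no two performances
    // of a piece need unfold the same way.
    public String next() {
        return samples[random.nextInt(samples.length)];
    }
}
```

In a performance setting the seed would come from an unpredictable source (time, sensor noise), so each run of the piece produces a different sequence from the same fixed sample pool.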
To achieve this purpose, an Android application was developed with a user-friendly interface for manipulating sounds, together with an implementation of the smartphone's sensors. In addition, to improve the timbre and tone color of the audio source, the system triggers and arranges a set of samples that are produced in Csound or Max/MSP and modified afterwards. To make sure the system works reliably, different versions of the app were tested with respect to sample selection, interface adjustment and sensor behavior. Successive versions are being refined to adopt a cooperative-performance strategy from both musical and technical perspectives.
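One way to realize the sensor-driven triggering described above is to map a continuous sensor reading onto an index into the prepared sample set. The sketch below assumes an accelerometer-style reading in roughly the -10..10 range (as on Android, in m/s²); the range, the class name `SensorMapper`, and the method name are assumptions for illustration, since the actual app reads sensor values through App Inventor's own components.

```java
// Hypothetical sketch: map a tilt reading onto a sample index.
// In the actual app this mapping is built from App Inventor blocks.
public class SensorMapper {
    private final int sampleCount;

    public SensorMapper(int sampleCount) {
        this.sampleCount = sampleCount;
    }

    // Clamp the reading to the assumed [-10, 10] range,
    // normalize it to [0, 1], then scale to a sample index.
    public int sampleIndexFor(double reading) {
        double clamped = Math.max(-10.0, Math.min(10.0, reading));
        double normalized = (clamped + 10.0) / 20.0;
        int index = (int) (normalized * sampleCount);
        return Math.min(index, sampleCount - 1); // guard the top edge
    }
}
```

With four samples, a reading of -10.0 maps to index 0 and 10.0 maps to index 3, so tilting the phone sweeps through the sample set; combining this deterministic mapping with the chance-based selection gives the mix of control and indeterminacy the project aims for.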