Currently, the research areas in the Tsinghua-HP Joint Lab include:
» Automatic classification of digital photos
Based solely on visual analysis, photos in a collection can be sorted into categories such as portraits, group photos, babies, sports, etc. This capability could prove extremely helpful to people or organizations that store thousands of photos, whether online or on a PC, by removing the need to label every picture by hand.

» Improved face identification
"Show me all the photos of my grandmother." A user could enter a query to find all the photos of a certain individual, and the system would retrieve them even if the individual appeared alongside other people or his or her face was partially covered. The technology is also being extended to video.

» Video-based audience analysis
Owners of digital signage could use vision technologies to measure how many people stopped by their display, how long they stayed and what their facial expressions revealed: pleased, upset, surprised, bored, etc.

» Video digital warehouse
Videos could be analyzed, sorted, stored and retrieved based on visual features such as frames, shots and scenes, as well as by content. Researchers will explore the use of very large parallel database technology as an enabler for multimedia data warehouses.

» Video search and recommendation for Internet-based video communities
With the proliferation of videos posted online, an algorithm is being developed to identify similar subject matter and recommend it to viewers within a given user community. For example, someone who watched a certain sports video might be offered similar clips.

» Music analysis and retrieval
A user could tell the system, "Find me more music like this," and play a sample. The system would then provide recommendations based on rhythm, melody, vocals, instruments or other audio elements.
			  
Director
Prof. Xiao-yan ZHU (Tsinghua University)
Dr. Min Wang (HP Labs China)

Steering Committee
Prof. Bo ZHANG, Tsinghua University
Prof. Zhi-sheng NIU, Tsinghua University
Prof. Wen-guang CHEN, Tsinghua University
Dr. Mei-chun HSU, HP
Dr. Qian LIN, HP
Ms. Alicia CHEN, HP

Collaborative Professors
Prof. Hai-zhou AI, Tsinghua University
Prof. Xiao-qing DING, Tsinghua University
Prof. Shi-qiang YANG, Tsinghua University
Prof. Bo ZHANG, Tsinghua University
Prof. Chang-shui ZHANG, Tsinghua University
Prof. Jie ZHOU, Tsinghua University