WorldCat Identities

Amini, Massih-Reza

Overview
Works: 56 works in 86 publications in 3 languages and 841 library holdings
Roles: Author, Other, Thesis advisor, Opponent
Most widely held works by Massih-Reza Amini
Learning with Partially Labeled and Interdependent Data by Massih-Reza Amini( )

12 editions published in 2015 in English and held by 322 WorldCat member libraries worldwide

This book develops two key machine learning principles: the semi-supervised paradigm and learning with interdependent data. It presents new, primarily web-related applications that go beyond the classical machine learning framework by learning with interdependent data. The book traces how the semi-supervised and learning-to-rank paradigms emerged from new web applications and the massive production of heterogeneous textual data they entail. It explains how semi-supervised learning techniques are widely used but allow only a limited analysis of the information content, and thus do not meet the demands of many web-related tasks. Later chapters deal with the development of learning methods for ranking entities in a large collection with respect to a precise information need. In some cases, learning a ranking function can be reduced to learning a classification function over pairs of examples; the book proves that this task can be efficiently tackled in a new framework: learning with interdependent data. Researchers and professionals in machine learning will find these new perspectives and solutions valuable. Learning with Partially Labeled and Interdependent Data is also useful for advanced-level students of computer science, particularly those focused on statistics and learning
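The reduction of ranking to classification over pairs, mentioned in the summary above, can be sketched in a minimal form: for items i, j with relevance y_i > y_j, the difference vector x_i - x_j is a positive example, and a linear classifier w learned on such differences induces a ranking by the score w·x. The toy features, grades and perceptron-style update below are illustrative assumptions, not material from the book.

```python
# Sketch: learning to rank reduced to binary classification over pairs.
# A pair (i, j) with y[i] > y[j] yields the difference vector X[i] - X[j],
# which the learned linear scorer w must classify as positive.

def train_pairwise(X, y, epochs=50, lr=0.1):
    """Learn a linear scoring function from preference pairs."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for i in range(len(X)):
            for j in range(len(X)):
                if y[i] > y[j]:   # item i should rank above item j
                    diff = [a - b for a, b in zip(X[i], X[j])]
                    score = sum(wk * dk for wk, dk in zip(w, diff))
                    if score <= 0:   # misordered pair: perceptron update
                        w = [wk + lr * dk for wk, dk in zip(w, diff)]
    return w

def rank(w, X):
    """Item indices sorted by decreasing learned score."""
    scores = [sum(wk * xk for wk, xk in zip(w, x)) for x in X]
    return sorted(range(len(X)), key=lambda i: -scores[i])

# Toy collection: two features per item, graded relevance 2 > 1 > 0.
X = [[1.0, 0.2], [0.5, 0.9], [0.1, 0.1]]
y = [2, 1, 0]
w = train_pairwise(X, y)
print(rank(w, X))  # → [0, 1, 2]
```

Any binary learner could replace the perceptron update; the reduction itself only needs the pairwise difference vectors.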
Recherche d'information : applications, modèles et algorithmes by Massih-Reza Amini( Book )

8 editions published between 2013 and 2017 in French and held by 230 WorldCat member libraries worldwide

The first French-language book on the algorithms underlying big data technologies and search engines! In recent years, new models and algorithms have been developed to process increasingly voluminous and diverse data. This book presents the scientific foundations of the most common tasks in information retrieval (IR), tasks also related to data mining, business intelligence and, more generally, the exploitation of big data. This second edition offers a detailed and coherent presentation of the classical algorithms developed in the field, accessible to readers who want to understand the mechanisms behind everyday Internet tools. Readers will also deepen their understanding of indexing, compression, web search, classification and categorization, and can extend their study with the corrected exercises provided at the end of each chapter. The book is addressed both to researchers and engineers working in information access and to employees of small and medium-sized businesses who make heavy use of web-marketing tools, as well as to Bachelor's and Master's students, engineering students and doctoral candidates looking for a reference work on information retrieval (back cover)
Apprentissage machine : de la théorie à la pratique by Massih-Reza Amini( Book )

6 editions published in 2015 in French and held by 89 WorldCat member libraries worldwide

Machine learning : programmes libres (GPLv3) essentiels au développement de solutions big data by Massih-Reza Amini( Book )

3 editions published in 2020 in French and held by 84 WorldCat member libraries worldwide

"Machine learning and artificial intelligence. Machine learning is a branch of artificial intelligence whose goal is to design programs that are not explicitly coded to carry out a particular task. The concepts of this field are grounded in inferential logic and attempt to derive general rules from a finite number of observations. A reference work. This book presents the scientific foundations of supervised learning theory and the most widespread algorithms developed in this field, as well as the two frameworks of semi-supervised learning and ranking, at a level accessible to Master's students and engineering students. The first edition, known as Apprentissage machine, was translated into Chinese by the publisher iTuring. In this second edition, a new chapter is devoted to deep learning and artificial neural networks, and the other chapters have been reorganized into a coherent presentation linking theory to the algorithms developed in this field. This edition also provides programs for the classical algorithms, written in Python and C (languages that are both simple and popular), for readers who want to understand how these models, sometimes described as black boxes, work. These free (GPLv3) programs, essential for developing big data solutions, are progressively deposited on this GitLab (https://gricad-gitlab.univ-grenoble-alpes.fr/aminima/machine-learning-tools). Who is this book for? Engineering students, Master's students and doctoral candidates in applied mathematics, algorithmics, operations research, production management and decision support; and engineers, teacher-researchers, computer scientists, industry practitioners, economists and decision-makers who need to solve large-scale classification, clustering and ranking problems." --
Recherche d'information : applications, modèles et algorithmes : fouille de données, décisionnel et big data by Massih-Reza Amini( )

1 edition published in 2013 in French and held by 25 WorldCat member libraries worldwide

Apprentissage machine : de la théorie à la pratique by Massih-Reza Amini( )

1 edition published in 2015 in French and held by 18 WorldCat member libraries worldwide

Apprentissage automatique pour l'extraction de caractéristiques : application au partitionnement de documents, au résumé automatique et au filtrage collaboratif by Jean-François Pessiot( Book )

2 editions published in 2008 in French and held by 4 WorldCat member libraries worldwide

In statistical learning, the choice of data representation is crucial and has motivated the development of methods for modifying the initial representation of the data. In this thesis, we study the problem of choosing the data representation through document extraction and automatic text summarization. In multi-task extraction, we also propose learning algorithms for regression and for instance ranking. We apply our two models to collaborative filtering, first viewed as a rating prediction problem, then as an order prediction problem
Apprentissage automatique et recherche de l'information : application à l'extraction d'information de surface et au résumé de texte by Massih-Reza Amini( Book )

2 editions published in 2001 in French and held by 4 WorldCat member libraries worldwide

The thesis concerns the use of machine learning methods for information retrieval tasks on texts. Our motivation was to explore the potential of learning techniques to meet the demands for access to textual information driven by the development of large text databases and the Web. In this context it has become important to be able to process large quantities of data, to provide diversified solutions to new user demands, and to automate the tools for exploiting textual information. To this end we explored two directions. The first is the development of models that take into account the sequential information present in texts, in order to exploit richer information than the bag-of-words representation traditionally used by information retrieval systems. We propose statistical models based on hidden Markov models and neural networks, and show how these systems extend the capabilities of the classical probabilistic models of information retrieval and how they can be used in particular for surface information extraction tasks. The second direction concerns semi-supervised learning: using, for information access tasks, a small quantity of labeled data together with a large mass of unlabeled data, a situation that is increasingly frequent in information retrieval. We propose and analyze original algorithms based on a discriminant formalism. We applied these techniques to text summarization, viewed as the extraction of relevant sentences from a document. This work culminated in the development of the automatic summarization assistance system (S.A.R.A.)
Ji qi xue xi : Li lun, Shi jian yu ti gao = Apprentissage machine by Massih-Reza Amini( Book )

1 edition published in 2018 in Chinese and held by 3 WorldCat member libraries worldwide

This book is a reference on machine learning theory and algorithms. Starting from the fundamental theory of supervised and semi-supervised learning, it uses the simple and popular C language to progressively introduce common and advanced theoretical concepts, algorithms and practical examples, presenting the corresponding classical algorithms and programming essentials, and meeting readers' basic need to understand how machine learning operates
Algorithmes d'apprentissage pour les grandes masses de données : Application à la classification multi-classes et à l'optimisation distribuée asynchrone by Bikash Joshi( )

1 edition published in 2017 in English and held by 2 WorldCat member libraries worldwide

This thesis focuses on developing scalable algorithms for large-scale machine learning, presenting two perspectives on handling large data. First, we consider the problem of large-scale multiclass classification: we introduce the task and the challenges of classifying with a large number of classes, and propose an algorithm that reduces the original multiclass problem to an equivalent binary one. Based on this reduction technique, we introduce a scalable method for multiclass classification with a very large number of classes and perform detailed theoretical and empirical analyses. In the second part, we address distributed machine learning and introduce an asynchronous framework for distributed optimization. We apply this framework to two popular problems: matrix factorization for large-scale recommender systems, and large-scale binary classification. For matrix factorization we perform Stochastic Gradient Descent (SGD) in an asynchronous distributed manner, whereas for large-scale binary classification we use SVRG, a variant of SGD based on a variance-reduction technique, as our optimization algorithm
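The multiclass-to-binary reduction described in this abstract can be illustrated with a minimal sketch. The joint (example, class) representation `phi`, the perceptron update and the toy data are generic assumptions, not the thesis's actual construction: every example (x, y) yields binary "dyadic" examples over vectors phi(x, k), labeled +1 when k is the true class and -1 otherwise, and one linear binary classifier trained on them predicts via an argmax over classes.

```python
# Sketch: multiclass classification reduced to a single binary problem
# over dyadic (example, class) vectors.

def phi(x, k, n_classes):
    """Joint (example, class) representation: x placed in class k's block."""
    v = [0.0] * (len(x) * n_classes)
    v[k * len(x):(k + 1) * len(x)] = x
    return v

def train_dyadic(X, y, n_classes, epochs=20):
    w = [0.0] * (len(X[0]) * n_classes)
    for _ in range(epochs):
        for x, yi in zip(X, y):
            for k in range(n_classes):
                z = phi(x, k, n_classes)
                label = 1 if k == yi else -1
                score = sum(wi * zi for wi, zi in zip(w, z))
                if label * score <= 0:   # binary mistake on the dyadic example
                    w = [wi + label * zi for wi, zi in zip(w, z)]
    return w

def predict_class(w, x, n_classes):
    return max(range(n_classes),
               key=lambda k: sum(wi * zi
                                 for wi, zi in zip(w, phi(x, k, n_classes))))

X, y = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [0, 1, 2]
w = train_dyadic(X, y, n_classes=3)
print([predict_class(w, x, 3) for x in X])  # → [0, 1, 2]
```

At scale, one would sample only a few negative classes per example instead of enumerating all of them; the enumeration here keeps the sketch short.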
Continual learning for image classification by Anuvabh Dutt( )

1 edition published in 2019 in English and held by 2 WorldCat member libraries worldwide

This thesis deals with deep learning applied to image classification tasks. The primary motivation is to make current deep learning techniques more efficient and able to deal with changes in the data distribution. We work in the broad framework of continual learning, with the aim of eventually having machine learning models that can continuously improve. We first look at a change in the label space of a data set, with the data samples themselves remaining the same. We consider a semantic hierarchy to which the labels belong and investigate how to exploit this hierarchy to improve models trained on different levels of it. The second and third contributions involve continual learning using a generative model. We analyse the usability of samples from a generative model for training good discriminative classifiers, and propose techniques to improve the selection and generation of such samples. We then observe that continual learning algorithms undergo some loss in performance when trained on several tasks sequentially; we analyse the training dynamics in this scenario, compare with training on several tasks simultaneously, and make observations that point to potential difficulties in learning models in a continual setting. Finally, we propose a new design template for convolutional networks. This architecture leads to smaller models without compromising performance, and the design lends itself to easy parallelisation, enabling efficient distributed training. In conclusion, we examine two different types of continual learning scenarios and propose methods that lead to improvements; our analysis also points to larger issues that may require changes to current neural network training procedures
Learning information retrieval functions and parameters on unlabeled collections by Parantapa Goswami( )

1 edition published in 2014 in English and held by 2 WorldCat member libraries worldwide

The present study focuses on (a) predicting parameters of existing standard IR models and (b) learning new IR functions. We first explore various statistical methods to estimate the collection parameter of the family of information-based models (Chapter 2). This parameter determines the behavior of a term in the collection. In earlier studies it was set to the average number of documents containing the term, without full justification; we introduce a fully formalized estimation method that leads to improved versions of these models. The method, however, applies only to the collection parameter of the information-model framework. To overcome this we propose a transfer learning approach that can predict values of any parameter of any IR model (Chapter 3). This approach uses relevance judgments on a past collection to learn a regression function which infers parameter values for each query on a new, unlabeled target collection. The proposed method not only outperforms the standard IR models with their default parameter values, but also performs better than, or on par with, popular parameter-tuning methods that use relevance judgments on the target collection. We then investigate transfer learning techniques that directly transfer relevance information from a source collection to derive "pseudo-relevance" judgments on an unlabeled target collection (Chapter 4). From these derived pseudo-relevance judgments, a ranking function for documents in the target collection is learned with any standard learning algorithm. In various experiments the learned function outperformed standard IR models as well as other state-of-the-art transfer learning based algorithms. A ranking function learned this way is effective, but its form is predefined by the learning algorithm used. We therefore introduce an exhaustive discovery approach that searches for ranking functions in a space of simple functions (Chapter 5). Through experimentation we found that some of the discovered functions are highly competitive with standard IR models
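The parameter-transfer step of Chapter 3 can be sketched under strong simplifying assumptions: a single query statistic, ordinary least squares as the regression function, and invented source/target numbers. In the thesis the features, the learner and the IR-model parameters are of course richer.

```python
# Sketch: learn, on a judged source collection, a regression from a query
# statistic to the parameter value that worked best; apply it to queries
# of an unlabeled target collection.

def fit_linear(xs, ys):
    """Ordinary least squares for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Source collection with relevance judgments: per-query statistic
# (e.g. query length) and the best-performing parameter value there.
source_stat = [1.0, 2.0, 3.0, 4.0]
best_param = [0.55, 0.66, 0.74, 0.85]
a, b = fit_linear(source_stat, best_param)

# Unlabeled target collection: infer a parameter value per query.
target_stat = [2.5, 3.5]
predicted = [a * x + b for x in target_stat]
print([round(p, 3) for p in predicted])  # → [0.7, 0.798]
```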
Détection multidimensionnelle au test paramétrique avec recherche automatique des causes by Ali Hajj Hassan( )

1 edition published in 2014 in French and held by 2 WorldCat member libraries worldwide

Nowadays, control of the manufacturing process is essential to ensure high production quality. At the end of the semiconductor manufacturing process, an electrical test, called the Parametric Test (PT), is performed. The PT aims at detecting wafers whose electrical behavior is abnormal, based on a set of static electrical parameters measured on multiple sites of each wafer. The purpose of this thesis is to develop a dynamic detection system at the PT level that detects abnormal wafers from a recent history of electrical measurements. To this end, we develop a real-time detection system based on an optimized learning technique in which the training data and the detection model are updated through a moving temporal window. The detection scheme is based on one-class Support Vector Machines (1-SVM), a variant of the statistical learning algorithm SVM, widely used for binary classification, that was introduced for one-class classification problems and anomaly detection. To improve the predictive performance of the 1-SVM classification algorithm, two variable selection methods are developed. The first is a filter method based on a score computed with the MADe filter, a robust approach to univariate outlier detection. The second is a wrapper method that adapts SVM Recursive Feature Elimination (SVM-RFE) to the 1-SVM algorithm. For detected abnormal wafers, we propose a method to determine their multidimensional signatures and identify the electrical parameters responsible for the anomaly. Finally, we evaluate the proposed system on real datasets from STMicroelectronics and compare it to a detection system based on Hotelling's T2 test, one of the best-known detection systems in the literature. The results show that our system yields very good performance and provides an efficient way to perform real-time detection
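The MADe filter and the moving-window scheme mentioned in this abstract can be sketched as follows; the window size, the usual 3.0 threshold and the readings are illustrative choices, not the thesis's tuned values.

```python
# Sketch: a moving-window outlier detector built on the MADe score, a
# robust z-score using the median and the median absolute deviation (MAD);
# the factor 1.4826 makes MADe consistent with the standard deviation
# under normality.
import statistics

def made_scores(window):
    """Robust z-scores of the values in `window`."""
    med = statistics.median(window)
    mad = statistics.median(abs(v - med) for v in window)
    made = 1.4826 * mad or 1e-12  # guard against a zero MAD
    return [abs(v - med) / made for v in window]

def detect(measurements, window_size=8, threshold=3.0):
    """Indices of measurements that are anomalous within some recent window."""
    flagged = set()
    for end in range(window_size, len(measurements) + 1):
        window = measurements[end - window_size:end]
        for offset, s in enumerate(made_scores(window)):
            if s > threshold:
                flagged.add(end - window_size + offset)
    return sorted(flagged)

readings = [5.0, 5.1, 4.9, 5.0, 5.2, 4.8, 5.1, 9.7, 5.0, 4.9]
print(detect(readings))  # → [7]
```

The median/MAD pair, unlike the mean and standard deviation, is barely perturbed by the outlier itself, which is why the 9.7 reading stands out so sharply.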
Semi-supervised multi-view learning : an application to image annotation and multi-lingual document classification by Ali Fakeri Tabrizi( Book )

2 editions published in 2013 in English and held by 2 WorldCat member libraries worldwide

In this thesis, we introduce two multiview learning approaches. In the first, we describe a self-training multiview strategy which trains different voting classifiers on different views. The margin distributions over the unlabeled training data obtained with each view-specific classifier are then used to estimate an upper bound on their transductive Bayes error. Minimizing this upper bound provides an automatic margin threshold which is used to assign pseudo-labels to unlabeled examples; final class labels are then assigned to these examples by a vote over the pool of previous pseudo-labels. New view-specific classifiers are then trained using the original labeled data together with the pseudo-labeled data. We consider applications to image-text and to multilingual document classification. In the second approach, we propose a multiview semi-supervised bipartite ranking model which leverages the information contained in unlabeled sets of images to improve prediction performance, using multiple descriptions, or views, of the images. For each topic class, our approach first learns as many view-specific rankers as there are available views, using the labeled data only. These rankers are then improved iteratively by adding pseudo-labeled pairs of examples on which all view-specific rankers agree over the ranking of the examples within the pair
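A minimal sketch of the self-training loop described above: a classifier trained on labeled data pseudo-labels the unlabeled examples whose prediction margin exceeds a threshold, then is retrained on the enlarged set. A nearest-centroid classifier stands in for the view-specific voting classifiers, and a fixed margin threshold stands in for the one derived from the transductive Bayes-error bound; the data are invented.

```python
# Sketch: self-training with margin-based pseudo-labeling.

def centroids(X, y):
    """Mean point of each class."""
    cs = {}
    for label in set(y):
        pts = [x for x, yl in zip(X, y) if yl == label]
        cs[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return cs

def margin_predict(cs, x):
    """Predicted label and margin (gap between the two nearest centroids)."""
    d = sorted((sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5, lbl)
               for lbl, c in cs.items())
    return d[0][1], d[1][0] - d[0][0]

def self_train(X_lab, y_lab, X_unlab, threshold=0.5, rounds=3):
    X, y, pool = list(X_lab), list(y_lab), list(X_unlab)
    for _ in range(rounds):
        cs = centroids(X, y)
        confident = [(x, margin_predict(cs, x)[0]) for x in pool
                     if margin_predict(cs, x)[1] > threshold]
        if not confident:
            break
        for x, label in confident:   # add pseudo-labeled examples, retrain
            X.append(x); y.append(label); pool.remove(x)
    return centroids(X, y)

X_lab, y_lab = [[0.0, 0.0], [4.0, 4.0]], [0, 1]
X_unlab = [[0.5, 0.2], [3.8, 4.1], [2.0, 2.0]]
cs = self_train(X_lab, y_lab, X_unlab)
print(margin_predict(cs, [0.3, 0.3])[0])  # → 0
```

Note that the ambiguous point [2.0, 2.0] is never pseudo-labeled: its margin stays below the threshold, which is exactly the behaviour the margin criterion is meant to enforce.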
Optimisation non-lisse pour l'apprentissage statistique avec régularisation matricielle structurée by Federico Pierucci( )

1 edition published in 2017 in English and held by 2 WorldCat member libraries worldwide

Training machine learning methods boils down to solving optimization problems whose objective functions often decompose into two parts: (a) the empirical risk, built upon the loss function, whose shape is determined by the performance metric and the noise assumptions; and (b) the regularization penalty, built upon a norm or a gauge function, whose structure is determined by the prior information available for the problem at hand. Common loss functions, such as the hinge loss for binary classification, or more advanced ones, such as the loss arising in classification with a reject option, are non-smooth. Sparse regularization penalties such as the (vector) l1 penalty, or the (matrix) nuclear-norm penalty, are also non-smooth. However, basic non-smooth optimization algorithms, such as subgradient optimization or bundle-type methods, do not leverage the composite structure of the objective. The goal of this thesis is to study doubly non-smooth learning problems (with non-smooth loss functions and non-smooth regularization penalties) and first-order optimization algorithms that leverage the composite structure of non-smooth objectives. In the first chapter, we introduce new regularization penalties, called the group Schatten norms, which generalize the standard Schatten norms to block-structured matrices. We establish the main properties of the group Schatten norms using tools from convex analysis and linear algebra, retrieving in particular some convex-envelope properties, and we discuss several potential applications of the group nuclear norm in collaborative filtering, database compression, and multi-label image tagging. In the second chapter, we present a survey of smoothing techniques that allow us to use first-order optimization algorithms designed for composite objectives decomposing into a smooth part and a non-smooth part. We also show how smoothing can be applied to the loss function corresponding to top-k accuracy, used in ranking and multi-class classification, and outline first-order algorithms that can be combined with smoothing: (i) conditional gradient algorithms; (ii) proximal gradient algorithms; (iii) incremental gradient algorithms. In the third chapter, we study conditional gradient algorithms further for solving doubly non-smooth optimization problems. We show that adaptive smoothing combined with the standard conditional gradient algorithm gives rise to new conditional gradient algorithms with the expected theoretical convergence guarantees, and we present promising experimental results in collaborative filtering for movie recommendation and in image categorization
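The smoothing of a non-smooth loss surveyed in the second chapter can be illustrated on the hinge loss: its Moreau envelope with parameter mu rounds off the kink with a quadratic piece and is differentiable everywhere. The choice mu = 0.5 below is arbitrary and illustrative.

```python
# Sketch: smoothing the hinge loss max(0, 1 - z) via its Moreau envelope
# (a "huberized" hinge), making it usable with plain gradient methods.

def hinge(z):
    return max(0.0, 1.0 - z)

def smoothed_hinge(z, mu=0.5):
    """Moreau envelope of the hinge loss with smoothing parameter mu."""
    if z >= 1.0:
        return 0.0
    if z <= 1.0 - mu:
        return 1.0 - z - mu / 2.0        # linear region, shifted down
    return (1.0 - z) ** 2 / (2.0 * mu)   # quadratic region around the kink

# The surrogate never exceeds the hinge and differs from it by at most mu/2.
zs = [i / 10.0 for i in range(-20, 21)]
gap = max(hinge(z) - smoothed_hinge(z) for z in zs)
print(round(gap, 6))  # → 0.25, i.e. mu/2
```

The uniform mu/2 gap is the usual smoothing trade-off: a smaller mu gives a tighter surrogate but a larger Lipschitz constant of the gradient.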
Document clustering in a learned concept space by Young-Min Kim( Book )

2 editions published in 2010 in English and held by 2 WorldCat member libraries worldwide

Document clustering is one of the central problems in Information Retrieval (IR). Clustering results not only indicate the structure of a collection, but are also often used in various IR tasks. In this thesis, we are interested in developing probabilistic techniques based on latent models for this task. To this end, we propose four different techniques based on the observation that clustering is much more effective in an automatically discovered concept space than in the bag-of-words space. The thesis is organized as follows: in the first part, we give a complete state of the art on clustering techniques and present the classical algorithms for learning the parameters of probabilistic clustering models. In the second part, we present our contributions, first developing a clustering method composed of two phases. In the first phase, the words of the collection are grouped under the hypothesis that words appearing in the same documents with the same frequencies are similar. The documents are then clustered in the space induced by these word groups, called word concepts. On this principle, we extend the latent PLSA model for simultaneous clustering of words and documents. We then propose a model-selection strategy for efficiently finding the best model among all possible choices, and we show how PLSA can be adapted for multi-view clustering of multilingual documents
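The two-phase principle described above (word concepts first, then documents represented in the induced concept space) can be sketched in a deliberately simplified form. Exact document-profile matching stands in for the probabilistic word clustering of the thesis, and the toy corpus is invented.

```python
# Sketch: words grouped into "word concepts" by their document occurrences;
# each document is then represented by concept frequencies instead of words.

docs = [["loss", "gradient", "descent"],
        ["gradient", "descent", "loss"],
        ["recipe", "butter", "flour"],
        ["butter", "flour", "recipe"]]

# Phase 1: group words by the set of documents in which they occur.
vocab = sorted({w for d in docs for w in d})
profile = {w: frozenset(i for i, d in enumerate(docs) if w in d) for w in vocab}
concepts = {}
for w in vocab:
    concepts.setdefault(profile[w], []).append(w)
concept_ids = {p: i for i, p in enumerate(sorted(concepts, key=sorted))}

# Phase 2: represent each document by its concept frequencies.
def concept_vector(doc):
    v = [0] * len(concept_ids)
    for w in doc:
        v[concept_ids[profile[w]]] += 1
    return v

print([concept_vector(d) for d in docs])  # → [[3, 0], [3, 0], [0, 3], [0, 3]]
```

In the 6-dimensional word space the four documents share no dimension across topics; in the 2-dimensional concept space the two topics collapse into two clearly separated vectors, which is the effect the thesis exploits.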
Large-scale asynchronous distributed learning based on parameter exchanges by Bikash Joshi( )

1 edition published in 2018 in English and held by 2 WorldCat member libraries worldwide

Apprentissage multi-cibles : théorie et applications by Simon Moura( )

1 edition published in 2018 in English and held by 2 WorldCat member libraries worldwide

In this thesis, we study the problem of learning with multiple outputs related to different tasks, such as classification and ranking, along three axes. First, we propose a theoretical framework that can be used to show the consistency of multi-label learning in the case of classifier chains, where the outputs are homogeneous. Based on this framework, we derive a Rademacher generalization error bound for any classifier in the chain and exhibit dependency factors relating each output to the others. As a result, we introduce multiple strategies for learning classifier chains and for selecting the order of the chain. Still within the homogeneous multi-output framework, we propose a neural-network-based solution for fine-grained sentiment analysis and show the efficiency of the approach. Finally, we propose a framework and an empirical study showing the interest of learning with multiple tasks even when the outputs are of different types
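A classifier chain, the object of the consistency analysis above, can be sketched as follows: the binary model for label k sees the input features augmented with the labels (at training time) or the predictions (at test time) for labels 1..k-1, which is why the chain order matters. The perceptron base learner and the toy data are illustrative assumptions.

```python
# Sketch: a classifier chain for multi-label learning.

def train_binary(X, y, epochs=20):
    """Toy perceptron with bias, used as the base learner of the chain."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for x, yi in zip(X, y):
            xe = x + [1.0]
            pred = 1 if sum(a * b for a, b in zip(w, xe)) > 0 else 0
            if pred != yi:
                w = [a + (yi - pred) * b for a, b in zip(w, xe)]
    return w

def classify(w, x):
    return 1 if sum(a * b for a, b in zip(w, x + [1.0])) > 0 else 0

def train_chain(X, Y):
    """One binary model per label, each seeing the earlier labels as features."""
    chain = []
    for k in range(len(Y[0])):
        Xk = [x + [float(v) for v in ys[:k]] for x, ys in zip(X, Y)]
        chain.append(train_binary(Xk, [ys[k] for ys in Y]))
    return chain

def predict_chain(chain, x):
    preds = []
    for w in chain:   # propagate earlier predictions down the chain
        preds.append(classify(w, x + [float(p) for p in preds]))
    return preds

# Toy data: the second label is a copy of the first, so the chain can read
# it off the propagated first prediction.
X = [[1.0], [2.0], [-1.0], [-2.0]]
Y = [[1, 1], [1, 1], [0, 0], [0, 0]]
chain = train_chain(X, Y)
print(predict_chain(chain, [1.5]))  # → [1, 1]
```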
Arbre de décision temporel multi-opérateur by Vera Shalaeva( )

1 edition published in 2018 in English and held by 2 WorldCat member libraries worldwide

Rising interest in mining and analyzing time series data in many domains motivates the design of machine learning (ML) algorithms capable of handling such complex data. Beyond the need to modify, improve, and create ML algorithms originally designed for static data, the criteria of interpretability, accuracy and computational efficiency must be met. For a domain expert, it is crucial to extract knowledge from data, and it is appealing when the resulting model is transparent and interpretable, so that no prior knowledge of ML is required to read and understand the results. Indeed, as emphasized by many recent works, domain experts increasingly need a transparent and interpretable model from the learning tool, allowing them to use it even with little knowledge of ML theory. The Decision Tree is an algorithm that provides interpretable and fairly accurate classification models. More precisely, in this research we address the problem of interpretable time series classification with the Decision Tree (DT) method. First, we present the Temporal Decision Tree, a modification of the classical DT algorithm whose essence is a new definition of a node's split. Second, we propose an extension of this modified algorithm for temporal data, called the Multi-operator Temporal Decision Tree (MTDT), able to capture different geometrical class structures. The resulting algorithm improves model readability while preserving classification accuracy. Furthermore, we explore two complementary issues: the computational efficiency of the extended algorithm and its classification accuracy. We suggest that the former can be reduced using a local search approach to build the nodes, and that the latter can be preserved by discovering and weighting discriminative time stamps of the time series
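The temporal node split at the heart of the Temporal Decision Tree can be sketched as a search over (time stamp, threshold) pairs. Gini impurity, exhaustive search and the toy series are illustrative stand-ins for the thesis's multi-operator splits.

```python
# Sketch: a node of a temporal decision tree picks the (time stamp,
# threshold) test that best separates the classes of the training series.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_temporal_split(series, labels):
    """(time stamp, threshold) minimizing the weighted impurity of the children."""
    best = None
    for t in range(len(series[0])):                  # candidate time stamps
        for thr in sorted({s[t] for s in series}):   # candidate thresholds
            left = [y for s, y in zip(series, labels) if s[t] <= thr]
            right = [y for s, y in zip(series, labels) if s[t] > thr]
            cost = (len(left) * gini(left)
                    + len(right) * gini(right)) / len(series)
            if best is None or cost < best[0]:
                best = (cost, t, thr)
    return best[1], best[2]

# Two classes that differ only at time stamp 2.
series = [[0, 0, 5, 0], [0, 0, 6, 0], [0, 0, 1, 0], [0, 0, 2, 0]]
labels = [1, 1, 0, 0]
print(best_temporal_split(series, labels))  # → (2, 2)
```

The split found is directly readable ("value at time 2 above 2 means class 1"), which is the interpretability property the thesis emphasizes.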
Systèmes de recommandation pour la publicité en ligne by Sumit Sidana( )

1 edition published in 2018 in English and held by 2 WorldCat member libraries worldwide

This thesis is dedicated to the study of recommendation systems for implicit feedback (clicks), mostly using learning-to-rank and neural network based approaches. Along this line, we derive a novel neural-network model that jointly learns a new representation of users and items in an embedded space, together with the preference relation of users over pairs of items, and we give a theoretical analysis. In addition, we contribute two novel, publicly available collections for recommendation that record the behavior of customers of two European leaders in e-commerce advertising, Kelkoo (https://www.kelkoo.com/) and Purch (http://www.purch.com/). Both datasets gather implicit feedback, in the form of clicks, along with a rich set of contextual features regarding both customers and offers. Purch's dataset is affected by popularity bias. We therefore propose a simple yet effective strategy to overcome this bias while designing an efficient and scalable recommendation algorithm, by introducing diversity based on an appropriate representation of items. Further, this collection contains contextual information about offers in textual form. We make use of this textual information in novel time-aware topic models, and show that using topics as contextual information in factorization machines improves performance. In this vein, and together with a detailed description of the datasets, we report the performance of six state-of-the-art recommender models. Keywords: recommendation systems, data sets, learning-to-rank, neural networks, popularity bias, diverse recommendations, contextual information, topic models
 
Audience level: 0.73 (from 0.51 for Learning w ... to 0.97 for Learning w ...)

Languages
French (24)

English (24)

Chinese (1)