WorldCat Identities

Gaussier, Eric

Overview
Works: 65 works in 101 publications in 2 languages and 1,348 library holdings
Genres: Conference papers and proceedings 
Roles: Editor, Other, Publishing director, Thesis advisor, Opponent, Author
Most widely held works by Eric Gaussier
Textual information access : statistical models by Eric Gaussier( )

16 editions published between 2012 and 2013 in English and Undetermined and held by 832 WorldCat member libraries worldwide

This book presents statistical models that have recently been developed within several research communities to access information contained in text collections. The problems considered are linked to applications aiming at facilitating information access: information extraction and retrieval; text classification and clustering; opinion mining; and comprehension aids (automatic summarization, machine translation, visualization). In order to give the reader as complete a description as possible, the focus is placed on the probability models used in the applications.
Recherche d'information : applications, modèles et algorithmes by Massih-Reza Amini( Book )

9 editions published between 2013 and 2017 in French and held by 267 WorldCat member libraries worldwide

The first French-language book on the algorithms underlying big-data technologies and search engines! In recent years, new models and algorithms have been developed to handle increasingly large and diverse data. This book presents the scientific foundations of the most common tasks in information retrieval (IR), tasks also related to data mining, business intelligence and, more generally, the exploitation of big data. This second edition offers a detailed and coherent account of the classical algorithms developed in the field, accessible to readers who want to understand the machinery behind everyday Internet tools. The reader will further explore the concepts of indexing, compression, web search, classification and categorization, and can extend this study with the corrected exercises provided at the end of each chapter. The book is aimed at researchers and engineers working in the field of information access, at employees of small and medium-sized businesses who make intensive use of web-marketing tools, and at Bachelor's, Master's, engineering-school and doctoral students looking for a reference work on information retrieval. (back cover)
Assistance intelligente à la recherche d'informations( Book )

3 editions published in 2003 in French and held by 75 WorldCat member libraries worldwide

Modèles statistiques pour l'accès à l'information textuelle( Book )

3 editions published in 2011 in French and held by 35 WorldCat member libraries worldwide

Proceedings of the 2015 IEEE International Conference on Data Science and Advanced Analytics : IEEE/ACM DSAA'2015 : 19-21 Oct 2015, Paris, France by IEEE/ACM DSAA( )

3 editions published in 2015 in English and held by 31 WorldCat member libraries worldwide

Nouvelles approches en recherche d'information( Book )

2 editions published in 2015 in French and held by 6 WorldCat member libraries worldwide

EMNLP 2006, 2006 Conference on Empirical Methods in Natural Language Processing : proceedings of the conference, 22-23 July 2006, Sydney, Australia by Conference on Empirical Methods in Natural Language Processing( )

2 editions published in 2006 in English and held by 5 WorldCat member libraries worldwide

Interopérabilité Sémantique Multi-lingue des Ressources Lexicales en Données Liées Ouvertes by Andon Tchechmedjiev( )

2 editions published in 2016 in French and held by 3 WorldCat member libraries worldwide

When it comes to the construction of multilingual lexico-semantic resources, the first thing that comes to mind is that the resources we want to align should share the same data model and format (representational interoperability). With the emergence of standards such as LMF and their implementation and widespread use for the production of resources as lexical linked data (Ontolex), representational interoperability has ceased to be a major challenge for the production of large-scale multilingual resources. As far as the interoperability of sense-level multilingual alignments is concerned, however, a major challenge is the choice of a suitable interlingual pivot. Many resources choose English senses as the pivot (e.g. BabelNet, EuroWordNet), although this choice loses the contrast between English senses that are lexicalized with different words in other languages. The use of acception-based interlingual representations, a solution proposed over 20 years ago, could be viable. However, the manual construction of such language-independent pivot representations is very difficult, given how few experts are fluent in enough languages, and algorithms for their automatic construction never materialized, mainly because of the lack of a formal axiomatic characterization ensuring the preservation of their correctness properties. In this thesis, we address this issue by first formalizing acception-based interlingual pivot architectures through a set of axiomatic constraints and rules that guarantee their correctness. We then propose algorithms for the initial construction and the update (dynamic interoperability) of interlingual acception-based multilingual resources, exploiting the combinatorial properties of pairwise bilingual translation graphs. Finally, we study the practical considerations of applying our construction algorithms to a tangible resource, DBNary, which is periodically extracted from Wiktionary in many languages as lexical linked data
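
As an illustration of the combinatorial starting point, a pivot candidate can be read off a pairwise bilingual translation graph as a connected component of (language, word) nodes. The sketch below is a toy version with invented data; the thesis adds the axiomatic constraints that this naive construction lacks.

    # Minimal sketch: candidate interlingual acceptions as connected
    # components of a pairwise bilingual translation graph. Toy data;
    # not the DBNary extraction or the thesis's constrained algorithm.
    from collections import defaultdict

    def acceptions(translation_pairs):
        """translation_pairs: iterable of ((lang, word), (lang, word)) edges."""
        graph = defaultdict(set)
        for a, b in translation_pairs:
            graph[a].add(b)
            graph[b].add(a)
        seen, components = set(), []
        for node in list(graph):
            if node in seen:
                continue
            stack, component = [node], set()
            while stack:  # depth-first traversal of one component
                n = stack.pop()
                if n in component:
                    continue
                component.add(n)
                stack.extend(graph[n] - component)
            seen |= component
            components.append(component)
        return components

    pairs = [(("en", "river"), ("fr", "fleuve")),
             (("en", "river"), ("fr", "rivière")),
             (("de", "Fluss"), ("fr", "fleuve"))]
    print(acceptions(pairs))

Note how the naive component merges "fleuve" and "rivière" through the shared English word "river": exactly the loss of contrast that a well-chosen pivot, and the thesis's correctness constraints, are meant to prevent.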
Prédiction de l'activité dans les réseaux sociaux by François Kawala( )

1 edition published in 2015 in French and held by 2 WorldCat member libraries worldwide

This study addresses a data-mining problem in social media: activity prediction, in which we try to predict the activity associated with a topic over a short time horizon, where contents generated by different, unrelated users all contribute to the activity of the same topic. In order to define and study activity prediction without explicit reference to a particular social network, we define a generic analysis framework that can describe many social media. Three formulations of activity prediction are proposed. First, activity-magnitude prediction, a regression problem that aims to predict the exact activity of a topic. Second, buzz prediction, a binary classification problem that aims to predict which topics will undergo a sudden increase in activity. Finally, activity-rank prediction, a learning-to-rank problem that aims to predict the relative importance of each topic. These three problems are studied with state-of-the-art machine-learning methods. The features proposed for these studies are defined within the generic framework, making them easy to adapt to different social media. Our ability to predict topic activity is tested on a multilingual data set (French, English and German) collected over 51 weeks from Twitter and a discussion forum, comprising more than 500 million user-generated contents. A cross-validation method is proposed to avoid time-related experimental bias, and an unsupervised method is proposed to extract buzz candidates, since abrupt changes in popularity are rare and the training set is highly unbalanced. The activity-prediction problems are studied in two experimental configurations. The first covers all data collected from both social media in the three observed languages. The second is restricted to Twitter and aims to improve the reproducibility of our experiments: we focus on a subset of unambiguous English topics and limit observations to ten consecutive weeks, to reduce the risk of structural change in the observed data
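
The three problem formulations can be made concrete with a toy activity matrix; the sketch below uses synthetic counts and an arbitrary doubling threshold for "buzz", whereas the thesis derives its features from the generic framework and real Twitter/forum data.

    # Illustrative targets for the three prediction problems, built from a
    # synthetic per-topic activity time series (all numbers invented).
    import numpy as np

    rng = np.random.default_rng(0)
    activity = rng.poisson(lam=20, size=(100, 8))    # 100 topics x 8 time steps

    X = activity[:, :-1]                   # observed history = features
    magnitude = activity[:, -1]            # regression: next-step activity
    buzz = magnitude > 2 * X.mean(axis=1)  # classification: sudden increase
    rank = np.argsort(-magnitude)          # learning-to-rank: topic ordering

    print(buzz.mean())  # buzz is rare, hence the unbalanced training set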
Concise Pattern Learning for RDF Data Sets Interlinking by Zhengjie Fan( )

1 edition published in 2014 in English and held by 2 WorldCat member libraries worldwide

There are many data sets published on the web with Semantic Web technology. These data sets usually contain analogous data representing similar resources in the world. If the data sets are linked together by correctly identifying the similar instances, users can conveniently query data through a uniform interface, as if they were querying a single database. However, finding correct links is very challenging because web data sources usually have heterogeneous ontologies maintained by different organizations. Many existing solutions have been proposed for this problem. (1) One straightforward idea is to compare the attribute values of instances to identify links, yet it is impossible to compare all possible pairs of attribute values. (2) Another common strategy is to compare instances using correspondences found by instance-based ontology matching, which can generate attribute correspondences based on overlapping ranges between two attributes; however, this easily produces incomparable attribute correspondences or misses comparable ones. (3) Many existing solutions leverage Genetic Programming to construct interlinking patterns for comparing instances, but their running times are usually long. In this thesis, an interlinking method is proposed to link instances across data sets, based on both statistical learning and symbolic learning. On the one hand, the method discovers potential comparable attribute correspondences of each class correspondence via a K-medoids clustering algorithm over instance value statistics. We adopt K-medoids because of its high efficiency and its tolerance of irregular and even incorrect data. K-medoids classifies the attributes of each class into several groups according to their statistical value features; groups from different classes are mapped when they have similar statistical value features, to determine potential comparable attribute correspondences. The clustering procedure effectively narrows the range of candidate attribute correspondences. On the other hand, our solution leverages a symbolic learning method called Version Space, an iterative learning model that searches for the interlinking pattern from two directions. Our design can solve interlinking tasks for which no single conjunctive interlinking pattern concisely covers all assessed correct links. The interlinking solution is evaluated on large-scale real-world data from IM@OAEI and CKAN. Experiments confirm that the solution reaches high accuracy (F-measure up to 0.94-0.99) with only 1% of the sample links, and that the F-measure converges quickly, improving on other state-of-the-art approaches by nearly 10 percent
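
The first step, grouping attributes by their value statistics, can be sketched with a naive k-medoids; the two-feature description below (value length, digit ratio) is invented for illustration, and the real system computes richer statistics over RDF attribute values.

    # Naive k-medoids over per-attribute value statistics (toy data).
    import numpy as np

    def kmedoids(X, k, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        D = np.abs(X[:, None, :] - X[None, :, :]).sum(-1)  # pairwise L1
        medoids = rng.choice(len(X), k, replace=False)
        for _ in range(iters):
            labels = D[:, medoids].argmin(1)    # assign to nearest medoid
            # re-pick each medoid as the point minimizing within-group cost
            medoids = np.array([np.where(labels == c)[0][
                D[np.ix_(labels == c, labels == c)].sum(0).argmin()]
                for c in range(k)])
        return labels

    # Each attribute described by (mean value length, digit ratio), say.
    stats = np.array([[4.0, 0.9], [5.0, 1.0], [20.0, 0.1], [22.0, 0.0]])
    print(kmedoids(stats, k=2))  # numeric-like vs. text-like attributes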
Modèles d'embeddings à valeurs complexes pour les graphes de connaissances by Théo Trouillon( )

1 edition published in 2017 in English and held by 2 WorldCat member libraries worldwide

The explosion of widely available relational data in the form of knowledge graphs enabled many applications, including automated personal agents, recommender systems and enhanced web search results. The very large size and notorious incompleteness of these databases calls for automatic knowledge graph completion methods to make these applications viable. Knowledge graph completion, also known as link prediction, deals with automatically understanding the structure of large knowledge graphs (labeled directed graphs) to predict missing entries (labeled edges). An increasingly popular approach consists in representing knowledge graphs as third-order tensors, and using tensor factorization methods to predict their missing entries. State-of-the-art factorization models propose different trade-offs between modeling expressiveness, and time and space complexity. We introduce a new model, ComplEx (for Complex Embeddings), to reconcile both expressiveness and complexity through the use of complex-valued factorization, and explore its link with unitary diagonalization. We corroborate our approach theoretically and show that all possible knowledge graphs can be exactly decomposed by the proposed model. Our approach based on complex embeddings is arguably simple, as it only involves a complex-valued trilinear product, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed ComplEx model is scalable to large data sets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link-prediction benchmarks. We also demonstrate its ability to learn useful vectorial representations for other tasks, by enhancing word embeddings that improve performance on the natural language problem of entailment recognition between pairs of sentences. In the last part of this thesis, we explore the ability of factorization models to learn relational patterns from observed data. By their vectorial nature, it is not only hard to interpret why this class of models works so well, but also to understand where they fail and how they might be improved. We conduct an experimental survey of state-of-the-art models, not towards a purely comparative end, but as a means to get insight about their inductive abilities. To assess the strengths and weaknesses of each model, we create simple tasks that exhibit, first, atomic properties of knowledge graph relations and, then, common inter-relational inference through synthetic genealogies. Based on these experimental results, we propose new research directions to improve on existing models, including ComplEx
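
The scoring function at the heart of ComplEx is compact enough to state directly: a triple (subject, relation, object) is scored by the real part of a complex trilinear product. A minimal NumPy rendering, with invented dimensions and values:

    # ComplEx score: Re(sum_k w_r[k] * e_s[k] * conj(e_o[k])).
    import numpy as np

    def complex_score(e_s, w_r, e_o):
        """e_s, w_r, e_o: complex embedding vectors of equal dimension."""
        return np.real(np.sum(e_s * w_r * np.conj(e_o)))

    rng = np.random.default_rng(0)
    d = 4
    e_s = rng.normal(size=d) + 1j * rng.normal(size=d)
    w_r = rng.normal(size=d) + 1j * rng.normal(size=d)
    e_o = rng.normal(size=d) + 1j * rng.normal(size=d)

    # Swapping subject and object changes the score, so antisymmetric
    # relations can be modeled, unlike with a real-valued dot product.
    print(complex_score(e_s, w_r, e_o), complex_score(e_o, w_r, e_s))

The conjugation is what breaks the symmetry: with real-valued vectors the product is invariant under swapping e_s and e_o, which is why purely real trilinear models cannot capture antisymmetric relations.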
Méthodes d'apprentissage approfondi pour l'extraction et le transfert de style by Omar Mohammed( )

1 edition published in 2019 in English and held by 2 WorldCat member libraries worldwide

One aspect of a successful human-machine interface (e.g. human-robot interaction, chatbots, speech, handwriting, etc.) is the ability to have a personalized interaction. This affects the overall human experience and allows for a more fluent interaction. At the moment, much work uses machine learning to model such interactions, but these models do not address the issue of personalized behavior: they average over the examples from different people in the training set. Identifying human styles (personas) opens the possibility of biasing the models' output to take human preferences into account. In this thesis, we focus on the problem of styles in the context of handwriting. Defining and extracting handwriting styles is a challenging problem, since there is no formal definition of those styles (i.e., it is an ill-posed problem). Styles are both social, depending on the writer's training, especially in middle school, and idiosyncratic, depending on the writer's letter shaping (roundness, sharpness, etc.) and force distribution over time. As a consequence, there are no easy or generic metrics to measure the quality of style in a machine behavior.

We may also want to change the task or adapt to a new person. Collecting data in the human-machine interface domain can be quite expensive and time-consuming, and although a new task usually has much in common with the old one, traditional machine-learning techniques fail to exploit this commonality, leading to a quick degradation in performance. One objective of this thesis is therefore to study and evaluate the idea of transferring knowledge about styles between different tasks, within the machine-learning paradigm. Available to us is the IRONOFF dataset, an online handwriting dataset with 410 writers and ~25K examples of uppercase letters, lowercase letters and digit drawings. For transfer learning, we used an extra dataset, QuickDraw!, a sketch-drawing dataset containing ~50 million drawings over 345 categories.

The major contributions of this thesis are:
1) A pipeline for studying the problem of styles in handwriting: methodology, benchmarks and evaluation metrics. We chose the temporal generative-model paradigm in deep learning to generate drawings and evaluate their proximity to the intended ground-truth drawings. We proposed two metrics, evaluating the curvature and the length of the generated drawings, and grounded them on multiple benchmarks whose relative power is known in advance, verifying that the metrics respect this relative ordering.
2) A framework for studying and extracting styles, verified against the benchmarks above. We settled on a deep conditioned autoencoder (sketched after this list), which summarizes and extracts style information without having to encode the task identity, since that is given as a condition. We validated this framework on the proposed benchmarks using our evaluation metrics, and also visualized the extracted styles, with some exciting outcomes.
3) A way to transfer style information between different tasks using this framework, and a protocol to evaluate the quality of the transfer. We extracted the encoder part of the conditioned autoencoder, which we believe holds the relevant style information, and used it in new models trained on new tasks. We extensively tested this paradigm over a range of tasks on both the IRONOFF and QuickDraw! datasets, showing that style information can be successfully transferred between tasks
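
The conditioning idea in contribution 2 can be sketched briefly: feed the task/letter identity to the decoder only, so the latent code has no reason to store it and is pushed to carry writer style instead. Dimensions and data below are invented, and sequences are flattened for brevity where the thesis uses temporal generative models.

    # Minimal conditioned autoencoder: condition goes to the decoder only.
    import torch
    import torch.nn as nn

    class ConditionedAE(nn.Module):
        def __init__(self, seq_dim=64, n_conditions=26, latent=8):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(seq_dim, 32), nn.ReLU(),
                                         nn.Linear(32, latent))
            self.decoder = nn.Sequential(
                nn.Linear(latent + n_conditions, 32), nn.ReLU(),
                nn.Linear(32, seq_dim))

        def forward(self, x, condition_onehot):
            # Letter identity arrives via the condition, so this latent
            # code is free to specialize in style information.
            style = self.encoder(x)
            return self.decoder(torch.cat([style, condition_onehot], dim=-1))

    model = ConditionedAE()
    x = torch.randn(5, 64)  # 5 flattened pen trajectories (toy)
    cond = torch.eye(26)[torch.tensor([0, 1, 2, 3, 4])]  # letter identities
    print(model(x, cond).shape)  # torch.Size([5, 64])

For transfer (contribution 3), the trained encoder would be detached and reused as a style-feature extractor in models for the new task.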
Learning information retrieval functions and parameters on unlabeled collections by Parantapa Goswami( )

1 edition published in 2014 in English and held by 2 WorldCat member libraries worldwide

The present study focuses on (a) predicting parameters of existing standard IR models and (b) learning new IR functions. We first explore various statistical methods to estimate the collection parameter of the family of information-based models (Chapter 2). This parameter determines the behavior of a term in the collection. In earlier studies it was set, without full justification, to the average number of documents where the term appears. We introduce here a fully formalized estimation method which leads to improved versions of these models over the original ones. The method, however, applies only to the collection parameter within the information-model framework. To alleviate this, we propose a transfer learning approach which can predict values of any parameter of any IR model (Chapter 3). This approach uses relevance judgments on a past collection to learn a regression function which can infer parameter values for each single query on a new, unlabeled target collection. The proposed method not only outperforms the standard IR models with their default parameter values, but also performs at least on par with popular parameter-tuning methods that use relevance judgments on the target collection. We then apply transfer learning techniques to directly transfer relevance information from a source collection, deriving "pseudo-relevance" judgments on an unlabeled target collection (Chapter 4). From this derived pseudo-relevance, a ranking function for documents in the target collection is learned with any standard learning algorithm. In various experiments the learned function outperformed standard IR models as well as other state-of-the-art transfer-learning-based algorithms. Though effective, a ranking function learned this way has a form predefined by the learning algorithm used. We therefore introduce an exhaustive discovery approach that searches for ranking functions in a space of simple functions (Chapter 5). Through experimentation we found that some of the discovered functions are highly competitive with standard IR models
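
The Chapter 3 idea, regressing from per-query features to good parameter values, admits a small sketch; the features and data below are invented, and ordinary least squares stands in for whatever regression function is actually learned.

    # Predict per-query parameter values on an unlabeled target collection
    # from a regression fit on a labeled source collection (toy data).
    import numpy as np

    rng = np.random.default_rng(0)
    # Source: per-query features paired with the parameter value that
    # maximized retrieval quality there (synthetic linear ground truth).
    source_features = rng.normal(size=(50, 2))
    best_param = (1.5 + source_features @ np.array([0.3, -0.2])
                  + rng.normal(scale=0.05, size=50))

    A = np.hstack([source_features, np.ones((50, 1))])  # bias column
    coef, *_ = np.linalg.lstsq(A, best_param, rcond=None)

    target_features = rng.normal(size=(3, 2))  # queries on the new collection
    A_t = np.hstack([target_features, np.ones((3, 1))])
    print(A_t @ coef)  # predicted parameter value per target query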
IEEE/ACM/ASA DSAA' 2017 by IEEE International Conference on Data Science & Advanced Analytics( Book )

1 edition published in 2017 in English and held by 2 WorldCat member libraries worldwide

Un modèle général d'information by Leïla Khefi-Khelif( Book )

2 editions published in 2006 in French and held by 2 WorldCat member libraries worldwide

In the information retrieval (IR) task, characteristics related to the context of the user's search induce some needs that must be taken into account in the modeling of the IR system. In this work we consider that the user has a memory of the documents he wants to find: his need consists of a description of the ideal document with respect to his memory of the content of these documents. To address this need, we propose an information retrieval model based on (i) a complex language (inter-connected entities, with multiple uses of the same entity to describe the document and the user query), (ii) additional criteria on query terms, focusing on obligation/optionality and certainty/uncertainty, in order to express the user's doubts and vague needs, and (iii) a matching function that takes into account constraints related to the document/query representation, as well as a query reformulation approach based on characteristics of documents the user considers relevant. This model is then applied in a concrete application: graphics retrieval by professionals in technical documentation. Through this application, we compare our model with classical IR models in order to validate our approach (e.g. the obligation/optionality and certainty/uncertainty criteria)
An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition by G Tsatsaronis( )

1 edition published in 2015 in English and held by 2 WorldCat member libraries worldwide

Diffusion de l'information dans les réseaux sociaux by Cédric Lagnier( )

1 edition published in 2013 in French and held by 2 WorldCat member libraries worldwide

Predicting the diffusion of information in social networks is a key problem for applications like opinion leader detection, buzz detection or viral marketing. Many recent diffusion models are direct extensions of the Cascade and Threshold models, initially proposed for epidemiology and social studies. In such models, the diffusion process is based on the dynamics of interactions between neighbor nodes in the network (the social pressure), and largely ignores important dimensions such as the content diffused and the active or passive role users tend to have in social networks. We propose here a new family of models that aims at predicting how a content diffuses in a network by making use of additional dimensions: the content diffused, the user's profile and willingness to diffuse. In particular, we show how to integrate these dimensions into simple feature functions, and propose a probabilistic modeling of the diffusion process. These models are then illustrated and compared with other approaches on two blog datasets. The experimental results obtained on these datasets show that taking these dimensions into account is important to accurately model the diffusion process. Lastly, we study the influence maximization problem under these models and prove that it is NP-hard, before proposing an adaptation of the greedy algorithm to approximate the optimal solution
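
A content-aware cascade of this kind can be sketched in a few lines: each potential transmission is taken with a probability given by a feature function, here a logistic link chosen for illustration. Everything below (graph, features, weights) is synthetic; the thesis's feature functions combine content, user profile and willingness to diffuse.

    # Independent-cascade-style diffusion with feature-based edge
    # probabilities (toy example).
    import numpy as np

    rng = np.random.default_rng(0)

    def diffuse(edges, features, theta, seeds, steps=3):
        active = set(seeds)
        for _ in range(steps):
            new = set()
            for u, v in edges:
                if u in active and v not in active:
                    p = 1 / (1 + np.exp(-features[(u, v)] @ theta))
                    if rng.random() < p:  # v adopts the content
                        new.add(v)
            if not new:
                break
            active |= new
        return active

    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
    features = {e: rng.normal(size=3) for e in edges}  # content/profile/willingness
    print(diffuse(edges, features, theta=np.ones(3), seeds={0}))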
Generalized k-means-based clustering for temporal data under time warp by Saeid Soheily-Khah( )

1 edition published in 2016 in English and held by 2 WorldCat member libraries worldwide

Temporal alignment of multiple time series is an important unresolved problem in many scientific disciplines. Major challenges for an accurate temporal alignment include determining and modeling the common and differential characteristics of classes of time series. This thesis is motivated by recent work extending Dynamic Time Warping (DTW) for aligning multiple time series in applications including speech recognition, curve matching, micro-array data analysis, temporal segmentation and human motion. These DTW-based works, however, suffer from several limitations: 1) they align two time series at a time, regardless of the remaining series; 2) they weight the features of the time series uniformly; 3) they align the time series globally, over the whole set of observations. The aim of this thesis is to explore a generalized dynamic time warping for time series clustering. The work first addresses the problem of prototype extraction, then the alignment of multiple and multidimensional time series
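
As background, the pairwise DTW distance that these methods generalize is a short dynamic program; a minimal sketch (quadratic time, no window constraint):

    # Classic DTW distance between two 1-D series.
    import numpy as np

    def dtw(x, y):
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                # best of match, insertion, deletion
                D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
        return D[n, m]

    a = np.sin(np.linspace(0, 2 * np.pi, 30))
    b = np.sin(np.linspace(0, 2 * np.pi, 40))  # same shape, other sampling
    print(dtw(a, b))  # small despite the different lengths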
Modélisation et apprentissage de dépendances à l'aide de copules dans les modèles probabilistes latents by Hesam Amoualian( )

1 edition published in 2017 in English and held by 2 WorldCat member libraries worldwide

This thesis focuses on scaling latent topic models for big data collections, especially for document streams. Although the main goal of probabilistic modeling is to find word topics, an equally interesting objective is to examine topic evolutions and transitions. To accomplish this task, we propose in Chapter 3 three new models for modeling topic and word-topic dependencies between consecutive documents in document streams. The first model is a direct extension of the Latent Dirichlet Allocation (LDA) model and makes use of a Dirichlet distribution to balance the influence of the LDA prior parameters with respect to the topic and word-topic distributions of the previous document. The second extension makes use of copulas, a generic tool for modeling dependencies between random variables. We rely here on Archimedean copulas, and more precisely on the Frank copula, as they are symmetric and associative and thus appropriate for exchangeable random variables. The third model is a non-parametric extension of the second through the integration of copulas into the stick-breaking construction of Hierarchical Dirichlet Processes (HDP). Our experiments, conducted on five standard collections that have been used in several studies on topic modeling, show that our proposals outperform previous ones, such as dynamic topic models, temporal LDA and Evolving Hierarchical Processes, both in terms of perplexity and for tracking similar topics in document streams. Compared to previous proposals, our models have extra flexibility and can adapt to situations where there are no dependencies between the documents. On the other hand, the "exchangeability" assumption in topic models like LDA often results in inferring inconsistent topics for the words of text spans like noun phrases, which are usually expected to be topically coherent. In Chapter 4, we propose copulaLDA (copLDA), which extends LDA by integrating part of the text structure into the model and relaxes the conditional independence assumption between the word-specific latent topics given the per-document topic distributions. To this end, we assume that the words of text spans like noun phrases are topically bound, and we model this dependence with copulas. We demonstrate empirically the effectiveness of copLDA on both intrinsic and extrinsic evaluation tasks on several publicly available corpora. To complete copLDA, Chapter 5 presents an LDA-based model that generates topically coherent segments within documents by jointly segmenting documents and assigning topics to their words. The coherence between topics is ensured through a copula binding the topics associated with the words of a segment. In addition, this model relies on both document-specific and segment-specific topic distributions so as to capture fine-grained differences in topic assignments. We show that the proposed model naturally encompasses other state-of-the-art LDA-based models designed for similar tasks. Furthermore, our experiments, conducted on six different publicly available datasets, show the effectiveness of our model in terms of perplexity, Normalized Pointwise Mutual Information (which captures the coherence of the generated topics), and the micro F1 measure for text classification
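
The Frank copula that ties consecutive topic draws has a closed-form CDF that is easy to inspect directly; the values below simply show it interpolating between independence (theta near 0) and strong positive dependence (large theta):

    # Bivariate Frank copula CDF C(u, v).
    import numpy as np

    def frank_copula(u, v, theta):
        num = (np.exp(-theta * u) - 1) * (np.exp(-theta * v) - 1)
        return -np.log(1 + num / (np.exp(-theta) - 1)) / theta

    u, v = 0.7, 0.7
    for theta in (0.01, 1.0, 5.0, 20.0):
        print(theta, frank_copula(u, v, theta))
    # rises from u * v = 0.49 (independence) toward min(u, v) = 0.7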
From lexical towards contextualized meaning representation by Diana-Nicoleta Popa( )

1 edition published in 2019 in English and held by 2 WorldCat member libraries worldwide

Continuous word representations (word type embeddings) are at the basis of most modern natural language processing systems, providing competitive results particularly as input to deep learning models. However, important questions are raised concerning the challenges they face in dealing with complex natural language phenomena and their ability to capture natural language variability. To better handle complex language phenomena, much work has investigated fine-tuning generic word type embeddings or creating specialized embeddings that satisfy particular linguistic constraints. While this can help distinguish semantic similarity from other types of semantic relatedness, it may not suffice to model certain types of relations between texts, such as the logical relations of entailment or contradiction. The first part of the thesis investigates encoding the notion of entailment within a vector space by enforcing information inclusion, using an approximation to logical entailment of binary vectors. We further develop entailment operators and show how the proposed framework can be used to reinterpret an existing distributional semantic model. Evaluations are provided on hyponymy detection as an instance of lexical entailment. Another challenge concerns the variability of natural language and the need to disambiguate the meaning of lexical units depending on the context they appear in. Here, generic word type embeddings fall short by themselves, and different architectures are typically employed on top of them to help the disambiguation. As type embeddings are constructed from, and reflect, co-occurrence statistics over large corpora, they provide a single representation for a given word, regardless of its potentially numerous meanings; even for monosemous words, they do not distinguish between different usages of a word depending on its context. One could thus ask whether the linguistic information provided by the context of a word can be directly leveraged to adjust its representation. Would such information help create an enriched representation of the word in its context? If so, can information of a syntactic nature aid the process, or is local context sufficient? In other words, can the representations of the words within a sentence, and the way they combine with each other, suffice to build more accurate token representations for that sentence and thus yield performance gains on natural language understanding tasks?

In the second part of the thesis, we investigate one way to incorporate contextual knowledge into the word representations themselves, leveraging information from the sentence's dependency parse along with local vicinity information. We propose syntax-aware token embeddings (SATokE) that capture specific linguistic information, encoding the structure of the sentence from a dependency point of view in their representations. This enables moving from generic type embeddings (context-invariant) to specific token embeddings (context-aware). While syntax had previously been considered for building type representations, its benefits may not have been fully assessed beyond models that harvest such syntactic information from large corpora. The obtained token representations are evaluated on natural language understanding tasks typically considered in the literature: sentiment classification, paraphrase detection, textual entailment and discourse analysis. We empirically demonstrate the superiority of the token representations compared to popular distributional representations of words and to other token embeddings proposed in the literature. The work proposed in this thesis thus aims to contribute both to modeling complex phenomena such as entailment and to tackling language variability through contextualized token embeddings
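
The type-versus-token distinction can be made concrete with a toy mixing rule: give each occurrence of a word a vector adjusted by its dependency neighbors. This is emphatically not the SATokE learning procedure, only an illustration of why token embeddings separate occurrences that type embeddings conflate; all data below are invented.

    # Toy token embeddings: mix each word's type vector with those of its
    # dependency neighbors (illustration only, not the SATokE method).
    import numpy as np

    rng = np.random.default_rng(0)
    vocab = {"the": 0, "bank": 1, "river": 2, "loan": 3}
    E = rng.normal(size=(len(vocab), 4))  # type embeddings (context-invariant)

    def token_embeddings(tokens, dep_edges, alpha=0.5):
        vecs = [E[vocab[t]] for t in tokens]
        out = [v.copy() for v in vecs]
        for head, dep in dep_edges:  # mix in syntactic neighbors
            out[dep] = out[dep] + alpha * vecs[head]
            out[head] = out[head] + alpha * vecs[dep]
        return out

    # "bank" gets a different token vector in each sentence.
    t1 = token_embeddings(["the", "river", "bank"], [(2, 0), (2, 1)])
    t2 = token_embeddings(["the", "bank", "loan"], [(2, 0), (2, 1)])
    print(np.allclose(t1[2], t2[1]))  # False: same type, two tokens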
 
Audience Level
Audience level: 0.50 (from 0.30 for Textual in ... to 0.97 for IEEE/ACM/A ...)

Covers
Textual information access : statistical models
Languages
English (28)

French (23)