Over the last decade, the online search for biological information has progressed rapidly and has become an integral part of the scientific discovery process. Today, it is virtually impossible to conduct R&D in biomedicine without relying on the kind of Web resources developed and maintained by the NCBI. Indeed, each day millions of users search for biological information via NCBI's online Entrez system. However, finding data relevant to a user's information need is not always easy in Entrez. Improving our understanding of the growing population of Entrez users, their information needs, and the ways in which they meet those needs opens opportunities to improve the information services and information access provided by NCBI. Among all Entrez databases, PubMed is the most heavily used and often serves as an entry point for accessing related data in other databases.

One resource for understanding and characterizing patrons of the PubMed search engine is its transaction logs. Our previous investigations of PubMed search logs led us to develop and deploy several applications that assist users with query formulation and retrieval in PubMed, namely Related Queries, Query Autocomplete and Author Name Disambiguation. Building on these successes, we have continued using log analysis to improve access to NCBI resources. For example, we have used user clicks to identify articles that users considered relevant to their own queries. In 2016-2017, we used deep learning models to understand the relationship between a query and the content of potentially relevant articles. This approach is robust and outperforms both traditional information retrieval (IR) algorithms and related shallow and deep models based on continuous representations of text, with better results on under-specified queries and the term-mismatch problem. Of course, multiple factors indicate whether an article is relevant to the searcher.
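The click-based relevance signal described above can be sketched as a simple aggregation over query logs: articles clicked more often for a given query are treated as stronger evidence of relevance. The log format and field names below are illustrative assumptions, not the actual PubMed log schema.

```python
from collections import defaultdict

# Hypothetical click-log records as (query, clicked_article_id) pairs.
# Real transaction logs carry far more detail (timestamps, sessions, ranks).
click_log = [
    ("breast cancer brca1", "PMID:100"),
    ("breast cancer brca1", "PMID:100"),
    ("breast cancer brca1", "PMID:205"),
    ("p53 apoptosis", "PMID:300"),
]

def click_counts(log):
    """Count clicks per (query, article) pair; higher counts are taken
    as stronger relevance signals for that query."""
    counts = defaultdict(int)
    for query, article in log:
        counts[(query, article)] += 1
    return counts

counts = click_counts(click_log)

# Most-clicked article for one query in this toy log.
best = max(
    (a for q, a in counts if q == "breast cancer brca1"),
    key=lambda a: counts[("breast cancer brca1", a)],
)
```

In practice such counts would be normalized (e.g. by impressions and result rank) before being used as a training signal, but the basic aggregation is the same.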
These include the connection between the query and the content, how recent the article is, whether other people found the article relevant, and so on. PubMed's new Best Match sort order (using a Learning to Rank algorithm) combines a number of different scores and sources of information to identify the most relevant results. This has significantly improved our relevance rankings since Spring 2017.

We are continuing the effort begun by our work on TermVariants. When a term is used in a query, documents using equivalent terms are usually also desired; a seemingly trivial example is singular and plural forms. But care must be taken to avoid retrieving irrelevant articles. For example, naïvely applying pluralization rules to abbreviations is often not helpful. Guidelines are being developed to identify where these expansions will be helpful.

To better understand queries, we developed a Field Sensor to identify the components and intent of a query. In other words, we identify which parts of the query are an author name, a journal title, a date, or key phrases describing the knowledge the searcher would like to uncover. One practical use of this tool is reminding users who are looking for information, rather than specific articles, about our improved relevance searching.

We continue to improve our handling and understanding of author names in PubMed articles. Principal Investigators on NIH-funded grants form a particularly important subset of PubMed authors, and additional information about these authors is available from their grants. Information about the papers published under a grant allows us to do a better job of connecting papers and authors. These authors can be more reliably identified across different institutional affiliations, across changes in research focus, and even across different names for the same author.
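The term-variant caution above can be illustrated with a minimal sketch: expand a query term to its singular/plural variants, but leave abbreviation-like tokens untouched. The abbreviation heuristics here are assumptions for illustration, not PubMed's actual TermVariants rules.

```python
import re

def looks_like_abbreviation(term):
    """Crude guard: all-caps tokens or tokens containing digits
    (e.g. 'AIDS', 'COX2') are treated as abbreviations."""
    return term.isupper() or bool(re.search(r"\d", term))

def plural_variants(term):
    """Return the term plus a naive singular/plural variant,
    skipping abbreviations to avoid harmful expansions."""
    if looks_like_abbreviation(term):
        return [term]             # do not pluralize abbreviations
    if term.endswith("s"):
        return [term, term[:-1]]  # crude singularization
    return [term, term + "s"]     # crude pluralization

plural_variants("tumor")  # -> ["tumor", "tumors"]
plural_variants("AIDS")   # -> ["AIDS"]
```

A production system would rely on curated guidelines and lexicons rather than these two heuristics, but the sketch shows why a guard is needed: without it, "AIDS" would be expanded to the unrelated "AID"/"AIDSs" forms.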