Newly added to Google Search, the multisearch option lets you combine search terms with a relevant image, improving the accuracy of the results returned.
It often happens that you search for a person or object without knowing its exact name. In that case, starting with an image-only search will usually return generic matches: pictures that look similar but are irrelevant to what you are actually after. Likewise, if you do not know exactly what you are looking for, phrases and keywords alone will also return generic results.
Google is trying to address this limitation by combining the two search modes, text and images. The new multisearch feature may prove far more powerful than either one on its own, with Google using artificial intelligence to determine the context in which those terms and images are used.
Google says apparel shopping is the ideal use case for the new multisearch feature, which is currently in beta testing.
For now, the feature is rolling out in the Google app for Android and iOS users in the United States. Where multisearch is available, you start a search by uploading a picture taken with your phone, selecting the product you want to find in the image, and adding a description or keywords to help narrow the results. Ideally, the results list will directly return a selection of similar-looking products, drawn from stores that have published their catalogs online.