Guided Selling Glossary
Scoring
Scoring is the process of evaluating how well each product in a catalog matches a shopper's answers in a product finder quiz. It turns declared intent into a ranked set of recommendations by combining product attributes, question weights, and merchandising rules.
Last updated 2026-02-20
Also known as: product scoring, recommendation scoring, quiz scoring, attribute-based scoring.
What it is / What it isn’t
- Scoring is: a structured method for matching shopper answers to products based on how well each product fits the shopper’s stated needs.
- Scoring is not: a popularity ranking, a collaborative filter (“people who bought X also bought Y”), or a black box that the merchandising team can’t inspect or adjust.
Two approaches to scoring
Most product finder quiz platforms use one of two approaches, or a combination of both.
Decision-tree logic
A decision tree maps every possible path a shopper can take through the quiz and assigns specific product recommendations to each path.
Example: A skincare quiz asks two questions: “Do you have sensitive skin?” and “What’s your top concern?” For each combination (sensitive + fine lines, normal + dryness, etc.), the brand team manually selects the exact products to recommend.
When it works well:
- Short quizzes with a small number of products
- Product lineup that doesn’t change often
- Each answer combination needs hand-picked precision
Where it breaks down:
- As questions or products increase, the number of combinations grows combinatorially (two questions with four answers each already produce 16 paths to curate)
- Every catalog change (new product, discontinued product) requires manual updates
- Adding a new question means rebuilding the entire decision table
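The decision-table idea can be sketched in a few lines of Python. The table, products, and answer values here are purely illustrative, not from any real catalog:

```python
# Minimal sketch of decision-tree logic: every answer combination is
# mapped by hand to a fixed list of recommendations.
recommendation_table = {
    ("sensitive", "fine lines"): ["Gentle Retinol Serum"],
    ("sensitive", "dryness"):    ["Fragrance-Free Moisturizer"],
    ("normal",    "fine lines"): ["Retinol Night Cream"],
    ("normal",    "dryness"):    ["Hydrating Day Cream"],
}

def recommend(skin_type, concern):
    """Look up the hand-picked recommendations for one quiz path."""
    return recommendation_table.get((skin_type, concern), [])
```

Adding a third question with four answer options would multiply this table to 16 hand-curated entries, which is exactly the growth problem described above.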
Attribute-based scoring
Attribute-based scoring evaluates how well each individual product matches each individual answer choice. Instead of mapping every path, it scores every product against every relevant attribute and finds the best overall matches.
Example: Using the same skincare quiz, each product is scored on how well it works for sensitive skin and how well it addresses each concern. A gentle moisturizer might score well for sensitivity but moderately for fine lines. The system combines these scores to find the products that best match the shopper’s full set of answers.
When it works well:
- Large catalogs with frequent product changes
- Quizzes that evolve over time (new questions, new answer options)
- Multi-select questions where shoppers can choose more than one option
Why most enterprise implementations favor it:
- Scoring rules can be automated using data from the product feed (names, descriptions, attributes, categories), so the merchandising team doesn’t have to manually score every product
- When a new product is added to the feed, the system scores it automatically based on existing rules
- Adding a new question only requires adding new scoring rules, not rebuilding the entire recommendation map
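The attribute-based approach can be sketched as a sum of per-attribute fit scores. All products and numbers below are hypothetical and do not reflect Cartful's actual scoring model:

```python
# Minimal sketch of attribute-based scoring: each product carries
# per-attribute fit scores (0.0 to 1.0), and a shopper's answers
# select which attributes contribute to the total.
catalog = {
    "Gentle Moisturizer": {"sensitive": 1.0, "fine lines": 0.4, "dryness": 0.9},
    "Retinol Serum":      {"sensitive": 0.2, "fine lines": 1.0, "dryness": 0.3},
}

def score_products(answers):
    """Rank all products by their total fit across the answered attributes."""
    totals = {
        name: sum(attrs.get(a, 0.0) for a in answers)
        for name, attrs in catalog.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

Note that a new product added to `catalog` is ranked automatically on the next quiz run, with no changes to the scoring code — the property that makes this approach scale.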
The layers that make scoring work
Raw scores alone don’t produce good recommendations. Several additional layers shape how scores translate into what the shopper sees.
Question weighting
Not every question in a quiz is equally important. Scoring systems let the merchandising team assign weight to each question based on how much it should influence the final recommendation:
- Low weight: the question is useful for data collection or acts as a tiebreaker, but doesn’t heavily influence product selection
- High weight: the system strongly favors products that match this answer
- Filter (must-match): a product is excluded entirely if it doesn’t match the shopper’s answer to this question
For example, in a skincare quiz, the “sensitive skin” question might be treated as a filter: if the shopper says yes, products not suitable for sensitive skin are removed from consideration entirely, regardless of how well they score on other attributes.
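One way to sketch weighting plus a must-match filter in Python; the weight values and the product data structure are assumptions for illustration only:

```python
# Sketch of question weighting: numeric weights scale each answer's fit
# score, while a "filter" weight excludes non-matching products outright.
def weighted_score(product, answers, weights):
    """Return the weighted total, or None if a filter question excludes it."""
    total = 0.0
    for question, answer in answers.items():
        fit = product["fit"].get(question, {}).get(answer, 0.0)
        weight = weights.get(question, 1.0)
        if weight == "filter":
            # Must-match: zero fit disqualifies the product entirely
            if fit == 0.0:
                return None
        else:
            total += weight * fit
    return total
```

In this sketch, a product unsuited to sensitive skin returns `None` (excluded) regardless of how well it scores elsewhere, mirroring the filter behavior described above.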
Interpretation modes
How the system combines scores depends on the type of question:
- Preference-style questions (“What colors do you like?”): the system looks for products that match any of the selected options. A shopper who picks blue and black wants to see products in either color.
- Requirement-style questions (“What features do you need?”): the system prioritizes products that match all of the selected options. A shopper who picks touchscreen and spillproof keyboard ideally wants both.
The right interpretation mode depends on the product category and the intent behind the question.
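The two modes can be sketched as set operations. This is a simplified model; a real system might credit partial matches differently:

```python
# Sketch of the two interpretation modes for multi-select questions.
# "any" suits preference-style questions; "all" suits requirement-style.
def match_score(product_attrs, selected, mode):
    """Score how well a product's attributes cover the selected options."""
    hits = len(set(product_attrs) & set(selected))
    if mode == "any":
        # Preference: any overlap counts as a full match
        return 1.0 if hits else 0.0
    if mode == "all":
        # Requirement: proportional credit for each matched need
        return hits / len(selected) if selected else 1.0
    raise ValueError(f"unknown mode: {mode}")
```

Under "any", a blue product fully satisfies a shopper who picked blue and black; under "all", a laptop with only a touchscreen earns half credit from a shopper who also required a spillproof keyboard.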
Merchandising overrides
On top of scoring, merchandising rules can further refine results:
- Require that results include products at different price points
- Limit results to one color variant per product
- Set a minimum match threshold (don’t show products below a certain score)
- Boost or bury specific products while still respecting the shopper’s answers
These overrides give the merchandising team precise control without replacing the scoring logic. For more on how these rules work, see merchandising rules.
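A post-scoring override pass might look like the following sketch; the rule values, tuple layout, and boost amounts are illustrative assumptions:

```python
# Sketch of merchandising overrides applied after scoring: boosts,
# a minimum match threshold, and a one-variant-per-product cap.
def apply_overrides(scored, min_score=0.5, boosts=None, one_per_family=True):
    """scored: list of (name, product_family, score) tuples."""
    boosts = boosts or {}
    # Boost or bury specific products by adjusting their scores
    adjusted = [(n, f, s + boosts.get(n, 0.0)) for n, f, s in scored]
    # Drop products below the minimum match threshold
    adjusted = [(n, f, s) for n, f, s in adjusted if s >= min_score]
    adjusted.sort(key=lambda t: t[2], reverse=True)
    if one_per_family:
        # Keep only the best-scoring variant of each product
        seen, deduped = set(), []
        for n, f, s in adjusted:
            if f not in seen:
                seen.add(f)
                deduped.append((n, f, s))
        adjusted = deduped
    return adjusted
```

Because the pass runs on already-scored results, a boosted product still has to clear the threshold, which is how overrides refine the scoring logic without replacing it.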
What the brand gets
- Recommendations based on structured product-to-answer fit, not popularity or guesswork
- A scoring system that scales with the catalog and doesn’t require manual updates for every new product
- Fine-grained controls (weighting, interpretation, overrides) that let the merchandising team encode real domain knowledge
- A foundation for ongoing optimization: adjust weights, add questions, and refine rules based on performance data
Cartful context
Cartful uses a hybrid approach that combines elements of decision-tree logic and attribute-based scoring, tailored to each brand’s product category and quiz goals:
- Scoring rules that connect to the product feed, so products are evaluated automatically based on their attributes
- Question weighting that lets the merchandising team control how much each answer influences the recommendation
- Multiple interpretation modes for different question types (preference vs. requirement)
- Merchandising overrides for precise control on top of the scoring layer
- A no-code visual editor (Studio) where teams can adjust scoring, weights, and rules without engineering involvement
- A scoring approach recommended and configured during onboarding by Cartful's team, based on your product category
Common pitfalls
- Using decision-tree logic for a large or frequently changing catalog (the recommendation table becomes unmanageable)
- Treating every question as equally important (some answers matter more than others for product fit)
- Not connecting scoring rules to the product feed (manual scoring doesn’t scale and goes stale)
- Ignoring interpretation mode: treating a feature requirement like a color preference (or vice versa) produces poor matches
- Skipping merchandising overrides when edge cases matter (e.g., showing three variants of the same product)