HYBRIS RUNTIME DEVELOPMENT TOOLS UPDATE: IMPROVED FLEXIBLESEARCH. NEW IMPEX, MEDIA, CONFIGURATION TOOLS
20.08.2016 · by Rauf Aliev · in Other

Overview

In one of my previous articles I announced my new project, hybris Runtime Development tools. This post is about some important updates to the package.

Hybris Runtime Development tools is a set of web/command-line utilities for hybris developers to progress faster with everyday tasks and troubleshooting. Each utility is available both as a command-line application and as an API interface, i.e. both web-based and standalone, so you can combine these components freely with your own toolset. The updates are marked with (NEW!).

hybrisFlexibleSearch (100% ready)
REST API and a console tool to FlexibleSearchService
FlexibleSearch command-line interface
Various output formats, such as TXT, TSV, CSV, XML
Using pipes and filters (grep, awk, perl, etc.)
Cascade reference resolution (no more PKs in the output), handy for testing and troubleshooting
NEW! Search by PK. No need to specify a type, just a PK.
NEW! FlexibleSearch built-in beautifier (reformatter)
NEW! Supporting a registry of hybris types without unique fields for proper resolution (see below)
NEW! Mixing types in Select is available now

hybrisTypeSystem (100% ready)
REST API and a console tool to the type system
Shows all available types
Shows the detailed information about a type
Shows the detailed information about a type attribute
NEW! Supporting collection types
NEW! Search a type of the item by its PK

hybrisBeans (100% ready)
REST API and console tool with the common configuration
Shows the information about the specific bean
Changing bean values on the fly (!)

hybrisImpex (100% ready) NEW!
REST API and console tool with the common configuration
NEW! impex beautifier (reformatter)
NEW! executing impexes from files (HAC-like mode)
NEW! executing impexes from Media (HMC-like mode)

hybrisMedia (100% ready) NEW!
NEW! REST API and console interface to Media resources
NEW! requesting information about the media item by code
NEW! saving/displaying the media item by code
NEW! uploading media files by media code

hybrisConfiguration (100% ready) NEW!
NEW! all configuration parameters with values
NEW! changing configuration parameters (key/value)
NEW! synchronization of the configuration parameters in memory with the configuration files

hybrisGroovy (0%)
REST API and console tool with the common configuration
Executing groovy scripts easily

hybrisCronjob (0%)
REST API and console interface to Cronjobs
Starting/stopping cronjobs
Changing cronjob attributes

hybrisSession (0%)
REST API and console interface to sessions
Changing session attributes

hybrisUsers (0%)
REST API and console interface to the user and customer service
Sets up a current user by login
Changes user-related runtime information

hybrisLog (0%)
Log parsing operations
Activating/deactivating debug mode for classes
Hooks for flexible search requests and SOLR requests

hybrisSOLR (0%)
REST API and console interface to SOLR search and SOLR indexer
Start/stop index
Request for SOLR data

hybrisPWS (0%)
Console interface to Platform Web Services

See details about the updates in forthcoming publications.

hybrisFlexibleSearch
Please refer to the previous post to see the basic capabilities. This utility is designed for performing FlexibleSearch requests from the command line. Major updates are the following:

NEW! Search by PK. No need to specify a type, just a PK.
NEW! FlexibleSearch built-in beautifier (reformatter) 
NEW! Supporting a registry of hybris types without unique fields for proper resolution
(see below)
NEW! Mixing types in Select is available now

Search by PK
If you have a PK of any object in hybris, you can see detailed information about this object. It is not required now to know what type this object has.

$ ./hybrisFlexibleSearch.sh ‐pk 8796176187393
catalogVersion code
electronicsProductCatalog:Online 4135570
$ ./hybrisFlexibleSearch.sh ‐pk 8796093120544
isocode
es

For the API version of this feature, a new URL parameter "pk" is introduced.
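Implementation-wise, resolving an item from a bare PK is straightforward on the hybris service layer, since the PK itself encodes the type. A minimal sketch (standard PK/ModelService API; whether the tool does exactly this is my assumption):

// Sketch: load any item from its raw PK and ask for its type.
import de.hybris.platform.core.PK;
import de.hybris.platform.core.model.ItemModel;

final ItemModel item = modelService.get(PK.parse("8796176187393"));
System.out.println(item.getItemtype()); // e.g. "Product"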

Flexible Search reformatter

For the API version of this feature, a new URL parameter "beautify" (boolean) is introduced.

Registry of hybris types without unique fields
This feature is about the PK resolution. In hybris, the response from the FlexibleSearch service contains Java objects. However, for exporting data you need to convert these objects into text. For atomic types like String or Integer, the conversion is automatic. The hybris default Flexible Search viewer can't resolve the references, and you need to rewrite your queries.

For example, this is what a typical response looks like in HAC: all reference links, all collections and all maps are "encoded" (each in its own way).

My flexible search console application accomplishes this task by cascade resolution. For the example above, my application, in response to the flexible search request "select {pk} from {Product}" with the fields "pk,normal", will return the following:

$ ./hybrisFlexibleSearch.sh ‐q "Select {pk} from {Product}" ‐f "pk,normal" ‐
mr 4
pk normal
8796167176193 (electronicsProductCatalog:Online:/300Wx300H/266899‐5310.jpg)
8796167340033 (electronicsProductCatalog:Online:/300Wx300H/284533‐9485.jpg)
8796167372801 (electronicsProductCatalog:Online:/300Wx300H/107701‐5509.jpg)

Here normal is a collection of Media. In the example, there is only one item in the list, so this is the reason why the value is in parentheses. This item is of Media type. According to the configuration of my utility, Media has two keys, catalogVersion and code. The other utility, hybrisTypeSystem, helps to see it easily:

$ ./hybrisTypeSystem.sh ‐t Media | grep UNIQ
catalogVersion CatalogVersion [UNIQ], [!]
code java.lang.String [UNIQ], [!]

Code is an atomic type. In the example above, /300Wx300H/266899-5310.jpg is a code of the item with pk=8796167176193. Why do we see three components rather than two? It is because the first component, CatalogVersion, is also a composed type, and my utility performs an additional request to resolve it into Catalog and catalogVersion. Both catalog and catalogVersion are also composed types, and the system retrieves their unique codes.

What rules are used for the resolution? For the example above, how does the system understand that Media should be resolved into CatalogVersion + Code, that CatalogVersion should be resolved into Catalog + Version, and that Catalog should be resolved into Id? The rules are simple:

If the type has unique fields, they are used for the resolution.
If the type has no unique fields, manual rules are used.
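Conceptually, the cascade resolution can be sketched as a small recursive walk over the unique attributes. A simplified illustration, not the tool's actual code; uniqueAttributesOf stands for the unique-field lookup plus the manual rules described below:

// Sketch: resolve an item into a human-readable key, recursively.
private String resolve(final ItemModel item) {
    final List<String> parts = new ArrayList<>();
    for (final String qualifier : uniqueAttributesOf(item)) { // unique fields or a manual rule
        final Object value = modelService.getAttributeValue(item, qualifier);
        if (value instanceof ItemModel) {
            parts.add(resolve((ItemModel) value)); // composed type: one more request
        } else {
            parts.add(String.valueOf(value));      // atomic type: printed as is
        }
    }
    return String.join(":", parts); // e.g. "electronicsProductCatalog:Online:4135570"
}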

However, there are too many types that have no unique fields: namely, 452 in hybris 6.1. There is a list of these types in the updated version of the application, types-without-uniq-fields.txt. The format of this file is simple:

AbstractAdvancedSavedQuerySearchParameter:pk
AbstractComment:pk
...
WorkflowTemplateActionTemplateRelation:pk

Each line is a rule. The rule X:Y1,Y2,Y3 says that the type X should be resolved into the fields Y1, Y2, Y3. Most of the rules have only pk on the right side. I changed some of the frequently used types from this list to my convenient rules. For example, the rule for the PriceRow type:

PriceRow:currency,price

If you try to see the price of a Product, the price will be resolved according to this rule:

$ ./hybrisFlexibleSearch.sh ‐q "select {pk} from {PriceRow}" ‐mr 1
currency price
JPY 3030.0

Mixing types in SELECTs and joins
In the first release of hybrisFlexibleSearch, there weren't any capabilities to request more than one object from the request. For example, the query "select {pk} from {Product}" returns a set of objects of Product type; the PK of this object is the first component of the select. However, if you wanted to show prices along with the product, the previous version of hybrisFlexibleSearch wasn't able to do so. I added this feature into the new version of hybrisFlexibleSearch.

Now you are able to process requests like:

select
     {pk}, {PriceRow}
 from
     {Product},
     {PriceRow}
 where
     {PriceRow.ProductId}={Product.code}

(Instead of

./hybrisFlexibleSearch.sh -q "select {pk} from {Product}" -f code,europe1Prices 

which was available in the previous version.)
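On the service layer, mixed selects map naturally to hybris' FlexibleSearchService with several result classes per row. A sketch of the equivalent call (standard hybris API, though I can't confirm the tool uses exactly this internally):

// Sketch: one query returning two objects per result row.
final FlexibleSearchQuery fsQuery = new FlexibleSearchQuery(
    "select {p.pk}, {pr.pk} from {Product as p}, {PriceRow as pr} "
  + "where {pr.productId} = {p.code}");
fsQuery.setResultClassList(Arrays.asList(ProductModel.class, PriceRowModel.class));
final SearchResult<List<Object>> result = flexibleSearchService.search(fsQuery);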

However, there is an issue with the type resolution from the attributes in the response. For example, for the flexible request above, {PriceRow} is a hybris PriceRow type, and the hybris flexiblesearch engine is not capable of identifying this type without any clues. In the new version of the utility, there is a command line option named resultTypes (-r for short) that is designed to show the types of the resulting set. Compare the output with and without this option:

Without resultTypes:

$ ./hybrisFlexibleSearch.sh ‐q "select {pk}, {PriceRow.pk} from {Product}, 
{PriceRow} where {PriceRow.ProductId}={Product.code}" ‐mr 5
catalogVersion code
electronicsProductCatalog:Online 266899 8796093088799
electronicsProductCatalog:Online 266899 8796093252639
electronicsProductCatalog:Online 65652 8796096201759
electronicsProductCatalog:Online 65652 8796096300063
electronicsProductCatalog:Online 284533 8796098200607

With resultTypes:

$ ./hybrisFlexibleSearch.sh ‐q "select {pk}, {PriceRow.pk} from {Product}, 
{PriceRow} where {PriceRow.ProductId}={Product.code}" ‐mr 5 ‐r Item,PriceRow
catalogVersion code
electronicsProductCatalog:Online 266899 USD:86.86
electronicsProductCatalog:Online 266899 JPY:7400.0
electronicsProductCatalog:Online 65652 USD:358.88
electronicsProductCatalog:Online 65652 JPY:30570.0
electronicsProductCatalog:Online 284533 USD:419.43

By default, the type of the fields in the SELECT part is "String". So resultTypes will help you to convert the PKs from this list into a human-readable form. For the API version of this feature, a new URL parameter resultTypes is introduced.

hybrisTypeSystem
Please refer to the previous post to see the basic capabilities. This utility is designed for requesting the information about the hybris types.

Features
NEW! Supporting collection types
NEW! Search a type of the item by its PK

Supporting collection types
$ ./hybrisTypeSystem.sh ‐t CommentItemRelationcommentsColl
The Element of collection (type): Comment
Code: CommentItemRelationcommentsColl
Extension name: comments
Description: null
XML definition: <collectiontype code="CommentItemRelationcommentsColl" 
elementtype="Comment" autocreate="false" generate="false"/>

Search an item type by PK
Now the tool can also find a type by PK:

$ ./hybrisTypeSystem.sh ‐pk 8796176187393
Product

hybrisImpex
This utility is organized in a similar way as described above.

Features
REST API and console tool with the common configuration
NEW! impex beautifier (reformatter)
NEW! executing impexes from files (HAC-like mode)
NEW! executing impexes from Media (HMC-like mode)

Command-line interface
The command-line app is a console wrapper for the API. See API for the details.

Usage:

./hybrisImpex.sh ‐i file1.impex file2.impex
./hybrisImpex.sh ‐i folder/*
./hybrisImpex.sh ‐c <mediaCode>

For the last example, you need to upload an Impex file to Media storage using the following command:

./hybrisMedia ‐c <mediaCode> -i <impexFile> ‐mt ImpExMedia

Parameters

–? (help): Shows available options. Only for the console version.
beautify (-b): Reformats the impex file. All other commands are ignored. Example: -b
legacy (-l): Legacy mode on. Default value is false. Example: -l
maxThreads (-mt): Maximum number of threads. By default, 16 threads. Example: -mt 16
validationMode (-mode): Validation mode (strict/relaxed). Default value is strict. Example: -mode=strict
codeExec (-ce): Enables code execution. By default it is disabled. Example: -ce
mediaCode (-mc): The code of the media of ImpExMedia type to import. At first you need to upload the impex as a media object with this code.

Examples
Reformatter (beautifier)
./hybrisImpex.sh  ‐b data/products‐classifications_en.impex

Executing impexes
There are two options on how to execute impexes: to load the impex file directly to the hybris Runtime Development tool server and execute it there, or to upload it as a media (using the tools or not) and execute the impex file by the reference to the media.
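The HAC-like mode presumably boils down to hybris' ImportService (the log output shown further below mentions DefaultImportService). A minimal sketch, with the option mapping being my assumption:

// Sketch: importing an ImpEx stream through the hybris ImportService.
final ImportConfig config = new ImportConfig();
config.setScript(new StreamBasedImpExResource(inputStream, "UTF-8"));
config.setLegacyMode(Boolean.FALSE); // the legacy (-l) option
config.setMaxThreads(16);            // the maxThreads (-mt) option
final ImportResult result = importService.importData(config);
System.out.println(result.isSuccessful() ? "Status: SUCCESSFUL" : "Status: FAILED");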

hybrisMedia
It is a new utility for uploading media to the system directly from the command line.

Features
NEW! REST API and console interface to Media resources
NEW! uploading media files by media code

Console
All medias (it is a long list):

./hybrisMedia.sh ‐am

All media formats (to use with the -mf option):

./hybrisMedia.sh ‐amf

Uploading a file, creating an item of Media type in hybris:

./hybrisMedia.sh ‐c <mediaCode> ‐i <file name> ‐mt Media

Show information about the image with the specified Media code:

./hybrisMedia.sh ‐c <mediaCode>

Download the media file with the specified Media code to the current directory:

./hybrisMedia.sh ‐c <mediaCode> ‐d

Uploading an Impex file. It can be processed later with the hybrisImpex utility:

./hybrisMedia.sh ‐c <mediaCode> ‐i  <ImpexFilename> ‐mt ImpExMedia

Download and show the contents of the file (use for text files only!):

./hybrisMedia.sh ‐c <mediaCode> ‐d 

Download and save the contents of the file:

./hybrisMedia.sh ‐c <mediaCode> ‐d ‐t <localFile>
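Server-side, creating a media item and attaching the uploaded content is only a few service calls. A sketch of what the upload endpoint might do (standard ModelService/MediaService API; assumed rather than confirmed):

// Sketch: create a Media item with the given code and attach the file content.
final MediaModel media = modelService.create(MediaModel.class);
media.setCode(mediaCode); // the value of the -c option
media.setCatalogVersion(catalogVersionService.getCatalogVersion(
    "electronicsProductCatalog", "Online"));
modelService.save(media);
mediaService.setStreamForMedia(media, uploadedInputStream); // attach the bytes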

Parameters

–? (help): Shows available options. Only for the console version.
all-medias (-a): The plain list of all media items (codes). Example: -a
all-media-formats (-amf): Displays all media formats. Example: -amf
mediaFormat (-mf): Specifies the media format. Example: -mf 300Wx300H
download (-d): Downloads the media file with the specified media code from hybris (should be used with -c). Example: -c mediacode_01 -d
tofile (-t): Saves the downloaded file. Example: -t file.jpg
mediaType (-mt): Media type of a newly created item. The following media types are available OOTB: BarcodeMedia, CatalogUnawareMedia, CatalogVersionSyncScheduleMedia, ConfigurationMedia, Document, EmailAttachment, Formatter, ImpExMedia, JasperMedia, LDIFMedia, LogFile, Media, ScriptMedia. Example: -mt ImpExMedia
code (-c): Code of the item to retrieve (with -d / download), to create (with -i), or to show the info (with no other options). Example: -c mediacode_01

hybrisConfiguration
This utility is designed for runtime configuration management. Basically, all the capabilities are available also in HAC, the Configuration tab, except the sync feature.

Logging
You are able to see the hybris logs in the tool's output:

$ ./hybrisImpex.sh ‐i data/products.impex
[READING IMPEX] data/products.impex
INFO   | jvm 2    | main    | 2016/08/21 14:06:21.580 | INFO  [hybrisHTTP5] 
[DefaultImportService] Starting import synchronous using cronjob with 
code=00000G7D
INFO   | jvm 2    | main    | 2016/08/21 14:06:21.585 | INFO  [hybrisHTTP5] 
(00000G7D) [ImpExImportJob] Starting multi‐threaded ImpEx cronjob "ImpEx‐
Import" (16 threads)
INFO   | jvm 2    | main    | 2016/08/21 14:06:21.621 | WARN  
[ImpExReaderWorker] [ImpExReader] line 2 at main script: skipped code line 
line 2 at main script:impex.setLocale( Locale.GERMAN ) since bean shell is 
not enabled
INFO   | jvm 2    | main    | 2016/08/21 14:06:26.943 | INFO  [hybrisHTTP5] 
(00000G7D) [Importer] Finished 1 pass in 0d 00h:00m:05s:354ms ‐ processed: 
1954, no lines dumped (last pass 0)
INFO   | jvm 2    | main    | 2016/08/21 14:06:27.050 | INFO  [hybrisHTTP5] 
[DefaultImportService] Import was successful (using cronjob with 
code=00000G7D)
Status: SUCCESSFUL

Parameters

–? (help): Shows available options. Only for the console version.
extension (-e): Extension name. Example: -e trainingstorefront
name (-n): Name of the property to change. Example: -n sso.redirect.url
value (-v): New value of the property. Example: -v value
list (-l): All available configuration properties (not only for the specified extension). Example: -l
check (-c): Compares the property files with the memory state. Example: -c
sync (-s): Copies all new/updated property values from the files to memory. Example: -s

Examples of usage
Setting/creating/changing a configuration property:

./hybrisConfiguration.sh ‐e hybristoolsclient ‐n test1 ‐v value1

Find a value of the specific property:

./hybrisConfiguration.sh ‐e hybristoolsclient ‐list | grep test1

Scan the property files and compare them with the information in memory:

./hybrisConfiguration.sh ‐e hybristoolsclient ‐check

Sync the property files with the memory (one-way sync, only the in-memory information will be changed):

./hybrisConfiguration.sh ‐e hybristoolsclient ‐sync
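On the platform side, these operations map to hybris' ConfigurationService. A minimal sketch of the set/read part (standard API; the sync-with-files logic is the tool's own addition):

// Sketch: change a runtime property and read it back.
// This affects the in-memory configuration only; the property files are not rewritten.
configurationService.getConfiguration().setProperty("test1", "value1");
final String value = configurationService.getConfiguration().getString("test1");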

Video
For the reference, the part I of the demonstration:

Download
The extension is available by request. E-mail: Rauf_Aliev@epam.com. Skype: rauf_aliev.

HYBRIS BEANS COMMAND-LINE TOOL + WEB API
18.08.2016 · by Rauf Aliev · in Other

Overview
This tool is a part of my hybris Runtime Developer tools package. It allows showing one-page, all-in-one information about hybris beans using the command line interface.

Features
REST API and console tool with the common configuration
Shows the information about the specific bean
Changing bean values on the fly (!)

Using
Command-line interface
The command-line app is a console wrapper for the API. See API for the details.

Usage:

Change the value of the bean property to a new value:

./hybrisBeans.sh ‐b bean ‐n property ‐v value

Show all hybris beans:

./hybrisBeans.sh

Parameters

–? (help): Shows available options. Only for the console version.
bean (-b): Shows/updates the bean information. Example: bean=myProductRaoPopulator
propertyname (-n): Name of the bean property to change. Example: propertyname=categoryService
propertyvalue (-v): New value of the bean property. Example: propertyvalue=<defaultCategoryService>, propertyvalue=13
:

Changing any property of any bean

#bash hybrisBeans.sh ‐b myProductRaoPopulator ‐n categoryService ‐v "
<defaultCategoryService>"

You can see the list of bean methods here:

#bash hybrisBeans.sh ‐b indexerService

API
curl "https://electronics.local:9002/tools/beans/bean/myProductRaoPopulator?
propertyName=categoryConverter&propertyValue=<defaultCategoryRaoConverter>" 
‐k 2>/dev/null

Video

Limitations
You can change the value of a bean property only if the bean has a setter for this property. For example, the bean multipartResolver (org.springframework.web.multipart.commons.CommonsMultipartResolver) has a property named maxUploadSize that cannot be changed using my utility, because the class from Spring doesn't have a setter.

The current implementation supports properties of two types: Strings and References. Properties of List or Map types are not supported yet.

Sometimes bean classes use parameters only at the first load. You will be able to change the values of the parameters, but the initial value will be used anyway.
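The setter requirement makes sense if the tool writes properties the way Spring itself does. A sketch with Spring's BeanWrapper (my assumption of the mechanism, not the confirmed implementation):

// Sketch: writing a bean property at runtime; works only if a setter exists.
final Object bean = applicationContext.getBean("myProductRaoPopulator");
final BeanWrapper wrapper = new BeanWrapperImpl(bean);
if (wrapper.isWritableProperty("categoryService")) { // false without a setter
    wrapper.setPropertyValue("categoryService",
        applicationContext.getBean("defaultCategoryService"));
}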

Download
The extension is available by request. E-mail: Rauf_Aliev@epam.com. Skype: rauf_aliev.

HYBRIS FLEXIBLESEARCH COMMAND-LINE TOOL + WEB API
18.08.2016 · by Rauf Aliev · in Other

Overview
This tool is a part of my hybris Runtime Developer tools package. The module consists of two parts: the server side (a REST API) and the client side (a command line utility).

Features
REST API and a console tool to FlexibleSearchService
FlexibleSearch command-line interface
Various output formats, such as TXT, TSV, CSV, XML
Using pipes and filters (grep, awk, perl, etc.)
Cascade reference resolution (no more PKs in the output), handy for testing and troubleshooting
FlexibleSearch beautifier (reformatter)

Using
API
http://localhost:9001/tools/flexiblesearch/execute?<params>

Input parameters are mostly the same as in the console application. Both GET and POST are supported.

Console app
It is a console wrapper for the API. See API for the details.

./hybrisFlexibleSearch <parameters>

or

hybrisFlexibleSearch.bat <parameters>

 ‐query="Select {pk} from {Product}"
 ‐itemtype=".."
 ‐fields="field1,field2"
 ‐ref="Media:code Category:code,name"
 ‐ref‐from‐conf
 ‐catalogName=".."
 ‐catalogVersion=".."
 ‐language="en"
 ‐output‐format="TSV"|"CSV"|"value" 
 ‐user="user"
 ‐verbose

Command line parameters

–? (help): Shows available options. Only for the console version.
query (-q): Flexible Query request. Starts with "select {pk}". You can use the itemtype option instead of query for basic requests. Example: query=Select {pk} from {Product}
itemtype (-i): itemtype=X is a shortcut for "select {pk} from {X}". You can use the itemtype option instead of query. Example: itemtype=Product
ref (-r): Optional. It is designed for customizing the external references resolving process. For example, the Product type has an attribute named supercategories; it is a collection of CategoryModel. If the resolution were disabled, the value of supercategories would look like "(CategoryModel(3256854678324@23), CategoryModel(1464235643253@3))". By default, this option contains the unique keys for the type. However, you can change it to your own logic. Example: -ref "Category:code,name Media:code"
fields (-f): Optional. A comma-separated list of attributes to display. If empty, the unique attributes of the type are used. Example: fields=code,name
catalogName (-cn): Optional. If empty, the value from the project property (flexiblesearch.default.catalog.name) is used. Example: catalogName=electronicsProductCatalog
catalogVersion (-cv): Optional. If empty, the value from the project property (flexiblesearch.default.catalog.version) is used. Example: catalogVersion=Online
language (-l): Optional. If empty, "en" is used by default. Example: language="en"
output-format (-of): Optional. TSV, CSV, BRD, XML, IMPEX, CON. The default value is TSV. Example: output-format=CSV
  TSV: tab-separated values. Good to copy-paste to Excel for further processing or to use for filtering in unix pipes. There is no quoting in TSV (it means that if you have a tab symbol in the value, this symbol will be put into the output stream as is).
  CSV: comma-separated values. Quoting is used for values with the special symbols (", comma or \n).
  BRD is BR-delimited, which means that each field will take a separate line, with an additional empty line between the rows. Important: if you have BRs in the attribute values, they will be removed.
  CON is for printing the output in a Windows/Unix console. It looks like CSV, but without quotes; the columns are aligned and easy to read.
  XML means XML (not implemented yet).
  IMPEX: the output is ready to be imported again using the hybrisImpex utility. Not implemented yet.
user (-u): Optional. Changes the user for the flexible search request. Example: user=keenreviewer13@hybris.com
beautify (-b): Optional. Beautifier (reformatter). This option is used only with the query parameter. The query is not executed, only reformatted. All other options are ignored, including output-format. Example: -b
pk: Identifies the type behind the pk and shows the results. You possibly need to use this option along with fields. It is a useful feature if you need to request the information about the item with the specified PK, but you are not sure / you don't know what type is behind this PK. Example: -pk 8796176744449
verbose (-v): Turns the debug mode on. The debug information is mixed with the output (not implemented yet).
debug (-d): Debug is on; the hybris log will contain the debug information.

Examples of usage

API
$ curl "https://localhost:9002/tools/flexiblesearch/execute?query=select%20\
{pk\}%20from%20\{Language\}&fields=isocode,name&maxResults=3&debug=true" ‐k 
2>/dev/null
isocode name
en English
de German
es Spanish

$ curl "https://localhost:9002/tools/flexiblesearch/execute?query=select%20\
{pk\}%20from%20\
{Product\}&fields=code,supercategories&maxResults=3&debug=true" ‐k 
2>/dev/null
code supercategories
266899 
(electronicsProductCatalog:Online:827,electronicsProductCatalog:Online:brand
_5)
65652 
(electronicsProductCatalog:Online:588,electronicsProductCatalog:Online:brand
_10)
284533 
(electronicsProductCatalog:Online:1421,electronicsProductCatalog:Online:bran
d_10)

Command line
Export to CSV the first 5 results of Product (select {pk} from {Product}):

$ bash hybrisFlexibleSearch.sh ‐itemtype Product ‐of CSV ‐mr 5
"catalogVersion", "code"
"electronicsProductCatalog:Online", "266899"
"electronicsProductCatalog:Online", "65652"
"electronicsProductCatalog:Online", "284533"
"electronicsProductCatalog:Online", "107701"
"electronicsProductCatalog:Online", "266685"

The same request in TSV (tab-separated values):

$ bash hybrisFlexibleSearch.sh ‐itemtype Product ‐of TSV ‐mr 5
catalogVersion code
electronicsProductCatalog:Online→266899
electronicsProductCatalog:Online→65652
electronicsProductCatalog:Online→284533
electronicsProductCatalog:Online→107701
electronicsProductCatalog:Online→266685

The same request; the BRD output format is used:

$ bash hybrisFlexibleSearch.sh ‐itemtype Product ‐of BRD ‐mr 5
catalogVersion: electronicsProductCatalog:Online
code: 266899
catalogVersion: electronicsProductCatalog:Online
code: 65652
catalogVersion: electronicsProductCatalog:Online
code: 284533
catalogVersion: electronicsProductCatalog:Online
code: 107701
catalogVersion: electronicsProductCatalog:Online
code: 266685

The same request using a Flexible Search query (and in CSV):

$ bash hybrisFlexibleSearch.sh ‐query "Select {pk} from {Product}" ‐of CSV ‐
mr 5
"catalogVersion", "code"
"electronicsProductCatalog:Online", "266899"
"electronicsProductCatalog:Online", "65652"
"electronicsProductCatalog:Online", "284533"
"electronicsProductCatalog:Online", "107701"
"electronicsProductCatalog:Online", "266685"

Requesting the names of supercategories. The ref option is used in this example:

$ bash hybrisFlexibleSearch.sh ‐query "Select {pk} from {Product}" ‐of CSV ‐
mr 5 ‐f name,supercategories ‐ref "Category:name"
"name", "supercategories"
"Adapter AC Infolithium f Cybershot", "(Power Adapters & Inverters,Sony)"
"EF 2x II extender", "(Camera Lenses,Canon)"
"Binocular IS 10X30", "(Binoculars,Canon)"
"DC Car Battery Adapter", "(Power Adapters & Inverters,Sony)"
"Battery Video Light", "(Camera Flashes,Sony)"

$ bash hybrisFlexibleSearch.sh ‐query "Select {pk} from {Product}" ‐of CSV ‐
mr 5 ‐f name,picture
"name", "picture"
"Adapter AC Infolithium f Cybershot", 
"electronicsProductCatalog:Online:/300Wx300H/266899‐5310.jpg"
"EF 2x II extender", "<NULL> "
"Binocular IS 10X30", "electronicsProductCatalog:Online:/300Wx300H/284533‐
9485.jpg"
"DC Car Battery Adapter", 
"electronicsProductCatalog:Online:/300Wx300H/107701‐5509.jpg"

"Battery Video Light", "electronicsProductCatalog:Online:/300Wx300H/266685‐
1385.jpg"

You can change the catalog:

$ bash hybrisFlexibleSearch.sh ‐query "Select {pk} from {Product}" ‐of CSV ‐
mr 5 ‐catalog apparelProductCatalog
"catalogVersion", "code"
"apparelProductCatalog:Online", "29532"
"apparelProductCatalog:Online", "300020294"
"apparelProductCatalog:Online", "300047513"
"apparelProductCatalog:Online", "45572"
"apparelProductCatalog:Online", "300040462"

Output format "CON" displays the result in a console-readable format. It is similar to TSV, but the text is aligned into columns.

FlexibleSearch beautifier

To Do list for further versions
File import/export
Configuration import/export. The "-load-config" parameter will allow replacing some frequently used command line parameters with a reference to a configuration file.
XML output format
IMPEX output format
Flexible Query parameters

Video

Download
The extension is available by request. E-mail: Rauf_Aliev@epam.com. Skype: rauf_aliev.

HYBRIS TYPE SYSTEM COMMAND-LINE TOOL + WEB API
18.08.2016 · by Rauf Aliev · in Other

Overview
This tool is a part of my hybris Runtime Developer tools package. It allows showing one-page, all-in-one information about hybris types or the specific type using the command line. It is much faster and more flexible than using the hybris administrative console (HAC).

Features
REST API and a console tool to the type system
Shows all available types
Shows the detailed information about the type
Shows the detailed information about the type attribute

Using
Command-line interface
Parameters

–? (help): Shows available options. Only for the console version.
type (-t): Shows the list of attributes of <type>.
attribute (-a): Shows the detailed information about the specific attribute.
pk: Identifies the type behind the pk and shows the results. It is a useful feature if you need to request the information about the item with the specified PK, but you are not sure / you don't know what type is behind this PK. Example: -pk 8796176744449

Examples: requesting the information about the available attributes of the specific type (in the example below it is "Product"), and requesting the information about the specific attribute of the type (in the example below it is "Product").

Web API
$ curl "https://localhost:9002/tools/typesystem/type/Language/attributes" ‐k 
2>/dev/null
Type: Language
Supertype:C2LItem
Subtypes:
active→java.lang.Boolean→null→mandatory!
comments→CommentItemRelationcommentsColl→null→optional
...

$ curl "https://localhost:9002/tools/typesystem/type/Product/attributes" ‐k 
2>/dev/null | head ‐n 10
Type: Product
Supertype:GenericItem
Subtypes: ApparelProduct, VariantProduct
allDocuments→ItemDocrRelationallDocumentsColl→null→optional

assignedCockpitItemTemplates→Item2CockpitItemTemplateRelationassignedCockpit
ItemTemplatesColl→null→optional
comments→CommentItemRelationcommentsColl→null→optional
creationtime→java.util.Date→null→optional
itemtype→ComposedType→null→optional
modifiedtime→java.util.Date→null→optional
owner→Item→null→optional

$ curl "https://localhost:9002/tools/typesystem/types?extension=hmc" ‐k 
2>/dev/null
hmc→HMCHistoryEntry
hmc→HistoryActionType
hmc→WizardBusinessItem
hmc→SampleWizard
hmc→ImportExportUserRightsWizard
hmc→ImportUserRightsWizard
hmc→ExportUserRightsWizard

 

In a similar way, you can request all the types from all extensions if you omit the extension param.

Video

Download
The extension is available by request. E-mail: Rauf_Aliev@epam.com. Skype: rauf_aliev.

HYBRIS MARKETPLACE POC: 2,000,000 PRODUCTS, 15,000 CATEGORIES, 6000 SEARCH FACETS
17.07.2016 · by Rauf Aliev · in Product Management

Introduction
Online marketplaces frequently contend with needing to deal with a huge number of products, categories, and facets (product attributes used to filter search and category pages). Hybris OOTB supports large product sets, but many hybris components are not optimized for these volumes. This PoC demonstrates one of the solutions that addresses this issue.

Complexity
The Hybris out-of-the-box architecture is not suitable for huge product and facet sets. A major bottleneck is in fetching data for indexing. The more products and facets you have, the slower this process will be. The merchants of typical online marketplaces generally expect that the website will reflect the changes shortly after the new updates are uploaded, which may not be the case in the event of a high quantity of products and facets. There are several solutions to make the indexing faster, but for the really huge catalogs, like the scenario we're discussing in this article, a slight improvement may not be enough. For such cases, the designs of many out-of-the-box components should be rethought to meet high-load and big-data requirements.

Solution
In order to validate my design with real data and massive volumes, I used the freely available BestBuy database to create this proof of concept: https://bestbuyapis.github.io/api-documentation/#products-bulk-download. The BestBuy XML has about 2,000,000 products, 15,000 categories and about 6000 product attributes (facets).

Marketplaces with similarly large amounts of products generally use distributed product management rather than a centralized one. There are different types of products, and it is common to use specialized management solutions for different product types. For this reason, I assume that the products should be loaded into hybris from the external source where the products are managed.

Demo of PoC
Video: Marketplace: 2,000,000 products and 6000 facets in SAP hybris, from Rauf Aliev (01:55).

As I mentioned above, I assumed that the product data are provided as CSV files. In my solution, there is no such process as indexing, because the data are supposed to be loaded directly into SOLR. However, from the storefront's viewpoint there is no difference between indexing and the direct upload into SOLR. It's important that this process is very fast: ~1 second per 1000 products. For 2,000,000 products the full update takes 25 minutes (on my laptop). If you need to refresh the whole marketplace product set from nothing to the full set, you need to wait no more than 25 min per 2 MM items (or faster if you use better hardware).

Custom marketplace indexer
The hybris OOTB indexer should be replaced with the custom indexer. In my PoC (2 MM products), the preparation step takes ~20 sec per 5000 products, and the uploading step takes ~4 sec per 5000 products.

Example of BestBuy XML:

<product>
 <sku>9999119</sku>
 <productId>1219460752591</productId>
 <name>Amazon ‐ Fire TV Stick ‐ Black</name>
... 
 <longDescription>Amazon Fire TV Stick connects to your TV&apos;s HDMI port. 
Just grab and go to enjoy Netflix, Prime Instant Video, Hulu Plus, 
YouTube.com, music, and much more.</longDescription>
 <longDescriptionHtml>Amazon Fire TV Stick connects to your TV&apos;s HDMI 
port. Just grab and go to enjoy Netflix, Prime Instant Video, Hulu Plus, 
YouTube.com, music, and much more.&lt;br&gt;&lt;br&gt;&lt;a 
href=&quot;/site/home‐solutions/streaming‐media‐players‐buying‐
guide/pcmcat333300050010.c?
id=pcmcat333300050010&amp;type=category&lt;br&gt;&lt;br&gt;&quot; 
onclick=&quot;return popNew(this,960,800);&quot; title=&quot;Streaming media 
players&quot; target=&quot;_blank&quot; 
name=&quot;&amp;lid=PDP_BuyingGuide_StreamingMEdiaPlayer_123166&quot;&gt;&lt
;img 
src=&quot;http://images.bestbuy.com/BestBuy_US/en_US/images/abn/2014/global/
buyingguides/streaming_media/entry_point/PDP_StreamingMedia.png&quot; 
width=&quot;418 px&quot; height=&quot;90 px&quot; alt=&quot;Streaming media 
players&quot; /&gt;&lt;/a&gt;&lt;br&gt;&lt;a href=&quot;/site/home‐
promotions/tv‐alternatives‐education/pcmcat331500050009.c?
id=pcmcat331500050009&quot; onclick=&quot;return popNew(this,960,800)&quot; 
title=&quot;Cable and Satellite Alternatives&quot; target=&quot;_blank&quot; 
name=&quot;&amp;lid=PDP_TV_Alternatives_122113&quot;&gt;&lt;img 
src=&quot;http://images.bestbuy.com/BestBuy_US/en_US/images/abn/2014/hom/pr/
blue‐ray‐pdp‐banner‐402x88.jpg&quot; alt=&quot;Cable and Satellite 
Alternatives&quot; /&gt;&lt;/a&gt;</longDescriptionHtml>
 <details>
      <detail>
        <name>Compatible Wireless Standard(s)</name>
        <value>Wireless A|Wireless B|Wireless G|Wireless N|Wireless N Dual 
Band</value>
      </detail>
 ...
    </details>

 ...
 </product>

(the full XML for one product is here)

The SOLR document I create from this XML:

{
 "indexOperationId_long":36449,
 "id":"BestBuy/Online/9999119",
 "catalogId":"BestBuy",
 "catalogVersion":"Online",
 "price_usd_string":"39.99",
 "priceValue_usd_double":39.99,
 "category_string_mv":["cat00000"],
 "img‐
65Wx65H_string":"http://img.bbystatic.com/BestBuy_US/images/products/9999/99
99119_s.gif",
 "img‐
515Wx515H_string":"http://img.bbystatic.com/BestBuy_US/images/products/9999/
9999119_sb.jpg",
 "allCategories_string_mv":["cat00000"],
 "inStockFlag_boolean":true,
 "img‐
30Wx30H_string":"http://img.bbystatic.com/BestBuy_US/images/products/9999/99
99119_s.gif",
 "code_string":"9999119",
 "name_text_en":"Amazon ‐ Fire TV Stick ‐ Black",
 "name_sortable_en_sortabletext":"Amazon ‐ Fire TV Stick ‐ Black",
 "img‐
96Wx96H_string":"http://img.bbystatic.com/BestBuy_US/images/products/9999/99
99119_s.gif",
 "manufacturerAID_string":"",
 "manufacturerName_text":"",
 "description_text_en":"Amazon Fire TV Stick connects to your TV`s HDMI 
port. Just grab and go to enjoy Netflix, Prime Instant Video, Hulu Plus, 
YouTube.com, music, and much more.",
 "ean_string":"9999119",
 "summary_text_en":"Streams 1080p content; dual‐band, dual‐antenna Wi‐Fi 
(MIMO); supports 802.11a/b/g/n Wi‐Fi networks; Bluetooth 3.0 with support 
for HID, HFP and HPP profiles; 1GB memory; 8GB internal storage",
 "itemtype_string":"Product",
 "stockLevelStatus_string":"inStock",
 "categoryName_text_en_mv":["Best Buy"],
 "img‐
300Wx300H_string":"http://img.bbystatic.com/BestBuy_US/images/products/9999/
9999119_sb.jpg",
 "feature‐Compatible_Wireless_Standard(s)_string" : "Wireless A|Wireless 
B|Wireless G|Wireless N|Wireless N Dual Band",
"feature‐Interface(s)_string" : "HDMI|Micro USB",
"feature‐Smart_Capable_string" : "Yes",

"feature‐Color_Category_string" : "Black",
"feature‐Maximum_Supported_Resolution_string" : "1080p",
"feature‐Hard_Drive_string" : "Yes",
"feature‐Computer_Connectivity_string" : "Not Applicable",
"feature‐Instant_Content_Supported_string" : "Amazon Video|CNN|ESPN|HBO 
GO|HBO NOW|Hulu Plus|Netflix|Pandora|YouTube",
"feature‐Smartphone_Compatible_string" : "Yes",
"feature‐Instant_Streaming_string" : "Yes",
"feature‐Playable_Formats_string" : "AAC‐
LC|AC3|BMP|FLAC|GIF|H.264|JPEG|MP3|PNG|Vorbis|WAV",
"feature‐Hard_Drive_Size_string" : "8 gigabytes",
"feature‐Remote_Control_Included_string" : "Yes",
"feature‐pricematch_string" : "yes",
 "url_en_string":""
}

Custom product page
My custom product page works with SOLR rather than the database.

SolrClient solrClient =  new
LBHttpSolrClient( "http://localhost:28983/solr/master_electronics_Product" );
SolrQuery solrSearchQuery =  new SolrQuery();
solrSearchQuery.set( "q" , "code_string:" +productCode);
QueryResponse response = solrClient.query(solrSearchQuery);

In my PoC, every HTTP request makes a request to the SOLR server. I would add caching to it to make it faster.

Facets
To add a new filter in the facet area, you need to create a facet item in the IndexedItem object.

E-Commerce
Solr->DB Sync. The system creates a Product item in hybris once this product is added to the cart. This approach has the advantage of avoiding major changes in the Cart and Checkout functionality. Because the total number of products in carts is much smaller than the total number of products in the database, this approach will not affect the performance.

HIGHLIGHTING IN HYBRIS SEARCH
11.07.2016 · by Rauf Aliev · in SOLR

Introduction
Many applications like to highlight snippets of text from each search result so the user can see why the document matched the query. Hybris out-of-the-box isn't able to highlight words in the search results. Hybris also doesn't use a free text search for product attributes from the classification catalog. This post explains how to make these attributes searchable and how to highlight the search results.

Complexity
Hybris does not support highlighting. However, Apache SOLR, a part of hybris, does. So the solution, and the complexity, is in the integration.

Solution
In my solution:
(1) SOLR needs a designated field for highlighting. This field should contain the texts from all textual attributes of the product.
(2) Additional parameters should be added to the SOLR query.
(3) The highlighted fragments go separately from the products in the response.

Technical details
(1) I added the custom field Product.allFields to the Product model. Each time the product attributes are changed, the PrepareInterceptor rebuilds this field. For example, for the product #1934406 from the demo shop, the field is populated with the following data (CSV):

HDR-CX105E  Red, – 1920x1080i HD video recording using AVCHD format<br/>-Records up to 3
hours HD video on 8 GB internal flash memory plus HYBRID recording on optional Memory
Stick™<br/>-Exmor™ CMOS sensor for brilliant picture quality with high sensitivity and low
noise<, 4905524596595, Sony, 2.36 Megapixel CCD (1/5″”), 8 GB Flash Memory, Memory Stick,
MPEG2 (1920 × 1080), JPEG (max.2304 × 1728 Pixel), 10x Zoom, 120x Digitaler Zoom, SteadyShot
image stabilization, 2.7” Wide LCD, PictBridge, Memory Stick (Duo/Pro Duo) Slot, USB 2.0 (Out),
Video (Out), Audio (Out), S-Video (Out), Component Video (out), HDMI (out),Li-Ion, Camcorder
tape type: Flash memory., Camera shutter speed: 1/2 – 1/1000s, Minimum illumination: 5.0lx,
Source data-sheet: ICEcat.biz., Power consumption: 3.5W, Recording speed: HD, FH (1920x1080i,

16Mbps) / HQ (9Mbps) / SP (7Mbps) / LP (5Mbps) , SD., Internal memory: 8000.0MB, Display:
LCD., Display diagonal: 2.7″, Display resolution: 211200.0pixels, Digital zoom: 120.0x, Aperture
range: 1.8., 35 mm camera lens equivalent: 42.0mm, Focal length: 3.2mm, Filter size: 30.0mm,
Megapixel: 2.36MP, Optical zoom: 10.0x, Lens system: Carl Zeiss Vario-Sonnar T*., USB 2.0 ports
quantity: 1.0., Video out: 1.0., Audio output: 1.0., HDMI ports quantity: 1.0., S-Video out: 1.0.,
Depth: 107.0mm, Weight: 280.0g, Height: 60.0mm, Width: 55.0mm, Battery type: NP-FH60.,
Battery life: 1.5h, Audio system: Dolby Digital 2ch., Colour of product: Red., Compatible memory
cards: Memory Stick., Optical sensor resolution: 2360000.0pixels, Effective sensor resolution:
1990000.0pixels, Optical sensor size: 1/5.0″, Sensor type: Exmor CMOS Sensor., Video capture
resolution: 1920 × 1080pixels, Still image capture resolutions: 2304 × 1728%, Bundled software:
Picture Motion Browser.,
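A minimal sketch of such an interceptor (illustrative only; the real one concatenates all textual and classification attributes, and allFields is the custom attribute mentioned above):

// Sketch: rebuild the aggregated allFields attribute on every save.
public class AllFieldsPrepareInterceptor implements PrepareInterceptor<ProductModel> {
    @Override
    public void onPrepare(final ProductModel product, final InterceptorContext ctx)
            throws InterceptorException {
        final StringBuilder all = new StringBuilder();
        append(all, product.getName());
        append(all, product.getDescription());
        // ... plus manufacturer data, classification features, etc.
        product.setAllFields(all.toString());
    }

    private void append(final StringBuilder sb, final String value) {
        if (value != null) {
            sb.append(value).append(", ");
        }
    }
}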

(2) I implemented the custom FacetSearchStrategy (that extends DefaultFacetSearchStrategy). In the middle of the search method, before solrClient.query(solrQuery), I injected the configuration for the SOLR highlighter.

solrQuery.setHighlight(true);
solrQuery.addHighlightField("allFields_text_en");
solrQuery.setHighlightSimplePre("<em>");
solrQuery.setHighlightSimplePost("</em>");
solrQuery.setHighlightFragsize(100);
solrQuery.setHighlightSnippets(3);
solrQuery.setHighlightRequireFieldMatch(true);

(3) I parse the response from SOLR. With the configuration above, the SOLR response will have an additional block for highlighting:

<lst name="highlighting">
<lst name="electronicsProductCatalog/Online/816379">
<arr name="allFields_text_en">
<str>
SAL‐1680Z ‐ DT 16‐80mm F3.5‐4.5 ZA <em>Carl</em> <em>Zeiss</em> Vario‐Sonnar 
T* zoom lens, Minimum aperture: 22 ‐ 29., Viewing
</str>
</arr>
</lst>
<lst name="electronicsProductCatalog/Online/23231">
<arr name="allFields_text_en">
<str>
and get better results. Armed with high‐quality <em>Carl</em> <em>Zeiss</em> 
optics, ISO sensitivities of 100/200/400/800
</str>
<str>
5.2Mpix 8MB <em>Carl</em> <em>Zeiss</em> Lens Zoom ring High speed USB 
10xdig.Zoom PEG, Display: LCD TFT 1.8"" color".,

</str>
</arr>
</lst>
...
</lst>

I extended SearchResponseResultsPopulator to parse this information and replace the product summary with the snippet if it exists. Otherwise, the standard summary data is displayed.
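For reference, reading the snippets out of the SolrJ response is a few lines per document. A sketch (standard QueryResponse.getHighlighting() API; the document id and field name follow the example above):

// Sketch: pick the highlight snippets for one document from the response.
final Map<String, Map<String, List<String>>> highlighting = response.getHighlighting();
final Map<String, List<String>> forDoc =
    highlighting.get("electronicsProductCatalog/Online/816379"); // document id
final List<String> snippets = (forDoc == null)
    ? Collections.<String>emptyList()
    : forDoc.getOrDefault("allFields_text_en", Collections.<String>emptyList());
// if snippets is non-empty, show the first one instead of the product summary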

HYBRIS + SOLRCLOUD: SHARDING AND DISTRIBUTED INDEXING
29.06.2016 · by Rauf Aliev · in SOLR

Introduction
The traditional Hybris Solr cluster has a number of drawbacks, including a lack of failover and scaling capabilities. In this post, I explore SolrCloud as one of the possible options for resolving these issues.

To illustrate, the traditional Hybris Solr cluster looks like this:

This traditional architecture has a number of drawbacks, as outlined above, including:

A lack of failover capability
Having the cluster configuration stored in the Hybris database (supportability)
An inability to set up autoscaling if the need arises

To mitigate these issues, there are a couple of different routes you can take.

Option 1:
Implementing a load balancer to manage search requests.

The drawback to this option is that the indexer is still not scalable (non-ideal in terms of scaling), which was one of the main issues with the traditional architecture.

Option 2:
The second option (which I'll be focusing on for the remainder of this posting) is integrating Hybris with SolrCloud. This option resolves the scalability issue, as well as provides a better failover option and gets the cluster configuration out of the Hybris database.

The target architecture should look like this:

To fully understand the SolrCloud solution, it's necessary to first define the terminology that is used.

Collection: Search index distributed across multiple nodes; each collection has a name, shard count, and replication factor. In my example, shard count = 3 and replication factor = 3. The collections are "master_electronics_CloudIndex_collection" and "master_apparel-uk_CloudIndex_collection". A collection is a set of shards.

Shard: Logical slice of a collection; a shard is a set of cores. Each shard has a name, hash range, leader, and replication factor. Documents are assigned to one (and only one) shard per collection using a document routing strategy. The document routing strategy is based on the hash value of the ID field by default.

Replication Factor: The number of copies of a document in a collection. For example, a replication factor of 3 for a collection with 100M documents would equal 300M documents total across all replicas.

 

Replica: A Solr index that hosts a copy of a shard in a collection. Behind the scenes, each replica is implemented as a Solr core. For example, a collection with 20 shards and a replication factor of 2 in a 10-node cluster will result in 4 Solr cores per node (20 × 2 = 40 cores spread across 10 nodes).

Leader: A replica in a shard that assumes special duties needed to support distributed indexing in Solr. Each shard has one (and only one) leader at any time, and leaders are elected automatically using ZooKeeper. In general, it doesn't matter which replica is elected as the leader.

 

 

 

Overseer: The overseer is the main node for the whole cluster, responsible for processing actions involving the entire cluster. If the overseer goes down, a new node will be elected to take its place.

 

 

I've created a diagram below that shows how you can write to any node in the cluster, and it will redirect the query to the Leader node. Then, the leader node will redirect the query to the storage. Ultimately, the client shouldn't be aware of leaders and shards; they simply add new data to the cloud.

The main benefits of using SolrCloud as storage for Hybris include:

Distributed indexing. In the traditional architecture, Hybris can only work with one indexer, which ends up becoming a bottleneck.

Automatic fail-over. In the traditional architecture, the indexer is a weak point in terms of durability.

Sharding. Horizontal scaling for indexing parallelization. For example, if you have 1,000,000 products and ten shard servers, each server will manage ~1/10 of 1 mln. You will be able to scale the system by adding shard servers.

Replication. For failover correction (high availability) and load balancing. Unlike traditional SOLR replication, SolrCloud replication is not master-slave.

Leader election. If the leader goes down, a new node will be elected to take its place. As a result, we have no SPOF.

Overseer election. If the overseer goes down, a new node will be elected to take its place. As a result, we have no SPOF.

Centralized configuration (Zookeeper). This feature can also be added to the traditional architecture.

Complexity
Out-of-the-box, Hybris doesn't support SolrCloud; it works with cores. Hybris knows nothing about the SolrCloud collections.

Solution
In the SolrCloud model, Hybris will need to work with the collections instead.

Technical details
In summary, there are two possible solutions to improve failover capability, supportability, and autoscaling over the traditional Hybris Solr cluster:

to replace Hybris' built-in indexing strategy, or
to put an adapter between Hybris and SolrCloud.

I chose the second option, as it overcomes more of the limitations of the traditional structure. Some of my notes:

In my architecture, there is a load balancer (nginx) that plays the role of an adaptor and transforms incoming hybris-generated requests into the SolrCloud format. For simplicity, my load balancer works with only one host in the cloud; basically, it replaces "master_electronics_CloudIndex" with "master_electronics_CloudIndex_collection_shard1_replica1". It is a straightforward way to make it work, and for a real project we will need to reconsider it. However, it doesn't mean that the node with shard1_replica1 will be loaded the most. It is a cloud, and the request may be redirected to another node based on Hash(id) + Leader (see above).

The Hybris configuration was moved to Zookeeper. Solr uses Zookeeper as a system of record for the cluster state, for central config, and for leader election.

The Hybris Solr plugin was moved to the Solr instances in the cloud.

There is a fake core named "master_electronics_CloudIndex". Unfortunately, Hybris checks the cores using checkCore() and fails with SolrCloud; this functionality is critical in Hybris and difficult to turn off. It is much easier to create an empty core with the name Hybris expects, and these checks will be passed.

The suffix in the core/collection name comes from the itemType indexName. By default it is empty, and Hybris uses "Product" for an empty indexName (e.g. the core/collection would be named master_electronics_Product if indexName is empty).

The TWO-PHASE indexing mode is not working yet, so it is in the TO DO. I was using the DIRECT mode.
SOLR-BASED DYNAMIC AVAILABILITY GROUPS. POC: 500K AVAILABILITY GROUPS
26.06.2016 · by Rauf Aliev · in SOLR

Situation
I am still working on overcoming hybris catalogs' limitations. In one of the previous blog posts, I talked about personalized prices for customer groups. Today's topic is about personalized product availability.

It is clear that in most cases the number of availability groups is not very high. However, I used an extreme case: 500,000 availability groups for one e-shop, with one unique customer per group. Having solved this issue, this approach could easily be scaled down, recognizing the possible bottlenecks and limitations.

 

In my example below, the left customer (#745) belongs to availability group #745, and the right customer (#11111) belongs to availability group #11111.

                           AVAILABILITY GROUP #745   AVAILABILITY GROUP #11111
EF 2X II EXTENDER          AVAILABLE                 NOT AVAILABLE
RECHARGEABLE BATTERY PACK  AVAILABLE                 NOT AVAILABLE
FLAGSHIP TRIPOD            NOT AVAILABLE             AVAILABLE
HIGH QUALITY TRIPOD        AVAILABLE                 AVAILABLE

The following behavior is expected (see the screenshot below). The left side is a screenshot from the device where customer #745 is logged in; the right side is for customer #11111.

Data and models

Complexity
In hybris, availability information is stored in the database and the SOLR index. For category pages and search results, hybris uses SOLR. For product pages, it uses the information from the database. Indexing is a slow process for large, comprehensive catalogs, so it is common that the information in these sources is different. However, the product information (such as title, product attributes, description or product images) is not changed frequently, while such product data as prices and stock data are very dynamic.

However, the indexing logic is arranged in hybris so that (almost) all the information is retrieved from the database for indexing purposes. If your stock is very dynamic, you need to launch a new indexing process just after the previous one is complete. At times the indexing can take hours. This can mean that information about product availability won't be relevant for a large number of the products for a significant amount of time.

One solution is to change the availability information directly, without touching the other fields. It is a good approach, but I'm not satisfied with the performance.

Solution

 
There is a separate SOLR core, AvailableOrNot, to handle the availability groups. The configuration of this core is rather basic. It is a simple flat dataset (customercode, productcode, stock):

customercode,productcode,stock
customer1,107701,true
customer1,479956,true
customer1,592506,true
customer1,824259,true

It is very fast to update with information from the warehouse management system or the ERP system. A full update (50M records) takes 187 seconds.

The database is used only to handle the availability groups. Stock information is stored in SOLR; hybris stock data is not used anymore. For compatibility, you might sync the SOLR data with the hybris data when needed, and only for the items affected.

Technical details
SOLR
New core: personalstock. Configuration: personalstock-schema.xml, solrconfig.xml.

Uploading data:

time ‐p curl "http://localhost:8983/solr/personalstock/update/csv?
stream.file=/hybris/solr‐
prices/stock.csv&stream.contentType=text/plain;charset=utf‐8"

See above for the stock.csv structure.

Custom SOLRQueryConvertor
To use this additional core, you need to slightly change the requests for SOLR from hybris. There is a query parser plugin named JOIN that can be leveraged to use the data from our new SOLR core.

public class DSSOLRQueryConvertor extends DefaultSolrQueryConverter
        implements SolrQueryConverter, BeanFactoryAware {

    @Resource
    private UserService userService;

    @Override
    public SolrQuery convertSolrQuery(SearchQuery searchQuery) throws FacetSearchException {
        SolrQuery solrQuery = super.convertSolrQuery(searchQuery);
        String customerAvailabilityGroup = getAvailabilityGroupOfTheCurrentCustomer();
        if (!customerAvailabilityGroup.equals("")) {
            // Join against the personalstock core: keep only the products that are
            // in stock for the current customer's availability group
            String customerQuery = "customercode:" + customerAvailabilityGroup;
            solrQuery.add("fq",
                "{!join from=productcode to=code_string fromIndex=personalstock}" + customerQuery);
        }
        return solrQuery;
    }

    private String getAvailabilityGroupOfTheCurrentCustomer() {
        AvailabilityGroupModel group = userService.getCurrentUser().getAvailabilityGroup();
        return group.getCode();
    }
}
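For illustration, with this converter in place the final SOLR request gets an extra filter query along the following lines (the group code group42 is a made-up example):

fq={!join from=productcode to=code_string fromIndex=personalstock}customercode:group42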

90M UNIQUE PRICES FOR 500K USER GROUPS
22.06.2016 · by Rauf Aliev · in SOLR

Situation

Today's challenge is about comprehensive pricing: 500,000 customers have unique prices for 180 products. In total, 90,000,000 priced items are in the system. Having logged in, the customers should see their personal price. This is an extreme case of customer group prices.

Complexity

For a small number of customer groups, the solution seems trivial. Indexed products are stored in SOLR as documents. Each time hybris indexes the product, all the prices for all the groups will be retrieved and stored in the index, so each document will have records like "price_customergroupN_: 1,30". Alternatively, you can use a mapping structure: "price: {customer_group_N_ => 1,30, ...}". The storefront uses this index for search results and category pages, so all you need to do is to get the right data from the document.

The obvious weakness of this approach is that it is designed for small sets of customer price groups. For larger sets, especially for huge sets, you need to reconsider the design. Large SOLR documents will significantly slow down the search and indexing.

Solution

Actually, a large set of fields in SOLR is not a serious drawback: a SOLR query has a parameter "fl" to specify a list of the fields needed, to restrict the set of fields for data retrieval. It is possible to extend hybris accordingly, but I decided to go the other way. This improvement is interesting; the technical details are below. But now let's start with the demonstration of 90M price items and 500K customers.

Video: "90 000 000 personalized prices for 500 000 products. SAP Hybris. SOLR." from Rauf Aliev (04:11)

Technical details

Hybris:

New priceRequestService.injectPrices
Custom PriceFactory: calls priceRequestService.injectPrices(prices, customerGroupId)
Custom SolrProductSearchFacade

// Query the separate "personalprices" core for the prices of the listed
// products as seen by the current customer
SolrClient solrClient = new LBHttpSolrClient("http://localhost:8983/solr/personalprices");
SolrQuery solrSearchQuery = new SolrQuery();
// fall back to the shared "<ALL>" price records when no customer is logged in
if (customerUid.equals("")) { customerUid = "<ALL>"; }
solrSearchQuery.set("q", "productcode:(" + listOfcodes + ") AND (customercode:" + customerUid + ")");
solrSearchQuery.set("rows", "30");
QueryResponse response = solrClient.query(solrSearchQuery);
// collect productCode -> price pairs returned by the core
Map<String, String> productPrices = new HashMap<String, String>();
for (SolrDocument solrDocument : response.getResults()) {
    String productCode = solrDocument.getFieldValue("productcode").toString();
    String price = solrDocument.getFieldValue("price").toString();
    productPrices.put(productCode, price);
}
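The custom PriceFactory itself is not shown in the post; a minimal sketch of the contract it could call, based on the injectPrices(prices, customerGroupId) signature mentioned above (the interface body is my assumption):

// Hypothetical contract behind priceRequestService.injectPrices(prices, customerGroupId):
// swap the catalog prices for the SOLR-fetched personal ones before rendering
public interface PriceRequestService {
    // productPrices: productCode -> price string retrieved from the personalprices core
    void injectPrices(Map<String, String> productPrices, String customerGroupId);
}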

SOLR price request

Additional SOLR core. Simple structure (productCode, customerGroup, currencyCode, priceValue).

Super fast update. A full price update takes 478 seconds on my laptop, for all customers, for all products. The price update could be performed using the CSV import request:

http://localhost:8983/solr/personalprices/update/csv?
stream.file=/hybris/solr-
prices/testdata.csv&stream.contentType=text/plain;charset=utf-8
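A made-up sample of testdata.csv rows following the four-field structure above (the values are illustrative only):

productcode,customergroup,currencycode,pricevalue
107701,customer1,USD,1020.00
479956,customer1,USD,18.50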

Database

No additional database requests are performed to retrieve the prices to show them on the category and search pages. Hybris PriceRows were not used in this design at all.

APACHE SOLR 6 WITH SAP HYBRIS 6
04.06.2016 · by Rauf Aliev · in SOLR

Situation

As of today, SAP hybris officially supports Apache SOLR 5.3. This version of the search engine was released in August 2015. SOLR 6, released in April 2016, has a number of new features (in order of importance for hybris projects):

Cross-DC replication:
Accommodate 2 or more data centers
Active/passive disaster recovery
Support limited bandwidth links
Eventually consistent passive cluster
Scalable: no SPoF and/or bottleneck
Peer cluster can have different replication factors
Asynchronous updates, no penalty for indexing operations and burst indexing
Push operations for low latency replication
Low overhead, uses existing transaction logs
Leader-to-leader communication ensures an update is sent only once to the peer cluster

Graph traversal queries (for example, fetch all upline categories from the current one)

Parallel SQL execution in SOLR Cloud (you can work with SOLR documents using SQL queries / JDBC)

and other features.

   

[!] It is absolutely clear that SOLR 6 is not ready yet to be used with hybris 6 in a production environment, because SOLR 6 has not been tested and approved by SAP yet. However, there are situations where SOLR 6 is a good choice.

Complexity

It is not enough to replace SOLR 5.3 with SOLR 6.0 to make hybris work with the new version. Indexing and search will not work: too many exceptions.

Challenge

To make hybris 6 and SOLR 6 work together. Mainly for educational purposes.

Solution

Technical details
Configuration

Generally, the SOLR 5.3 configuration may be used in SOLR 6.0:

hybris\config\solr\instances\default\configsets\default\conf ->
      solr-6.0.1\server\solr\configsets\default\conf

The following changes should be made:

TF/IDF scoring classes for different types will not work in SOLR 6. So comment these lines in schema.xml:

<similarity class="de.hybris.platform.lucene.search.similarities.FixedTFIDFSimilarityFactory"/>

The hybris RestManager storage will not work with SOLR 6.0. Comment these lines in solrconfig.xml:

<restManager>
    <str name="storageIO">de.hybris.platform.solr.rest.IndexAwareStorageIO</str>
</restManager>

Update the libraries. Instead of lucene-*-5.3.* you need to use lucene-*-6.0*; instead of solr-core-5.3.0.jar and solr-solrj-5.3.0.jar you need to use solr-core-6.0.1.jar and solr-solrj-6.0.1.jar.

Replace MultiMaxScoreQParser with the new version (see below).

MultiMaxScoreQParser

package de.hybris.platform.solr.search;

import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.DisjunctionMaxQuery;
import org.apache.lucene.search.Query;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.LuceneQParser;
import org.apache.solr.search.SyntaxError;

import java.util.ArrayList;

public class MultiMaxScoreQParser extends LuceneQParser
{
    float tie = 0.0f;

    public MultiMaxScoreQParser(final String qstr, final SolrParams localParams,
            final SolrParams params, final SolrQueryRequest req)
    {
        super(qstr, localParams, params, req);
        if (getParam("tie") != null)
        {
            tie = Float.parseFloat(getParam("tie"));
        }
    }

    @Override
    public Query parse() throws SyntaxError
    {
        final Query q = super.parse();
        if (!(q instanceof BooleanQuery))
        {
            return q;
        }
        final BooleanQuery obq = (BooleanQuery) q;
        // BooleanQuery is immutable in Lucene 6, so the query is rebuilt with a Builder
        final BooleanQuery.Builder newq = new BooleanQuery.Builder();
        DisjunctionMaxQuery dmq = null;
        for (final BooleanClause clause : obq.clauses())
        {
            if (clause.isProhibited() || clause.isRequired())
            {
                newq.add(clause);
            }
            else
            {
                final Query subQuery = clause.getQuery();
                if (!(subQuery instanceof BooleanQuery))
                {
                    if (dmq == null)
                    {
                        dmq = new DisjunctionMaxQuery(new ArrayList<Query>(), tie);
                        newq.add(dmq, BooleanClause.Occur.SHOULD);
                    }
                    dmq.getDisjuncts().add(clause.getQuery());
                }
                else
                {
                    final ArrayList<Query> queries = new ArrayList<Query>();
                    for (final BooleanClause subQueryClause : ((BooleanQuery) subQuery).clauses())
                    {
                        queries.add(subQueryClause.getQuery());
                    }
                    final DisjunctionMaxQuery subDmq = new DisjunctionMaxQuery(queries, tie);
                    newq.add(subDmq, BooleanClause.Occur.SHOULD);
                }
            }
        }
        final BooleanQuery result = newq.build();
        // to do: populate boosting
        // result.setBoost(obq.getBoost());
        return result;
    }
}

Known issues and limitations

Scoring may work wrongly. I have not found any evidence of it, but this part of the code was not moved to SOLR 6 from SOLR 5.3.

Index Aware Storage is responsible for storing the SOLR configuration that is being managed by hybris, like synonyms or stopwords. It is not yet implemented in this solution.

Any questions? Contact me privately using the form below or leave your comment to this article.

CONSOLIDATING CONTENT AND PRODUCTS SOLR SEARCH
02.06.2016 · by Rauf Aliev · in SOLR

Situation

Hybris search is designed primarily to deal with products. There are no content page search capabilities in hybris.

Hybris uses Apache SOLR for search. Using SOLR allows hybris to introduce such features as facet search, fuzzy search, and search-based category pages. Solr's basic unit of information is a document, which is a set of data that describes something. These documents are composed of fields. A product document would contain the product attributes, categories of the product, keywords, and so on. Shoe size could be a field. The structure of the SOLR document is defined by the SOLR schema. Some attributes could be defined as dynamic, allowing hybris to store dynamic sets of product attributes in the SOLR index.

The hybris SOLR indexer fetches the information from the database, converts it into the SOLR document format and off-loads these documents into SOLR. To fetch this data back from SOLR, hybris uses the Lucene Query Language and the indexes created by the SOLR indexer. SOLR is a lot faster than traditional databases, which makes SOLR one of the best choices for e-commerce solutions. However, there are some important limitations.

Complexity

If your website has both plain static content and product pages, you may want to do a keyword search over both of them. For example, if your content pages are articles (news, reviews or similar), they might be tagged in a similar way to tagging the product pages. These tags could be used as facets to filter pages (both product pages and content pages) of the same topic.

It is easy to configure hybris to use two indexes, one per page type. However, this approach doesn't allow customers to filter results, for example, by topic. The results will also be grouped by page type.

The idea of today's experiment is to get consolidated results (both pages and content). For the example mentioned above, it should look like this:

The first guess is to add a new type to the list of item types (indexed types). However, it will not work in hybris. Hybris gives only one indexer OOB, so they have one and expect one only. Hybris SOLR Indexer creates a SOLR core per type. Hybris SOLR Search is not able to mix items from different SOLR cores. Moreover, hybris SOLR Search can't work with a collection of item types. All the classes of hybris SOLR Search work with only ONE item type instance, even when you have more. If you have two types, hybris SOLR Search will use only the first item. In addition to that, hybris SOLR search is designed to work with product catalogs only.

To overcome these limitations you need to customize both the indexer and the search module. How deep they should be customized depends on the specific requirements. The technical details are under the video.

Solution

Technical details

To get consolidation working, you need to:

1. Add a new SolrIndexedType item of ContentPage type.
2. Add a new full/update query. Let's take the simplest one: "SELECT {PK} FROM {ContentPage}".
3. Add Solr Indexed Properties. They should be compatible with the commerce solr properties (because they will share the same SOLR core).

4. Create a new populator that extends SearchSolrQueryPopulator. You need to do it to overcome the issue with the hybris Search module and Content Catalogs: the original hybris populator works with product catalogs only:

final Collection<CatalogVersionModel> catalogVersions = getSessionProductCatalogVersions();
target.setCatalogVersions(catalogVersions);

In my PoC I got rid of it:

target.setCatalogVersions(new ArrayList<CatalogVersionModel>());

5. Create your own SolrCoreNameResolver to make hybris use one core for different types (a sketch is below).
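A minimal sketch of such a resolver; the method name and signature here are my assumptions based on the class name, not the actual hybris API:

// Sketch: resolve every indexed type to one shared core so ContentPage and
// Product documents land in the same index (names and signature assumed)
public class SharedCoreNameResolver implements SolrCoreNameResolver {
    @Override
    public String getCoreName(final FacetSearchConfig facetSearchConfig, final IndexedType indexedType) {
        return "master_Shared_Index"; // the single shared core
    }
}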

6. Create your own ConfigurationExporterListener.beforeIndex and FullDirectIndexOperationStrategy.beforeIndex, because they re-create the SOLR core every time the indexer goes to a new type from the indexedTypes list.

7. Add your own keyword providers for Content Pages. For example, they can pull out all indexable content of all the page's components and place it into one text solr field ("keywords", for example); see the sketch below.
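A minimal sketch of such a provider, assuming the standard solrfacetsearch FieldValueProvider contract; the component-text extraction is stubbed out:

public class ContentPageKeywordsValueProvider implements FieldValueProvider {

    @Override
    public Collection<FieldValue> getFieldValues(final IndexConfig indexConfig,
            final IndexedProperty indexedProperty, final Object model)
            throws FieldValueProviderException {
        final ContentPageModel page = (ContentPageModel) model;
        // Placeholder: a real provider would walk the page's content slots and
        // concatenate the indexable text of every component into this one field
        final String keywords = page.getTitle();
        return Collections.singletonList(new FieldValue("keywords", keywords));
    }
}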

Any questions? Contact me privately using the form below or leave your comment to this article.

SOLR PRODUCT DATA PARTIAL UPDATE
01.06.2016 · by Rauf Aliev · in SOLR

Situation

SAP hybris works with Apache SOLR to provide enhanced product search capabilities. There are two SOLR modes: indexing and search. In the indexing mode, hybris fetches new products from the database and sends them to SOLR to index.

Complexity

This operation is quite expensive in terms of performance, since the product data needs to be collected from the different database tables. To change only one attribute in the SOLR index, you need to fetch all the attributes and update the SOLR document completely.

For example, you want to store price information in the index to make price range filtering possible. When the price is changed, you will need to fetch all the product attributes and update the whole SOLR document using the standard out-of-the-box SOLR indexer configuration. If the product price is changed frequently, the indexing process is going to be a bottleneck.

Challenge

How to make the update operation faster if the changes are minor? How to update only one attribute in the SOLR document?

Solution

Apache SOLR in the standalone mode has a simple and well-documented indexer interface. For example, if you need to update the price attribute only, you can make an HTTP request to the SOLR server:

curl 'http://localhost:8983/solr/master_My_Product/update?commit=true'
    -H 'Content-type:application/json'
    -d'[{"id":"ElectronicsProductCatalog/Online/<PRODUCTCODE>",
      "priceValue_usd_double":{"set":"1020.00"}}]'

master_My_Product is the name of the SOLR core,
ElectronicsProductCatalog is the name of the product catalog,
<PRODUCTCODE> is a product code.

Another example is about updating the availability status:

curl 'http://localhost:8983/solr/master_My_Product/update?commit=true'
 -H 'Content-type:application/json'
 -d'[{"id":"ElectronicsProductCatalog/Online/<PRODUCTCODE>",
 "inStockFlag_boolean":{"set":"false"}}]'
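The same atomic update can also be sent from Java via SolrJ; a minimal sketch using a SolrJ 6.x-style client (the product code 12345 is made up):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;
import java.util.Collections;

public class PartialUpdateExample {
    public static void main(String[] args) throws Exception {
        HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/master_My_Product").build();
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "ElectronicsProductCatalog/Online/12345");
        // the "set" operation marks this field for an atomic (partial) update
        doc.addField("inStockFlag_boolean", Collections.singletonMap("set", false));
        solr.add(doc);
        solr.commit();
        solr.close();
    }
}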

 

PAGE FRAGMENT CACHING FOR HYBRIS
24.07.2016 · by Rauf Aliev · in Other

Introduction

Caching is inevitable for a high-performance, scalable web application. There are different types of caching that have already been implemented in hybris. However, almost every solution requires additional tools and improvements to make hybris more resilient to high traffic or page load time related requirements.

There are different types of caching that could be used in hybris projects. Some of them are already included in the platform, such as the entity cache or the query cache. Some of them are on the database side, such as the database query cache. SOLR plays the role of a caching server for products as well. However, for high performance you need to add some additional components to make the system faster.

Sometimes it is impractical to cache an entire page, because portions of the page may need to change on each request. In those cases, you can cache just a portion of a page. Default hybris doesn't have any capabilities like this.

There is a package from hybris Professional Services for page / page fragment caching based on Varnish. Since the package is poorly documented and licensed separately, I decided to create my own PoC to estimate the amount of effort needed to create a similar extension.

 

I believe that my solution has some advantages in terms of features and flexibility, namely:

My solution has tools for cache invalidation on a coarse or fine grain level. For example, if objects are changed, then the caches where these objects are mentioned or used must be invalidated and recreated. For an external non-manageable reverse proxy, the only solution is to wait until the cache TTL time is complete. With my solution, you have tools to manage the cache contents and to easily change it.

My solution is not limited to CMS objects. You can cache any page fragments, including parts of components or parts of the page controller templates.

With my solution, cache fragments may depend on each other or on external entities such as customer id, post data or session parameters.

Solution

Video: "CMS page fragment caching in hybris with MongoDB" from Rauf Aliev (05:48)

Syntax: custom tags

I used JSP custom tags to mark the areas for caching. In order to use custom tags, you need to create a custom tag library. In my PoC, I created cachetags.tld and put it into resources/WEB-INF:

<?xml version="1.0" encoding="UTF-8"?>

<taglib xmlns="http://java.sun.com/xml/ns/javaee"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-jsptaglibrary_2_1.xsd"
 version="2.1">
 <tlib-version>1.1</tlib-version>
 <short-name>cachetags</short-name>
 <tag>
    <name>cached</name>
    <tag-class>org.training.storefront.cache.CacheTags</tag-class>
    <body-content>JSP</body-content>
    <dynamic-attributes>true</dynamic-attributes>
 </tag>
</taglib>

You can specify any number of attributes in the tag with any names and any values. The names of the attributes are used as keys in the cache, and the values are used as values. For example:

<%@ taglib prefix="cache" uri="/WEB-INF/cachetags.tld"%>
<cache:cached attr1="value1" attr2="value2">
  something
</cache:cached>

will create the following record in the cache:

{
 "value1_value2" : "something",
 "attr1" : "value1",
 "attr2" : "value2",
  ctime : 146863804587
}

ctime is the creation time in ms. This value can be used to invalidate old records.
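For illustration, storing and looking up such a record with a current MongoDB Java driver could look like this (the database and collection names are made up; the post does not show this code):

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class FragmentCacheDemo {
    public static void main(String[] args) {
        MongoCollection<Document> cache = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("fragmentcache").getCollection("fragments");

        // Store the rendered fragment under its compound key
        Document record = new Document("_id", "value1_value2")
                .append("content", "something")
                .append("attr1", "value1")
                .append("attr2", "value2")
                .append("ctime", System.currentTimeMillis());
        cache.insertOne(record);

        // Lookup on a later request; purging by attribute works the same way:
        // cache.deleteMany(new Document("attr1", "value1"));
        Document hit = cache.find(new Document("_id", "value1_value2")).first();
        System.out.println(hit == null ? "miss" : hit.getString("content"));
    }
}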

Cache storage

You can use any NoSQL database and in-memory cache libraries to store these JSONs. In my PoC I used MongoDB. Among all NoSQL solutions, this one works well with Windows.

Invalidation

Some server-side logic may make the cached fragment invalid. In my PoC, I purged the records that have the attribute or attributes that reflect the objects affected. For example:

Cache contents (key and the JSP tag that produced it):

ProductDetails_1234_14:
<cache:cached obj="ProductDetails" productCode="${productCode}" customerId="${customerId}"> ... </cache:cached>
(product 1234, customer id 14)

ProductDetails_4321_14:
the same tag, rendered for product 4321 and customer id 14

CategoryPage_/c/123:
<cache:cached obj="CategoryPage" categoryId="${categoryId}"> ... </cache:cached>
(category page /c/123)

If Product #1234 is changed, you need to send two requests to the cache engine:

cache.purge("ProductCode", "1234"), or cache.purge("obj", "ProductDetails", "ProductCode", "1234");
cache.purge("CategoryId", getCategories(getProductByCode("1234")));

If product 1234 is in category 123, all the records will be removed. If not, only the first two lines will be purged. The next time the customer makes a request to the product page, the cache will be updated.

However, if this product was included in a product carousel component product list, and this component was cached, it is much more difficult to understand which product carousel component caches should be invalidated. A simple solution to this problem is to associate every cache entry with a time-to-live (TTL) value. With this solution, the validity of an entry expires after some time, meaning the hits on expired cache entries are considered misses. The trade-off is content freshness to scalability.

Examples of cached fragment definitions

Product details. In the below example, the caching fragment depends on the product code and the session. It means that hybris will use different cached fragments for different pairs of product and user session. If product prices are not customer-dependent, you can use only the productCode attribute. In this case, different customers will use the same cached fragment.
Category page. In the next example, the fragment depends on the URL and query string. Facets change the URL parameters, and the category id is a part of the URL. Therefore, with different category pages you will have different cached fragments.

Product carousel component. As you can see from the code, the cached fragment depends on the URL and ${title}. The product carousel component can be put into the same page with a different configuration. In our case the configurations have different titles. If you add ${title} as a second key, hybris will use different caches for the instances, so each instance will be cached separately. If you eliminate the second attribute, you will have two identical cache fragments, because the fragment from the first instance is reused for the second time.

Quick and dirty performance testing

https://electronics.local:9002/trainingstorefront/electronics/en/Open-Catalogue/Cameras/c/571
Number of threads (users): 80

VARNISH CACHING IN HYBRIS
27.07.2016 · by Rauf Aliev · in Other

Introduction

Varnish is an HTTP accelerator, also known as a caching HTTP reverse proxy. You can install it in front of hybris and configure it to cache the contents. Varnish Cache is a very fast thing. It typically speeds up delivery by a factor of 300-1000x, depending on your architecture. However, your backend system should be compatible with Varnish to use all its features and speed. Default hybris isn't. Last weekend I managed to integrate Varnish with hybris.

There is a module from hybris Professional Services that does the same task, but to my knowledge it has some peculiarities that limit the areas where it can be used. Hybris Professional Services are able to identify these areas and offer this module only if it will work well with the customer data, traffic and website structure. Generally, their solution is good for most cases. Nevertheless, I decided to try Varnish with my page fragment caching infrastructure, which I explained in the previous post.

Below is a comparison between my solution and my understanding of the hybris Varnish caching module. Unfortunately, due to the SAP licensing model these modules are not available for detailed study, so my comparison is based on the opinions of others I managed to get.
Varnish hybris extension | My solution

CMS objects only | Can work with any page fragments: it introduces new tags for JSP that can be used in JSP/TAG files without any limitations
No capabilities for cache invalidation | For some cases it has some capabilities to manually invalidate the caches
Caching live HTML fragments is not possible | Capable of working with live HTML fragments (for example, a list of customer addresses)
Well tested and used on many projects, but it is not free | An experimental thing (but free)

The last point is a strong one. My solution seems to be more flexible and cheaper, but it might also be more expensive in support. If you do have a choice, it would be better to go with hybris Professional Services.

Solution

The overall diagram of my hybris-Varnish implementation is below. It is based on the page caching concept that I explained in the previous post. According to that, the fragments that are supposed to be cached should be marked with the cache:cached tags. This can perhaps best be illustrated with a simple example. Let's take the following page layout:

SECTION 1
SECTION NAV
Section 2

The cache markers might be placed as follows:

Custom tags

The cache:cached tags are processed by my custom tag library class that extends BodyTagSupport and implements the DynamicAttributes interface. There are three methods to override: doStartTag(), doEndTag() and setDynamicAttribute(s, s1, o).

public class CacheTags extends BodyTagSupport implements DynamicAttributes {

    @Override
    public void setDynamicAttribute(String s, String s1, Object o) throws JspException {
        this.dynamicAttributes.put(s1, o.toString());
    }
}

For the latter method, I used some pre-defined macros for the attribute values: "url", "get+url", "sessionid". For example, "url" is processed by the following code:

ServletRequest sr = ((CacheTags) this).pageContext.getRequest();
String url = ((HttpServletRequest) sr).getRequestURI();
this.dynamicAttributes.put("url", url);

So it means that for the tag <cache:cached param1="A" param2="url"> ... </cache:cached> you will use the page URL as a part of the cache key:

"A_http://domain.ru/folder/folder" => "cached fragment"

The compound key consists of keys separated by "_". It is convenient for debugging purposes, but for a real system I recommend you use a hash function to keep the key size constant. You can use any attribute names except "ttl", which is reserved for the TTL value:

if (s1.equals("ttl"))
{
    ttl = Integer.parseInt(o.toString());
}

Varnish server

As I mentioned before, the Varnish server sits in the middle between the user and the application server(s). So for debugging purposes you can request the raw response from the application. Note that the ESI tags are mixed in with the HTML tags. If you open this HTML in a browser, the page will be broken, because some fragments were replaced with ESI tags. Your browser knows nothing about these tags, certainly. However, Varnish is capable of processing these tags properly. For each tag, Varnish may make a request to the server to fetch the contents that were replaced with this tag. I said "may" because Varnish is a caching proxy, and next time it will start looking in its cache storage first.

Cache data provider

Let's look closer at the ESI tag I used:

<esi:include src="/cache/get?key=2_5_"/>

This URL is autogenerated by my JSP custom tag library. Varnish makes additional requests to the backend to retrieve the page fragments and caches them to speed up the next similar page requests. The Cache Data Provider is a separate server whose sole purpose is to provide the data from the NoSQL shared storage (MongoDB) to the requestor (Varnish). The key is specified as a parameter (key).

Note that the whole system operates with different types of caches: browser cache, Varnish cache, hybris cache, MongoDB cache. We added two of them (Varnish and MongoDB). There are the following cache cases:

Varnish cache | MongoDB | Issue | Solution

Not presents / stale | Presents | It is a typical first query, or a query after the TTL is over | Cache the fragment in the Varnish cache

Presents | Presents or not presents | It is a repeated query (within TTL) | Varnish shouldn't make a request to the MongoDB cache provider, because the cached fragment had been successfully retrieved before. So this is why we shouldn't care about the MongoDB cache for this case

Not presents / stale | Not presents / stale | No data, neither in Varnish nor in MongoDB | Varnish won't be able to retrieve and deliver the fragment. This case is actually a bad case; see below for the solution

Varnish cache - MongoDB cache conflict resolution

If the fragment is removed from both caches (Varnish and MongoDB), there is no way to retrieve it from hybris separately, and that is an issue. In some situations these fragments can be generated separately, but you need to implement specific controllers for different types of fragments. For example, for CMS content slots or for CMS components this implementation is trivial (and I suspect that hybris Professional Services went this way), but for the controller-related page fragments it is not so. The bad point is that the custom functionality for the latter is very project-specific. Let's look deeper into this point.

To retrieve these fragments, the hybris Professional Services extension possibly uses a custom controller to render CMS fragments separately from the page where they are placed. However, there are three issues with this approach.
However there are three issues with this approach

 

The first issue is about the filters / page controller interaction. Sometimes some data are prepared by custom hybris spring filters to be used in the page or component controller. So it means that the filters must be used for each cache request. If you skip the filters, the rendered components might have a different look.

The second issue is about resource-intensive filters, like BTG (personalization). Once you replace one slow request with, say, 10 slow but cacheable requests, the efficiency might be questionable.

The third: sometimes the cached fragments may depend on each other, on other components, or on the same shared information like javascript variables.

How to overcome these issues? My solution is very simple. The issue happens when both of the following hold:

1. the controller page's TTL is longer than the fragments' TTLs,
2. the controller page is also cached.

These points are for a plain structure, with no nested cacheable fragments. For nested ESIs, replace "controller page" with "parent ESI" in the points above.

If the controller page is not cached, each request from the user to the server will re-create the ESI fragments in MongoDB. If so, the case #3 will never happen. If the controller page's TTL is not longer than the fragments' TTLs, the fragments will never be removed from MongoDB before the page itself expires.

So the solution is to set the TTL of the controller page as a min(fragments' TTLs). The same goes for nested cacheable fragments: the TTL of a parent ESI is calculated based on the TTLs of the nested ESIs, which are specified as a tag parameter or calculated based on their substructures.

For example, the product page has the following structure:

SECTION 1 (TTL = 10)
SECTION NAV (TTL = 3)
Section 2 (TTL = 5)

So it means that the TTL of the product page will be calculated as min(10, 3, 5) = 3. Certainly, you can programmatically set the TTL of the controller to "0" to get rid of the controller page caching.

Varnish cache configuration

The configuration below is oversimplified, just to illustrate the approach. The backend response caching depends on the requested URL and my custom X-VarnishCache header:
vcl 4.0;
import std;
backend default {
 .host = "electronics.local";
 .port = "9001";
 .connect_timeout = 60s;
 .first_byte_timeout = 60s;
 .between_bytes_timeout = 60s;
 .max_connections = 800;
}
sub vcl_recv {
 return (hash);
}
sub vcl_backend_response {
    if (bereq.url ~ "/cache/get") {
        set beresp.ttl = std.duration(beresp.http.X-VarnishCache+"s",0s);
    }
    else
    {
        set beresp.do_esi = true;
        set beresp.ttl = std.duration(beresp.http.X-VarnishCache+"s",0s);
     }
    return (deliver);
 }
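On the hybris side, the controller (or a filter) has to emit that header; a minimal sketch, assuming the min-of-nested-TTLs value has already been computed:

// Sketch: tell Varnish how long it may cache this response.
// minTtlSeconds = min of the TTLs of the nested cacheable fragments (3 in the example above)
void writeCacheHeader(final javax.servlet.http.HttpServletResponse response, final int minTtlSeconds) {
    response.setHeader("X-VarnishCache", String.valueOf(minTtlSeconds));
}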

Video (PoC)

The video demonstrates that the request rate on the application server side is 6 times less than the request rate from the browser. The high-resolution version of the diagram is here.

HYBRIS PAGE FRAGMENT CACHING WITH NGINX AND MEMCACHED
30.07.2016 · by Rauf Aliev · in Other

 

Today's post is a follow-up to the previous articles about caching. I managed to integrate hybris with Nginx (as a reverse proxy) and memcached (as a storage for the cache). The key distinctive feature of this solution is that Nginx works with Memcached directly. You can find all three solutions below. See the explanations of the first two approaches in the previous articles on my blog (Page fragment caching and Varnish caching).

Why nginx?

Nginx is a high-performance web/proxy server that powers the busiest and heaviest-traffic websites in the world: WordPress.com, Airbnb, Discovery Education, Dropbox, MaxCDN, Netflix, TED, Zynga and many more. It includes caching features for both static and dynamic content. It also can be set up as a proxy for such backend platforms as hybris.

There are two points explaining why I chose Nginx:

Unlike Varnish, it is capable of working with NoSQL directly.
Unlike Varnish, Nginx supports SSL by default.

Nginx is a native web server, while Varnish is just a proxy cache layer. Along with your webservers, you will possibly have too many caching layers. People say that Varnish consumes more CPU and RAM than Nginx.

Why memcached?

Basically, because it is the only NoSQL storage supported by Nginx. However, there is another strong point as well: Memcached is originally intended for caching. As for the drawbacks: Memcached is a memory storage, it is not persistent; and Memcached is a key-value store, while MongoDB is a document store.

Hybris modifications

Architecture

The overall architecture is the same as explained in the previous article. The only differences are:

memcached instead of MongoDB;
SSI includes (NGINX) instead of ESI includes (Varnish);
no /cache/get?key=XXX handler, because NGINX is capable of working with Memcached directly.

Memcached client library

There are a number of Java libraries that support memcached. I chose the Java client from Greg Whalin (https://github.com/gwhalin/Memcached-Java-Client/wiki).

SSI includes

I simply replaced the format of the include tags:

Varnish + MongoDB version (the previous solution):

<esi:include src="/cache/get?key=XXXX"/>

NGINX + memcached version (this solution):

<!--# include file="/cache/get?key=XXXX"/>

Basically, for the NGINX configuration I used, you can use anything before the "?" sign in the path; it will be ignored. The solution doesn't imply fetching page fragments separately from the hybris side.

Hybris Memcached Service

There are the same methods as I used for the MongoDB hybris service:

public interface ICacheService {
    public void connect() throws UnknownHostException;
    public Map<String, String> getMap(String key);
    public String get(String key);
    public String put(String key, String value, Map<String, String> attributes);
}

public class MemcachedService implements ICacheService {
...
}

Unlike MongoDB, memcached is a simple key-value storage, so I needed to create more than one key in put(key, value, attributes):

put("a","b", { "attr1" -> "value1", "attr2" -> "value2" })

will put the following key-value pairs in the memcached storage:

a -> b
a*attr1 -> a*value1
a*attr2 -> a*value2
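A minimal sketch of that put() logic with the Greg Whalin client (the pool setup is simplified; the key naming follows the a*attr scheme above):

import com.whalin.MemCached.MemCachedClient;
import com.whalin.MemCached.SockIOPool;
import java.util.HashMap;
import java.util.Map;

public class MemcachedPutSketch {
    public static void main(String[] args) {
        SockIOPool pool = SockIOPool.getInstance();
        pool.setServers(new String[] { "localhost:11211" });
        pool.initialize();
        MemCachedClient client = new MemCachedClient();

        Map<String, String> attributes = new HashMap<String, String>();
        attributes.put("attr1", "value1");
        attributes.put("attr2", "value2");
        put(client, "a", "b", attributes);
    }

    // Emulates a document store on top of a key-value one: the main entry plus
    // one auxiliary entry per attribute (used later for purging by attribute)
    static void put(MemCachedClient client, String key, String value, Map<String, String> attributes) {
        client.set(key, value);
        for (Map.Entry<String, String> attr : attributes.entrySet()) {
            client.set(key + "*" + attr.getKey(), key + "*" + attr.getValue());
        }
    }
}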

NGINX configuration

In my environment, hybris uses port #9001 and nginx uses port #39001:

server {
 listen 39001;
 ssi on;
location / {
   ssi on;
   set $memcached_key "===="; 
   if ($arg_key != "") {      
      set $memcached_key "$arg_key";   
      memcached_pass localhost:11211;
      } 
     error_page 404 502 504 = @fallback;
 }
location @fallback {
 proxy_pass http://localhost:9001;
 }
}

Challenges

Building NGINX for Windows

NGINX for Windows comes with a limited set of modules. The SSI module (ngx_http_ssi_module), among others, is not included in the executable available for download. There are two ways to get it: use Visual C or Cygwin. I chose the second option: build it from the sources.

In order to build nginx with these modules, you need to install Cygwin and the modules required for building from C sources (make, automake, gcc etc.), download the Cygwin and nginx sources, and run configure and make; so follow in my footsteps. Possibly you will have some problems with the gzip module like I had; I took it out from the configuration (--without-http_gzip_module). You also need to change worker_connections from 1024 (the default) to 64.

Controller page caching

In the Varnish + MongoDB solution I used caching for the controller pages. In this solution I decided not to cache these pages at all. The first reason is to avoid repetitions: you can implement this caching in a similar way to the one explained earlier. The second reason is that this type of caching would add an unnecessary layer of complexity to the system. The controller pages set the cookies (session), and these cookies might be used by the components. If you use the cached version of the page but the session is over, the page won't be displayed properly. These complexities are common for any type of caching. So I decided to omit this part to avoid creating false illusions of simplicity.

HYBRIS CLUSTER / REDIS SESSION FAILOVER
02.08.2016 · by Rauf Aliev · in Uncategorized

Introduction

Let's start with the session handling basics. When the user logs in, the session is created on one web server in the cluster. On subsequent requests, the load balancer may bounce that user to another web server which doesn't have that session information. To the user, it appears that they are no longer logged in. There are two common fixes for this:

Cookie-based sessions. Cookies are used to store the session information. The session data, such as the user ID, are not saved to the server or any other storage but are instead kept within the browser's cookie. There is a limitation on the amount of data a cookie can store. It's also easy to make insecure unless done correctly: the cookie needs to be encrypted in a way that can't be decrypted even if the cookie is hijacked by a malicious user. Certainly, hybris doesn't use this approach.

 

Sticky sessions. They mean that user sessions, usually identified by a cookie, will tell the load balancer to always send requests from a client to the same server. Thus all requests from the same client are sent to the same server. Sticky sessions are a common way to solve the problem described at the beginning. If the session storage is not shared, in the case the server is down, the user will lose all the session data. To overcome it, there are two strategies:

   

Replicating the session data across the cluster. The load balancer redirects requests from the failed server to another server; that server will use its copy of the session to continue the client's session from where it was before the failure. Thus the client will not notice any service interruption, which is the goal of a high-availability strategy. Read more on the why and what-for of clustering and session replication here.

 

Central shared session storage. Persistent stores such as RDBMS or NoSQL databases are commonly used as session drivers. The RDBMS, such as MySQL or Oracle, are considered too slow for session handling, because every request may update or insert data. NoSQL databases are much better for this purpose: Redis, MongoDB, Tarantool, Memcached, Cassandra.
Non-sticky sessions. Non-sticky session replication provides higher performance, while the sticky session approach provides higher reliability. For non-sticky sessions, the replication should work much faster, because every single request may be processed by any server in the cluster. The centralized storage is a good option for non-sticky sessions.

As we see, the centralized session storage looks like the universal solution, both for sticky and non-sticky sessions. The default hybris doesn't support the centralized session storage, and the newer versions support it to an even lesser extent.

Solution

Demo of PoC

Details

JARs (platform\tomcat\lib):

tomcat-redis-session-manager-2.0.0.jar
commons-pool2-2.2.jar
jedis-2.5.2.jar

Configuration:

<Context path="/trainingstorefront" ... >
<Valve 
className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve" 
/>
<Manager 
className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
 host="127.0.0.1" 
 port="6379" 
 database="0" 
 expireSessionsOnShutdown="false" 
 notifyListenersOnReplication="true"
/>
<Loader ... />
</Context>
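Once the valve is active, new sessions should appear in Redis; a quick sanity check with the standard redis-cli (the key shown is example output, actual session ids will differ):

$ redis-cli -n 0 keys '*'
1) "B80D2A5F1E9C0D8B"    # a Tomcat session id stored by the session manager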

 

 
 

USING HYBRIS RULE ENGINE FOR PRODUCT RECOMMENDATIONS
09.08.2016 · by Rauf Aliev · in Product Management

Introduction

There are the following use cases for recommendation engines:

Substitute products:

(1) Based on attribute similarity. It can rely on the properties of the item(s), which are analyzed to determine what else the user may like.
Example: Apple iPhone 6 and Samsung Galaxy S6 are substitutes because they have a lot of similar/same characteristics (product attributes).

(2) Based on association rules.
Example: Coke and Pepsi are substitutes. Note that it is manual work to connect them into the group.

Complementary products:

(3) Based on collaborative filtering (frequently bought together).
Example: the IPA beer and Kettle Jalapeno chips are complementary products because people like to buy them together.

(4) Based on association rules (for co-purchased products).
Example: the IPA beer and Lays are complementary products because the merchant wants to sell more Lays than Kettle.

Hybris out-of-the-box supports #2 and #4, but the association rules in it are extremely basic: in hybris you need to manually select the other up-sell/cross-sell products, or use an external tool. It means that in order to link the five digital cameras with a dozen memory cards, you need to manually or automatically create 24x5=120 records of the reference product information, the simple product linking SKU <-> SKU. The number of records gets bigger for large product catalogs.

Two other mentioned approaches, collaborative filtering and attribute similarity, are out of scope of this post. The algorithms used there are good for a large amount of data and traffic. A main goal of a recommendation engine of this kind is to extract new knowledge from the available sources (such as the PIM database, CRM and webserver logs) and use it to enrich the customer experience and increase the sales.

The purpose of this article is to show a PoC of the recommendation system based on the rule engine. Hybris has already had the powerful rule engine for the product promotion management, so I decided to reuse it for the recommendation system.

Solution

There are two pieces of functionality:

Rule builder for the promotion engine
Recommendation engine

As you can see in the diagram below, these modules are not connected with each other. The builder produces the rules, and the recommendation engine uses them.

Rules

Here are some examples of the rules which are available in the system already:

If you visit the product page of one of the Film Camera products, the system must recommend the color and black&white films as complementary products, if any.
If the product has "memory stick" in the "Supported memory cards" classification attribute, the system must recommend the Memory Stick products, if any.
For the Products X,Y,Z the system must recommend the products from categories A and B, except the products with the attribute N=<something>.

Actually, the range of possible rules is truly indefinite. The hybris rule builder allows customizing the conditions and actions. If more than one rule is fulfilled, the results will be mixed.

Rule Builder

Rule Builder is initially designed for promotions. It is a brand-new product: SAP added it to hybris in April 2016 (version 6.0). In order to use it for the product recommendation rules, I added custom conditions and actions.

Generating recommended products

This module uses the hybris Drools engine for evaluating rules against the products (one or more). The result is a SOLR request.

Custom Conditions

For the demo I created two custom conditions:

product title condition
product classification attribute condition

It is easy to create a universal product condition that deals with all the product attributes available. It is important that in my solution conditions work with ProductModel attributes for filtering, while actions deal with indexed properties.

Custom actions

For the demo I created one custom action: filter all products with the specified Indexed Properties.

Post processing

The resulting SOLR request is a merge of the rules' output.

Video

Architecture

Technical details

Impex
$lang=en
INSERT_UPDATE RuleConditionDefinition;id[unique=true] ;name[lang=en]
;priority;breadcrumb[lang=$lang] ;allowsChildren ;translatorId
;translatorParameters;categories(id)
;producttitle ;Product title ;200 ;Product ;false
;simpleProductAttributeConditionTranslator ; ;general
;Example_Compatible_memory_cards ;Compatible memory cards ;200 ;Product ;false
;extProductAttributeConditionTranslator ; ;general
INSERT_UPDATE RuleConditionDefinitionParameter;definition(id)
[unique=true];id[unique=true];priority;name[lang=$lang];description[lang=$lang];type;value;required[default=true];
#y_cart_total;operator;1100;Operator;Operator to compare the cart total
value;Enum(de.hybris.platform.ruledefinitions.AmountOperator);"""GREATER_THAN_OR_EQUAL""";
;producttitle ;titlestr ;1000 ;Title Substring ;Title Substring ;java.lang.String;;
;Example_Compatible_memory_cards;comp_mc;1001;Example Compatible memory cards
(Substring);Example Compatible memory cards (Substring); java.lang.String
INSERT_UPDATE RuleConditionDefinitionRuleTypeMapping;definition(id)
[unique=true];ruleType(code)[unique=true]

;producttitle;PromotionSourceRule
;Example_Compatible_memory_cards;PromotionSourceRule
#ACTIONS
$lang=en
INSERT_UPDATE RuleActionDefinitionCategory;id[unique=true];name[lang=$lang];priority
;recommendations;recommendations;700
INSERT_UPDATE
RuleActionDefinition;id[unique=true];name[lang=$lang];priority;breadcrumb[lang=$lang];translatorId;translatorParameters;categories(id)
;recommend_products;Add products to recommendations;200;Add product to recommendations;ruleExecutableActionTranslator;actionId->ruleAddProductsToRecommendedAction;recommendations
INSERT_UPDATE RuleActionDefinitionParameter;definition(id)
[unique=true];id[unique=true];priority;name[lang=$lang];description[lang=$lang];type;value;required[default=true]
;recommend_products;solrProperty;100;Solr Property;Solr
Property;ItemType(SolrIndexedProperty);
;recommend_products;solrExpression;101;solrExpression;solrExpression;Enum(de.hybris.ruleenginetrail.enums.ActionOperator);
;recommend_products;value;102;Value of the Field;Product Title Substring;java.lang.String;;
INSERT_UPDATE RuleActionDefinitionRuleTypeMapping;definition(id)
[unique=true];ruleType(code)[default=PromotionSourceRule][unique=true]
;recommend_products;

Classes

Custom ProductRAO populator. Converts ProductModel to ProductRAO. By default, only "code" and "supercategories" are populated. I added all the remaining product properties, including the classification attributes.

Custom Product Attribute Condition Translators. These classes are used to convert the values from the condition parameters (see impex) into Drools rules (more exactly, into the RuleIrCondition that is used for the drools rules composition). I created two custom translators, for the product title and for the product classification attribute, as an example. It is trivial to convert it into something more comprehensive.

Custom Rule Executable Action. I use only one type of action, "ProductsToRecommend". This action is executed each time the rule condition is fulfilled. Internally, all this class does is add a ProductToRecommendRAO with configurable rule-dependent data as a fact.

ProductToRecommendRAO item type. I used the only custom attribute here, solrCondition (Map type). This structure is used for the messages from the actions back to the controller or service. The messages contain the solr attribute name and value.
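A minimal sketch of what such an action could look like; the interface and method names here are assumptions based on the hybris 6.0 ruleengineservices API and the impex above, not code from the post:

// Sketch only: names are assumptions, not verbatim from the post
public class RuleAddProductsToRecommendedAction implements RuleExecutableAction {

    @Override
    public void executeAction(final RuleActionContext context, final Map<String, Object> parameters) {
        final ProductToRecommendRAO rao = new ProductToRecommendRAO();
        // solrCondition carries the solr attribute name/value back to the caller
        rao.setSolrCondition(Collections.singletonMap(
                String.valueOf(parameters.get("solrProperty")),
                String.valueOf(parameters.get("value"))));
        context.insertFacts(rao); // expose the recommendation as a new fact
    }
}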

DISTRIBUTED PROMOTION CALCULATION IN THE CLUSTER. PROMO AS A SERVICE
05.07.2016 · by Rauf Aliev · in Promotions

Introduction

Promotions are a very efficient way to increase sales. Businesses need to be able to easily create promotions for any and every occasion. Having a good promotion engine integrated with your e-shop will definitely make your sales higher and your customers happy. E-commerce platforms commonly have built-in promotion engines.

This year hybris 6.0 introduced a new promotion engine. It is based on Drools, the popular open-source Java-based rules engine project. This promotion engine is used mainly for the cart calculation. There are no promotion messages on the product pages, category pages or search result pages in hybris 6.0.

This post is about designing a solution where promotions are calculated for category and search pages. These pages all have a set of products. So the challenge is to calculate promo prices for every item according to the product-level promotion rules, and to do it fast!

Complexity

The promotion calculation is a CPU- and memory-intensive process. Performing heavy calculations for category and search pages will significantly affect the overall system performance. To meet the performance requirements, you need to considerably scale up your application cluster.

Hybris mainly uses the promotion engine on the cart page, which is not requested very often (~5%). For category pages you need to call the promotion engine up to 20 times inside one http request session, and the category page is used at least twice as often as the cart page; all in all, the engine fires many times more often. So the promotion calculation for category and search pages will create a significant additional burden on the application cluster.

To overcome this issue, you can use brute force by scaling up the application cluster. However, adding new hybris application nodes can be costly because of the SAP licensing model.

Today's experiment shows that the promotion engine is no longer a bottleneck in hybris.

Video of PoC

Solution
Real-time promotion calculation on the product list pages, search result pages, and product detail pages. Promo prices may depend on parameters, such as the currently logged-in customer. Reconfiguring promotion rules will affect the storefront immediately. The calculation service is relatively fast. I did some quick measurements with jMeter; the numbers are shown at the end of the video.

 

Promotion calculation cluster:

Does not use hybris core and hybris database (no license limitations)
Based on a lightweight http server and has its own load balancer
Has its own caching subsystem
Stateless, so easily scalable (caching is off in the video)
Supports batch calculations (for a set of products at a time)
RESTful web interface

Limitations

The proof of concept demonstrates only product-level promotions. Two types of actions are supported now: order entry percentage discount action and order entry fixed discount action. The following RAO objects are the only ones supported in the PoC: ProductRAO, CartRAO, UserRAO, OrderEntryRAO.

Technical details

Jetty webserver. Two servlets (https):

/calculate
Input parameters: XML, hybris RAO objects
Output: XML, contains the recalculated prices

/updateRules
Input parameters: XML, ID of the drools rule, drools rule

Hybris side:

Javascript on the product page / search page
RuleCompileService: you need to add a listener or rewrite a service to off-load the Drools rules to the promotion calculation cluster.
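For illustration, the storefront-side call to the /calculate servlet could look like this (the endpoint host and the payload element are made up; the post does not specify the exact XML schema):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PromoServiceClient {
    public static void main(String[] args) throws Exception {
        // Serialized hybris RAO objects; the real payload structure is not shown in the post
        String raoXml = "<cartRAO>...</cartRAO>";

        HttpURLConnection con = (HttpURLConnection)
                new URL("https://promo-cluster.local/calculate").openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/xml");
        con.setDoOutput(true);
        try (OutputStream os = con.getOutputStream()) {
            os.write(raoXml.getBytes(StandardCharsets.UTF_8));
        }
        // The response XML contains the recalculated (promo) prices
        System.out.println("HTTP " + con.getResponseCode());
    }
}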

HYBRIS 6 "COULD HAVE FIRED" MESSAGES (POC)
04.06.2016 · by Rauf Aliev · in Promotions

Situation

A "Could Have Fired" message indicates that the criteria of the promotion have been partially fulfilled. For example, when you have two products in your cart and the promotion is for three products (like buy 3, get 1 free), the could-have-fired message could look like "buy one more product and get one free".

SAP hybris 6.0 uses the new rule-based promotion engine that is faster and more flexible. However, because of the completely new paradigm, some capabilities that we had in the previous versions of hybris are not present in hybris 6. One of them is "could have fired" messages. In the new version, for this purpose you need to use fake promotions. It means that for the example above you need to create two promotions: one is for a set of three products and another is for a set of two products. Being fired, the second promotion should show the "could have fired" message.

For some cases this approach is quite flexible. For example, you can configure "could have fired" messages for products which are similar to the promotional product. It is a really strong point. However, it is manual work. It is difficult to cover all the cases when a "could have fired" message could be shown. The "could have fired" promotions are difficult to manage and support.

Complexity

The new rule-based promotion engine is only integrated into the cart. Product pages, product lists, search results will not use it at all. Drools, the core of the hybris 6 promotion engine, is designed for a broad range of tasks: hybris pushes data into drools and receives a set of actions. There is no such thing as partially fired rules detection.
Challenge

Create a proof-of-concept prototype with smart "could have fired" messages. The idea is about showing tips on the product pages like:

If you put this product in the cart, you will get a 10% discount
If you put this product in the cart, you will get a free product
If you put this product in the cart, you will have a free delivery
etc.

Solution
Product page controller evaluates promotions against

).

product

(

the customer s cart

Then the results are compared with ones from the shopping cart

displayed as

.

+

current

Added actions are

.

could have fired

messages

.

The caveat is in promotion engine interfaces

,

For some reason

the classes don t expose all

 

ffice. Some methods and attributes that used by hybris are not

the data available in the backo

.

available for developers

Video
In the video below the following process is shown; the second part is a bit more complicated.

Technical details
final CartModel existingCart = cartService.getSessionCart();
// the virtual cart is a clone of the session cart plus the product being viewed
final CartModel virtualCart = modelService.create(CartModel.class);
cloneCart(existingCart, virtualCart);
final ProductModel product = productService.getProductForCode(productCode);
addProductToCart(virtualCart, product);
...
// evaluate promotions twice: with the new product (virtual cart) and without it
RuleEvaluationResult resultWithNewProduct =
    promotionEngineService.evaluate(virtualCart, getPromotionGroups());
RuleEvaluationResult resultWithoutNewProduct =
    promotionEngineService.evaluate(existingCart, getPromotionGroups());
...
// discount actions that appear only in the "with new product" evaluation
// become "could have fired" hints on the product page
Iterator<AbstractRuleActionRAO> withNewProduct =
    resultWithNewProduct.getResult().getActions().iterator();
while (withNewProduct.hasNext()) {
    AbstractRuleActionRAO actionRAO = withNewProduct.next();
    if (actionRAO instanceof DiscountRAO) {
        BigDecimal discountValue = ((DiscountRAO) actionRAO).getValue();
        if (resultWithoutNewProduct.getResult() != null) {
            Iterator<AbstractRuleActionRAO> withoutNewProduct =
                resultWithoutNewProduct.getResult().getActions().iterator();
            ...
        }
    }
    ...
}
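The elided comparison could be completed along these lines. This is a fragment that relies on the loop variables above; it assumes the fired rule code identifies an action, and couldHaveFiredMessages is a hypothetical collection for the page model.

// sketch: collect rule codes that fired without the new product...
Set<String> firedWithout = new HashSet<>();
Iterator<AbstractRuleActionRAO> withoutIt =
    resultWithoutNewProduct.getResult().getActions().iterator();
while (withoutIt.hasNext()) {
    firedWithout.add(withoutIt.next().getFiredRuleCode());
}
// ...and treat discounts that appear only with the new product as hints
if (!firedWithout.contains(actionRAO.getFiredRuleCode())) {
    couldHaveFiredMessages.add("If you put this product in the cart, "
        + "you will get a " + discountValue + " discount"); // hypothetical collector
}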


HYBRIS 6.0 PROMOTION ENGINE CUSTOMIZATION (POC)
02.06.2016 · by Rauf Aliev · in Marketing

Situation
Hybris 6.0 uses the brand new rule-based promotion engine. In past versions, the Hybris promotion engine was lacking in comparison with Oracle ATG Web Commerce. The new engine provides the flexibility needed to generate all types of promotions simply and intuitively.

The new promotion engine is based on the Drools library. Drools is a powerful reasoning system: it allows you to define your business logic using business rules and change them at runtime. Hybris transparently converts the promotions into drools scripts. These scripts are used for the shopping cart calculation along with the product and user data.

There is a rule context containing some facts, or objects. Hybris creates facts every time the promotion engine is executed. Facts are supposed to be plain old java objects (POJO). There is a set of standard hybris data that is pushed into the drools engine as facts (see the diagram).
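As an illustration of that flow (plain Drools API, not hybris code): facts are inserted into the session's working memory and the rules are fired. The session setup is assumed to exist.

import org.kie.api.runtime.KieSession;

// Plain Drools sketch: each RAO becomes a fact in the working memory.
public class FactPushExample {
    public void evaluate(final KieSession session, final Object cartRAO, final Object userRAO) {
        session.insert(cartRAO);   // facts are recreated on every evaluation
        session.insert(userRAO);
        session.fireAllRules();    // conditions are matched, action parts run
    }
}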

 

Every rule consists of two parts

of facts

(

)

objects

 

.

Condition and Action

The Condition part checks if the state

 

.

and evaluations when the condition part triggers

 

.

in the rule context met some conditions

The Action part processes actions

Hybris promotion builder allows you to set

,

up conditions and actions for the business rules interactively

.

provided

using the building blocks

Hybris

6.0

documentation explains clearly about the creating your own conditions and

actions as well as you own drools rule templates

.

Complexity
Facts are supposed to be plain old java objects (POJO), and they are created every time the promotion engine starts working. If you need to check the customer against a list of 1 mln featured customers, the standard way is to create 1 mln objects in memory and populate the working memory with these objects. Obviously, it doesn’t look right.

A couple of examples:
Condition “The customer is in the list of TOP 1000 CUSTOMERS” and
Condition “The customer has more than 1 placed order”

Challenge
Find a solution on how to implement these conditions.

For the first condition, the rule engine should be able to look up the current user in the long list of the qualified customers. It is supposed that this list is stored as a separate entity.

The second condition is supposed to look up the customer’s orders.

It is just a proof-of-concept task; in a real project it is not a good idea to scan through orders every time you calculate the cart.

Solution
 

   

In order to push model data to the drools engine, hybris uses so-called rule-aware objects (RAO), namely the subtypes of them, and RAO providers. It is a little tricky to get it to work because OOTB promotion RAOs are basically plain old java objects and aren’t designed to work with hybris services. They are expected to be declared as POJO beans, and there is no place to inject the flexible search request inside. See the technical details below.

Video (proof of concept)

Technical details
Impex
INSERT_UPDATE RuleConditionDefinition;id[unique=true];name[lang=en];priority;breadcrumb[lang=$lang];allowsChildren;translatorId;translatorParameters;categories(id)
;1000bestcustomers;1000 best customers;200;User;false;BestCustomersListTranslator;;customer

Translator

public class BestCustomersListTranslator implements RuleConditionTranslator {
    @Override
    public RuleIrCondition translate(final RuleCompilerContext context,
        final RuleConditionData condition,
        final RuleConditionDefinitionData conditionDefinition)
        throws RuleCompilerException {
        final List<RuleIrCondition> irConditions = new ArrayList<>();
        // condition on the "bestcustomers" attribute of QualifiedCustomersRAO
        final RuleIrAttributeCondition qualifiedCustomers = new RuleIrAttributeCondition();
        qualifiedCustomers.setVariable(context.generateVariable(QualifiedCustomersRAO.class));
        qualifiedCustomers.setAttribute("bestcustomers");
        qualifiedCustomers.setOperator(RuleIrAttributeOperator.EQUAL);
        qualifiedCustomers.setValue(true);
        irConditions.add(qualifiedCustomers);
        // wrap the attribute condition into an AND group
        final RuleIrGroupCondition irCustomerReviewCondition = new RuleIrGroupCondition();
        irCustomerReviewCondition.setOperator(RuleIrGroupOperator.AND);
        irCustomerReviewCondition.setChildren(irConditions);
        return irCustomerReviewCondition;
    }
}

Provider class
@Override
protected Set<Object> expandRAO(final CartRAO cart, final Collection options) {
    final Set<Object> facts = new HashSet<>();
    // add the custom RAO to the facts pushed into the rule engine
    facts.add(new QualifiedCustomersRAO());
    return facts;
}

Creating the RAO
public class QualifiedCustomersRAO implements java.io.Serializable {
    public boolean getbestcustomers() {
        // userService and flexibleSearchService cannot be injected into this
        // POJO; they have to be obtained elsewhere (see the note below)
        final String currentUser = userService.getCurrentUser().getUid();
        final FlexibleSearchQuery query = new FlexibleSearchQuery(
            "select * from {QualifiedCustomers}, {User} where {User.uid} = ?code"
            + " and {User.pk} = {QualifiedCustomers.code}");
        query.addQueryParameter("code", currentUser);
        final SearchResult found = flexibleSearchService.search(query);
        return found.getTotalCount() > 0;
    }
}
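The post doesn’t show where userService and flexibleSearchService come from. Since the RAO is a POJO outside the Spring context, one possible workaround (an assumption, not necessarily the author’s) is a static lookup through the hybris Registry:

import de.hybris.platform.core.Registry;
import de.hybris.platform.servicelayer.search.FlexibleSearchService;
import de.hybris.platform.servicelayer.user.UserService;

// Helper for POJO RAOs: look up platform beans by their standard ids.
public final class ServiceLookup {
    public static UserService userService() {
        return (UserService) Registry.getApplicationContext().getBean("userService");
    }
    public static FlexibleSearchService flexibleSearchService() {
        return (FlexibleSearchService) Registry.getApplicationContext().getBean("flexibleSearchService");
    }
}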


HYBRIS DEVELOPMENT SKILL TREE
14.08.2016 · by Rauf Aliev · in Other

This chart shows a set of skills and knowledge needed for hybris developers. The pieces of knowledge are connected and grouped to allow you to see the dependencies and ultimately the big picture.

Click on the picture for the larger PDF version.

PNG (A3): https://hybrismart.files.wordpress.com/2016/08/knowledge-tree.png
PDF: Hybris Development Skill Tree (PDF)

VISUAL REPRESENTATION OF HYBRIS DATA MODEL (ITEMS.XML)
20.06.2016 · by Rauf Aliev · in Other

Video: Visual representation of hybris data model (items.xml), from Rauf Aliev (01:59)

Features:
Processing all/selected hybris Types of the all/selected hybris Extensions
Detailization control: some/all attributes on/off
Enabling/disabling showing many-to-many relations

Architecture:
Implemented as a regular hybris addon (localextension.xml). No Update is required.
Generates a graphviz script (“dot” language, digraph). The extension uses Graphviz to convert the diagram into PNG. Graphviz is not required, but highly recommended. In case of a no-graphviz configuration you will need to use online graphviz processors.
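For illustration, once a .dot file has been generated, the PNG conversion amounts to invoking Graphviz. A minimal sketch (file names are placeholders; Graphviz must be on the PATH):

// runs: dot -Tpng items-model.dot -o items-model.png
public class DotToPng {
    public static void main(String[] args) throws Exception {
        Process dot = new ProcessBuilder("dot", "-Tpng", "items-model.dot", "-o", "items-model.png")
                .inheritIO()   // show Graphviz output/errors in the console
                .start();
        System.exit(dot.waitFor()); // dot exits with 0 on success
    }
}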

Screenshots:
User interface
Customer Review data model
Wishlist data model
Simplified ERD of the “promotion” module
A fragment of the simplified CMS module

This version is an alpha preview. The beta version of the extension is supposed to be available for download soon. Stay tuned!

HYBRIS/OKTA SSO INTEGRATION
15.06.2016 · by Rauf Aliev · in Users

Situation
OKTA is a cloud-based SSO platform that allows users to enter one name and password to access multiple applications. It also works as an Identity Provider, which is useful if you want to store credentials outside your service.

There are two user groups where SSO integration makes sense: business users and customers.

Complexity
Hybris OOTB doesn’t support any particular SSO providers. It has a module named samlsinglesignon which can be used for the integration with any SAML-compatible SSO services. However, this module is designed only for the hybris assisted service module functionality (it belongs to the call center features).

Solution
The video below shows the results of this experiment. In the video, OKTA is an external identity provider: hybris e-shop -> “Login with OKTA” button -> OKTA.com form -> OKTA credentials -> hybris (customer is authenticated).

Video: hybris-OKTA SSO integration (PoC), from Rauf Aliev (00:53)

Technical solution
Behind the scenes, the interaction between OKTA and hybris looks like this:

1. Once you try to access the protected resource, the system redirects you to the SSO entry point (samlsinglesignon extension, /saml/).

2. The SSO entry point generates a new authentication request using the SAML 2.0 protocol, digitally signs it, and sends it to OKTA.

3. After authentication at OKTA with your account, you will be redirected back to hybris and automatically signed in.

4. The samlsinglesignon extension listens to incoming requests (/saml/*). Once the extension receives a request from Okta, it checks if the request has a correct SAML assertion.

5. If the check fails, the extension redirects the user back to the identity provider (Okta), and the user is asked to log in. Otherwise, it creates the secure cookie samlPassThroughToken and redirects the user to the URL of the protected resource. This cookie should be used for initiating the customer session by the website.

The SSO functionality in hybris is preconfigured to work with the ASM module, so there is an asmaddon that has a Filter that processes samlPassThroughToken and sets a session user if the token is found. We are not going to use the ASM Addon in this solution, so we need to write our own processor that sets up the customer session based on the cookie from samlsinglesignon.

There are a number of edge cases that need to be supported in your code. For example, the IDP session may end earlier or later than the storefront’s session. In the latter case the storefront should re-request the token and re-establish the authentication seamlessly, without any data loss. You needn’t parse the token and authenticate the user if this user has already been authenticated. You need to support single sign-out as well. User data provisioning is also needed if you use Okta as the IDP. If you use the external IDP for more than one customer type, you need to support different types of sessions.

Configuring Okta
Request a developer account (trial) from okta.com. Create an app in the okta console: Create new app -> SAML 2.0 -> Enter app name. In my case it is electronics.local. Then you need to specify two URLs:

1. Single Sign on URL. Change your domain here: https://localhost:9002/samlsinglesignon/saml/SSO. It is very important to specify the correct domain and protocol here.

2. Audience URI (SP Entity ID): urn:ssoextension:hybris:de (you can change it in the configuration or use your own SP Entity ID; I used the default value).

3. Download the certificate here.

4. Download metadata.xml (the “Identity Provider metadata” link). You will need to re-download it each time you change the app settings. You need to restart hybris each time you replace metadata.xml.

5. Create a sample user.

6. Assign the app to the user (this operation could be done automatically later using the API, if needed).

Configuring the samlsinglesignon extension
Create a jks file. The simplest way is to use the default keystore file that is included in the extension. The right way is to create it from scratch (see the documentation of keytool):

keytool.exe -certreq -alias hybris -keystore samlKeystore.jks -file certificate.cer
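(Note that -certreq only exports a certificate signing request from an existing keystore; if you are building samlKeystore.jks from scratch, generate the key pair first, e.g. keytool -genkeypair -alias hybris -keystore samlKeystore.jks.)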

Copy the downloaded metadata.xml into the security folder, or change sso.metadata.location to your own. Change the following parameters (I used the default values):

sso.metadata.location = classpath:security/metadata.xml
sso.entity.id = urn:ssoextension:hybris:de
sso.keystore.location = security/samlKeystore.jks
sso.keystore.password = changeit
sso.keystore.privatekey.password = changeit
sso.keystore.default.certificate.alias = hybris
sso.keystore.privatekey.alias = hybris

Token processor
For ASM you need to do nothing: the token processing has already been implemented in AssistedServiceFilter.java:

protected void doFilterInternal(final HttpServletRequest httpservletrequest,
    final HttpServletResponse httpservletresponse,
    final FilterChain filterchain) throws ServletException, IOException
{
    if (AssistedServiceUtils.getSamlCookie(httpservletrequest) != null)
    {
        try
        {
            final LoginToken token = new CookieBasedLoginToken(
                AssistedServiceUtils.getSamlCookie(httpservletrequest));
            // perform login only in case token doesn't belong to currently logged in agent
            if (!getAssistedServiceFacade().isAssistedServiceAgentLoggedIn()
                || !getAssistedServiceFacade().getAsmSession().getAgent().getUid()
                    .equals(token.getUser().getUid()))
            {
                if (getAssistedServiceFacade().isAssistedServiceAgentLoggedIn())
                {
                    getAssistedServiceFacade().logoutAssistedServiceAgent();
                }
                getAssistedServiceFacade().loginAssistedServiceAgent(httpservletrequest);
                getAssistedServiceAgentLoginStrategy().login(token.getUser().getUid(),
                    httpservletrequest, httpservletresponse);
                getAssistedServiceFacade().emulateAfterLogin();
            }
        }
        ...

For customers and cockpit administrators you need to set the session manually:

....
final UserModel user = userService.getUserForUID(token.getUser().getUid());
....
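A minimal sketch of such a processor is below. The filter name, wiring, and cookie handling are assumptions, not the author’s code; a real implementation must also cover the edge cases listed above (re-authentication, single sign-out, provisioning).

import java.io.IOException;
import javax.annotation.Resource;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.filter.OncePerRequestFilter;
import org.springframework.web.util.WebUtils;
// hybris imports (UserModel, UserService, CookieBasedLoginToken) omitted for brevity

public class SamlCustomerLoginFilter extends OncePerRequestFilter {

    @Resource
    private UserService userService;

    @Override
    protected void doFilterInternal(final HttpServletRequest request,
            final HttpServletResponse response, final FilterChain chain)
            throws ServletException, IOException {
        final Cookie samlCookie = WebUtils.getCookie(request, "samlPassThroughToken");
        if (samlCookie != null) {
            final LoginToken token = new CookieBasedLoginToken(samlCookie);
            final UserModel user = userService.getUserForUID(token.getUser().getUid());
            // don't re-authenticate a user who is already in the session
            if (!user.equals(userService.getCurrentUser())) {
                userService.setCurrentUser(user);
            }
        }
        chain.doFilter(request, response);
    }
}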

USING HYBRIS PCM FOR HANDLING NON-PRODUCT ITEMS
01.06.2016 · by Rauf Aliev · in Product Management

Situation
There are the following business requirements for one of the grocery stores:
add a news feed with news categories, and
add a recipe list with recipes filtering
cross-links with products: each recipe may be linked with ingredients from the product catalog, and relevant recipes are shown at the product page

A general approach to implement these requirements is the following:

Create custom item types:
News item type. One of the attributes is a category (or a list of categories).
News category item type.
Recipe item type.
Relation: Recipe<->Product (N:N)

Create custom CMS page controllers:
News page controller
News list. Shows a list of News items.
News details. Shows a selected News page.
News Category navigation. Shows a list of categories.
Recipes controller
Recipe list. Shows a list of recipes.
Recipe details. Shows a selected Recipe page.

Complexity
There is a lot of existing news and recipes that are supposed to be migrated from the existing website. Possibly, the client will need to add search capabilities in the future, as well as tagging/filtering.

Challenge
How to speed up the development without losing flexibility? How to leverage existing hybris functionality?

Solution
The solution is to use the product data model to store non-product items like news and recipes. An additional product type should be added. Product categories will serve as news categories. Product page details might be used for news details and recipe details. News and recipes could be stored in the SOLR index to re-use product list and search results capabilities and templates.

However, this solution has its pros and cons. Using improper objects for news and recipes could make the solution less clear. Some side effects could occur as well, especially if the hybris content development has not been harmonized with the hybris commerce development.

MULTI-COUNTRY CONTENT CATALOGS
05.06.2016 · by Rauf Aliev · in Marketing

Situation
There are N regional websites, one per country. The global marketing teams are responsible for global content. The local marketing teams should be able to manage local content only. Global teams should be able to manage some local content as well, if they have enough permissions.

The system should be idiot-proof, which means that any changes made by the regional administrators should affect only their website, not other local websites.

There are two ways to implement it partially:

“Multi-language”: It is good if all the websites have the same design and components. Any exceptions should be implemented as hybris CMS restrictions. It is impossible or very difficult to configure the proper permissions. In this solution, the CA website administrator can change US website content; this is why this solution is partial. When the number of regions/countries is more than 3-4, this solution is also inconvenient.

Different pages for different countries. Per-page permissions. It will work, but with data duplications. Out-of-the-box apparel stores are implemented this way.

Complexity
Although the data model is prepared for having multiple content catalogs assigned to a single website (CMSSite), the cockpit itself is not ready. For the single content catalog, out of the box hybris doesn’t have any capabilities to hide a particular content slot for the user who doesn’t have enough permissions to manage it.

Challenge
Hide particular content slots in hybris WCMS, depending on the current administrator’s rights.

Solution

Technical solution
Architecture
WCMS part
There is an “interceptor”: the hybris layer between the model class and the service class. This interceptor replaces slot names on the fly.

Sequence diagram of the WCMS / LoadInterceptor interaction

Storefront part
The Page Controller or a Filter should set a “region” variable and push it into the JSP. JSP’s pageSlot tag has an attribute “position”. In our solution we will build the value of the “position” attribute dynamically using the value of the “region” variable.

PageTemplateModel.onLoad Interceptor
public class LoadVelocityTemplateInterceptor implements LoadInterceptor<PageTemplateModel> {
    @Resource
    UserService userService;
    @Resource
    SessionService sessionService;
    @Resource
    CMSSiteService cmsSiteService;

    @Override
    public void onLoad(PageTemplateModel o, InterceptorContext var2) throws InterceptorException
    {
        // check if the request is from backoffice (WCMS): on the storefront
        // the current CMS site is set, so the template is left untouched
        if (cmsSiteService.getCurrentSite() != null) { return; }
        String vt = o.getVelocityTemplate();
        UserModel currentUser = userService.getCurrentUser();
        Set<UserGroupModel> groups = userService.getAllUserGroupsForUser(currentUser);
        Iterator<UserGroupModel> iter = groups.iterator();
        while (iter.hasNext()) {
            UserGroupModel ugm = iter.next();
            String groupStr = ugm.getUid();
            // for regional administrators, substitute the regional slot placeholder
            if (groupStr.indexOf("reg_") != -1) {
                vt = vt.replace("_regional_", "_reg_" + groupStr);
                o.setVelocityTemplate(vt);
                return;
            }
        }
    }
}

Page Controller
...
model.addAttribute("regional", AreWeOnCanadianWebsite() ? "reg_ca" : "reg_us");
...

JSP template

Velocity Template (for WCMS)