MichaelB.
7/29/2018 - 4:16 PM

Create an HDBDD (CDS) Table

You can use Core Data Services (CDS) to create a design-time table object, similar to an hdbtable object. CDS is a more sophisticated method that adds additional information and semantics to table definitions. If you use the XS Advanced Engine (XSA), CDS is the recommended way to create a table; hdbtable objects are not recommended for XSA.

To create a CDS table, create a DDL Source File via right-click on a package -> New -> Other. Pressing Tab automatically completes the file name with its extension, and a skeleton is generated when you press Finish.

The namespace is the package where the object is created. The schema name indicates where the table will be created in the catalog. A context is only required if you create multiple tables within one file. The entity is the table name; the name of the top-level context (or entity, if there is no context) must match the file name you provided when creating the hdbdd file. You define the column names between the braces.

CDS data types are not the same as SQL data types. They are written with an upper-case first letter and the rest in lower case; a data type typed with a lower-case first letter is not recognized. The colon between the column name and the data type is mandatory, and each column definition ends with a semicolon. You can enter a length in parentheses after the data type. The full list of CDS data types is in the SAP HANA Developer Guide; some common mappings:

NVARCHAR      = String
INTEGER / INT = Integer
DECIMAL(p,s)  = Decimal(p,s)
DATE          = LocalDate
DOUBLE        = BinaryFloat

All you need to create a CDS table is a namespace, a schema, and an entity. As with the hdbtable object, the package name is concatenated with the table name, and the columns keep the case sensitivity defined in the hdbdd file. When you check the definition you can see the SQL data types and the column-store data types.

You can insert data into a CDS table using the SQL INSERT statement. You can also use the CDS method to import data: expose a CSV file to Eclipse by putting the file in your workspace.
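As a quick sketch of the type mappings above (the entity and column names here are hypothetical):

-- Hypothetical entity illustrating the CDS data types
entity TypeDemo {
    key ID    : Integer;       // INTEGER
        name  : String(30);    // NVARCHAR(30)
        price : Decimal(10,2); // DECIMAL(10,2)
        sold  : LocalDate;     // DATE
        ratio : BinaryFloat;   // DOUBLE
};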
Then create a Table Import Configuration (hdbti file) via right-click on the package -> New -> Other. In that file you define the table import parameters for a CSV file located in the package. The number of table columns and their types must match the number and types of the CSV columns. The target table is referenced with two colons (::), the file with one colon (:). When you activate the TIC file, the data from the CSV is automatically loaded into the CDS table. Do not leave the CSV file open while activating the TIC, and write the csv file extension in lower case. You can still INSERT extra data via SQL. You can also delete the data, update the file, and re-activate the TIC to refresh the data. You can import data with hdbti into each type of table, not just hdbdd tables.

The namespace is the place where the actual document resides -> the place where the CDS document can be found. The schema is where the objects you activate are created. A context is a wrapper for the entities that are built within it and carries the name of the CDS document; entities within the context inherit the context name. You can also create sub-contexts (nested contexts). You can specify additional details via a technical configuration block.
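A sub-context nests as follows (a sketch; the names Outer, Inner, and T are hypothetical). The entity inherits the full context path in its runtime name:

-- Hypothetical nested contexts; the top-level context must match the file name (Outer.hdbdd)
context Outer {
    context Inner {                       // sub-context
        entity T { key ID : Integer; };
    };
};
-- runtime catalog name: <namespace>::Outer.Inner.T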

-- Example
namespace "system-local"."public"."mbussink"."Exercises"."SQLScript";

@Schema: 'MBUSSINK'

context CDS {

    @Catalog.tableType: #COLUMN
    entity CDS_TABLE {
        key ID : Integer;
            name : String(30);
            city : String(30);
    } technical configuration {
        column store;
    };

};
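Assuming the file above activates successfully, the runtime table can then be addressed in SQL as namespace::context.entity (a sketch based on the names above; the sample values are hypothetical):

-- The catalog name concatenates the namespace, '::', and context.entity
INSERT INTO "MBUSSINK"."system-local.public.mbussink.Exercises.SQLScript::CDS.CDS_TABLE"
    (ID, name, city) VALUES (1, 'John', 'Utrecht');

SELECT * FROM "MBUSSINK"."system-local.public.mbussink.Exercises.SQLScript::CDS.CDS_TABLE";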


-- Example 2
namespace "system-local"."public"."mbussink"."Exercises"."SQLScript";

@Schema: 'MBUSSINK'

context EmployeeAnalysis {

    type StreetAddress {
        name : String(80);
        number : Integer;
    };

    type CountryAddress {
        name : String(80);
        code : String(3);
    };

@Catalog.tableType: #COLUMN
entity Address {
    key ID : Integer;
        street : StreetAddress;
        zipCode : Integer;
        city : String(80);
        country : CountryAddress;
        type : String(10);  // Home, office
        status : String(1); // 'A' = active
};


view FrenchActiveAddressView as select from Address {
    ID,
    street.name,
    street.number
} where country.name = 'France'
    and status = 'A';

entity MyEntity {
    key ID : Integer;
        a : Integer;
        color : String(1);
};

view MyView5 as select from MyEntity {
    ID,
    case color // defined in MyEntity above
        when 'R' then 'red'
        when 'G' then 'green'
        when 'B' then 'blue'
        else 'black'
    end as color,

    case
        when a < 10 then 'small'
        when 10 <= a and a < 100 then 'medium'
        else 'large'
    end as size
};

entity Person {
    key ID : Integer;
        address1 : Association to Address;
};

};
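An association like address1 above can be followed with a path expression in a view; the view name PersonCityView here is hypothetical (a sketch, assuming the Person and Address entities from Example 2):

-- Hypothetical view following the address1 association
view PersonCityView as select from Person {
    ID,
    address1.city // path expression, resolved via a join on Address
};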


-- Standard syntax of Table Import Configuration file
// To define a table import from a CSV file to a table you can use the following code.
// In the example the CSV files should exist in sample.test package 
// The number of table columns and their types must match the number of CSV columns and their types. 

import = 
        [
            {
                cdstable = "sample.test::MY_DDL.MY_TABLE";// Target CDS table
                file = "sample.test:mydata.csv";          // Source CSV file
                header = false;                           // CSV file does not include a header first line
                delimField = ";";                         // CSV data value delimiter, default is ","
            },
            {
                hdbtable = "sample.test::MY_TABLE1";     // Target repository table
                file = "sample.test:mydata1.csv";        
                header = false;                          
                delimField = ";";                        
            }

//           ,
//           {
//               table = "MY_TABLE2";                    // Target catalog table
//               schema = "<MY_SAMPLE_SCHEMA>";
//               file = "sample.test:mydata2.csv";     
//               header = false;                       
//               delimField = ";";                     
//           }
         ];
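For reference, a mydata.csv matching the first import above (header = false, delimField = ";") could look like this; the sample rows are hypothetical and assume a table with an integer key and two string columns:

-- mydata.csv (no header line, ';' as delimiter)
1;John;Utrecht
2;Anna;Amsterdam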