I would like to summarize my change request for the Python package nzpy (the Python driver for the Netezza database).
Source code on GitHub: https://github.com/IBM/nzpy
PYPI: https://pypi.org/project/nzpy/
When loading data into Netezza through an EXTERNAL TABLE with the standard encoding "internal" (documented in the IBM documentation), CHAR/VARCHAR attributes must be encoded in Latin9 and NCHAR/NVARCHAR attributes in UTF-8. If the target table contains both CHAR/VARCHAR and NCHAR/NVARCHAR attributes, this mixed per-column encoding is the only valid format, since Netezza does not perform implicit conversions.
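To illustrate the requirement (independent of nzpy): a single data row for a table with one VARCHAR and one NVARCHAR column has to be assembled with a different encoding per field. Latin9 corresponds to Python's "iso-8859-15" codec.

```python
# One CSV row for a table (VARCHAR_ATTRIBUTE, NVARCHAR_ATTRIBUTE):
# the first field must be Latin9 (ISO-8859-15), the second UTF-8.
varchar_value = "Süßes oder hässliches Encoding?"
nvarchar_value = "Süßes oder hässliches Encoding?"

row = varchar_value.encode("iso-8859-15") + b"," + nvarchar_value.encode("utf-8")
# The same text yields different byte sequences per column:
# b'S\xfc\xdfes ...' for the VARCHAR field, b'S\xc3\xbc\xc3\x9fes ...' for the NVARCHAR field
```

No single text encoding can describe such a file, which is exactly why a fixed `encoding=` on `open()` cannot work here.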
Problem:
Loading a CSV file with this attribute-dependent encoding (admittedly a non-standard CSV format that not every editor can handle) is not possible with nzpy, because Python opens the source file as a text file with a single fixed encoding (the operating system's default encoding or, since pull request 97, UTF-8). Concretely, the import fails as soon as a Latin9-encoded non-ASCII character appears, because the corresponding byte sequence is invalid UTF-8.
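The failure mode can be reproduced without any database connection. Given a file written with the mixed per-column encoding described above, reading it back as single-encoding UTF-8 text (the behavior after pull request 97) breaks on the first Latin9 non-ASCII byte:

```python
# Write one CSV row: VARCHAR field in Latin9, NVARCHAR field in UTF-8.
with open('demo_import.csv', 'wb') as f:
    f.write(b'S\xfc\xdfes oder h\xe4ssliches Encoding?,'
            b'S\xc3\xbc\xc3\x9fes oder h\xc3\xa4ssliches Encoding?')

# Reading it back as UTF-8 text fails on the Latin9 byte 0xfc ("ü"):
try:
    with open('demo_import.csv', encoding='utf-8') as f:
        f.read()
except UnicodeDecodeError as e:
    print(e)  # 'utf-8' codec can't decode byte 0xfc in position 1: invalid start byte
```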
There has already been a thematically related pull request in November: someone noticed that no encoding was specified when opening the file for external tables, so the operating system's default encoding was used.
https://github.com/IBM/nzpy/pull/97 This pull request has been merged, but the change is not yet in the current release.
The solution proposed there is to set the encoding to UTF-8 on all operating systems. This ensures that UTF-8 files are now read correctly on Windows as well. However, Latin9-encoded CHAR/VARCHAR attributes still trigger the problem described above.
Note:
A further correction was recently made on the export side: https://github.com/IBM/nzpy/pull/96 That correction is included in the release, but it does not solve the problem described above.
Reproducing code:
import nzpy
import getpass
import os

working_directory = os.getcwd()

conn = nzpy.connect(
    user=input("Enter Netezza user"),
    host=input("Enter Netezza host or IP"),
    port=int(input("Enter Netezza port")),
    password=getpass.getpass("Enter Netezza password"),
    database=input("Enter Netezza database")
)

db_schema = input("Enter schema").upper()
db_table = input("Enter table name").upper()

with conn.cursor() as cursor:
    cursor.execute(f"DROP TABLE {db_schema}.{db_table} IF EXISTS;")
    cursor.execute(f"""
        CREATE TABLE {db_schema}.{db_table} (
            VARCHAR_ATTRIBUTE VARCHAR(100),
            NVARCHAR_ATTRIBUTE NVARCHAR(100)
        ) DISTRIBUTE ON RANDOM;
    """)

# One row, "Süßes oder hässliches Encoding?" twice:
# VARCHAR field encoded in Latin9, NVARCHAR field encoded in UTF-8.
with open('demo_import.csv', 'wb') as file:
    file.write(b'S\xfc\xdfes oder h\xe4ssliches Encoding?,S\xc3\xbc\xc3\x9fes oder h\xc3\xa4ssliches Encoding?')

with conn.cursor() as cursor:
    cursor.execute(f"""INSERT INTO {db_schema}.{db_table} SELECT *
        FROM EXTERNAL 'demo_import.csv' SAMEAS {db_schema}.{db_table}
        USING (
            ENCODING 'internal'
            REMOTESOURCE 'python'
            DELIMITER ','
            LOGDIR '{working_directory}'
        );
    """)

with conn.cursor() as cursor:
    cursor.execute(f"""
        CREATE EXTERNAL TABLE 'demo_export.csv' USING (
            ENCODING 'internal'
            REMOTESOURCE 'python'
            DELIMITER ','
            LOGDIR '{working_directory}'
        ) AS SELECT * FROM {db_schema}.{db_table};
    """)
My suggestion:
Always read the file in binary mode and send the resulting bytes to Netezza as-is.
I see no reason for Python to enforce an encoding on the file; the data should simply be passed through.
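A minimal sketch of that idea (a hypothetical helper for illustration, not nzpy's actual internal API): read the external-table source file in binary chunks and forward the bytes untouched, so whatever per-column encoding the file uses reaches Netezza unchanged.

```python
def stream_external_source(path, chunk_size=64 * 1024):
    """Yield raw byte chunks from the data file.

    No decode/encode step is involved, so mixed Latin9/UTF-8 content
    (or any other byte sequence) is passed through verbatim.
    Hypothetical helper -- nzpy's real internals are structured differently.
    """
    with open(path, 'rb') as f:
        while chunk := f.read(chunk_size):
            yield chunk
```

Opening with `'rb'` also sidesteps both the operating-system-dependent default encoding that pull request 97 worked around and Python's universal newline translation.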