
MySQL Shell 8.0.24 – What’s New?

source link: https://mysqlserverteam.com/mysql-shell-8-0-24-whats-new/

The MySQL team is proud to announce the general availability of version 8.0.24 of the MySQL Shell.

In addition to fixing a considerable number of bugs, this release introduces the following changes.

Improved Command Line Integration

Integrating the Shell functionality into DevOps operations is a key use case, and this release introduces significant improvements in this area. The most remarkable ones are the following:

  • There is no longer a need to execute APIs through the --execute (-e) command line argument: all of the data required by any API available in the CLI can be defined through command line arguments (including lists).
  • Data type conversion for the different parameters is based on the API metadata, which guarantees a correct data type conversion when needed.
  • An improved help system for CLI calls, which allows you to identify:
    • What objects expose API functions on the CLI interface.
    • Which API functions are available for a specific object.
    • The syntax required to execute a specific function.
  • Support for exposing MySQL Shell plugin operations through the CLI interface (a sketch follows this list).

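As a minimal sketch of the plugin support, the snippet below registers a hypothetical ext plugin written in Python; the plugin name, function name, file location, and the cli flag used to expose the function are assumptions made only for illustration:

# ~/.mysqlsh/plugins/ext/init.py -- hypothetical plugin, for illustration only
from mysqlsh.plugin_manager import plugin, plugin_function

@plugin
class ext:
    """Hypothetical plugin used to illustrate CLI exposure."""

# cli=True is assumed here to be the flag that exposes the function
# to the command line integration.
@plugin_function("ext.helloWorld", cli=True)
def hello_world(name="World"):
    """Prints a greeting.

    Args:
        name (str): Name to include in the greeting.
    """
    print("Hello, {0}!".format(name))

If the decorators behave as assumed, the function becomes callable as ext.hello_world() in Python mode and, following the kebab-case convention used by the built-in objects, from the command line as well, e.g. mysqlsh -- ext hello-world --name=MySQL.
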
The format for CLI operations is as follows:

mysqlsh [shell options] -- [CLI Options]

To see which objects expose functionality to the CLI interface (including plugins), use:

mysqlsh -- --help

Example:

$ mysqlsh -- --help
The following objects provide command line operations:
  cluster
    Represents an InnoDB cluster.
  dba
    InnoDB cluster and replicaset management functions.
  mds
    Plugin to manage the MySQL Database Service on OCI.
  rs
    Represents an InnoDB ReplicaSet.
  shell
    Gives access to general purpose functions and properties.
  util
    Global object that groups miscellaneous tools like upgrade
    checker and JSON import.

To see which operations are available for a specific object, use:

mysqlsh -- <object> --help

Example:

$ mysqlsh -- dba --help
The following operations are available at 'dba':
  check-instance-configuration
    Validates an instance for MySQL InnoDB Cluster usage.
  configure-instance
    Validates and configures an instance for MySQL InnoDB Cluster
    usage.
  configure-local-instance
    Validates and configures a local instance for MySQL InnoDB
    Cluster usage.

To see the syntax required to call a specific function from the CLI, use:

mysqlsh -- <object> <function> --help

Example:

$ mysqlsh -- dba check-instance-configuration --help
  check-instance-configuration - Validates an instance for MySQL
  InnoDB Cluster usage.
SYNTAX
  dba check-instance-configuration [<instance>] [<options>]
WHERE
  instance: An instance definition.
RETURNS
  A descriptive text of the operation result.
OPTIONS
  --mycnfPath=<str>
          Optional path to the MySQL configuration file for the
          instance. Alias for verifyMyCnf
  --verifyMyCnf=<str>
          Optional path to the MySQL configuration file for the
          instance. If this option is given, the configuration
          file will be verified for the expected option values, in
          addition to the global MySQL system variables.

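As an illustration, the help above could translate into a call roughly like the following (the account, port, and configuration file path are placeholders):

$ mysqlsh -- dba check-instance-configuration root@localhost:3306 \
          --mycnfPath=/etc/my.cnf
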
The general syntax to execute an operation from the CLI is as follows:

mysqlsh [shell options] -- <object> <function> [function arguments]

Example:

$ mysqlsh root:@localhost:3308 -- util dump-schemas sakila world \
                               --output-url=my-dump \
                               --exclude-tables=sakila.payment,sakila.rental \
                               --exclude-tables=sakila.sales_by_film_category \
                               --exclude-tables=sakila.sales_by_store
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Writing DDL for schema `world`
Writing DDL for table `world`.`countrylanguage`
Preparing data dump for table `world`.`countrylanguage`
Data dump for table `world`.`countrylanguage` will be chunked using column `CountryCode`
Preparing data dump for table `world`.`country`
Data dump for table `world`.`country` will be chunked using column `Code`
Preparing data dump for table `world`.`city`
Data dump for table `world`.`city` will be chunked using column `ID`
Preparing data dump for table `sakila`.`store`
Data dump for table `sakila`.`store` will be chunked using column `store_id`
Preparing data dump for table `sakila`.`city`
Data dump for table `sakila`.`city` will be chunked using column `city_id`
Preparing data dump for table `sakila`.`country`
Data dump for table `sakila`.`country` will be chunked using column `country_id`
Preparing data dump for table `sakila`.`customer`
Data dump for table `sakila`.`customer` will be chunked using column `customer_id`
Preparing data dump for table `sakila`.`actor`
Data dump for table `sakila`.`actor` will be chunked using column `actor_id`
Preparing data dump for table `sakila`.`category`
Data dump for table `sakila`.`category` will be chunked using column `category_id`
Preparing data dump for table `sakila`.`film_text`
Data dump for table `sakila`.`film_text` will be chunked using column `film_id`
Preparing data dump for table `sakila`.`film`
Data dump for table `sakila`.`film` will be chunked using column `film_id`
Preparing data dump for table `sakila`.`address`
Data dump for table `sakila`.`address` will be chunked using column `address_id`
Preparing data dump for table `sakila`.`language`
Data dump for table `sakila`.`language` will be chunked using column `language_id`
Preparing data dump for table `sakila`.`film_category`
Data dump for table `sakila`.`film_category` will be chunked using column `film_id`
Preparing data dump for table `sakila`.`staff`
Data dump for table `sakila`.`staff` will be chunked using column `staff_id`
Preparing data dump for table `sakila`.`film_actor`
Data dump for table `sakila`.`film_actor` will be chunked using column `actor_id`
Preparing data dump for table `sakila`.`inventory`
Data dump for table `sakila`.`inventory` will be chunked using column `inventory_id`
Writing DDL for table `world`.`city`
Writing DDL for table `world`.`country`
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing DDL for schema `sakila`
Writing DDL for view `sakila`.`staff_list`
Writing DDL for view `sakila`.`nicer_but_slower_film_list`
Writing DDL for view `sakila`.`actor_info`
Writing DDL for view `sakila`.`film_list`
Writing DDL for view `sakila`.`customer_list`
Writing DDL for table `sakila`.`store`
Writing DDL for table `sakila`.`city`
Writing DDL for table `sakila`.`country`
Writing DDL for table `sakila`.`customer`
Writing DDL for table `sakila`.`actor`
Writing DDL for table `sakila`.`category`
Writing DDL for table `sakila`.`film_text`
Writing DDL for table `sakila`.`film`
Writing DDL for table `sakila`.`address`
Writing DDL for table `sakila`.`language`
Writing DDL for table `sakila`.`film_category`
Writing DDL for table `sakila`.`staff`
Writing DDL for table `sakila`.`film_actor`
Writing DDL for table `sakila`.`inventory`
Data dump for table `world`.`city` will be written to 1 file
Data dump for table `world`.`countrylanguage` will be written to 1 file
Data dump for table `world`.`country` will be written to 1 file
Data dump for table `sakila`.`city` will be written to 1 file
Data dump for table `sakila`.`store` will be written to 1 file
Data dump for table `sakila`.`customer` will be written to 1 file
Data dump for table `sakila`.`country` will be written to 1 file
Data dump for table `sakila`.`actor` will be written to 1 file
Data dump for table `sakila`.`category` will be written to 1 file
Data dump for table `sakila`.`film_text` will be written to 1 file
Data dump for table `sakila`.`address` will be written to 1 file
Data dump for table `sakila`.`language` will be written to 1 file
Data dump for table `sakila`.`film` will be written to 1 file
Data dump for table `sakila`.`film_category` will be written to 1 file
Data dump for table `sakila`.`staff` will be written to 1 file
Data dump for table `sakila`.`inventory` will be written to 1 file
Data dump for table `sakila`.`film_actor` will be written to 1 file
1 thds dumping - 100% (20.48K rows / ~20.45K rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Duration: 00:00:00s                                                                                        
Schemas dumped: 2                                                                                          
Tables dumped: 17                                                                                          
Uncompressed data size: 1.03 MB                                                                            
Compressed data size: 117.59 KB                                                                            
Compression ratio: 8.7                                                                                    
Rows written: 20482                                                                                        
Bytes written: 117.59 KB                                                                                  
Average uncompressed throughput: 1.03 MB/s                                                                
Average compressed throughput: 117.59 KB/s    

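The CLI call above maps directly to the util.dump_schemas() API; as a rough equivalent, the same operation in Python mode would look like this (same schema names, output directory, and table exclusions as in the CLI example):

mysql-py> util.dump_schemas(["sakila", "world"], "my-dump",
                            {"excludeTables": ["sakila.payment",
                                               "sakila.rental",
                                               "sakila.sales_by_film_category",
                                               "sakila.sales_by_store"]})
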
For detailed information about this topic, refer to the User Guide.

Dump & Load Enhancements

This time, new compatibility options were added to let the user determine what should be done when tables without primary keys (which are a requirement for tables in MDS) are found on an on-premise instance:

  • ignore_missing_pks: makes the dump process skip the primary key verification.
  • create_invisible_pks: causes the load process to automatically create the missing primary keys (a sketch of this variant follows the example below).

Example:

mysql-py> util.dump_instance(prefix, {"osBucketName":"my-bucket",
                            "osNamespace": "testing",
                            "ocimds": True,
                            "compatibility":["strip_restricted_grants",
                                             "ignore_missing_pks"]})
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Checking for compatibility with MySQL Database Service 8.0.24
NOTE: One or more tables without Primary Keys were found.
      This issue is ignored.
      This dump cannot be loaded into an MySQL Database Service instance with High Availability.

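A sketch of the create_invisible_pks variant, which defers the primary key creation to the load process, would only change the compatibility list (prefix, bucket, and namespace are the same placeholders as above):

mysql-py> util.dump_instance(prefix, {"osBucketName":"my-bucket",
                            "osNamespace": "testing",
                            "ocimds": True,
                            "compatibility":["strip_restricted_grants",
                                             "create_invisible_pks"]})
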
For additional details refer to the compatibility options for the dump utilities in the User Guide.

Improved Logging

Troubleshooting the execution of SQL statements also got easier with a new option that causes SQL operations to be logged to the system log:

  • syslog on Unix environments
  • the event log on Windows environments

To enable this feature, either set the shell option history.sql.syslog=true or start the MySQL Shell with the --syslog command line argument, as shown below.

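For example, assuming the standard \option command syntax, the option can be persisted from an interactive session, or logging can be enabled for a single invocation from the command line (the connection URI is a placeholder):

mysql-js> \option --persist history.sql.syslog=true

$ mysqlsh --syslog --sql root@localhost:3306
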
For additional details about this feature refer to the User Guide.

Resources

For details about the MySQL Shell please take a look at the MySQL Shell User Guide.

For additional details about features introduced by this version and the full list of fixed bugs take a look at the Release Notes.

Don’t forget to download it and give it a try!

Your feedback is very welcome!

You can reach us in the #shell channel at https://mysqlcommunity.slack.com/


About Rene Ramirez

Juan Rene Ramirez Monarrez (a.k.a. rennox) is a Software Engineer with a master's degree in Computer Science, residing in Mexico.

He started his career at IBM in application development, spent time working on automation at Freescale Semiconductor (now NXP), and landed in the MySQL world in 2011.

At MySQL he started on the Workbench team and is now leading the effort to make the MySQL Shell the client par excellence for the different MySQL products.

