Can I share the query log file to local computer or ftp via automated script?

You probably shouldn't use an NFS share - SQLite3's documentation on hosting database files on NFS shares currently reads as follows:

Your best defense is to not use SQLite for files on a network filesystem.

You could create a clean copy of Pi-hole's long-term query database to a new file with the following command:

pihole-FTL sqlite3 /etc/pihole/pihole-FTL.db "VACUUM INTO '/path/to/pihole-FTL.backup.db'"

Replace /path/to/ with an existing location as required (e.g. with a network mount).
This results in a potentially smaller file than the direct copy, and it can be executed while your Pi-hole is running.
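If you want to automate that, a small script run from cron could create the snapshot and then move it onto your target location. Just a rough sketch, assuming a network share is already mounted at /mnt/backup - the mount point, temporary path, script name and schedule are all placeholders you'd adapt to your setup:

#!/bin/sh
# Rough backup sketch - run as root (e.g. via /etc/cron.d) so it can read the database
# VACUUM INTO refuses to write over an existing file, so clear any previous snapshot first
rm -f /tmp/pihole-FTL.backup.db
pihole-FTL sqlite3 /etc/pihole/pihole-FTL.db "VACUUM INTO '/tmp/pihole-FTL.backup.db'"
# /mnt/backup stands in for an already mounted network share
mv /tmp/pihole-FTL.backup.db /mnt/backup/pihole-FTL.backup.db

You could then schedule it, e.g. nightly, with a crontab entry along the lines of 0 3 * * * root /usr/local/bin/pihole-db-backup.sh.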

However, note that Pi-hole's long-term query db can grow quite large, and copying it to a remote machine every minute may consume considerable amounts of bandwidth.

For an alternative approach, you should be aware that Pi-hole also writes to a plain text log file at /var/log/pihole/pihole.log.

It may be easier and less bandwidth-intensive to send entries from that log file to a remote location via rsyslog, as some users have done successfully - see e.g. REQUEST: Option to send logs to a remote logserver - #27 by Tim_Dockter.
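In case it helps, here's a rough sketch of what that could look like: an rsyslog drop-in that tails pihole.log with the imfile module and forwards the tagged lines to a remote syslog server over TCP. The remote hostname and port are placeholders, and you'd want to check the rsyslog syntax against your own version and environment:

sudo tee /etc/rsyslog.d/30-pihole-forward.conf >/dev/null <<'EOF'
# Read Pi-hole's dnsmasq-style log and tag its lines
module(load="imfile")
input(type="imfile" File="/var/log/pihole/pihole.log" Tag="pihole:")
# Forward the tagged lines to a remote syslog server (@@ = TCP, @ = UDP); host/port are placeholders
if $programname == 'pihole' then @@logserver.example.com:514
EOF
sudo systemctl restart rsyslog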

Of course, you'd have to parse log files instead of querying a database, but you could always build your own database from those logs.
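As a very rough illustration of such parsing: the query lines in that log follow dnsmasq's format (timestamp, query[TYPE], domain, "from", client), so counting fields from the end of the line would pull out domain/client pairs - treat the exact field positions as an assumption to verify against your own log:

grep 'query\[' /var/log/pihole/pihole.log | awk '{print $(NF-2), $NF}'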

I am also aware of a user effort to collect and visualise Pi-hole's log data via Kibana/Elasticsearch, employing FileBeat to send logs, which may be similar to what you're trying to achieve - see GitHub - nin9s/elk-hole: elasticsearch, logstash and kibana configuration for pi-hole visualiziation.
It's somewhat dated, so it may or may not work with current Pi-hole releases, but you could probably still glean some inspiration from it, or get in contact with that third-party maintainer to exchange ideas.