Is it normal for a CREATE MATERIALIZED VIEW command to take over an hour on a large dataset?

I'm working with a dataset that has tens of millions of rows, and I'm trying to create a materialized view that parses some JSON data. More than an hour after running the CREATE MATERIALIZED VIEW command, the view still hasn't been created, and re-issuing the command results in the error `internal error: table is in creating procedure`. I'm not sure whether this is normal behavior or whether something has gone wrong. I found a related issue on GitHub suggesting that the process might be stuck. Additionally, I'm seeing warnings on the compute nodes related to write limits. Here's the trace context attached to one of those warnings:

    at src/storage/src/hummock/
    in risingwave_stream::common::table::state_table::state_table_commit
    in risingwave_stream::executor::wrapper::trace::executor with "MaterializeExecutor 61AD00000002 (actor 25005, operator 2)", actor_id: 25005
    in risingwave_stream::executor::actor::actor with "Actor 25005", actor_id: 25005, prev_epoch: 4823540048723968, curr_epoch: 4823540114325504

Is this expected, or should I be looking into potential issues with the creation process?

Bahador Nooraei

Asked on Jul 31, 2023

It's not necessarily normal for a CREATE MATERIALIZED VIEW command to take over an hour, but it can happen depending on the size of the data and the complexity of the computation. The system may simply still be backfilling, especially with a large dataset and complex JSON parsing.

That said, the warnings about the write limiter being updated and table writes being blocked due to too many L0 sub-levels suggest the storage layer is struggling to keep up with the write load. That could mean the creation is genuinely stuck, or at least bottlenecked. To investigate further, you can:

  1. Check the DDL progress by querying the `rw_catalog.rw_ddl_progress` system table to see whether the materialized view creation is still advancing.
  2. Monitor system metrics, such as CPU and memory usage, to determine if there are any abnormal patterns that could indicate a bottleneck.
  3. Review the system's logs for any additional error messages or warnings that could provide more insight into the issue.
  4. If you haven't already, consider setting up monitoring tools like Grafana to get a better understanding of the system's performance over time.
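For step 1, the progress check is a single query. A minimal sketch (the exact columns returned by `rw_ddl_progress` vary across RisingWave versions, but it typically includes the DDL statement and a progress indicator):

```sql
-- List all in-flight DDL jobs and how far along they are.
-- If the progress value keeps increasing between runs, the backfill
-- is still working; if it stays flat for a long time, the job may be stuck.
SELECT * FROM rw_catalog.rw_ddl_progress;
```

Running this query a few minutes apart and comparing the progress values is usually the quickest way to tell a slow-but-healthy backfill from a stalled one.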

If the issue persists, you may need to reach out to the support community or the developers of the database system you're using for more targeted assistance.

Answered on Aug 02, 2023 (edited)