Export and Save: Working with Generated Time Series
Phoenix offers multiple ways to persist and export your generated time series data, enabling integration with SENTINEL, external analysis tools, and long-term storage.
Two Workflows: Save vs. Download
Phoenix provides two distinct workflows for working with generated data:
Save to Database
- Permanent storage in Kronts database
- Tracked under your user account
- Accessible in SENTINEL, FORGE, CEREBRO
- Regenerable with same parameters
- Subject to limits: 3 series per user, 10,000 points max
Download to File
- Immediate export to local file
- No storage in database
- Portable for external tools
- No user limits on number of downloads
- Three formats: CSV, Excel, JSON
Saving Time Series to Database
When to Save
Save When:
- You'll use data in SENTINEL for analysis
- You want to preserve generation parameters
- You'll reference this data in FORGE/CEREBRO
- You need team access to the dataset
- You want to regenerate with slight variations later
Don't Save When:
- Quick one-time testing
- Already have 3 saved series (limit reached)
- Just need data in an external tool (use download instead)
- Series exceeds the 10,000-point limit
Save Workflow
Step 1: Generate and Preview
Before saving, always preview to verify the time series looks correct:
- Configure parameters
- Click "Preview"
- Verify chart and statistics
- Check for warnings (aliasing, etc.)
Important: Never save without previewing first!
Step 2: Click "Save Time Series"
Locate the "Save Time Series" button:
- Top of sidebar (above configuration)
- Bottom of sidebar (after degradation options)
- OR scroll to chart area (below chart)
Click the button to open the save modal.
[Screenshot Required: Save Button Location]
1. After previewing a time series
2. Capture: Full interface showing save button locations
3. Annotate: Circle the save buttons
4. Purpose: Help users find save functionality
Step 3: Fill Save Modal
The save modal appears with three fields:
Time Series Name (Required)
- Descriptive name for this dataset
- Will appear in lists and dashboards
- Best practice: Include key parameters
Good Names:
"Motor_1800RPM_30Hz_Clean"
"Temp_DailyCycle_5deg_2weeks"
"3Axis_Vibration_Correlated_0.85"
"Pressure_Trend_With_10pct_Outliers"
Poor Names:
"Test"
"Data1"
"My Time Series"
"Series_20240115" (without context)
Description (Optional but Recommended)
- Detailed notes about the time series
- Generation parameters
- Intended use case
- Degradation details if applicable
Good Description:
"3-channel temperature sensor array, high correlation (0.88-0.92),
daily oscillation (5°C amplitude), 2% missing data, 1% outliers at 9999.
For testing sensor fusion algorithm in FORGE."
Open in Sentinel (Checkbox, Optional)
- If checked: Immediately redirects to SENTINEL with this time series
- If unchecked: Returns to Phoenix generation page
[Screenshot Required: Save Modal]
1. Click "Save Time Series"
2. Capture: Modal dialog with all fields visible
3. Purpose: Show save interface and fields
Step 4: Save and Confirm
Click "Save" button in modal.
Phoenix will:
1. Validate user limits (max 3 series)
2. Regenerate the time series (ensures consistency)
3. Validate point count (max 10,000)
4. Store in database with metadata
5. Show success message
6. Redirect (if "Open in Sentinel" checked)
Success: Confirmation message appears
Failure: Error message explains the issue (see Troubleshooting)
What Gets Saved
Data Stored:
- All data points (timestamps + values)
- Channel names and units (multi-channel)
- Time series name and description
- Generation parameters (full configuration)
- Creation timestamp
- User association

Generation Parameters Include:
- Time configuration (duration, sampling frequency)
- Base signal (mean, noise, trend)
- All oscillations (frequency, amplitude, phase)
- Channel configurations (multi-channel)
- Correlations (multi-channel)
- Degradation settings (removal, outliers)

Why Parameters Matter:
- Reproduce the same signal exactly
- Generate variations (tweak one parameter)
- Document methodology
- Audit trail
Accessing Saved Time Series
In Phoenix:
- Navigate to /phoenix/ (home page)
- Your saved series listed
- Click series to regenerate with same parameters
- Edit parameters and preview modifications
In SENTINEL:
- Select saved time series from dashboard
- Analyze data quality automatically
- View insights and recommendations

In FORGE/CEREBRO:
- Import from saved Phoenix series
- Process or label as needed
Regenerating from Saved Series
Why Regenerate?
Use Cases:
- Create variations (change one parameter)
- Generate longer/shorter versions
- Add/remove degradation
- Test parameter sensitivity
- Create a family of related datasets
Regeneration Workflow
- Navigate to /phoenix/ (home page)
- Find the saved time series in the list
- Click time series name or "Regenerate" button
- Phoenix loads generation form with all original parameters
- Make desired changes
- Preview new version
- Save as new series (or download)
Example: Creating Parameter Sweep
Original Series: "Motor_1800RPM_Clean" - Motor at 1800 RPM (30 Hz)
Variations:
1. Regenerate → Change to 1500 RPM (25 Hz) → Save as "Motor_1500RPM_Clean"
2. Regenerate → Change to 2100 RPM (35 Hz) → Save as "Motor_2100RPM_Clean"
3. Regenerate → Add 5% outliers → Save as "Motor_1800RPM_5pct_Outliers"
Now you have a family of related test datasets!
Managing Saved Series
Viewing:
- Home page lists all your saved series
- Shows name, creation date, point count

Deleting:
- Click delete button (⚠️ No undo!)
- Frees up a limit slot (for users at the 3-series limit)
- Removes from database permanently

Editing Metadata:
- Currently not supported
- Workaround: Regenerate and save with a new name/description
- Delete the old version if needed
Downloading Time Series Files
When to Download
Download When:
- Analyzing in Python, R, MATLAB, Excel
- Sharing with external collaborators
- Archiving for long-term storage
- Already at the save limit (3 series)
- No need for Kronts integration
Download Formats
Phoenix offers three export formats:
CSV (Comma-Separated Values)
Best For:
- Python, R, MATLAB analysis
- Excel/Google Sheets import
- Universal compatibility
- Simple text format
Structure (Single-Channel):
Timestamp,value
2024-01-15 10:00:00,100.5
2024-01-15 10:00:01,101.2
2024-01-15 10:00:02,99.8
Structure (Multi-Channel):
Timestamp,X-Axis,Y-Axis,Z-Axis
2024-01-15 10:00:00,0.5,0.3,-9.8
2024-01-15 10:00:01,0.6,0.4,-9.7
2024-01-15 10:00:02,0.4,0.2,-9.9
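The multi-channel layout above loads directly with pandas. A minimal self-contained sketch (the CSV text is inlined here in place of a downloaded file, so the snippet runs as-is):

```python
import io

import pandas as pd

# CSV text mirroring the multi-channel structure shown above
csv_text = """Timestamp,X-Axis,Y-Axis,Z-Axis
2024-01-15 10:00:00,0.5,0.3,-9.8
2024-01-15 10:00:01,0.6,0.4,-9.7
2024-01-15 10:00:02,0.4,0.2,-9.9
"""

# Parse timestamps and use them as the index, one column per channel
df = pd.read_csv(io.StringIO(csv_text), parse_dates=["Timestamp"]).set_index("Timestamp")
print(df.shape)          # (3, 3)
print(list(df.columns))  # ['X-Axis', 'Y-Axis', 'Z-Axis']
```

For a real export, pass the downloaded file path to `pd.read_csv` instead of the `StringIO` buffer.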
File Naming: time_series_YYYY-MM-DD_HH-MM-SS.csv
Excel (XLSX)
Best For:
- Direct Excel analysis
- Non-technical users
- Formatted spreadsheets
- Charts in Excel

Structure: Same data as CSV, but in Excel format

Advantages:
- Preserves data types
- Can include multiple sheets (future enhancement)
- Better for Windows environments
File Naming: time_series_YYYY-MM-DD_HH-MM-SS.xlsx
JSON
Best For:
- API integration
- Structured data with metadata
- Programming language compatibility
- Includes generation parameters
Structure:
{
"metadata": {
"name": "Generated Time Series",
"created_at": "2024-01-15T10:00:00Z",
"sampling_frequency": 1.0,
"channels": ["value"]
},
"statistics": {
"value": {
"min": 85.2,
"max": 114.8,
"mean": 99.9,
"count": 3600
}
},
"data": [
{"timestamp": "2024-01-15T10:00:00", "value": 100.5},
{"timestamp": "2024-01-15T10:00:01", "value": 101.2},
...
]
}
Advantages:
- Includes metadata and statistics
- Self-documenting
- Easy to parse in any language
- Structured format
File Naming: time_series_YYYY-MM-DD_HH-MM-SS.json
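Because the JSON export embeds statistics alongside the raw points, a parsed file can be sanity-checked against itself. A standard-library-only sketch (the payload below is a trimmed, hand-written stand-in for a real export, not actual Phoenix output):

```python
import json

# Minimal payload matching the export structure shown above
payload = json.loads("""
{
  "metadata": {"name": "Generated Time Series", "sampling_frequency": 1.0, "channels": ["value"]},
  "statistics": {"value": {"min": 99.8, "max": 101.2, "mean": 100.5, "count": 3}},
  "data": [
    {"timestamp": "2024-01-15T10:00:00", "value": 100.5},
    {"timestamp": "2024-01-15T10:00:01", "value": 101.2},
    {"timestamp": "2024-01-15T10:00:02", "value": 99.8}
  ]
}
""")

values = [row["value"] for row in payload["data"]]
stats = payload["statistics"]["value"]

# Cross-check the embedded statistics against the data points
assert min(values) == stats["min"]
assert max(values) == stats["max"]
assert len(values) == stats["count"]
assert round(sum(values) / len(values), 6) == stats["mean"]
```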
Download Workflow
Step 1: Generate and Preview
Always preview before downloading:
1. Configure parameters
2. Click "Preview"
3. Verify the chart looks correct
Step 2: Click Download Dropdown
Locate the "Download" button:
- Near the "Save" button
- Shows a dropdown arrow (▼)
Click to reveal format options.
[Screenshot Required: Download Dropdown]
1. After previewing a time series
2. Click download button to show dropdown
3. Capture: Dropdown menu showing CSV, XLSX, JSON options
4. Purpose: Show download options
Step 3: Select Format
Click the desired format:
- CSV - Most common, universal
- Excel (XLSX) - For Excel users
- JSON - For API/programming use
File downloads immediately to your browser's download folder.
Step 4: Import to External Tool
Python (pandas):
import pandas as pd
# CSV
df = pd.read_csv('time_series_2024-01-15_10-00-00.csv', parse_dates=['Timestamp'])
df.set_index('Timestamp', inplace=True)
# Excel
df = pd.read_excel('time_series_2024-01-15_10-00-00.xlsx', parse_dates=['Timestamp'])
df.set_index('Timestamp', inplace=True)
# JSON (nested structure: parse with the json module, then build a
# DataFrame from the "data" array; orient='records' does not match
# the nested metadata/statistics/data layout)
import json
with open('time_series_2024-01-15_10-00-00.json') as f:
    payload = json.load(f)
df = pd.DataFrame(payload['data'])
df['timestamp'] = pd.to_datetime(df['timestamp'])
R:
# CSV
df <- read.csv('time_series_2024-01-15_10-00-00.csv')
df$Timestamp <- as.POSIXct(df$Timestamp)
# Excel
library(readxl)
df <- read_excel('time_series_2024-01-15_10-00-00.xlsx')
MATLAB:
% CSV
df = readtable('time_series_2024-01-15_10-00-00.csv');
% Excel
df = readtable('time_series_2024-01-15_10-00-00.xlsx');
Excel:
- Open the file directly
- Data → From Text/CSV (for CSV import)
- File → Open (for XLSX)
Integration with SENTINEL
Direct Integration: "Open in Sentinel"
When saving a time series, check "Open in Sentinel" to:
1. Save the time series to the database
2. Immediately redirect to SENTINEL
3. Auto-load the time series for analysis
SENTINEL Features:
- Automatic data quality dashboard
- Statistical analysis
- Data characteristics visualization
- Quality recommendations

Use Case:
- Generate test data with known characteristics
- Validate SENTINEL's quality detection
- Benchmark quality metrics
- Train users on interpreting quality reports
Workflow: Phoenix → SENTINEL → FORGE
Complete Pipeline:
- Phoenix: Generate clean data → Save
- Phoenix: Generate degraded version → Save
- SENTINEL: Analyze degraded data → Identify issues
- FORGE: Clean data based on recommendations
- Compare: Cleaned data vs. original clean data
- Measure: Cleaning effectiveness
This validates your entire data quality workflow!
User Limits and Quotas
Save Limits
Regular Users:
- Maximum 3 saved time series
- Must delete old series to save new ones
- No limit on downloads

Superusers/Admins:
- Unlimited saved series
- No quotas
Point Limits
All Users:
- Maximum 10,000 total data points
- Single-channel: 10,000 points per series
- Multi-channel: 10,000 total across all channels
Example Calculations:
Single-channel, 1 Hz, ~2.78 hours ≈ 10,000 points ✓
3 channels, 1 Hz, 55 minutes = 9,900 points ✓
10 channels, 10 Hz, 100 seconds = 10,000 points ✓
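The budget arithmetic above is easy to script before configuring a series. A hypothetical helper (`total_points` and `within_limit` are our own names, not part of Phoenix):

```python
# Documented limit: duration (s) x sampling frequency (Hz) x channels <= 10,000
MAX_POINTS = 10_000

def total_points(duration_s: float, freq_hz: float, channels: int = 1) -> int:
    """Total data points a configuration would generate."""
    return int(duration_s * freq_hz) * channels

def within_limit(duration_s: float, freq_hz: float, channels: int = 1) -> bool:
    """True if the configuration fits under the 10,000-point cap."""
    return total_points(duration_s, freq_hz, channels) <= MAX_POINTS

print(within_limit(55 * 60, 1.0, channels=3))  # 9,900 points -> True
print(within_limit(3600, 1.0, channels=3))     # 10,800 points -> False
```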
Troubleshooting
"Maximum series limit reached"
Cause: You have 3 saved time series (regular user limit)
Solutions:
1. Delete an old saved series
2. Use download instead of save
3. Contact an admin for an increased limit
"Data point limit exceeded"
Cause: Trying to save > 10,000 total points
Solutions:
1. Reduce duration
2. Reduce sampling frequency
3. Reduce number of channels (multi-channel)
4. Calculate the limit: Duration × Frequency × Channels ≤ 10,000
Example Fix:
Problem: 3 channels × 1 hour × 1 Hz = 10,800 points ✗
Fix: Reduce to 55 minutes: 3 × 3300 × 1 = 9,900 points ✓
Save succeeds but can't find saved series
Check:
1. Navigate to /phoenix/ home page (not /phoenix/generate/)
2. Verify you're logged in as same user
3. Check time series list on home page
4. Series sorted by creation date (newest first)
Download button doesn't work
Check:
1. Preview generated first (download requires a preview)
2. Check browser popup blocker settings
3. Verify JavaScript is enabled
4. Try a different format (CSV vs. XLSX vs. JSON)
Downloaded file is empty or corrupted
Causes:
1. No data generated (preview first)
2. Browser cancelled the download
3. Disk space full

Solutions:
- Always preview before downloading
- Check browser download status
- Verify disk space
- Try again
Can't import CSV into Python/R
Common Issues:
Timestamp parsing:
# Add parse_dates parameter
df = pd.read_csv('file.csv', parse_dates=['Timestamp'])
Column names:
# Check exact column names
print(df.columns)
# Use correct name (case-sensitive)
Encoding:
# Try specifying encoding
df = pd.read_csv('file.csv', encoding='utf-8')
JSON format doesn't parse
Check:
1. Valid JSON (use a validator)
2. Correct orientation/structure
3. Large files may need a streaming parser
Best Practices
Naming Conventions
Include Key Information:
- Signal type
- Key parameters
- Degradation if applicable
- Use case
Format Suggestions:
{Type}_{Parameter}_{Quality}_{Purpose}
Examples:
Vibration_30Hz_Clean_Testing
Temp_DailyCycle_5pct_Outliers_FORGE
Pressure_150kPa_Drift_Validation
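A tiny helper can keep names consistent with this template; the function below is illustrative only, not a Phoenix API:

```python
def series_name(sig_type: str, parameter: str, quality: str, purpose: str) -> str:
    """Build a {Type}_{Parameter}_{Quality}_{Purpose} series name.

    Spaces inside a part are replaced with hyphens so underscores
    remain unambiguous separators between the four parts.
    """
    parts = [p.strip().replace(" ", "-") for p in (sig_type, parameter, quality, purpose)]
    return "_".join(parts)

print(series_name("Vibration", "30Hz", "Clean", "Testing"))
# Vibration_30Hz_Clean_Testing
```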
Documentation in Descriptions
Template:
Signal: {type}
Parameters: {key settings}
Channels: {single/multi-channel details}
Degradation: {removal%, outlier%}
Purpose: {intended use}
Notes: {additional context}
Example:
Signal: 3-axis accelerometer
Parameters: 30 Hz main oscillation, 2 m/s² amplitude
Channels: X, Y, Z correlated at 0.35
Degradation: 3% removal, 2% outliers (constant 999)
Purpose: Test vibration analysis algorithm robustness
Notes: Simulates motor at 1800 RPM with bearing fault
Version Control
Strategy: Use names with version numbers
Motor_1800RPM_v1_Clean
Motor_1800RPM_v2_Light_Degradation
Motor_1800RPM_v3_Heavy_Degradation
Export for Archival
Workflow:
1. Save to database for immediate use
2. Download JSON for long-term archive
3. JSON includes metadata and parameters
4. Store with documentation
Advanced Workflows
Batch Generation
While Phoenix doesn't support batch generation directly:
Manual Batch Workflow:
1. Generate first variant → Save/Download
2. Regenerate with a modified parameter → Save/Download
3. Repeat for all variants
4. Organize saved series with clear naming
External Post-Processing
Workflow:
1. Generate in Phoenix
2. Download CSV
3. Load in Python/R
4. Apply additional processing:
   - Custom degradation patterns
   - Complex transformations
   - Merge with other data
5. Save the processed version
Example (Python):
import pandas as pd
# Load Phoenix CSV
df = pd.read_csv('phoenix_output.csv', parse_dates=['Timestamp'])
# Add custom processing
df['processed'] = df['value'].rolling(window=10).mean()
# Add systematic gaps (every 100th point)
df_systematic = df[df.index % 100 != 0]
# Save for reimport
df_systematic.to_csv('processed_data.csv', index=False)
Multi-Format Archiving
Best Practice: Export all formats for flexibility
Workflow:
1. Preview the time series
2. Download CSV (for analysis)
3. Download JSON (for metadata preservation)
4. Download XLSX (for stakeholder sharing)
5. Save to database (for Kronts integration)
Summary
Phoenix offers flexible export and save options:
Save to Database:
- Permanent storage in Kronts
- SENTINEL/FORGE/CEREBRO integration
- Regeneration with parameters
- User limits: 3 series, 10,000 points

Download Files:
- CSV: Universal, simple
- XLSX: Excel-friendly
- JSON: Metadata-rich, structured
- No limits on downloads

Best Practices:
- Preview before save/download
- Use descriptive names
- Document in descriptions
- Archive in multiple formats
- Validate imports in external tools
Next Steps
- Basic Usage - Review complete generation workflow
- Multi-Channel - Export multi-channel data
- Data Degradation - Add quality issues before export
- Technical Reference - File format specifications
Proper export and save workflows ensure your synthetic time series data integrates seamlessly with analysis tools and the broader Kronts ecosystem.